Merge pull request #26141 from jimangel/remove-redis-and-master-slave-terminology-2

Remove Master/Slave terminology from stateless application tutorial.

commit af76e77672
@@ -33,7 +33,7 @@ Before walking through each tutorial, you may want to bookmark the

* [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/)
* [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/)
* [Example: Deploying PHP Guestbook application with MongoDB](/docs/tutorials/stateless-application/guestbook/)

## Stateful Applications
@@ -1,460 +0,0 @@

---
title: "Example: Add logging and metrics to the PHP / Redis Guestbook example"
reviewers:
- sftim
content_type: tutorial
weight: 21
card:
  name: tutorials
  weight: 31
  title: "Example: Add logging and metrics to the PHP / Redis Guestbook example"
---

<!-- overview -->
This tutorial builds upon the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial. Elastic's *Beats*, lightweight open source shippers for log, metric, and network data, are deployed in the same Kubernetes cluster as the guestbook. The Beats collect, parse, and index the data into Elasticsearch so that you can view and analyze the resulting operational information in Kibana. This example consists of the following components:

* A running instance of the [PHP Guestbook with Redis tutorial](/docs/tutorials/stateless-application/guestbook)
* Elasticsearch and Kibana
* Filebeat
* Metricbeat
* Packetbeat

## {{% heading "objectives" %}}

* Start up the PHP Guestbook with Redis.
* Install kube-state-metrics.
* Create a Kubernetes Secret.
* Deploy the Beats.
* View dashboards of your logs and metrics.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}
{{< version-check >}}

Additionally, you need:

* A running deployment of the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial.

* A running Elasticsearch and Kibana deployment. You can use [Elasticsearch Service in Elastic Cloud](https://cloud.elastic.co),
  run the [downloaded files](https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-elastic-stack.html)
  on your workstation or servers, or the [Elastic Helm Charts](https://github.com/elastic/helm-charts) (a minimal install sketch is shown below).
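If you choose the Elastic Helm Charts, a minimal install sketch looks like the following. The chart names are the ones published at https://helm.elastic.co; the default values are assumptions for testing only, so check the chart documentation before relying on them:

```shell
# Add the Elastic Helm repository and install Elasticsearch and Kibana
# with their default values (suitable for a quick test, not production)
helm repo add elastic https://helm.elastic.co
helm repo update
helm install elasticsearch elastic/elasticsearch
helm install kibana elastic/kibana
```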
<!-- lessoncontent -->

## Start up the PHP Guestbook with Redis

This tutorial builds on the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial. If you have the guestbook application running, then you can monitor that. If you do not have it running, then follow the instructions to deploy the guestbook and do not perform the **Cleanup** steps. Come back to this page when you have the guestbook running.

## Add a Cluster role binding

Create a [cluster level role binding](/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) so that you can deploy kube-state-metrics and the Beats at the cluster level (in kube-system).

```shell
kubectl create clusterrolebinding cluster-admin-binding \
 --clusterrole=cluster-admin --user=<your email associated with the k8s provider account>
```
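You can confirm that the binding was created:

```shell
kubectl get clusterrolebinding cluster-admin-binding
```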
## Install kube-state-metrics

Kubernetes [*kube-state-metrics*](https://github.com/kubernetes/kube-state-metrics) is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects. Metricbeat reports these metrics. Add kube-state-metrics to the Kubernetes cluster that the guestbook is running in.

```shell
git clone https://github.com/kubernetes/kube-state-metrics.git kube-state-metrics
kubectl apply -f kube-state-metrics/examples/standard
```

### Check to see if kube-state-metrics is running

```shell
kubectl get pods --namespace=kube-system -l app.kubernetes.io/name=kube-state-metrics
```

Output:

```
NAME                                 READY   STATUS    RESTARTS   AGE
kube-state-metrics-89d656bf8-vdthm   1/1     Running   0          21s
```

## Clone the Elastic examples GitHub repo

```shell
git clone https://github.com/elastic/examples.git
```

The rest of the commands will reference files in the `examples/beats-k8s-send-anywhere` directory, so change directory there:

```shell
cd examples/beats-k8s-send-anywhere
```

## Create a Kubernetes Secret

A Kubernetes {{< glossary_tooltip text="Secret" term_id="secret" >}} is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in an image; putting it in a Secret object allows for more control over how it is used, and reduces the risk of accidental exposure.

{{< note >}}
There are two sets of steps here, one for *self managed* Elasticsearch and Kibana (running on your servers or using the Elastic Helm Charts), and a second separate set for the *managed service* Elasticsearch Service in Elastic Cloud. Only create the secret for the type of Elasticsearch and Kibana system that you will use for this tutorial.
{{< /note >}}
{{< tabs name="tab_with_md" >}}
{{% tab name="Self Managed" %}}

### Self managed

Switch to the **Managed service** tab if you are connecting to Elasticsearch Service in Elastic Cloud.

### Set the credentials

There are four files to edit to create a Kubernetes Secret when you are connecting to self managed Elasticsearch and Kibana (self managed is effectively anything other than the managed Elasticsearch Service in Elastic Cloud). The files are:

1. `ELASTICSEARCH_HOSTS`
1. `ELASTICSEARCH_PASSWORD`
1. `ELASTICSEARCH_USERNAME`
1. `KIBANA_HOST`

Set these with the information for your Elasticsearch cluster and your Kibana host. Here are some examples (also see [*this configuration*](https://stackoverflow.com/questions/59892896/how-to-connect-from-minikube-to-elasticsearch-installed-on-host-local-developme/59892897#59892897)).

#### `ELASTICSEARCH_HOSTS`

1. A nodeGroup from the Elastic Elasticsearch Helm Chart:

```
["http://elasticsearch-master.default.svc.cluster.local:9200"]
```

1. A single Elasticsearch node running on a Mac where your Beats are running in Docker for Mac:

```
["http://host.docker.internal:9200"]
```

1. Two Elasticsearch nodes running in VMs or on physical hardware:

```
["http://host1.example.com:9200", "http://host2.example.com:9200"]
```

Edit `ELASTICSEARCH_HOSTS`:

```shell
vi ELASTICSEARCH_HOSTS
```

#### `ELASTICSEARCH_PASSWORD`

Just the password; no whitespace, quotes, `<` or `>`:

```
<yoursecretpassword>
```

Edit `ELASTICSEARCH_PASSWORD`:

```shell
vi ELASTICSEARCH_PASSWORD
```

#### `ELASTICSEARCH_USERNAME`

Just the username; no whitespace, quotes, `<` or `>`:

```
<your ingest username for Elasticsearch>
```

Edit `ELASTICSEARCH_USERNAME`:

```shell
vi ELASTICSEARCH_USERNAME
```

#### `KIBANA_HOST`

1. The Kibana instance from the Elastic Kibana Helm Chart. The subdomain `default` refers to the default namespace. If you have deployed the Helm Chart using a different namespace, then your subdomain will be different:

```
"kibana-kibana.default.svc.cluster.local:5601"
```

1. A Kibana instance running on a Mac where your Beats are running in Docker for Mac:

```
"host.docker.internal:5601"
```

1. A Kibana instance running in a VM or on physical hardware:

```
"host1.example.com:5601"
```

Edit `KIBANA_HOST`:

```shell
vi KIBANA_HOST
```

### Create a Kubernetes Secret

This command creates a Secret in the Kubernetes system level namespace (`kube-system`) based on the files you just edited:

```shell
kubectl create secret generic dynamic-logging \
  --from-file=./ELASTICSEARCH_HOSTS \
  --from-file=./ELASTICSEARCH_PASSWORD \
  --from-file=./ELASTICSEARCH_USERNAME \
  --from-file=./KIBANA_HOST \
  --namespace=kube-system
```
{{% /tab %}}
{{% tab name="Managed service" %}}

### Managed service

This tab is for Elasticsearch Service in Elastic Cloud only. If you have already created a secret for a self managed Elasticsearch and Kibana deployment, then continue with [Deploy the Beats](#deploy-the-beats).

### Set the credentials

There are two files to edit to create a Kubernetes Secret when you are connecting to the managed Elasticsearch Service in Elastic Cloud. The files are:

1. `ELASTIC_CLOUD_AUTH`
1. `ELASTIC_CLOUD_ID`

Set these with the information provided to you from the Elasticsearch Service console when you created the deployment. Here are some examples:

#### `ELASTIC_CLOUD_ID`

```
devk8s:ABC123def456ghi789jkl123mno456pqr789stu123vwx456yza789bcd012efg345hijj678klm901nop345zEwOTJjMTc5YWQ0YzQ5OThlN2U5MjAwYTg4NTIzZQ==
```

#### `ELASTIC_CLOUD_AUTH`

Just the username, a colon (`:`), and the password, no whitespace or quotes:

```
elastic:VFxJJf9Tjwer90wnfTghsn8w
```

### Edit the required files

```shell
vi ELASTIC_CLOUD_ID
vi ELASTIC_CLOUD_AUTH
```

### Create a Kubernetes Secret

This command creates a Secret in the Kubernetes system level namespace (`kube-system`) based on the files you just edited:

```shell
kubectl create secret generic dynamic-logging \
  --from-file=./ELASTIC_CLOUD_ID \
  --from-file=./ELASTIC_CLOUD_AUTH \
  --namespace=kube-system
```

{{% /tab %}}

{{< /tabs >}}
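Whichever tab you followed, you can confirm that the Secret exists before deploying the Beats:

```shell
kubectl get secret dynamic-logging --namespace=kube-system
```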
## Deploy the Beats

Manifest files are provided for each Beat. These manifest files use the secret created earlier to configure the Beats to connect to your Elasticsearch and Kibana servers.
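As a sketch of the mechanism (the authoritative manifests are the ones in the cloned `examples/beats-k8s-send-anywhere` directory, which may differ in detail), a Beat container can consume the Secret keys as environment variables, for example:

```yaml
# Hypothetical excerpt from a Beat DaemonSet/Deployment spec: a key of the
# dynamic-logging Secret is exposed to the Beat as an environment variable.
env:
  - name: ELASTICSEARCH_HOSTS
    valueFrom:
      secretKeyRef:
        name: dynamic-logging
        key: ELASTICSEARCH_HOSTS
        optional: true
```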
### About Filebeat

Filebeat will collect logs from the Kubernetes nodes and the containers running in each pod running on those nodes. Filebeat is deployed as a {{< glossary_tooltip text="DaemonSet" term_id="daemonset" >}}. Filebeat can autodiscover applications running in your Kubernetes cluster. At startup Filebeat scans existing containers and launches the proper configurations for them, then it watches for new start/stop events.

Here is the autodiscover configuration that enables Filebeat to locate and parse Redis logs from the Redis containers deployed with the guestbook application. This configuration is in the file `filebeat-kubernetes.yaml`:

```yaml
- condition.contains:
    kubernetes.labels.app: redis
  config:
    - module: redis
      log:
        input:
          type: docker
          containers.ids:
            - ${data.kubernetes.container.id}
      slowlog:
        enabled: true
        var.hosts: ["${data.host}:${data.port}"]
```

This configures Filebeat to apply the Filebeat module `redis` when a container is detected with a label `app` containing the string `redis`. The redis module has the ability to collect the `log` stream from the container by using the docker input type (reading the file on the Kubernetes node associated with the STDOUT stream from this Redis container). Additionally, the module has the ability to collect Redis `slowlog` entries by connecting to the proper pod host and port, which is provided in the container metadata.

### Deploy Filebeat

```shell
kubectl create -f filebeat-kubernetes.yaml
```

#### Verify

```shell
kubectl get pods -n kube-system -l k8s-app=filebeat-dynamic
```
### About Metricbeat

Metricbeat autodiscover is configured in the same way as Filebeat. Here is the Metricbeat autodiscover configuration for the Redis containers. This configuration is in the file `metricbeat-kubernetes.yaml`:

```yaml
- condition.equals:
    kubernetes.labels.tier: backend
  config:
    - module: redis
      metricsets: ["info", "keyspace"]
      period: 10s

      # Redis hosts
      hosts: ["${data.host}:${data.port}"]
```

This configures Metricbeat to apply the Metricbeat module `redis` when a container is detected with a label `tier` equal to the string `backend`. The `redis` module has the ability to collect the `info` and `keyspace` metrics from the container by connecting to the proper pod host and port, which is provided in the container metadata.

### Deploy Metricbeat

```shell
kubectl create -f metricbeat-kubernetes.yaml
```

#### Verify

```shell
kubectl get pods -n kube-system -l k8s-app=metricbeat
```
### About Packetbeat

Packetbeat configuration is different from Filebeat and Metricbeat. Rather than specifying patterns to match against container labels, the configuration is based on the protocols and port numbers involved. Shown below is a subset of the port numbers.

{{< note >}}
If you are running a service on a non-standard port, add that port number to the appropriate protocol type in the Packetbeat configuration (`packetbeat-kubernetes.yaml`) and delete/create the Packetbeat DaemonSet.
{{< /note >}}

```yaml
packetbeat.interfaces.device: any

packetbeat.protocols:
- type: dns
  ports: [53]
  include_authorities: true
  include_additionals: true

- type: http
  ports: [80, 8000, 8080, 9200]

- type: mysql
  ports: [3306]

- type: redis
  ports: [6379]

packetbeat.flows:
  timeout: 30s
  period: 10s
```
### Deploy Packetbeat

```shell
kubectl create -f packetbeat-kubernetes.yaml
```

#### Verify

```shell
kubectl get pods -n kube-system -l k8s-app=packetbeat-dynamic
```

## View in Kibana

Open Kibana in your browser and then open the **Dashboard** application. In the search bar, type *Kubernetes* and click on the Metricbeat dashboard for Kubernetes. This dashboard reports on the state of your Nodes, Deployments, and so on.

Search for Packetbeat on the Dashboard page, and view the Packetbeat overview.

Similarly, view dashboards for Apache and Redis. You will see dashboards for logs and metrics for each. The Apache Metricbeat dashboard will be blank. Look at the Apache Filebeat dashboard and scroll to the bottom to view the Apache error logs. This will tell you why there are no metrics available for Apache.

To enable Metricbeat to retrieve the Apache metrics, enable server-status by adding a ConfigMap that includes a mod-status configuration file, and re-deploy the guestbook.
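A hypothetical sketch of such a ConfigMap is shown below; the name, mount path, and exact directives are assumptions for illustration, not part of the guestbook manifests:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: apache-server-status
data:
  status.conf: |
    # Expose mod_status so Metricbeat's apache module can scrape metrics
    <Location "/server-status">
      SetHandler server-status
      # Allow scraping from the cluster network; restrict as appropriate
      Require all granted
    </Location>
```

You would mount `status.conf` into the frontend Pods (for example under `/etc/apache2/mods-enabled/`) and then re-deploy the guestbook.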
## Scale your Deployments and see new pods being monitored

List the existing Deployments:

```shell
kubectl get deployments
```

The output:

```
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
frontend       3/3     3            3           3h27m
redis-master   1/1     1            1           3h27m
redis-slave    2/2     2            2           3h27m
```

Scale the frontend down to two pods:

```shell
kubectl scale --replicas=2 deployment/frontend
```

The output:

```
deployment.extensions/frontend scaled
```

Scale the frontend back up to three pods:

```shell
kubectl scale --replicas=3 deployment/frontend
```

## View the changes in Kibana

See the screenshot below. Add the indicated filters and then add the columns to the view. You can see the marked ScalingReplicaSet entry; following from there to the top of the list of events shows the image being pulled, the volumes mounted, the Pod starting, and so on.



## {{% heading "cleanup" %}}

Deleting the Deployments and Services also deletes any running Pods. Use labels to delete multiple resources with one command.

1. Run the following commands to delete all Pods, Deployments, and Services.

```shell
kubectl delete deployment -l app=redis
kubectl delete service -l app=redis
kubectl delete deployment -l app=guestbook
kubectl delete service -l app=guestbook
kubectl delete -f filebeat-kubernetes.yaml
kubectl delete -f metricbeat-kubernetes.yaml
kubectl delete -f packetbeat-kubernetes.yaml
kubectl delete secret dynamic-logging -n kube-system
```

1. Query the list of Pods to verify that no Pods are running:

```shell
kubectl get pods
```

The response should be this:

```
No resources found.
```

## {{% heading "whatsnext" %}}

* Learn about [tools for monitoring resources](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
* Read more about [logging architecture](/docs/concepts/cluster-administration/logging/)
* Read more about [application introspection and debugging](/docs/tasks/debug-application-cluster/)
* Read more about [troubleshooting applications](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
@@ -1,5 +1,5 @@

---
title: "Example: Deploying PHP Guestbook application with Redis"
title: "Example: Deploying PHP Guestbook application with MongoDB"
reviewers:
- ahmetb
content_type: tutorial

@@ -7,22 +7,19 @@ weight: 20

card:
  name: tutorials
  weight: 30
  title: "Stateless Example: PHP Guestbook with Redis"
  title: "Stateless Example: PHP Guestbook with MongoDB"
min-kubernetes-server-version: v1.14
---

<!-- overview -->
This tutorial shows you how to build and deploy a simple, multi-tier web application using Kubernetes and [Docker](https://www.docker.com/). This example consists of the following components:
This tutorial shows you how to build and deploy a simple _(not production ready)_, multi-tier web application using Kubernetes and [Docker](https://www.docker.com/). This example consists of the following components:

* A single-instance [Redis](https://redis.io/) master to store guestbook entries
* Multiple [replicated Redis](https://redis.io/topics/replication) instances to serve reads
* A single-instance [MongoDB](https://www.mongodb.com/) to store guestbook entries
* Multiple web frontend instances

## {{% heading "objectives" %}}

* Start up a Redis master.
* Start up Redis slaves.
* Start up a Mongo database.
* Start up the guestbook frontend.
* Expose and view the Frontend Service.
* Clean up.
@@ -39,24 +36,28 @@ This tutorial shows you how to build and deploy a simple, multi-tier web applica

<!-- lessoncontent -->

## Start up the Redis Master
## Start up the Mongo Database

The guestbook application uses Redis to store its data. It writes its data to a Redis master instance and reads data from multiple Redis slave instances.
The guestbook application uses MongoDB to store its data.

### Creating the Redis Master Deployment
### Creating the Mongo Deployment

The manifest file, included below, specifies a Deployment controller that runs a single replica Redis master Pod.
The manifest file, included below, specifies a Deployment controller that runs a single replica MongoDB Pod.

{{< codenew file="application/guestbook/redis-master-deployment.yaml" >}}
{{< codenew file="application/guestbook/mongo-deployment.yaml" >}}

1. Launch a terminal window in the directory you downloaded the manifest files.
1. Apply the Redis Master Deployment from the `redis-master-deployment.yaml` file:
1. Apply the MongoDB Deployment from the `mongo-deployment.yaml` file:

```shell
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml
kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-deployment.yaml
```
<!---
for local testing of the content via relative file path
kubectl apply -f ./content/en/examples/application/guestbook/mongo-deployment.yaml
-->

1. Query the list of Pods to verify that the Redis Master Pod is running:
1. Query the list of Pods to verify that the MongoDB Pod is running:

```shell
kubectl get pods
@@ -66,32 +67,33 @@ The manifest file, included below, specifies a Deployment controller that runs a

```shell
NAME                            READY   STATUS    RESTARTS   AGE
redis-master-1068406935-3lswp   1/1     Running   0          28s
mongo-5cfd459dd4-lrcjb          1/1     Running   0          28s
```

1. Run the following command to view the logs from the Redis Master Pod:
1. Run the following command to view the logs from the MongoDB Deployment:

```shell
kubectl logs -f POD-NAME
kubectl logs -f deployment/mongo
```

{{< note >}}
Replace POD-NAME with the name of your Pod.
{{< /note >}}
### Creating the MongoDB Service

### Creating the Redis Master Service
The guestbook application needs to communicate with MongoDB to write its data. You need to apply a [Service](/docs/concepts/services-networking/service/) to proxy the traffic to the MongoDB Pod. A Service defines a policy to access the Pods.

The guestbook application needs to communicate to the Redis master to write its data. You need to apply a [Service](/docs/concepts/services-networking/service/) to proxy the traffic to the Redis master Pod. A Service defines a policy to access the Pods.
{{< codenew file="application/guestbook/mongo-service.yaml" >}}

{{< codenew file="application/guestbook/redis-master-service.yaml" >}}

1. Apply the Redis Master Service from the following `redis-master-service.yaml` file:
1. Apply the MongoDB Service from the following `mongo-service.yaml` file:

```shell
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml
kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-service.yaml
```

1. Query the list of Services to verify that the Redis Master Service is running:
<!---
for local testing of the content via relative file path
kubectl apply -f ./content/en/examples/application/guestbook/mongo-service.yaml
-->

1. Query the list of Services to verify that the MongoDB Service is running:

```shell
kubectl get service
@@ -102,77 +104,17 @@ The guestbook application needs to communicate to the Redis master to write its

```shell
NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
kubernetes     ClusterIP   10.0.0.1     <none>        443/TCP     1m
redis-master   ClusterIP   10.0.0.151   <none>        6379/TCP    8s
mongo          ClusterIP   10.0.0.151   <none>        27017/TCP   8s
```

{{< note >}}
This manifest file creates a Service named `redis-master` with a set of labels that match the labels previously defined, so the Service routes network traffic to the Redis master Pod.
This manifest file creates a Service named `mongo` with a set of labels that match the labels previously defined, so the Service routes network traffic to the MongoDB Pod.
{{< /note >}}

## Start up the Redis Slaves

Although the Redis master is a single pod, you can make it highly available to meet traffic demands by adding replica Redis slaves.

### Creating the Redis Slave Deployment

Deployments scale based off of the configurations set in the manifest file. In this case, the Deployment object specifies two replicas.

If there are not any replicas running, this Deployment would start the two replicas on your container cluster. Conversely, if there are more than two replicas running, it would scale down until two replicas are running.

{{< codenew file="application/guestbook/redis-slave-deployment.yaml" >}}

1. Apply the Redis Slave Deployment from the `redis-slave-deployment.yaml` file:

```shell
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-deployment.yaml
```

1. Query the list of Pods to verify that the Redis Slave Pods are running:

```shell
kubectl get pods
```

The response should be similar to this:

```shell
NAME                            READY   STATUS              RESTARTS   AGE
redis-master-1068406935-3lswp   1/1     Running             0          1m
redis-slave-2005841000-fpvqc    0/1     ContainerCreating   0          6s
redis-slave-2005841000-phfv9    0/1     ContainerCreating   0          6s
```

### Creating the Redis Slave Service

The guestbook application needs to communicate to Redis slaves to read data. To make the Redis slaves discoverable, you need to set up a Service. A Service provides transparent load balancing to a set of Pods.

{{< codenew file="application/guestbook/redis-slave-service.yaml" >}}

1. Apply the Redis Slave Service from the following `redis-slave-service.yaml` file:

```shell
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-service.yaml
```

1. Query the list of Services to verify that the Redis slave service is running:

```shell
kubectl get services
```

The response should be similar to this:

```
NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   10.0.0.1     <none>        443/TCP    2m
redis-master   ClusterIP   10.0.0.151   <none>        6379/TCP   1m
redis-slave    ClusterIP   10.0.0.223   <none>        6379/TCP   6s
```

## Set up and Expose the Guestbook Frontend

The guestbook application has a web frontend serving the HTTP requests written in PHP. It is configured to connect to the `redis-master` Service for write requests and the `redis-slave` service for Read requests.
The guestbook application has a web frontend serving the HTTP requests written in PHP. It is configured to connect to the `mongo` Service to store Guestbook entries.

### Creating the Guestbook Frontend Deployment
@@ -184,6 +126,11 @@ The guestbook application has a web frontend serving the HTTP requests written i

kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml
```

<!---
for local testing of the content via relative file path
kubectl apply -f ./content/en/examples/application/guestbook/frontend-deployment.yaml
-->

1. Query the list of Pods to verify that the three frontend replicas are running:

```shell

@@ -201,12 +148,12 @@ The guestbook application has a web frontend serving the HTTP requests written i

### Creating the Frontend Service

The `redis-slave` and `redis-master` Services you applied are only accessible within the container cluster because the default type for a Service is [ClusterIP](/docs/concepts/services-networking/service/#publishing-services---service-types). `ClusterIP` provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster.
The `mongo` Service you applied is only accessible within the Kubernetes cluster because the default type for a Service is [ClusterIP](/docs/concepts/services-networking/service/#publishing-services---service-types). `ClusterIP` provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster.

If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the container cluster. Minikube can only expose Services through `NodePort`.
If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the Kubernetes cluster. However, as a Kubernetes user you can use `kubectl port-forward` to access the Service even though it uses a `ClusterIP`.

{{< note >}}
Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, support external load balancers. If your cloud provider supports load balancers and you want to use it, simply delete or comment out `type: NodePort`, and uncomment `type: LoadBalancer`.
Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, support external load balancers. If your cloud provider supports load balancers and you want to use it, simply uncomment `type: LoadBalancer`.
{{< /note >}}

{{< codenew file="application/guestbook/frontend-service.yaml" >}}
@@ -217,6 +164,11 @@ Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, su

kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml
```

<!---
for local testing of the content via relative file path
kubectl apply -f ./content/en/examples/application/guestbook/frontend-service.yaml
-->

1. Query the list of Services to verify that the frontend Service is running:

```shell
@@ -227,29 +179,27 @@ Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, su

```
NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
frontend       NodePort    10.0.0.112   <none>        80:31323/TCP   6s
frontend       ClusterIP   10.0.0.112   <none>        80/TCP         6s
kubernetes     ClusterIP   10.0.0.1     <none>        443/TCP        4m
redis-master   ClusterIP   10.0.0.151   <none>        6379/TCP       2m
redis-slave    ClusterIP   10.0.0.223   <none>        6379/TCP       1m
mongo          ClusterIP   10.0.0.151   <none>        27017/TCP      2m
```

### Viewing the Frontend Service via `NodePort`
### Viewing the Frontend Service via `kubectl port-forward`

If you deployed this application to Minikube or a local cluster, you need to find the IP address to view your Guestbook.

1. Run the following command to get the IP address for the frontend Service.
1. Run the following command to forward port `8080` on your local machine to port `80` on the Service.

```shell
minikube service frontend --url
kubectl port-forward svc/frontend 8080:80
```

The response should be similar to this:

```
http://192.168.99.100:31323
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
```

1. Copy the IP address, and load the page in your browser to view your guestbook.
1. Load the page [http://localhost:8080](http://localhost:8080) in your browser to view your guestbook.
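You can also check the forwarded port from a second terminal before opening the browser:

```shell
curl http://localhost:8080
```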
### Viewing the Frontend Service via `LoadBalancer`

@@ -295,9 +245,7 @@ You can scale up or down as needed because your servers are defined as a Service

frontend-3823415956-k22zn       1/1     Running   0          54m
frontend-3823415956-w9gbt       1/1     Running   0          54m
frontend-3823415956-x2pld       1/1     Running   0          5s
redis-master-1068406935-3lswp   1/1     Running   0          56m
redis-slave-2005841000-fpvqc    1/1     Running   0          55m
redis-slave-2005841000-phfv9    1/1     Running   0          55m
mongo-1068406935-3lswp          1/1     Running   0          56m
```

1. Run the following command to scale down the number of frontend Pods:

@@ -318,9 +266,7 @@ You can scale up or down as needed because your servers are defined as a Service

NAME                            READY   STATUS    RESTARTS   AGE
frontend-3823415956-k22zn       1/1     Running   0          1h
frontend-3823415956-w9gbt       1/1     Running   0          1h
redis-master-1068406935-3lswp   1/1     Running   0          1h
redis-slave-2005841000-fpvqc    1/1     Running   0          1h
redis-slave-2005841000-phfv9    1/1     Running   0          1h
mongo-1068406935-3lswp          1/1     Running   0          1h
```

@@ -332,20 +278,18 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels

1. Run the following commands to delete all Pods, Deployments, and Services.

```shell
kubectl delete deployment -l app=redis
kubectl delete service -l app=redis
kubectl delete deployment -l app=guestbook
kubectl delete service -l app=guestbook
kubectl delete deployment -l app.kubernetes.io/name=mongo
kubectl delete service -l app.kubernetes.io/name=mongo
kubectl delete deployment -l app.kubernetes.io/name=guestbook
kubectl delete service -l app.kubernetes.io/name=guestbook
```

The responses should be:

```
deployment.apps "redis-master" deleted
deployment.apps "redis-slave" deleted
service "redis-master" deleted
service "redis-slave" deleted
deployment.apps "frontend" deleted
deployment.apps "mongo" deleted
service "mongo" deleted
deployment.apps "frontend" deleted
service "frontend" deleted
```

@@ -365,7 +309,6 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels

## {{% heading "whatsnext" %}}

* Add [ELK logging and monitoring](/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk/) to your Guestbook application
* Complete the [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) Interactive Tutorials
* Use Kubernetes to create a blog using [Persistent Volumes for MySQL and Wordpress](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog)
* Read more about [connecting applications](/docs/concepts/services-networking/connect-applications-service/)
@@ -3,22 +3,24 @@ kind: Deployment

metadata:
  name: frontend
  labels:
    app: guestbook
    app.kubernetes.io/name: guestbook
    app.kubernetes.io/component: frontend
spec:
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
      app.kubernetes.io/name: guestbook
      app.kubernetes.io/component: frontend
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
        app.kubernetes.io/name: guestbook
        app.kubernetes.io/component: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4
      - name: guestbook
        image: paulczar/gb-frontend:v5
        # image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m

@@ -26,13 +28,5 @@ spec:

        env:
        - name: GET_HOSTS_FROM
          value: dns
          # Using `GET_HOSTS_FROM=dns` requires your cluster to
          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
          # service launched automatically. However, if the cluster you are using
          # does not have a built-in DNS service, you can instead
          # access an environment variable to find the master
          # service's host. To do so, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 80
@@ -3,16 +3,14 @@ kind: Service

metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
    app.kubernetes.io/name: guestbook
    app.kubernetes.io/component: frontend
spec:
  # comment or delete the following line if you want to use a LoadBalancer
  type: NodePort
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
    app.kubernetes.io/name: guestbook
    app.kubernetes.io/component: frontend
@@ -0,0 +1,31 @@

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
  labels:
    app.kubernetes.io/name: mongo
    app.kubernetes.io/component: backend
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: mongo
      app.kubernetes.io/component: backend
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mongo
        app.kubernetes.io/component: backend
    spec:
      containers:
      - name: mongo
        image: mongo:4.2
        args:
          - --bind_ip
          - 0.0.0.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 27017
@@ -0,0 +1,14 @@

apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    app.kubernetes.io/name: mongo
    app.kubernetes.io/component: backend
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    app.kubernetes.io/name: mongo
    app.kubernetes.io/component: backend
@@ -1,29 +0,0 @@

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: k8s.gcr.io/redis:e2e  # or just image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
@@ -1,17 +0,0 @@

apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
@@ -1,40 +0,0 @@

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # Using `GET_HOSTS_FROM=dns` requires your cluster to
          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
          # service launched automatically. However, if the cluster you are using
          # does not have a built-in DNS service, you can instead
          # access an environment variable to find the master
          # service's host. To do so, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
@@ -1,15 +0,0 @@

apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend