---
title: "Example: Deploying Cassandra with Stateful Sets"
content_template: templates/tutorial
weight: 30
---
{{% capture overview %}}
This tutorial shows you how to develop a cloud native Cassandra deployment on Kubernetes. In this example, a custom Cassandra `SeedProvider` enables Cassandra to discover new Cassandra nodes as they join the cluster.

`StatefulSet`s make it easier to deploy stateful applications within a clustered environment. For more information on the features used in this tutorial, see the `StatefulSet` documentation.

## Cassandra on Docker

The `Pod`s in this tutorial use the `gcr.io/google-samples/cassandra:v13` image from Google's container registry. The Docker image is based on `debian-base` and includes OpenJDK 8.

This image includes a standard Cassandra installation from the Apache Debian repo. By using environment variables you can change values that are inserted into `cassandra.yaml`.
| Environment variable     | Default value    |
|--------------------------|------------------|
| `CASSANDRA_CLUSTER_NAME` | `'Test Cluster'` |
| `CASSANDRA_NUM_TOKENS`   | `32`             |
| `CASSANDRA_RPC_ADDRESS`  | `0.0.0.0`        |
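These variables are set on the Cassandra container like any other Kubernetes environment variables. The following is a minimal, hypothetical container excerpt that overrides them; the values shown are for illustration only, and the full `cassandra-statefulset.yaml` used later in this tutorial is authoritative:

```yaml
# Hypothetical container excerpt; values are illustrative only.
containers:
  - name: cassandra
    image: gcr.io/google-samples/cassandra:v13
    env:
      - name: CASSANDRA_CLUSTER_NAME
        value: "K8Demo"        # replaces the default 'Test Cluster'
      - name: CASSANDRA_NUM_TOKENS
        value: "32"
      - name: CASSANDRA_RPC_ADDRESS
        value: "0.0.0.0"
```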
{{% /capture %}}
{{% capture objectives %}}
- Create and validate a Cassandra headless `Service`.
- Use a `StatefulSet` to create a Cassandra ring.
- Validate the `StatefulSet`.
- Modify the `StatefulSet`.
- Delete the `StatefulSet` and its `Pod`s.
{{% /capture %}}
{{% capture prerequisites %}}
To complete this tutorial, you should already have a basic familiarity with `Pod`s, `Service`s, and `StatefulSet`s. In addition, you should:

- Install and configure the `kubectl` command-line tool
- Download `cassandra-service.yaml` and `cassandra-statefulset.yaml` (one way to fetch them is sketched after this list)
- Have a supported Kubernetes cluster running
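One way to download the two manifests, assuming `curl` is available (you can also save them from your browser), is:

```shell
# Fetch the manifest files used in this tutorial into the current directory.
curl -LO https://k8s.io/examples/application/cassandra/cassandra-service.yaml
curl -LO https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml
```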
{{< note >}}
Please read the getting started guides if you do not already have a cluster.
{{< /note >}}
### Additional Minikube Setup Instructions

{{< caution >}}
Minikube defaults to 1024MB of memory and 1 CPU. Running Minikube with the default resource configuration may result in insufficient resource errors during this tutorial. To avoid these errors, run Minikube with 5 GB of memory and 4 CPUs:

```shell
minikube start --memory 5120 --cpus=4
```
{{< /caution >}}
{{% /capture %}}
{{% capture lessoncontent %}}
## Creating a Cassandra Headless Service

A Kubernetes `Service` describes a set of `Pod`s that perform the same task.

The following `Service` is used for DNS lookups between Cassandra `Pod`s and clients within the Kubernetes cluster:
{{< codenew file="application/cassandra/cassandra-service.yaml" >}}
1. Launch a terminal window in the directory where you downloaded the manifest files.

1. Create a `Service` to track all Cassandra `StatefulSet` nodes from the `cassandra-service.yaml` file:

    ```shell
    kubectl create -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml
    ```
### Validating (optional)

Get the Cassandra `Service`:

```shell
kubectl get svc cassandra
```

The response is:

```
NAME        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
cassandra   None         <none>        9042/TCP   45s
```

If anything else is returned, the `Service` was not created successfully. Read Debug Services for common issues.
## Using a StatefulSet to Create a Cassandra Ring

The `StatefulSet` manifest, included below, creates a Cassandra ring that consists of three `Pod`s.

{{< note >}}
This example uses the default provisioner for Minikube. Please update the following `StatefulSet` for the cloud you are working with; a hypothetical example follows the manifest.
{{< /note >}}
{{< codenew file="application/cassandra/cassandra-statefulset.yaml" >}}
1. Update the `StatefulSet` if necessary.

1. Create the Cassandra `StatefulSet` from the `cassandra-statefulset.yaml` file:

    ```shell
    kubectl create -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml
    ```
## Validating the Cassandra StatefulSet

1. Get the Cassandra `StatefulSet`:

    ```shell
    kubectl get statefulset cassandra
    ```

    The response should be:

    ```
    NAME        DESIRED   CURRENT   AGE
    cassandra   3         0         13s
    ```

    The `StatefulSet` resource deploys `Pod`s sequentially.

1. Get the `Pod`s to see the ordered creation status:

    ```shell
    kubectl get pods -l="app=cassandra"
    ```

    The response should be:

    ```
    NAME          READY     STATUS              RESTARTS   AGE
    cassandra-0   1/1       Running             0          1m
    cassandra-1   0/1       ContainerCreating   0          8s
    ```

    It can take several minutes for all three `Pod`s to deploy. Once they are deployed, the same command returns (you can also watch them come up; see the sketch after these steps):

    ```
    NAME          READY     STATUS    RESTARTS   AGE
    cassandra-0   1/1       Running   0          10m
    cassandra-1   1/1       Running   0          9m
    cassandra-2   1/1       Running   0          8m
    ```
1. Run the Cassandra nodetool to display the status of the ring:

    ```shell
    kubectl exec -it cassandra-0 -- nodetool status
    ```

    The response should look something like this:

    ```
    Datacenter: DC1-K8Demo
    ======================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address     Load        Tokens       Owns (effective)  Host ID                               Rack
    UN  172.17.0.5  83.57 KiB   32           74.0%             e2dd09e6-d9d3-477e-96c5-45094c08db0f  Rack1-K8Demo
    UN  172.17.0.4  101.04 KiB  32           58.8%             f89d6835-3a42-4419-92b3-0e62cae1479c  Rack1-K8Demo
    UN  172.17.0.6  84.74 KiB   32           67.1%             a6a1e8c2-3dc5-4417-b1a0-26507af2aaad  Rack1-K8Demo
    ```
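Instead of re-running `kubectl get pods`, you can also watch the `Pod`s being created one at a time (press Ctrl+C to stop watching):

```shell
# Watch the Pods as the StatefulSet creates them sequentially; Ctrl+C to stop.
kubectl get pods -l app=cassandra --watch
```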
## Modifying the Cassandra StatefulSet

Use `kubectl edit` to modify the size of a Cassandra `StatefulSet`. (A non-interactive alternative is sketched after these steps.)

1. Run the following command:

    ```shell
    kubectl edit statefulset cassandra
    ```

    This command opens an editor in your terminal. The line you need to change is the `replicas` field. The following sample is an excerpt of the `StatefulSet` file:

    ```yaml
    # Please edit the object below. Lines beginning with a '#' will be ignored,
    # and an empty file will abort the edit. If an error occurs while saving this file will be
    # reopened with the relevant failures.
    #
    apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
    kind: StatefulSet
    metadata:
      creationTimestamp: 2016-08-13T18:40:58Z
      generation: 1
      labels:
        app: cassandra
      name: cassandra
      namespace: default
      resourceVersion: "323"
      selfLink: /apis/apps/v1/namespaces/default/statefulsets/cassandra
      uid: 7a219483-6185-11e6-a910-42010a8a0fc0
    spec:
      replicas: 3
    ```
1. Change the number of replicas to 4, and then save the manifest.

    The `StatefulSet` now contains 4 `Pod`s.

1. Get the Cassandra `StatefulSet` to verify:

    ```shell
    kubectl get statefulset cassandra
    ```

    The response should be:

    ```
    NAME        DESIRED   CURRENT   AGE
    cassandra   4         4         36m
    ```
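If you do not need to inspect the object interactively, the replica count can also be changed with a single non-interactive command:

```shell
# Non-interactive alternative to kubectl edit: set the replica count directly.
kubectl scale statefulset cassandra --replicas=4
```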
{{% /capture %}}
{{% capture cleanup %}}
Deleting or scaling a `StatefulSet` down does not delete the volumes associated with the `StatefulSet`. This setting is for your safety: your data is more valuable than automatically purging all related `StatefulSet` resources.

{{< warning >}}
Depending on the storage class and reclaim policy, deleting the `PersistentVolumeClaim`s may cause the associated volumes to also be deleted. Never assume you'll be able to access data if its volume claims are deleted.
{{< /warning >}}
1. Run the following commands (chained together into a single command) to delete everything in the Cassandra `StatefulSet`. The command first reads the Pod's termination grace period, deletes the `StatefulSet`, waits for that grace period so the `Pod`s can shut down, and then deletes the `PersistentVolumeClaim`s:

    ```shell
    grace=$(kubectl get po cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \
      && kubectl delete statefulset -l app=cassandra \
      && echo "Sleeping $grace" \
      && sleep $grace \
      && kubectl delete pvc -l app=cassandra
    ```

1. Run the following command to delete the Cassandra `Service`:

    ```shell
    kubectl delete service -l app=cassandra
    ```
{{% /capture %}}
{{% capture whatsnext %}}
- Learn how to Scale a `StatefulSet`.
- Learn more about the `KubernetesSeedProvider`.
- See more custom Seed Provider Configurations.
{{% /capture %}}