Before you start deploying applications as Pet Sets, there are a few limitations to be aware of:

* As with all alpha/beta resources, it can be disabled through the `--runtime-config` option passed to the apiserver, and in fact most likely will be disabled on hosted offerings of Kubernetes.
* The only updatable fields on a PetSet are `spec.replicas` and the `containers` in the pod template (see [Updating a PetSet](#updating-a-petset)).
* The storage for a given pet must either be provisioned by a [dynamic storage provisioner](http://releases.k8s.io/{{page.githubbranch}}/examples/experimental/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin. Note that dynamic volume provisioning is also currently in alpha.
* Deleting and/or scaling a PetSet down will *not* delete the volumes associated with the PetSet. This is done to ensure safety first: your data is more valuable than an automatic purge of all related PetSet resources. **Deleting the Persistent Volume Claims will result in a deletion of the associated volumes**.
* All PetSets currently require a "governing service", or a Service responsible for the network identity of the pets. The user is responsible for this Service; a minimal example is sketched just after this list.
* Updating an existing PetSet is currently a manual process, meaning you either need to deploy a new PetSet with the new image version, or update the `image` in the pod template and delete Pets one by one so that they are recreated with the new image (see [Image upgrades](#image-upgrades)).
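
For reference, here is a minimal sketch of such a governing service: a headless Service (`clusterIP: None`) that selects the pets. The name, labels, and port below are illustrative and simply mirror the nginx example used throughout this doc.

```shell
# A headless Service to govern the network identity of the pets.
# Each pet becomes resolvable as <pet-name>.<service-name>.<namespace>.svc.cluster.local.
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  clusterIP: None
  ports:
  - port: 80
    name: web
  selector:
    app: nginx
EOF
```
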
### Stable storage
2 persistent volumes, one per pod. These are created automatically by the PetSet based on the `volumeClaimTemplate` field:

```shell
$ kubectl get pv
```
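
The claims backing these volumes are created from the claim template and named after both the template and the pet, so you can inspect them directly. The `www-web-N` names below match the claims that show up later in this doc; adjust them if your claim template uses a different name.

```shell
# PVCs created from the claim template follow the pattern <template-name>-<pet-name>.
$ kubectl get pvc www-web-0 www-web-1
```
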
## Updating a PetSet
You cannot update any field of the PetSet except `spec.replicas` and the `containers` in the podTemplate. Updating `spec.replicas` will scale the PetSet; updating `containers` has no effect until a Pet is deleted, at which point it is recreated with the modified podTemplate.

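
For example, `spec.replicas` can be changed in place with a patch; `kubectl edit petset web` or `kubectl scale` (shown later) work just as well. The PetSet name `web` matches the example used throughout this doc.

```shell
# Scale the PetSet named "web" to 3 replicas with a merge patch on spec.replicas.
$ kubectl patch petset web -p '{"spec":{"replicas":3}}'
```
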
## Scaling a PetSet
You can also use the `kubectl scale` command:

```shell
$ kubectl get petset
NAME      DESIRED   CURRENT   AGE
web       3         3         24m

$ kubectl scale petset web --replicas=5
petset "web" scaled

$ kubectl get po --watch-only
NAME      READY     STATUS              RESTARTS   AGE
web-0     1/1       Running             0          10m
web-1     1/1       Running             0          27m
web-2     1/1       Running             0          10m
web-3     1/1       Running             0          3m
web-4     0/1       ContainerCreating   0          48s

$ kubectl get petset web
NAME      DESIRED   CURRENT   AGE
web       5         5         30m
```

Note, however, that scaling up to N and back down to M *will not* delete the volumes of the N-M pets that are removed, as described in the section on [deletion](#deleting-a-petset); scaling back up to N creates new pets that reuse the same volumes. To see this in action, scale the PetSet back down to 3:

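
The scale-down itself is the same `kubectl scale` command as before; run the watch below in a second terminal:

```shell
$ kubectl scale petset web --replicas=3
petset "web" scaled
```
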
```shell
$ kubectl get po --watch-only
web-4     1/1       Terminating   0         4m
web-4     1/1       Terminating   0         4m
web-3     1/1       Terminating   0         6m
web-3     1/1       Terminating   0         6m
```

Note that we still have 5 PVCs:

```shell
$ kubectl get pvc
NAME        STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
www-web-0   Bound     pvc-42ca5cef-8113-11e6-82f6-42010af00002   1Gi        RWO           32m
www-web-1   Bound     pvc-42de30af-8113-11e6-82f6-42010af00002   1Gi        RWO           32m
www-web-2   Bound     pvc-ba416413-8115-11e6-82f6-42010af00002   1Gi        RWO           14m
www-web-3   Bound     pvc-ba45f19c-8115-11e6-82f6-42010af00002   1Gi        RWO           14m
www-web-4   Bound     pvc-ba47674a-8115-11e6-82f6-42010af00002   1Gi        RWO           14m
```

This allows you to upgrade the image of a PetSet and have it come back up with the same data, as described in the next section.
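
If you *do* want to reclaim the storage of the scaled-down pets, delete their claims explicitly. As noted in the [limitations](#alpha-limitations), deleting the Persistent Volume Claims deletes the associated volumes, so only do this once you are sure the data is no longer needed:

```shell
# Removes the claims (and therefore the volumes) left behind by web-3 and web-4.
$ kubectl delete pvc www-web-3 www-web-4
```
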
## Image upgrades
PetSet currently *does not* support automated image upgrades, as noted in the section on [limitations](#alpha-limitations). However, you can update the `image` field of any container in the podTemplate and delete Pets one by one; the PetSet controller will recreate each one with the new image.

Edit the image on the PetSet to `gcr.io/google_containers/nginx-slim:0.7` and delete 1 Pet:
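
One way to make that edit without opening an editor is to patch the pod template (a sketch that assumes the container in the template is named `nginx`; `kubectl edit petset web` works just as well):

```shell
# Update the image of the "nginx" container in the PetSet's pod template.
# Running Pets are not touched until they are deleted and recreated.
$ kubectl patch petset web -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image":"gcr.io/google_containers/nginx-slim:0.7"}]}}}}'
```
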
```shell{% raw %}
$ for p in 0 1 2; do kubectl get po web-$p --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done
gcr.io/google_containers/nginx-slim:0.8
gcr.io/google_containers/nginx-slim:0.8
gcr.io/google_containers/nginx-slim:0.8

$ kubectl delete po web-0
pod "web-0" deleted

$ for p in 0 1 2; do kubectl get po web-$p --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done
gcr.io/google_containers/nginx-slim:0.7
gcr.io/google_containers/nginx-slim:0.8
gcr.io/google_containers/nginx-slim:0.8
{% endraw %}```

Delete the remaining 2:

```shell
$ kubectl delete po web-1 web-2
pod "web-1" deleted
pod "web-2" deleted
```
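
One way to tell when the PetSet has settled is to watch the pets being recreated, reusing the `app=nginx` label from the governing service:

```shell
$ kubectl get po -l app=nginx --watch-only
```
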
Wait until the PetSet is stable and check the images:

```shell{% raw %}
$ for p in 0 1 2; do kubectl get po web-$p --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done
gcr.io/google_containers/nginx-slim:0.7
gcr.io/google_containers/nginx-slim:0.7
gcr.io/google_containers/nginx-slim:0.7
{% endraw %}```

## Deleting a PetSet
Deleting a PetSet through kubectl will scale it down to 0, thereby deleting all the Pets. If you wish to delete just the PetSet and not the Pets, use `--cascade=false`:

```shell
$ kubectl delete -f petset.yaml --cascade=false
petset "web" deleted

$ kubectl get po -l app=nginx
```