---
title: Managing Workloads
content_type: concept
reviewers:
- janetkuo
weight: 40
---

You've deployed your application and exposed it via a Service. Now what? Kubernetes provides a number of tools to help you manage your application deployment, including scaling and updating.

## Organizing resource configurations

Many applications require multiple resources to be created, such as a Deployment along with a Service. Management of multiple resources can be simplified by grouping them together in the same file (separated by `---` in YAML). For example:

{{% code_sample file="application/nginx-app.yaml" %}}

Multiple resources can be created the same way as a single resource:

```shell
kubectl apply -f https://k8s.io/examples/application/nginx-app.yaml
```

```none
service/my-nginx-svc created
deployment.apps/my-nginx created
```

The resources will be created in the order they appear in the manifest. Therefore, it's best to specify the Service first, since that ensures the scheduler can spread the pods associated with the Service as they are created by the controller(s), such as a Deployment.

`kubectl apply` also accepts multiple `-f` arguments:

```shell
kubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml \
  -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml
```

It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, you can deploy all of the components of your stack together.

A URL can also be specified as a configuration source, which is handy for deploying directly from manifests in your source control system:

```shell
kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml
```

```none
deployment.apps/my-nginx created
```

If you need to define more manifests, such as adding a ConfigMap, you can do that too.

### External tools

This section lists only the most common tools used for managing workloads on Kubernetes. To see a larger list, view [Application definition and image build](https://landscape.cncf.io/guide#app-definition-and-development--application-definition-image-build) in the {{< glossary_tooltip text="CNCF" term_id="cncf" >}} Landscape.

#### Helm {#external-tool-helm}

{{% thirdparty-content single="true" %}}

[Helm](https://helm.sh/) is a tool for managing packages of pre-configured Kubernetes resources. These packages are known as _Helm charts_.

#### Kustomize {#external-tool-kustomize}

[Kustomize](https://kustomize.io/) traverses a Kubernetes manifest to add, remove or update configuration options. It is available both as a standalone binary and as a [native feature](/docs/tasks/manage-kubernetes-objects/kustomization/) of kubectl.
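For example, here is a minimal, illustrative sketch of a `kustomization.yaml`; the file names and label are hypothetical and not part of the sample application above. It bundles a Deployment and a Service and stamps a common label onto both:

```yaml
# kustomization.yaml — a minimal, illustrative sketch
resources:
  - deployment.yaml   # hypothetical manifest in the same directory
  - service.yaml
commonLabels:
  app.kubernetes.io/name: my-app
```

You could then build and apply it with kubectl's built-in Kustomize support:

```shell
kubectl apply -k ./
```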
## Bulk operations in kubectl

Resource creation isn't the only operation that `kubectl` can perform in bulk. It can also extract resource names from configuration files in order to perform other operations, in particular to delete the same resources you created:

```shell
kubectl delete -f https://k8s.io/examples/application/nginx-app.yaml
```

```none
deployment.apps "my-nginx" deleted
service "my-nginx-svc" deleted
```

In the case of two resources, you can specify both resources on the command line using the resource/name syntax:

```shell
kubectl delete deployments/my-nginx services/my-nginx-svc
```

For larger numbers of resources, you'll find it easier to specify a selector (label query) using `-l` or `--selector`, to filter resources by their labels:

```shell
kubectl delete deployment,services -l app=nginx
```

```none
deployment.apps "my-nginx" deleted
service "my-nginx-svc" deleted
```

### Chaining and filtering

Because `kubectl` outputs resource names in the same syntax it accepts, you can chain operations using `$()` or `xargs`:

```shell
kubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ )
kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service/ | xargs -i kubectl get '{}'
```

The output might be similar to:

```none
NAME           TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
my-nginx-svc   LoadBalancer   10.0.0.208   <pending>     80/TCP    0s
```

With the above commands, first you create resources under `docs/concepts/cluster-administration/nginx/` and print the resources created with the `-o name` output format (printing each resource as resource/name). Then you `grep` only the Service, and then print it with [`kubectl get`](/docs/reference/kubectl/generated/kubectl_get/).

### Recursive operations on local files

If you organize your resources across several subdirectories within a particular directory, you can recursively perform operations on the subdirectories as well, by specifying `--recursive` or `-R` alongside the `--filename`/`-f` argument.

For instance, assume there is a directory `project/k8s/development` that holds all of the {{< glossary_tooltip text="manifests" term_id="manifest" >}} needed for the development environment, organized by resource type:

```none
project/k8s/development
├── configmap
│   └── my-configmap.yaml
├── deployment
│   └── my-deployment.yaml
└── pvc
    └── my-pvc.yaml
```

By default, performing a bulk operation on `project/k8s/development` will stop at the first level of the directory, not processing any subdirectories. If you tried to create the resources in this directory using the following command, you would encounter an error:

```shell
kubectl apply -f project/k8s/development
```

```none
error: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin)
```

Instead, specify the `--recursive` or `-R` command line argument along with the `--filename`/`-f` argument:

```shell
kubectl apply -f project/k8s/development --recursive
```

```none
configmap/my-config created
deployment.apps/my-deployment created
persistentvolumeclaim/my-pvc created
```

The `--recursive` argument works with any operation that accepts the `--filename`/`-f` argument, such as `kubectl create`, `kubectl get`, `kubectl delete`, `kubectl describe`, or even `kubectl rollout`.
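For instance, the same flag lets you read back everything defined beneath that directory tree; this is a short sketch reusing the example layout above:

```shell
# List every object defined in project/k8s/development, including subdirectories
kubectl get -f project/k8s/development --recursive
```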
The `--recursive` argument also works when multiple `-f` arguments are provided:

```shell
kubectl apply -f project/k8s/namespaces -f project/k8s/development --recursive
```

```none
namespace/development created
namespace/staging created
configmap/my-config created
deployment.apps/my-deployment created
persistentvolumeclaim/my-pvc created
```

If you're interested in learning more about `kubectl`, go ahead and read [Command line tool (kubectl)](/docs/reference/kubectl/).

## Updating your application without an outage

At some point, you'll need to update your deployed application, typically by specifying a new image or image tag. `kubectl` supports several update operations, each of which is applicable to different scenarios.

You can run multiple copies of your app, and use a _rollout_ to gradually shift the traffic to new healthy Pods. Eventually, all the running Pods will have the new software.

This section of the page guides you through how to create and update applications with Deployments.

Let's say you were running version 1.14.2 of nginx:

```shell
kubectl create deployment my-nginx --image=nginx:1.14.2
```

```none
deployment.apps/my-nginx created
```

Ensure that there is 1 replica:

```shell
kubectl scale deployments/my-nginx --replicas=1
```

```none
deployment.apps/my-nginx scaled
```

and allow Kubernetes to add more temporary replicas during a rollout, by setting a _surge maximum_ of 100%:

```shell
kubectl patch deployment/my-nginx --type='merge' -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge": "100%"}}}}'
```

```none
deployment.apps/my-nginx patched
```

To update to version 1.16.1, change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1` using `kubectl edit`:

```shell
kubectl edit deployment/my-nginx
# Change the manifest to use the newer container image, then save your changes
```

That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scenes. It ensures that only a certain number of old replicas may be down while they are being updated, and only a certain number of new replicas may be created above the desired number of pods. To learn more details about how this happens, visit [Deployment](/docs/concepts/workloads/controllers/deployment/).

You can use rollouts with DaemonSets, Deployments, or StatefulSets.

### Managing rollouts

You can use [`kubectl rollout`](/docs/reference/kubectl/generated/kubectl_rollout/) to manage a progressive update of an existing application. For example:

```shell
kubectl apply -f my-deployment.yaml

# wait for rollout to finish
kubectl rollout status deployment/my-deployment --timeout 10m # 10 minute timeout
```

or

```shell
kubectl apply -f backing-stateful-component.yaml

# don't wait for rollout to finish, just check the status
kubectl rollout status statefulsets/backing-stateful-component --watch=false
```

You can also pause, resume or cancel a rollout. Visit [`kubectl rollout`](/docs/reference/kubectl/generated/kubectl_rollout/) to learn more.
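For example, using the `my-nginx` Deployment from earlier, a short sketch of pausing, resuming, and rolling back (effectively cancelling) an update:

```shell
# Pause the rollout; changes made while paused do not trigger new rollouts
kubectl rollout pause deployment/my-nginx

# Resume it once you're ready to continue
kubectl rollout resume deployment/my-nginx

# Roll back to the previous revision if the update misbehaves
kubectl rollout undo deployment/my-nginx
```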
## Canary deployments

Another scenario where multiple labels are needed is to distinguish deployments of different releases or configurations of the same component. It is common practice to deploy a *canary* of a new application release (specified via image tag in the pod template) side by side with the previous release so that the new release can receive live production traffic before fully rolling it out.

For instance, you can use a `track` label to differentiate different releases.

The primary, stable release would have a `track` label with the value `stable`:

```none
name: frontend
replicas: 3
...
labels:
  app: guestbook
  tier: frontend
  track: stable
...
image: gb-frontend:v3
```

and then you can create a new release of the guestbook frontend that carries the `track` label with a different value (in this case, `canary`), so that the two sets of pods do not overlap:

```none
name: frontend-canary
replicas: 1
...
labels:
  app: guestbook
  tier: frontend
  track: canary
...
image: gb-frontend:v4
```

The frontend service would span both sets of replicas by selecting the common subset of their labels (that is, omitting the `track` label), so that traffic is directed to both releases:

```yaml
selector:
  app: guestbook
  tier: frontend
```

You can tweak the number of replicas of the stable and canary releases to determine the ratio of each release that will receive live production traffic (in this case, 3:1). Once you're confident, you can update the stable track to the new application release and remove the canary one.

## Updating annotations

Sometimes you may want to attach annotations to resources. Annotations are arbitrary non-identifying metadata for retrieval by API clients such as tools or libraries. This can be done with `kubectl annotate`. For example:

```shell
kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'
kubectl get pods my-nginx-v4-9gw19 -o yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    description: my frontend running nginx
...
```

For more information, see [annotations](/docs/concepts/overview/working-with-objects/annotations/) and [kubectl annotate](/docs/reference/kubectl/generated/kubectl_annotate/).

## Scaling your application

When load on your application grows or shrinks, use `kubectl` to scale your application. For instance, to decrease the number of nginx replicas from 3 to 1, do:

```shell
kubectl scale deployment/my-nginx --replicas=1
```

```none
deployment.apps/my-nginx scaled
```

Now you only have one pod managed by the deployment.

```shell
kubectl get pods -l app=nginx
```

```none
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-2035384211-j5fhi   1/1     Running   0          30m
```

To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:

```shell
# This requires an existing source of container and Pod metrics
kubectl autoscale deployment/my-nginx --min=1 --max=3
```

```none
horizontalpodautoscaler.autoscaling/my-nginx autoscaled
```

Now your nginx replicas will be scaled up and down as needed, automatically.

For more information, see [kubectl scale](/docs/reference/kubectl/generated/kubectl_scale/), [kubectl autoscale](/docs/reference/kubectl/generated/kubectl_autoscale/), and [Horizontal Pod Autoscaling](/docs/tasks/run-application/horizontal-pod-autoscale/).

## In-place updates of resources

Sometimes it's necessary to make narrow, non-disruptive updates to resources you've created.

### kubectl apply

It is suggested to maintain a set of configuration files in source control (see [configuration as code](https://martinfowler.com/bliki/InfrastructureAsCode.html)), so that they can be maintained and versioned along with the code for the resources they configure. Then, you can use [`kubectl apply`](/docs/reference/kubectl/generated/kubectl_apply/) to push your configuration changes to the cluster.
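For example, here is a minimal sketch of that workflow, assuming your manifests live in a hypothetical `k8s/` directory inside your repository:

```shell
# Edit a manifest in your repository, commit the change,
# then push the new desired state to the cluster
kubectl apply -f k8s/
```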
`kubectl apply` compares the configuration you're pushing with the previously applied configuration and applies the changes you've made, without overwriting any automated changes to properties you haven't specified.

```shell
kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml
```

```none
deployment.apps/my-nginx configured
```

To learn more about the underlying mechanism, read [server-side apply](/docs/reference/using-api/server-side-apply/).

### kubectl edit

Alternatively, you may also update resources with [`kubectl edit`](/docs/reference/kubectl/generated/kubectl_edit/):

```shell
kubectl edit deployment/my-nginx
```

This is equivalent to first getting the resource, editing it in a text editor, and then applying the resource with the updated version:

```shell
kubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml
vi /tmp/nginx.yaml
# do some edit, and then save the file

kubectl apply -f /tmp/nginx.yaml
deployment.apps/my-nginx configured

rm /tmp/nginx.yaml
```

This allows you to make more significant changes more easily. Note that you can specify the editor with your `EDITOR` or `KUBE_EDITOR` environment variables.

For more information, see [kubectl edit](/docs/reference/kubectl/generated/kubectl_edit/).

### kubectl patch

You can use [`kubectl patch`](/docs/reference/kubectl/generated/kubectl_patch/) to update API objects in place. This subcommand supports JSON patch, JSON merge patch, and strategic merge patch.

See [Update API Objects in Place Using kubectl patch](/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/) for more details.

## Disruptive updates

In some cases, you may need to update resource fields that cannot be updated once initialized, or you may want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can modify your original configuration file:

```shell
kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force
```

```none
deployment.apps/my-nginx deleted
deployment.apps/my-nginx replaced
```

## {{% heading "whatsnext" %}}

- Learn about [how to use `kubectl` for application introspection and debugging](/docs/tasks/debug/debug-application/debug-running-pod/).