To update a service without an outage, `kubectl` supports what is called a ['rolling update'](/docs/user-guide/kubectl/v1.6/#rolling-update), which updates one pod at a time rather than taking down the entire service at once. See the [rolling update design document](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/simple-rolling-update.md) and the [example of rolling update](/docs/tasks/run-application/rolling-update-replication-controller/) for more information.

If you manage your applications with Replication Controllers, consider switching them to [Deployments](/docs/concepts/workloads/controllers/deployment/). A Deployment is a higher-level controller that automates rolling updates
of applications declaratively, and is therefore recommended. If you still want to keep your Replication Controllers and use `kubectl rolling-update`, keep reading:
A rolling update applies changes to the configuration of pods being managed by
a replication controller. The changes can be passed as a new replication
controller configuration file; or, if only updating the image, a new container
image can be specified directly.
A rolling update works by:
1. Creating a new replication controller with the updated configuration.
2. Increasing/decreasing the replica count on the new and old controllers until
the correct number of replicas is reached.
3. Deleting the original replication controller.
Rolling updates are initiated with the `kubectl rolling-update` command:
```shell
$ kubectl rolling-update NAME \
    ([NEW_NAME] --image=IMAGE | -f FILE)
```
## Passing a configuration file
To initiate a rolling update using a configuration file, pass the new file to
`kubectl rolling-update`:
```shell
$ kubectl rolling-update NAME -f FILE
```
The configuration file must:
* Specify a different `metadata.name` value.
* Overwrite at least one common label in its `spec.selector` field.
* Use the same `metadata.namespace`.
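For example, a minimal sketch of an update file for an original controller named `my-nginx` that selects on `app: nginx` might look like the following (the `my-nginx-v2` name and the `deployment: v2` label value are illustrative placeholders, not required values):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx-v2   # a different metadata.name than the original
  namespace: default  # the same metadata.namespace as the original
spec:
  replicas: 5
  selector:
    app: nginx        # a label shared with the original controller...
    deployment: v2    # ...plus a new value distinguishing the new pods
  template:
    metadata:
      labels:
        app: nginx
        deployment: v2
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
```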
Replication controller configuration files are described in
[Run a Stateless Application Using a Replication Controller](/docs/tutorials/stateless-application/run-stateless-ap-replication-controller/).

Required fields are:

* `NAME`: The name of the replication controller to update.
as well as either:
* `-f FILE`: A replication controller configuration file, in either JSON or
  YAML format. The configuration file must specify a new `metadata.name` value
  and include at least one of the existing `spec.selector` key:value pairs.
  See the
  [Run a Stateless Application Using a Replication Controller](/docs/tutorials/stateless-application/run-stateless-ap-replication-controller/#replication-controller-configuration-file)
  page for details.
  or:
* `--image IMAGE:TAG`: The name and tag of the image to update to. This must be
  different from the image:tag currently specified.
Optional fields are:
* `NEW_NAME`: Only used in conjunction with `--image` (not with `-f FILE`). The
  name to assign to the new replication controller.
* `--poll-interval DURATION`: The time between polling the controller status
  after update. Valid units are `ns` (nanoseconds), `us` or `µs` (microseconds),
  `ms` (milliseconds), `s` (seconds), `m` (minutes), or `h` (hours). Units can
  be combined (e.g. `1m30s`). The default is `3s`.
* `--timeout DURATION`: The maximum time to wait for the controller to update a
  pod before exiting. Default is `5m0s`. Valid units are as described for
  `--poll-interval` above.
* `--update-period DURATION`: The time to wait between updating pods. Default
  is `1m0s`. Valid units are as described for `--poll-interval` above.
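For example, to slow a rollout down and allow more time for it to complete, you might combine these flags as follows (the controller name and image tag here are illustrative):

```shell
$ kubectl rolling-update my-nginx --image=nginx:1.9.1 \
    --update-period=30s --timeout=10m
```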
Additional information about the `kubectl rolling-update` command is available
in the [`kubectl` reference](/docs/user-guide/kubectl/v1.6/#rolling-update).
Let's say you were running version 1.7.9 of nginx:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```
To update to version 1.9.1, you can use [`kubectl rolling-update --image`](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/simple-rolling-update.md) to specify the new image:
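```shell
$ kubectl rolling-update my-nginx --image=nginx:1.9.1
```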
In another window, you can see that `kubectl` added a `deployment` label to the pods, whose value is a hash of the configuration, to distinguish the new pods from the old:
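For example, a `kubectl get` invocation along these lines lists the new label (`-L deployment` adds a column showing each pod's `deployment` label value):

```shell
$ kubectl get pods -l app=nginx -L deployment
```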
This is one example where the immutability of containers is a huge asset.
If you need to update more than just the image (e.g., command arguments, environment variables), you can create a new replication controller, with a new name and distinguishing label value, such as:
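A sketch along those lines might look like the following (the `my-nginx-v2` name, the `deployment: v2` label value, and the added `args` are illustrative choices):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx-v2
spec:
  replicas: 5
  selector:
    app: nginx
    deployment: v2
  template:
    metadata:
      labels:
        app: nginx
        deployment: v2
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
        args: ["nginx", "-T"]   # updating more than just the image
        ports:
        - containerPort: 80
```

You would then start the rolling update by passing this file to `kubectl rolling-update` (the file name here is illustrative):

```shell
$ kubectl rolling-update my-nginx -f ./my-nginx-v2.yaml
```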