---
reviewers:
- janetkuo
title: Deployments
feature:
  title: Automated rollouts and rollbacks
  description: >
    Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes will rollback the change for you. Take advantage of a growing ecosystem of deployment solutions.
content_template: templates/concept
weight: 30
---
{{% capture overview %}}

A _Deployment_ controller provides declarative updates for [Pods](/docs/concepts/workloads/pods/pod/) and
[ReplicaSets](/docs/concepts/workloads/controllers/replicaset/).

You describe a _desired state_ in a Deployment, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.

{{< note >}}
Do not manage ReplicaSets owned by a Deployment. Consider opening an issue in the main Kubernetes repository if your use case is not covered below.
{{< /note >}}

{{% /capture %}}


{{% capture body %}}

## Use Case

The following are typical use cases for Deployments:

* [Create a Deployment to rollout a ReplicaSet](#creating-a-deployment). The ReplicaSet creates Pods in the background. Check the status of the rollout to see if it succeeds or not.
* [Declare the new state of the Pods](#updating-a-deployment) by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.
* [Rollback to an earlier Deployment revision](#rolling-back-a-deployment) if the current state of the Deployment is not stable. Each rollback updates the revision of the Deployment.
* [Scale up the Deployment to facilitate more load](#scaling-a-deployment).
* [Pause the Deployment](#pausing-and-resuming-a-deployment) to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout.
* [Use the status of the Deployment](#deployment-status) as an indicator that a rollout has stuck.
* [Clean up older ReplicaSets](#clean-up-policy) that you don't need anymore.

## Creating a Deployment

The following is an example of a Deployment. It creates a ReplicaSet to bring up three `nginx` Pods:

{{< codenew file="controllers/nginx-deployment.yaml" >}}

In this example:

* A Deployment named `nginx-deployment` is created, indicated by the `.metadata.name` field.
* The Deployment creates three replicated Pods, indicated by the `replicas` field.
* The `selector` field defines how the Deployment finds which Pods to manage.
  In this case, you select a label that is defined in the Pod template (`app: nginx`).
  However, more sophisticated selection rules are possible,
  as long as the Pod template itself satisfies the rule.

  {{< note >}}
  The `matchLabels` field is a map of {key,value} pairs. A single {key,value} in the `matchLabels` map
  is equivalent to an element of `matchExpressions`, whose `key` field is "key", the `operator` is "In",
  and the `values` array contains only "value".
  All of the requirements, from both `matchLabels` and `matchExpressions`, must be satisfied in order to match.
  {{< /note >}}

* The `template` field contains the following sub-fields:
  * The Pods are labeled `app: nginx` using the `labels` field.
  * The Pod template's specification, or `.template.spec` field, indicates that
    the Pods run one container, `nginx`, which runs the `nginx`
    [Docker Hub](https://hub.docker.com/) image at version 1.7.9.
  * Create one container and name it `nginx` using the `name` field.
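
For instance, the `matchLabels` selector used in the example could equivalently be written with `matchExpressions`, as the note above describes. A sketch of the two equivalent forms:

```yaml
# Sketch: a matchLabels selector...
selector:
  matchLabels:
    app: nginx
---
# ...is equivalent to this matchExpressions selector.
selector:
  matchExpressions:
  - key: app
    operator: In
    values:
    - nginx
```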

Follow the steps given below to create the above Deployment:

Before you begin, make sure your Kubernetes cluster is up and running.

1. Create the Deployment by running the following command:

   {{< note >}}
   You may specify the `--record` flag to write the executed command in the resource annotation `kubernetes.io/change-cause`. It is useful for future introspection, for example to see the commands executed in each Deployment revision.
   {{< /note >}}

   ```shell
   kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
   ```

2. Run `kubectl get deployments` to check if the Deployment was created. If the Deployment is still being created, the output is similar to the following:

   ```
   NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
   nginx-deployment   3         0         0            0           1s
   ```

   When you inspect the Deployments in your cluster, the following fields are displayed:

   * `NAME` lists the names of the Deployments in the cluster.
   * `DESIRED` displays the desired number of _replicas_ of the application, which you define when you create the Deployment. This is the _desired state_.
   * `CURRENT` displays how many replicas are currently running.
   * `UP-TO-DATE` displays the number of replicas that have been updated to achieve the desired state.
   * `AVAILABLE` displays how many replicas of the application are available to your users.
   * `AGE` displays the amount of time that the application has been running.

   Notice how the number of desired replicas is 3, according to the `.spec.replicas` field.

3. To see the Deployment rollout status, run `kubectl rollout status deployment.v1.apps/nginx-deployment`. The output is similar to this:

   ```
   Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
   deployment.apps/nginx-deployment successfully rolled out
   ```

4. Run `kubectl get deployments` again a few seconds later. The output is similar to this:

   ```
   NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
   nginx-deployment   3         3         3            3           18s
   ```

   Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.

5. To see the ReplicaSet (`rs`) created by the Deployment, run `kubectl get rs`. The output is similar to this:

   ```
   NAME                          DESIRED   CURRENT   READY   AGE
   nginx-deployment-75675f5897   3         3         3       18s
   ```

   Notice that the name of the ReplicaSet is always formatted as `[DEPLOYMENT-NAME]-[RANDOM-STRING]`. The random string is generated using the `pod-template-hash` as a seed.

6. To see the labels automatically generated for each Pod, run `kubectl get pods --show-labels`. The following output is returned:

   ```
   NAME                                READY   STATUS    RESTARTS   AGE   LABELS
   nginx-deployment-75675f5897-7ci7o   1/1     Running   0          18s   app=nginx,pod-template-hash=3123191453
   nginx-deployment-75675f5897-kzszj   1/1     Running   0          18s   app=nginx,pod-template-hash=3123191453
   nginx-deployment-75675f5897-qqcnn   1/1     Running   0          18s   app=nginx,pod-template-hash=3123191453
   ```

   The created ReplicaSet ensures that there are three `nginx` Pods.

{{< note >}}
You must specify an appropriate selector and Pod template labels in a Deployment (in this case,
`app: nginx`). Do not overlap labels or selectors with other controllers (including other Deployments and StatefulSets). Kubernetes doesn't stop you from overlapping, and if multiple controllers have overlapping selectors those controllers might conflict and behave unexpectedly.
{{< /note >}}

### Pod-template-hash label

{{< note >}}
Do not change this label.
{{< /note >}}

The `pod-template-hash` label is added by the Deployment controller to every ReplicaSet that a Deployment creates or adopts.

This label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the `PodTemplate` of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels,
and in any existing Pods that the ReplicaSet might have.
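
For illustration, the hash shows up in three places on a managed ReplicaSet; this is a sketch only, and the hash value shown is hypothetical:

```yaml
# Sketch of a ReplicaSet owned by a Deployment. The hash value
# "75675f5897" is illustrative; the controller computes the real one.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-deployment-75675f5897
  labels:
    app: nginx
    pod-template-hash: "75675f5897"    # added to the ReplicaSet labels
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      pod-template-hash: "75675f5897"  # added to the selector
  template:
    metadata:
      labels:
        app: nginx
        pod-template-hash: "75675f5897"  # added to the Pod template labels
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
```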

## Updating a Deployment

{{< note >}}
A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, `.spec.template`)
is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.
{{< /note >}}

Follow the steps given below to update your Deployment:

1. Let's update the nginx Pods to use the `nginx:1.9.1` image instead of the `nginx:1.7.9` image.

   ```shell
   kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record
   ```

   The output is similar to this:
   ```
   deployment.apps/nginx-deployment image updated
   ```

   Alternatively, you can `edit` the Deployment and change `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`:

   ```shell
   kubectl edit deployment.v1.apps/nginx-deployment
   ```

   The output is similar to this:
   ```
   deployment.apps/nginx-deployment edited
   ```

2. To see the rollout status, run:

   ```shell
   kubectl rollout status deployment.v1.apps/nginx-deployment
   ```

   The output is similar to this:
   ```
   Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
   ```
   or
   ```
   deployment.apps/nginx-deployment successfully rolled out
   ```

Get more details on your updated Deployment:

* After the rollout succeeds, you can view the Deployment by running `kubectl get deployments`.
  The output is similar to this:
  ```
  NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
  nginx-deployment   3         3         3            3           36s
  ```

* Run `kubectl get rs` to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it
  up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.

  ```shell
  kubectl get rs
  ```

  The output is similar to this:
  ```
  NAME                          DESIRED   CURRENT   READY   AGE
  nginx-deployment-1564180365   3         3         3       6s
  nginx-deployment-2035384211   0         0         0       36s
  ```

* Running `get pods` should now show only the new Pods:

  ```shell
  kubectl get pods
  ```

  The output is similar to this:
  ```
  NAME                                READY   STATUS    RESTARTS   AGE
  nginx-deployment-1564180365-khku8   1/1     Running   0          14s
  nginx-deployment-1564180365-nacti   1/1     Running   0          14s
  nginx-deployment-1564180365-z9gth   1/1     Running   0          14s
  ```

  Next time you want to update these Pods, you only need to update the Deployment's Pod template again.

  The Deployment ensures that only a certain number of Pods are down while they are being updated. By default,
  it ensures that at least 75% of the desired number of Pods are up (25% max unavailable).

  The Deployment also ensures that only a certain number of Pods are created above the desired number of Pods.
  By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge).

  For example, if you look at the above Deployment closely, you will see that it first created a new Pod,
  then deleted some old Pods, and created new ones. It does not kill old Pods until a sufficient number of
  new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed.
  It makes sure that at least 2 Pods are available and that at most 4 Pods in total are available.
* Get details of your Deployment:
  ```shell
  kubectl describe deployments
  ```
  The output is similar to this:
  ```
  Name:                   nginx-deployment
  Namespace:              default
  CreationTimestamp:      Thu, 30 Nov 2017 10:56:25 +0000
  Labels:                 app=nginx
  Annotations:            deployment.kubernetes.io/revision=2
  Selector:               app=nginx
  Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
  StrategyType:           RollingUpdate
  MinReadySeconds:        0
  RollingUpdateStrategy:  25% max unavailable, 25% max surge
  Pod Template:
    Labels:  app=nginx
    Containers:
     nginx:
      Image:        nginx:1.9.1
      Port:         80/TCP
      Environment:  <none>
      Mounts:       <none>
    Volumes:        <none>
  Conditions:
    Type           Status  Reason
    ----           ------  ------
    Available      True    MinimumReplicasAvailable
    Progressing    True    NewReplicaSetAvailable
  OldReplicaSets:  <none>
  NewReplicaSet:   nginx-deployment-1564180365 (3/3 replicas created)
  Events:
    Type    Reason             Age   From                   Message
    ----    ------             ----  ----                   -------
    Normal  ScalingReplicaSet  2m    deployment-controller  Scaled up replica set nginx-deployment-2035384211 to 3
    Normal  ScalingReplicaSet  24s   deployment-controller  Scaled up replica set nginx-deployment-1564180365 to 1
    Normal  ScalingReplicaSet  22s   deployment-controller  Scaled down replica set nginx-deployment-2035384211 to 2
    Normal  ScalingReplicaSet  22s   deployment-controller  Scaled up replica set nginx-deployment-1564180365 to 2
    Normal  ScalingReplicaSet  19s   deployment-controller  Scaled down replica set nginx-deployment-2035384211 to 1
    Normal  ScalingReplicaSet  19s   deployment-controller  Scaled up replica set nginx-deployment-1564180365 to 3
    Normal  ScalingReplicaSet  14s   deployment-controller  Scaled down replica set nginx-deployment-2035384211 to 0
  ```
  Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211)
  and scaled it up to 3 replicas directly. When you updated the Deployment, it created a new ReplicaSet
  (nginx-deployment-1564180365) and scaled it up to 1 and then scaled down the old ReplicaSet to 2, so that at
  least 2 Pods were available and at most 4 Pods were created at all times. It then continued scaling up and down
  the new and the old ReplicaSet, with the same rolling update strategy. Finally, you'll have 3 available replicas
  in the new ReplicaSet, and the old ReplicaSet is scaled down to 0.
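
The 25% max unavailable and 25% max surge behavior described above is governed by the Deployment's rolling update strategy fields. A sketch of what these defaults look like when written out explicitly in the Deployment spec:

```yaml
# Sketch: the default rolling update parameters, made explicit.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%  # at most 25% of desired Pods may be unavailable
      maxSurge: 25%        # at most 25% extra Pods above the desired count
```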

### Rollover (aka multiple updates in-flight)

Each time a new Deployment is observed by the Deployment controller, a ReplicaSet is created to bring up
the desired Pods. If the Deployment is updated, the existing ReplicaSet that controls Pods whose labels
match `.spec.selector` but whose template does not match `.spec.template` is scaled down. Eventually, the new
ReplicaSet is scaled to `.spec.replicas` and all old ReplicaSets are scaled to 0.

If you update a Deployment while an existing rollout is in progress, the Deployment creates a new ReplicaSet
as per the update and starts scaling that up, and rolls over the ReplicaSet that it was scaling up previously
-- it will add it to its list of old ReplicaSets and start scaling it down.

For example, suppose you create a Deployment to create 5 replicas of `nginx:1.7.9`,
but then update the Deployment to create 5 replicas of `nginx:1.9.1`, when only 3
replicas of `nginx:1.7.9` had been created. In that case, the Deployment immediately starts
killing the 3 `nginx:1.7.9` Pods that it had created, and starts creating
`nginx:1.9.1` Pods. It does not wait for the 5 replicas of `nginx:1.7.9` to be created
before changing course.

### Label selector updates

It is generally discouraged to make label selector updates and it is suggested to plan your selectors up front.
In any case, if you need to perform a label selector update, exercise great caution and make sure you have grasped
all of the implications.

{{< note >}}
In API version `apps/v1`, a Deployment's label selector is immutable after it gets created.
{{< /note >}}
* Selector additions require the Pod template labels in the Deployment spec to be updated with the new label too,
otherwise a validation error is returned. This change is a non-overlapping one, meaning that the new selector does
not select ReplicaSets and Pods created with the old selector, resulting in orphaning all old ReplicaSets and
creating a new ReplicaSet.
* Selector updates change the existing value in a selector key -- they result in the same behavior as additions.
* Selector removals remove an existing key from the Deployment selector -- they do not require any changes in the
Pod template labels. Existing ReplicaSets are not orphaned, and a new ReplicaSet is not created, but note that the
removed label still exists in any existing Pods and ReplicaSets.
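
To illustrate the selector-addition case, a new selector key must appear in both the selector and the Pod template labels; this is a hedged sketch, and the `tier: frontend` label is an invented example:

```yaml
# Sketch: adding the (hypothetical) selector key "tier: frontend".
# The same label must be added to the Pod template, or the API server
# rejects the update with a validation error.
spec:
  selector:
    matchLabels:
      app: nginx
      tier: frontend   # newly added selector key
  template:
    metadata:
      labels:
        app: nginx
        tier: frontend # must be added here too
```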

## Rolling Back a Deployment

Sometimes, you may want to rollback a Deployment; for example, when the Deployment is not stable, such as crash looping.
By default, all of the Deployment's rollout history is kept in the system so that you can rollback anytime you want
(you can change that by modifying the revision history limit).

{{< note >}}
A Deployment's revision is created when a Deployment's rollout is triggered. This means that the
new revision is created if and only if the Deployment's Pod template (`.spec.template`) is changed,
for example if you update the labels or container images of the template. Other updates, such as scaling the Deployment,
do not create a Deployment revision, so that you can facilitate simultaneous manual- or auto-scaling.
This means that when you roll back to an earlier revision, only the Deployment's Pod template part is
rolled back.
{{< /note >}}
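
The revision history limit mentioned above is controlled by the `.spec.revisionHistoryLimit` field; a minimal sketch:

```yaml
# Sketch: retain only the 5 most recent old ReplicaSets for rollback.
spec:
  revisionHistoryLimit: 5
```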

* Suppose that you made a typo while updating the Deployment, by putting the image name as `nginx:1.91` instead of `nginx:1.9.1`:

  ```shell
  kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true
  ```

  The output is similar to this:
  ```
  deployment.apps/nginx-deployment image updated
  ```

* The rollout gets stuck. You can verify it by checking the rollout status:

```shell
kubectl rollout status deployment.v1.apps/nginx-deployment
```

  The output is similar to this:
```
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...
```

* Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts,
[read more here](#deployment-status).

* You see that the number of old replicas (`nginx-deployment-1564180365` and `nginx-deployment-2035384211`) is 2, and new replicas (`nginx-deployment-3066724191`) is 1.
```shell
kubectl get rs
```
The output is similar to this:
```
NAME DESIRED CURRENT READY AGE
nginx-deployment-1564180365 3 3 3 25s
nginx-deployment-2035384211 0 0 0 36s
nginx-deployment-3066724191 1 1 0 6s
```
* Looking at the Pods created, you see that 1 Pod created by the new ReplicaSet is stuck in an image pull loop.
```shell
kubectl get pods
```
The output is similar to this:
```
NAME READY STATUS RESTARTS AGE
nginx-deployment-1564180365-70iae 1/1 Running 0 25s
nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s
nginx-deployment-1564180365-hysrc 1/1 Running 0 25s
nginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s
```
{{< note >}}
The Deployment controller stops the bad rollout automatically, and stops scaling up the new
ReplicaSet. This depends on the rollingUpdate parameters (`maxUnavailable` specifically) that you have specified.
Kubernetes by default sets the value to 25%.
{{< /note >}}
* Get the description of the Deployment:
```shell
kubectl describe deployment
```
The output is similar to this:
```
Name: nginx-deployment
Namespace: default
CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700
Labels: app=nginx
Selector: app=nginx
Replicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.91
Port: 80/TCP
Host Port: 0/TCP
Environment: < none >
Mounts: < none >
Volumes: < none >
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True ReplicaSetUpdated
OldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created)
NewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created)
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3
22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1
22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2
22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2
21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1
21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3
13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0
13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1
```
To fix this, you need to roll back to a previous revision of the Deployment that is stable.

### Checking Rollout History of a Deployment

Follow the steps given below to check the rollout history:
1. First, check the revisions of this Deployment:
```shell
kubectl rollout history deployment.v1.apps/nginx-deployment
```
The output is similar to this:
```
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true
2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true
```
`CHANGE-CAUSE` is copied from the Deployment annotation `kubernetes.io/change-cause` to its revisions upon creation. You can specify the `CHANGE-CAUSE` message by:
* Annotating the Deployment with `kubectl annotate deployment.v1.apps/nginx-deployment kubernetes.io/change-cause="image updated to 1.9.1"`
* Appending the `--record` flag to save the `kubectl` command that is making changes to the resource.
* Manually editing the manifest of the resource.
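
   For instance, setting the change cause directly in the manifest could look like the following minimal sketch, where the annotation value is free-form text of your choosing:

   ```yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: nginx-deployment
     annotations:
       # recorded as CHANGE-CAUSE for the revision created by the next rollout
       kubernetes.io/change-cause: "image updated to 1.9.1"
   ```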
2. To see the details of each revision, run:
```shell
kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2
```
The output is similar to this:
```
deployments "nginx-deployment" revision 2
Labels: app=nginx
pod-template-hash=1159050644
Annotations: kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
Containers:
nginx:
Image: nginx:1.9.1
Port: 80/TCP
QoS Tier:
cpu: BestEffort
memory: BestEffort
Environment Variables: < none >
No volumes.
```

### Rolling Back to a Previous Revision

Follow the steps given below to roll back the Deployment from the current version to the previous version, which is version 2.

1. Now you've decided to undo the current rollout and roll back to the previous revision:
```shell
kubectl rollout undo deployment.v1.apps/nginx-deployment
```
The output is similar to this:
```
deployment.apps/nginx-deployment
```
   Alternatively, you can roll back to a specific revision by specifying it with `--to-revision`:
```shell
kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2
```
The output is similar to this:
```
deployment.apps/nginx-deployment
```
For more details about rollout related commands, read [`kubectl rollout` ](/docs/reference/generated/kubectl/kubectl-commands#rollout ).
   The Deployment is now rolled back to a previous stable revision. As you can see, a `DeploymentRollback` event
   for rolling back to revision 2 is generated by the Deployment controller.
2. To check that the rollback was successful and the Deployment is running as expected, run:
```shell
kubectl get deployment nginx-deployment
```
The output is similar to this:
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 3 3 3 30m
```
3. Get the description of the Deployment:
```shell
kubectl describe deployment nginx-deployment
```
The output is similar to this:
```
Name: nginx-deployment
Namespace: default
CreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision=4
kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
Selector: app=nginx
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.9.1
Port: 80/TCP
Host Port: 0/TCP
Environment: < none >
Mounts: < none >
Volumes: < none >
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: < none >
NewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3
Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1
Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2
Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2
Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1
Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3
Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0
Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1
Normal DeploymentRollback 15s deployment-controller Rolled back deployment "nginx-deployment" to revision 2
Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0
```

## Scaling a Deployment

You can scale a Deployment by using the following command:

```shell
kubectl scale deployment.v1.apps/nginx-deployment --replicas=10
```

The output is similar to this:
```
deployment.apps/nginx-deployment scaled
```

Assuming [horizontal Pod autoscaling](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) is enabled
in your cluster, you can set up an autoscaler for your Deployment and choose the minimum and maximum number of
Pods you want to run based on the CPU utilization of your existing Pods.

```shell
kubectl autoscale deployment.v1.apps/nginx-deployment --min=10 --max=15 --cpu-percent=80
```

The output is similar to this:
```
deployment.apps/nginx-deployment scaled
```

### Proportional scaling

RollingUpdate Deployments support running multiple versions of an application at the same time. When you
or an autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (either in progress
or paused), the Deployment controller balances the additional replicas in the existing active
ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called *proportional scaling*.

For example, suppose you are running a Deployment with 10 replicas, [maxSurge](#max-surge)=3, and [maxUnavailable](#max-unavailable)=2.

* Ensure that the 10 replicas in your Deployment are running.
```shell
kubectl get deploy
```
The output is similar to this:
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 10 10 10 10 50s
```
* You update to a new image which happens to be unresolvable from inside the cluster.
```shell
kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:sometag
```
The output is similar to this:
```
deployment.apps/nginx-deployment image updated
```
* The image update starts a new rollout with ReplicaSet `nginx-deployment-1989198191`, but it's blocked due to the
`maxUnavailable` requirement that you mentioned above. Check out the rollout status:
```shell
kubectl get rs
```
The output is similar to this:
```
NAME DESIRED CURRENT READY AGE
nginx-deployment-1989198191 5 5 0 9s
nginx-deployment-618515232 8 8 8 1m
```
* Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas
to 15. The Deployment controller needs to decide where to add these new 5 replicas. If you weren't using
proportional scaling, all 5 of them would be added in the new ReplicaSet. With proportional scaling, you
spread the additional replicas across all ReplicaSets. Bigger proportions go to the ReplicaSets with the
most replicas and lower proportions go to ReplicaSets with fewer replicas. Any leftovers are added to the
ReplicaSet with the most replicas. ReplicaSets with zero replicas are not scaled up.

In our example above, 3 replicas are added to the old ReplicaSet and 2 replicas are added to the
new ReplicaSet. The rollout process should eventually move all replicas to the new ReplicaSet, assuming
the new replicas become healthy. To confirm this, run:

```shell
kubectl get deploy
```

The output is similar to this:
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 15 18 7 8 7m
```
The rollout status confirms how the replicas were added to each ReplicaSet.
```shell
kubectl get rs
```

The output is similar to this:
```
NAME DESIRED CURRENT READY AGE
nginx-deployment-1989198191 7 7 0 7m
nginx-deployment-618515232 11 11 11 7m
```

## Pausing and Resuming a Deployment

You can pause a Deployment before triggering one or more updates and then resume it. This allows you to
apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts.

* For example, with a Deployment that was just created:
Get the Deployment details:
```shell
kubectl get deploy
```
The output is similar to this:
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 3 3 3 3 1m
```
Get the rollout status:
```shell
kubectl get rs
```
The output is similar to this:
```
NAME DESIRED CURRENT READY AGE
nginx-2142116321 3 3 3 1m
```
* Pause by running the following command:
```shell
kubectl rollout pause deployment.v1.apps/nginx-deployment
```
The output is similar to this:
```
deployment.apps/nginx-deployment paused
```
* Then update the image of the Deployment:
```shell
kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1
```
The output is similar to this:
```
deployment.apps/nginx-deployment image updated
```
* Notice that no new rollout started:
```shell
kubectl rollout history deployment.v1.apps/nginx-deployment
```
The output is similar to this:
```
deployments "nginx"
REVISION CHANGE-CAUSE
1 < none >
```
* Get the rollout status to verify that the existing ReplicaSet has not changed:
```shell
kubectl get rs
```
The output is similar to this:
```
NAME DESIRED CURRENT READY AGE
nginx-2142116321 3 3 3 2m
```
* You can make as many updates as you wish, for example, update the resources that will be used:
```shell
kubectl set resources deployment.v1.apps/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi
```
The output is similar to this:
```
deployment.apps/nginx-deployment resource requirements updated
```
The initial state of the Deployment prior to pausing it will continue to function, but new updates to
the Deployment will not have any effect as long as the Deployment is paused.
* Eventually, resume the Deployment and observe a new ReplicaSet coming up with all the new updates:
```shell
kubectl rollout resume deployment.v1.apps/nginx-deployment
```
The output is similar to this:
```
deployment.apps/nginx-deployment resumed
```
* Watch the status of the rollout until it's done.
```shell
kubectl get rs -w
```
The output is similar to this:
```
NAME DESIRED CURRENT READY AGE
nginx-2142116321 2 2 2 2m
nginx-3926361531 2 2 0 6s
nginx-3926361531 2 2 1 18s
nginx-2142116321 1 2 2 2m
nginx-2142116321 1 2 2 2m
nginx-3926361531 3 2 1 18s
nginx-3926361531 3 2 1 18s
nginx-2142116321 1 1 1 2m
nginx-3926361531 3 3 1 18s
nginx-3926361531 3 3 2 19s
nginx-2142116321 0 1 1 2m
nginx-2142116321 0 1 1 2m
nginx-2142116321 0 0 0 2m
nginx-3926361531 3 3 3 20s
```
* Get the status of the latest rollout:
```shell
kubectl get rs
```
The output is similar to this:
```
NAME DESIRED CURRENT READY AGE
nginx-2142116321 0 0 0 2m
nginx-3926361531 3 3 3 28s
```

{{< note >}}
You cannot roll back a paused Deployment until you resume it.
{{< /note >}}

## Deployment status

A Deployment enters various states during its lifecycle. It can be [progressing](#progressing-deployment) while
rolling out a new ReplicaSet, it can be [complete](#complete-deployment), or it can [fail to progress](#failed-deployment).

### Progressing Deployment

Kubernetes marks a Deployment as _progressing_ when one of the following tasks is performed:

* The Deployment creates a new ReplicaSet.
* The Deployment is scaling up its newest ReplicaSet.
* The Deployment is scaling down its older ReplicaSet(s).
* New Pods become ready or available (ready for at least [MinReadySeconds](#min-ready-seconds)).

You can monitor the progress for a Deployment by using `kubectl rollout status`.
### Complete Deployment

Kubernetes marks a Deployment as _complete_ when it has the following characteristics:

* All of the replicas associated with the Deployment have been updated to the latest version you've specified, meaning any
updates you've requested have been completed.
* All of the replicas associated with the Deployment are available.
* No old replicas for the Deployment are running.

You can check if a Deployment has completed by using `kubectl rollout status`. If the rollout completed
successfully, `kubectl rollout status` returns a zero exit code.

```shell
kubectl rollout status deployment.v1.apps/nginx-deployment
```

The output is similar to this:
```
Waiting for rollout to finish: 2 of 3 updated replicas are available...
deployment.apps/nginx-deployment successfully rolled out
$ echo $?
0
```
### Failed Deployment

Your Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing. This can occur
due to some of the following factors:

* Insufficient quota
* Readiness probe failures
* Image pull errors
* Insufficient permissions
* Limit ranges
* Application runtime misconfiguration

One way you can detect this condition is to specify a deadline parameter in your Deployment spec:
([`.spec.progressDeadlineSeconds`](#progress-deadline-seconds)). `.spec.progressDeadlineSeconds` denotes the
number of seconds the Deployment controller waits before indicating (in the Deployment status) that the
Deployment progress has stalled.

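The deadline can also be set declaratively in the manifest. A minimal sketch, with illustrative names and values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  progressDeadlineSeconds: 600  # report lack of progress after 10 minutes
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
```
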
The following `kubectl` command sets the spec with `progressDeadlineSeconds` to make the controller report
lack of progress for a Deployment after 10 minutes:

```shell
kubectl patch deployment.v1.apps/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'
```

The output is similar to this:
```
deployment.apps/nginx-deployment patched
```

Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following
attributes to the Deployment's `.status.conditions`:

* Type=Progressing
* Status=False
* Reason=ProgressDeadlineExceeded

See the [Kubernetes API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) for more information on status conditions.

{{< note >}}
Kubernetes takes no action on a stalled Deployment other than to report a status condition with
`Reason=ProgressDeadlineExceeded`. Higher level orchestrators can take advantage of this and act accordingly, for
example, roll back the Deployment to its previous version.
{{< /note >}}

{{< note >}}
If you pause a Deployment, Kubernetes does not check progress against your specified deadline. You can
safely pause a Deployment in the middle of a rollout and resume without triggering the condition for exceeding the
deadline.
{{< /note >}}

You may experience transient errors with your Deployments, either due to a low timeout that you have set or
due to any other kind of error that can be treated as transient. For example, let's suppose you have
insufficient quota. If you describe the Deployment you will notice the following section:

```shell
kubectl describe deployment nginx-deployment
```

The output is similar to this:
```
< ... >
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True ReplicaSetUpdated
ReplicaFailure True FailedCreate
< ... >
```

If you run `kubectl get deployment nginx-deployment -o yaml`, the Deployment status is similar to this:

```
status:
availableReplicas: 2
conditions:
- lastTransitionTime: 2016-10-04T12:25:39Z
lastUpdateTime: 2016-10-04T12:25:39Z
message: Replica set "nginx-deployment-4262182780" is progressing.
reason: ReplicaSetUpdated
status: "True"
type: Progressing
- lastTransitionTime: 2016-10-04T12:25:42Z
lastUpdateTime: 2016-10-04T12:25:42Z
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: 2016-10-04T12:25:39Z
lastUpdateTime: 2016-10-04T12:25:39Z
message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota:
object-counts, requested: pods=1, used: pods=3, limited: pods=2'
reason: FailedCreate
status: "True"
type: ReplicaFailure
observedGeneration: 3
replicas: 2
unavailableReplicas: 2
```

Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the
reason for the Progressing condition:

```
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing False ProgressDeadlineExceeded
ReplicaFailure True FailedCreate
```

You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other
controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota
conditions and the Deployment controller then completes the Deployment rollout, you'll see the
Deployment's status update with a successful condition (`Status=True` and `Reason=NewReplicaSetAvailable`).

```
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
```
`Type=Available` with `Status=True` means that your Deployment has minimum availability. Minimum availability is dictated
by the parameters specified in the deployment strategy. `Type=Progressing` with `Status=True` means that your Deployment
is either in the middle of a rollout and it is progressing or that it has successfully completed its progress and the minimum
required new replicas are available (see the Reason of the condition for the particulars - in our case
`Reason=NewReplicaSetAvailable` means that the Deployment is complete).

You can check if a Deployment has failed to progress by using `kubectl rollout status`. `kubectl rollout status`
returns a non-zero exit code if the Deployment has exceeded the progression deadline.

```shell
kubectl rollout status deployment.v1.apps/nginx-deployment
```

The output is similar to this:
```
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
error: deployment "nginx" exceeded its progress deadline
$ echo $?
1
```
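
Because the exit code reflects the outcome, `kubectl rollout status` composes naturally into scripts. A minimal sketch, assuming a Deployment named `nginx-deployment` and that an automatic rollback is the desired reaction:

```shell
# Roll back automatically if the rollout exceeds its progress deadline
# (kubectl rollout status returns a non-zero exit code in that case)
if ! kubectl rollout status deployment.v1.apps/nginx-deployment; then
    kubectl rollout undo deployment.v1.apps/nginx-deployment
fi
```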
### Operating on a failed deployment

All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up/down, roll back
to a previous revision, or even pause it if you need to apply multiple tweaks in the Deployment Pod template.

## Clean up Policy

You can set the `.spec.revisionHistoryLimit` field in a Deployment to specify how many old ReplicaSets for
this Deployment you want to retain. The rest will be garbage-collected in the background. By default,
it is 10.

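As a sketch, retaining only the two most recent old ReplicaSets could look like this in the manifest (names and values are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  revisionHistoryLimit: 2  # keep only the 2 most recent old ReplicaSets
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
```
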
{{< note >}}
Explicitly setting this field to 0 will result in cleaning up all the history of your Deployment,
so that Deployment will not be able to roll back.
{{< /note >}}

## Canary Deployment

If you want to roll out releases to a subset of users or servers using the Deployment, you
can create multiple Deployments, one for each release, following the canary pattern described in
[managing resources](/docs/concepts/cluster-administration/manage-deployment/#canary-deployments).
## Writing a Deployment Spec

As with all other Kubernetes configs, a Deployment needs `apiVersion`, `kind`, and `metadata` fields.
For general information about working with config files, see [deploying applications](/docs/tutorials/stateless-application/run-stateless-application-deployment/),
configuring containers, and [using kubectl to manage resources](/docs/concepts/overview/working-with-objects/object-management/) documents.

A Deployment also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
### Pod Template

The `.spec.template` and `.spec.selector` are the only required fields of the `.spec`.

The `.spec.template` is a [Pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [Pod](/docs/concepts/workloads/pods/pod/), except it is nested and does not have an
`apiVersion` or `kind`.

In addition to required fields for a Pod, a Pod template in a Deployment must specify appropriate
labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See [selector](#selector).

Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Always` is
allowed, which is the default if not specified.

### Replicas

`.spec.replicas` is an optional field that specifies the number of desired Pods. It defaults to 1.

### Selector

`.spec.selector` is a required field that specifies a [label selector](/docs/concepts/overview/working-with-objects/labels/)
for the Pods targeted by this Deployment.
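
A selector that matches the Pod template labels looks like this minimal sketch:

```yaml
spec:
  selector:
    matchLabels:
      app: nginx  # must match the labels in .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: nginx
```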
update the doc to this alpha feature
* Update resource-quotas.md
* Updated Deployments concepts doc (#5491)
* Updated Deployments concepts doc
* Addressed comments
* Addressed more comments
* Modify allocatable storage to ephemeral-storage (#5490)
Update the doc to use ephemeral-storage instead of storage
* Revamped concepts doc for ReplicaSet (#5463)
* Revamped concepts doc for ReplicaSet
* Minor changes to call out specific versions for selector defaulting and
immutability
* Addressed doc review comments
* Remove petset documentations (#5395)
* Update docs to use batch/v1beta1 cronjobs (#5475)
* add federation job doc (#5485)
* add federation job doc
* Update job.md
Edits for clarity and consistency
* Update job.md
Fixed a typo
* update DaemonSet concept for 1.8 release (#5397)
* update DaemonSet concept for 1.8 release
* Update daemonset.md
Fix typo. than -> then
* Update bootstrap tokens doc for 1.8. (#5479)
* Update bootstrap tokens doc for 1.8.
This has some changes I missed when I was updating the main kubeadm documention:
- Bootstrap tokens are now beta, not alpha (https://github.com/kubernetes/features/issues/130)
- The apiserver flag to enable the authenticator changedin 1.8 (https://github.com/kubernetes/kubernetes/pull/51198)
- Added `auth-extra-groups` documentaion (https://github.com/kubernetes/kubernetes/pull/50933)
- Updated the _Token Management with `kubeadm`_ section to link to the main kubeadm docs, since it was just duplicated information.
* Update bootstrap-tokens.md
* Updated the Cassandra tutorial to use apps/v1beta2 (#5548)
* add docs for AllowPrivilegeEscalation (#5448)
Signed-off-by: Jess Frazelle <acidburn@microsoft.com>
* Add local ephemeral storage alpha feature in managing compute resource (#5522)
* Add local ephemeral storage alpha feature in managing compute resource
Since 1.8, we add the local ephemeral storage alpha feature as one
resource type to manage. Add this feature into the doc.
* Update manage-compute-resources-container.md
* Update manage-compute-resources-container.md
* Update manage-compute-resources-container.md
* Update manage-compute-resources-container.md
* Update manage-compute-resources-container.md
* Update manage-compute-resources-container.md
* Added documentation for Metrics Server (#5560)
* authorization: improve authorization debugging docs (#5549)
* Document mount propagation (#5544)
* Update /docs/setup/independent/create-cluster-kubeadm.md for 1.8. (#5524)
This introduction needed a couple of small tweaks to cover the `--discovery-token-ca-cert-hash` flag added in https://github.com/kubernetes/kubernetes/pull/49520 and some version bumps.
* Add task doc for alpha dynamic kubelet configuration (#5523)
* Fix input/output of selfsubjectaccess review (#5593)
* Add docs for implementing resize (#5528)
* Add docs for implementing resize
* Update admission-controllers.md
* Added link to PVC section
* minor typo fixes
* Update NetworkPolicy concept guide with egress and CIDR changes (#5529)
* update zookeeper tutorial for 1.8 release
* add doc for hostpath type (#5503)
* Federated Hpa feature doc (#5487)
* Federated Hpa feature doc
* Federated Hpa feature doc review fixes
* Update hpa.md
* Update hpa.md
* update cloud controller manager docs for v1.8
* Update cronjob with defaults information (#5556)
* Kubernetes 1.8 reference docs (#5632)
* Kubernetes 1.8 reference docs
* Kubectl reference docs for 1.8
* Update side bar with 1.8 kubectl and api ref docs links
* remove petset.md
* update on state of HostAlias in 1.8 with hostNetwork Pod support (#5644)
* Fix cron job deletion section (#5655)
* update imported docs (#5656)
* Add documentation for certificate rotation. (#5639)
* Link to using kubeadm page
* fix the command output
fix the command output
* fix typo in api/resources reference: "Worloads"
* Add documentation for certificate rotation.
* Create TOC entry for cloud controller manager. (#5662)
* Updates for new versions of API types
* Followup 5655: fix link to garbage collection (#5666)
* Temporarily redirect resources-reference to api-reference. (#5668)
* Update config for 1.8 release. (#5661)
* Update config for 1.8 release.
* Address reviewer comments.
* Switch references in HPA docs from alpha to beta (#5671)
The HPA docs still referenced the alpha version. This switches them to
talk about v2beta1, which is the appropriate version for Kubernetes 1.8
* Deprecate openstack heat (#5670)
* Fix typo in pod preset conflict example
Move container port definition to the correct line.
* Highlight openstack-heat provider deprecation
The openstack-heat provider for kube-up is being deprecated and will be
removed in a future release.
* Temporarily fix broken links by redirecting. (#5672)
* Fix broken links. (#5675)
* Fix render of code block (#5674)
* Fix broken links. (#5677)
* Add a small note about auto-bootstrapped CSR ClusterRoles (#5660)
* Update kubeadm install doc for v1.8 (#5676)
* add draft workloads api content for 1.8 (#5650)
* add draft workloads api content for 1.8
* edits per review, add tables, for 1.8 workloads api doc
* fix typo
* Minor fixes to kubeadm 1.8 upgrade guide. (#5678)
- The kubelet upgrade instructions should be done on every host, not
just worker nodes.
- We should just upgrade all packages, instead of calling out kubelet
specifically. This will also upgrade kubectl, kubeadm, and
kubernetes-cni, if installed.
- Draining nodes should also ignore daemonsets, and master errors can be
ignored.
- Make sure that the new kubeadm download is chmoded correctly.
- Add a step to run `kubeadm version` to verify after downloading.
- Manually approve new kubelet CSRs if rotation is enabled (known issue).
* Release 1.8 (#5680)
* Fix versions for 1.8 API ref docs
* Updates for 1.8 kubectl reference docs
* Kubeadm /docs/admin/kubeadm.md cleanup, editing. (#5681)
* Update docs/admin/kubeadm.md (mostly 1.8 related).
This is Fabrizio's work, which I'm committing along with my edits (in a commit on top of this).
* A few of my own edits to clarify and clean up some Markdown.
2017-09-29 04:46:51 +00:00
`.spec.selector` must match `.spec.template.metadata.labels`, or it will be rejected by the API.
In API version `apps/v1`, `.spec.selector` and `.metadata.labels` do not default to `.spec.template.metadata.labels` if not set. So they must be set explicitly. Also note that `.spec.selector` is immutable after creation of the Deployment in `apps/v1`.

A Deployment may terminate Pods whose labels match the selector if their template is different
from `.spec.template` or if the total number of such Pods exceeds `.spec.replicas`. It brings up new
Pods with `.spec.template` if the number of Pods is less than the desired number.

{{< note >}}
You should not create other Pods whose labels match this selector, either directly, by creating
another Deployment, or by creating another controller such as a ReplicaSet or a ReplicationController. If you
do so, the first Deployment thinks that it created these other Pods. Kubernetes does not stop you from doing this.
{{< /note >}}

If you have multiple controllers that have overlapping selectors, the controllers will fight with each
other and won't behave correctly.
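As an illustrative sketch (the `nginx` names, labels, and image tag are placeholders), here is a Deployment whose selector matches its Pod template labels, as `apps/v1` requires:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx          # must match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: nginx        # matched by .spec.selector above
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
```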
### Strategy

`.spec.strategy` specifies the strategy used to replace old Pods by new ones.
`.spec.strategy.type` can be "Recreate" or "RollingUpdate". "RollingUpdate" is
the default value.

#### Recreate Deployment
All existing Pods are killed before new ones are created when `.spec.strategy.type==Recreate`.
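For example, a minimal strategy stanza (shown as a fragment of a Deployment spec) that opts into this behavior:

```yaml
spec:
  strategy:
    type: Recreate   # all old Pods are terminated before any new Pods are created
```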
#### Rolling Update Deployment

The Deployment updates Pods in a [rolling update](/docs/tasks/run-application/rolling-update-replication-controller/)
fashion when `.spec.strategy.type==RollingUpdate`. You can specify `maxUnavailable` and `maxSurge` to control
the rolling update process.
##### Max Unavailable

`.spec.strategy.rollingUpdate.maxUnavailable` is an optional field that specifies the maximum number
of Pods that can be unavailable during the update process. The value can be an absolute number (for example, 5)
or a percentage of desired Pods (for example, 10%). The absolute number is calculated from the percentage by
rounding down. The value cannot be 0 if `.spec.strategy.rollingUpdate.maxSurge` is 0. The default value is 25%.

For example, when this value is set to 30%, the old ReplicaSet can be scaled down to 70% of desired
Pods immediately when the rolling update starts. Once new Pods are ready, the old ReplicaSet can be scaled
down further, followed by scaling up the new ReplicaSet, ensuring that the total number of Pods available
at all times during the update is at least 70% of the desired Pods.
##### Max Surge

`.spec.strategy.rollingUpdate.maxSurge` is an optional field that specifies the maximum number of Pods
that can be created over the desired number of Pods. The value can be an absolute number (for example, 5) or a
percentage of desired Pods (for example, 10%). The value cannot be 0 if `MaxUnavailable` is 0. The absolute number
is calculated from the percentage by rounding up. The default value is 25%.

For example, when this value is set to 30%, the new ReplicaSet can be scaled up immediately when the
rolling update starts, such that the total number of old and new Pods does not exceed 130% of desired
Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the
total number of Pods running at any time during the update is at most 130% of desired Pods.
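Putting the two fields together, here is a hypothetical spec fragment for 10 desired replicas: with both fields set to 30%, the rollout may run up to 13 Pods in total while keeping at least 7 available.

```yaml
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 30%         # round up:   10 * 0.30 -> 3 extra Pods, so at most 13 total
      maxUnavailable: 30%   # round down: 10 * 0.30 -> 3 unavailable, so at least 7 available
```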
### Progress Deadline Seconds

`.spec.progressDeadlineSeconds` is an optional field that specifies the number of seconds you want
to wait for your Deployment to progress before the system reports back that the Deployment has
[failed progressing](#failed-deployment) - surfaced as a condition with `Type=Progressing`, `Status=False`,
and `Reason=ProgressDeadlineExceeded` in the status of the resource. The Deployment controller will keep
retrying the Deployment. In the future, once automatic rollback is implemented, the Deployment
controller will roll back a Deployment as soon as it observes such a condition.

If specified, this field needs to be greater than `.spec.minReadySeconds`.
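As a sketch, a spec fragment that reports lack of progress after 10 minutes (600 is an illustrative value, not a recommendation):

```yaml
spec:
  progressDeadlineSeconds: 600   # surface Progressing=False, Reason=ProgressDeadlineExceeded after 600s
```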
### Min Ready Seconds

`.spec.minReadySeconds` is an optional field that specifies the minimum number of seconds for which a newly
created Pod should be ready without any of its containers crashing, for it to be considered available.
This defaults to 0 (the Pod will be considered available as soon as it is ready). To learn more about when
a Pod is considered ready, see [Container Probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).
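For example (10 is an illustrative value), a fragment that counts a new Pod as available only after it has stayed ready for ten seconds:

```yaml
spec:
  minReadySeconds: 10   # a new Pod must be ready for 10s, with no container crashes, to count as available
```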
### Rollback To
* Update admission control docs for webhooks
* Improve DNS documentation (#6479)
* update ds for 1.9
* Update service.md
* Update service.md
* Revert "begin updating webhook documentation" (#6575)
* Update version numbers to include 1.9 (#6518)
* Update site versions for 1.9
* Removed 1.4 docs
* Update _config.yml
* Update _config.yml
* updates for raw block devices
* rbac: docs for aggregated cluster roles (#6474)
* Added IPv6 information for Kubelet arguments (#6498)
* Added IPv6 info to kube-proxy arguments
* Added IPv6 information for argument for kubelet
* Update PVC resizing documentation (#6487)
* Updates for Windows Server version 1709 with K8s v1.8 (#6180)
* Updated for WSv1709 and K8s v1.8
* Updated picture and CNI config
* Fixed formatting on CNI Config
* Updated docs to reference Microsoft/SDN GitHub docs
* fix typo
* Workaround for Jekyllr frontmatter
* Added section on features and limitations, with example yaml files.
* Update index.md
* Added kubeadm section, few other small fixes
* Few minor grammar fixes
* Update access-cluster.md with a comment that for IPv6
the user should use [::1] for the localhost
* Addressed a number of issues brought up against the base PR
* Fixed windows-host-setup link
* Rewrite PodSecurityPolicy guide
* Update index.md
Signed-off-by: Alin Balutoiu <abalutoiu@cloudbasesolutions.com>
Signed-off-by: Alin Gabriel Serdean <aserdean@ovn.org>
* Spelling correction and sentence capitalization.
- Corrected the spelling error for storing, was put in as 'stoing'.
- Capitalized list items.
- Added '.' at end of sentences in the list items.
* Update index.md
* Update index.md
* Addressed comments and rebased
* Fixed formatting
* Fixed formatting
* Updated header link
* Updated hyperlinks
* Updated warning
* formatting
* formatting
* formatting
* Revert "Update access-cluster.md with a comment that for IPv6"
This reverts commit 31e4dbdc25a60e4584ce01a6b1915e13ac63bc67.
* Revert "fix typo"
This reverts commit c05678752d3b481e2907bc53d3971bb49eab6609.
* Revert "Workaround for Jekyllr frontmatter"
This reverts commit b84ac59624b625e6534ccd97bb4ba65e51b441e4.
* Fixed grammatical issues and reverted non-related commits
* Revert "Rewrite PodSecurityPolicy guide"
This reverts commit 5d39cfeae41b3237a5e1247bc1c1f98e0727c5fd.
* Revert "Spelling correction and sentence capitalization."
This reverts commit 47eed4346e4491c9a63c2e0cb76bdd37bff5677c.
* Fixed auto-numbering
* Minor formatting updates
* CoreDNS feature documentation (#6463)
* Initial placeholder PR for CoreDNS feature documentation
* Remove from admin, add content
* Fix missing endcapture
* Add to tasks.yml
* Review feedback
* Postpone Deletion of a Persistent Volume Claim in case It Is Used by a Pod (#6415)
* Postpone Deletion of a Persistent Volume Claim in case It Is Used by a Pod
A new feature PVC Protection was added into K8s 1.9 that's why this documentation change is needed.
* Added tag at the top of each new area.
* Fix typo
* Fix: switched on in (all kubelets) -> (all K8s components).
* Added link to admission controller
* Moved PVC Protection configuration into Before you begin section.
* Added steps how to verify PVC Protection feature.
* Fixes for admission controller plugin description and for PVC Protection description in PVC lifecycle.
* Testing official rendering of enumerations (1., 2., 3., etc.)
* Re-write to address comments from review.
* Fixed definition when a PVC is in active use by a pod.
* Change auditing docs page for 1.9 release (#6427)
* Change auditing docs page for 1.9 release
Signed-off-by: Mik Vyatskov <vmik@google.com>
* Address review comments
Signed-off-by: Mik Vyatskov <vmik@google.com>
* Address review comments
Signed-off-by: Mik Vyatskov <vmik@google.com>
* Address review comments
Signed-off-by: Mik Vyatskov <vmik@google.com>
* Fix broken link
Signed-off-by: Mik Vyatskov <vmik@google.com>
* short circuit deny docs (#6536)
* line wrap
* short circuit deny
* address comments
* Add kubeadm 1.9 upgrade docs (#6485)
* kubeadm: Improve kubeadm documentation for v1.9 (#6645)
* Update admission control docs for webhooks (re-send #6368) (#6650)
* Update admission control docs for webhooks
* update in response to comments
* Revamp rkt and add CRI-O as alternative runtime (#6371)
Signed-off-by: Lorenzo Fontana <lo@linux.com>
* Documented NLB for Kubernetes 1.9 (#6260)
* Added IPV6 information to setup cluster using kubeadm (#6465)
* Added IPV6 information to setup cluster using kubeadm
* Updated kubeadm.md & create-cluster-kubeadm.md with IPv6 related information
* Added IPv6 options for kubeadm --init & automated address binding for kube-proxy based on version of IP configured for API server)
* Changes to kubeadm.md as per comments
* Modified kubeadm.md and create-cluster-kubeadm.md
* Implemented changes requested by zacharysarah
* Removed autogenerated kubeadm.md changes
* StatefulSet 1.9 updates. (#6550)
* updates sts concept and tutorials to use 1.9 apps/v1
* Update statefulset.md
* clarify pod name label
* Garbage collection updates for 1.9 (#6555)
* 1.9 gc policy update
* carify deletion
* Couple nits for dnsConfig doc (#6652)
* Add doc for AllowedFlexVolume (#6563)
* Update OpenStack Cloud Provider API support for v1.9 (#6638)
* Flex volume is GA. Remove alpha notation. (#6666)
* Update generated ref docs for Kubernetes and Federation components. (#6658)
* Update generated ref docs for Kubernetes and Federation components.
* Rename kubectl-options to kubectl.
* Add title to kubectl.
* Fix double synopsis.
* Update Federation API ref docs for 1.9. (#6636)
* Update federation API ref docs.
* Move and redirect.
* Move generated Federation docs to the generated directory.
* Fix titles.
* Type
* Fix titles
* Update auto-generated Kubernetes APi ref docs. (#6646)
* Update kubectl commands for 1.9 (#6635)
* add ExtendedResourceToleration admission controller (#6618)
* Update API reference paths for v1.9 (#6681)
2017-12-15 23:36:13 +00:00
Field `.spec.rollbackTo` has been deprecated in API versions `extensions/v1beta1` and `apps/v1beta1`, and is no longer supported in API versions starting with `apps/v1beta2`. Instead, use `kubectl rollout undo` as introduced in [Rolling Back to a Previous Revision](#rolling-back-to-a-previous-revision).

### Revision History Limit

A Deployment's revision history is stored in the ReplicaSets it controls.

`.spec.revisionHistoryLimit` is an optional field that specifies the number of old ReplicaSets to retain
to allow rollback. These old ReplicaSets consume resources in `etcd` and crowd the output of `kubectl get rs`. The configuration of each Deployment revision is stored in its ReplicaSets; therefore, once an old ReplicaSet is deleted, you lose the ability to roll back to that revision of the Deployment. By default, 10 old ReplicaSets are kept; however, the ideal value depends on the frequency and stability of new Deployments.

More specifically, setting this field to zero means that all old ReplicaSets with 0 replicas will be cleaned up.
In this case, a new Deployment rollout cannot be undone, since its revision history is cleaned up.

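As an illustrative sketch (the Deployment name, labels, image, and limit value are assumptions, not from this page), `.spec.revisionHistoryLimit` is set alongside the rest of the Deployment spec:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment    # illustrative name
spec:
  replicas: 3
  revisionHistoryLimit: 3   # keep only the 3 most recent old ReplicaSets (default: 10)
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2  # illustrative image
```

With this value, the controller garbage-collects all but the three most recent zero-replica ReplicaSets, so only those three revisions remain available for rollback.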
### Paused

`.spec.paused` is an optional boolean field for pausing and resuming a Deployment. The only difference between
a paused Deployment and one that is not paused is that any changes to the PodTemplateSpec of a paused
Deployment do not trigger new rollouts as long as it is paused. A Deployment is not paused by default when
it is created.

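As a minimal sketch (the name, labels, and image are assumptions), a Deployment can be created in a paused state by setting the field directly:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment  # illustrative name
spec:
  paused: true            # PodTemplateSpec changes will not trigger a rollout while set
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2  # illustrative image
```

In practice this field is usually toggled with `kubectl rollout pause deployment/nginx-deployment` and `kubectl rollout resume deployment/nginx-deployment` rather than edited by hand.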
## Alternative to Deployments

### kubectl rolling-update

[`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update) updates Pods and ReplicationControllers
in a similar fashion. But Deployments are recommended, since they are declarative, server-side, and have
additional features, such as rolling back to any previous revision even after the rolling update is done.

{{% /capture %}}