Changes for move to Netlify (#4464)

* disable jekyll-redirect-from gem

* add _redirects file

* disable 404 redirect script

* add 301 redirect to test

* retain _redirects file

* Convert redirect_from's to _redirects file. (#4409)

* Remove redirect_from's. (#4424)

* Add 301's to _redirects. (#4427)

* add whitespace before 301

* move redirects in /js/redirects.js to _redirects

* add disabled option for cn redirect

* convert include to array in _config.yml

* enable redirects.js script for legacy support
pull/4524/head^2
Andrew Chen 2017-07-28 08:23:11 -07:00 committed by GitHub
parent 130b72927c
commit 2e257d9707
158 changed files with 624 additions and 1011 deletions

View File

@ -32,11 +32,14 @@ defaults:
permalink: pretty
gems:
- jekyll-redirect-from
- jekyll-feed
- jekyll-sitemap
- jekyll-seo-tag
- jekyll-include-cache
# disabled gems
# - jekyll-redirect-from
include: [_redirects]
# SEO
logo: /images/favicon.png
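
Jekyll excludes files and directories whose names begin with an underscore from the built site unless they are listed under `include`, so this entry is what publishes the new `_redirects` file for Netlify to read. A minimal sketch of the resulting section of `_config.yml` after this change (assuming nothing else in the file is touched):

```yaml
# _config.yml (illustrative sketch, not part of this diff)
gems:
  - jekyll-feed
  - jekyll-sitemap
  - jekyll-seo-tag
  - jekyll-include-cache
# disabled gems
# - jekyll-redirect-from
include: [_redirects]   # underscore-prefixed files are skipped by default unless included
```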

_redirects (new file, 242 lines)
View File

@ -0,0 +1,242 @@
#
# set server-side redirects in this file
# see https://www.netlify.com/docs/redirects/
#
/docs/admin/addons /docs/concepts/cluster-administration/addons 301
/docs/admin/apparmor/ /docs/tutorials/clusters/apparmor 301
/docs/admin/audit /docs/tasks/debug-application-cluster/audit 301
/docs/admin/cluster-components /docs/concepts/overview/components 301
/docs/admin/cluster-management /docs/tasks/administer-cluster/cluster-management 301
/docs/admin/cluster-troubleshooting /docs/tasks/debug-application-cluster/debug-cluster 301
/docs/admin/daemons /docs/concepts/workloads/controllers/daemonset 301
/docs/admin/disruptions /docs/concepts/workloads/pods/disruptions 301
/docs/admin/dns /docs/concepts/services-networking/dns-pod-service 301
/docs/admin/etcd /docs/tasks/administer-cluster/configure-upgrade-etcd 301
/docs/admin/etcd_upgrade /docs/tasks/administer-cluster/configure-upgrade-etcd 301
/docs/admin/federation/kubefed /docs/tasks/federation/set-up-cluster-federation-kubefed 301
/docs/admin/garbage-collection /docs/concepts/cluster-administration/kubelet-garbage-collection 301
/docs/admin/ha-master-gce /docs/tasks/administer-cluster/highly-available-master 301
/docs/admin/ /docs/concepts/cluster-administration/cluster-administration-overview 301
/docs/admin/kubeadm-upgrade-1-7 /docs/tasks/administer-cluster/kubeadm-upgrade-1-7 301
/docs/admin/limitrange/ /docs/tasks/administer-cluster/cpu-memory-limit 301
/docs/admin/master-node-communication /docs/concepts/architecture/master-node-communication 301
/docs/admin/multi-cluster /docs/concepts/cluster-administration/federation 301
/docs/admin/multiple-schedulers /docs/tasks/administer-cluster/configure-multiple-schedulers 301
/docs/admin/namespaces/ /docs/tasks/administer-cluster/namespaces 301
/docs/admin/namespaces/walkthrough /docs/tasks/administer-cluster/namespaces-walkthrough 301
/docs/admin/network-plugins /docs/concepts/cluster-administration/network-plugins 301
/docs/admin/networking /docs/concepts/cluster-administration/networking 301
/docs/admin/node /docs/concepts/architecture/nodes 301
/docs/admin/node-allocatable /docs/tasks/administer-cluster/reserve-compute-resources 301
/docs/admin/node-problem /docs/tasks/debug-application-cluster/monitor-node-health 301
/docs/admin/out-of-resource /docs/tasks/administer-cluster/out-of-resource 301
/docs/admin/rescheduler /docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods 301
/docs/admin/resourcequota/ /docs/concepts/policy/resource-quotas 301
/docs/admin/resourcequota/limitstorageconsumption /docs/tasks/administer-cluster/limit-storage-consumption 301
/docs/admin/resourcequota/walkthrough /docs/tasks/administer-cluster/apply-resource-quota-limit 301
/docs/admin/static-pods /docs/tasks/administer-cluster/static-pod 301
/docs/admin/sysctls /docs/concepts/cluster-administration/sysctl-cluster 301
/docs/admin/upgrade-1-6 /docs/tasks/administer-cluster/upgrade-1-6 301
/docs/api /docs/concepts/overview/kubernetes-api 301
/docs/concepts/abstractions/controllers/garbage-collection /docs/concepts/workloads/controllers/garbage-collection 301
/docs/concepts/abstractions/controllers/petsets /docs/concepts/workloads/controllers/petset 301
/docs/concepts/abstractions/controllers/statefulsets /docs/concepts/workloads/controllers/statefulset 301
/docs/concepts/abstractions/init-containers /docs/concepts/workloads/pods/init-containers 301
/docs/concepts/abstractions/overview /docs/concepts/overview/working-with-objects/kubernetes-objects 301
/docs/concepts/abstractions/pod /docs/concepts/workloads/pods/pod-overview 301
/docs/concepts/cluster-administration/access-cluster /docs/tasks/access-application-cluster/access-cluster 301
/docs/concepts/cluster-administration/audit /docs/tasks/debug-application-cluster/audit 301
/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig /docs/tasks/access-application-cluster/authenticate-across-clusters-kubeconfig 301
/docs/concepts/cluster-administration/cluster-management /docs/tasks/administer-cluster/cluster-management 301
/docs/concepts/cluster-administration/configure-etcd /docs/tasks/administer-cluster/configure-upgrade-etcd 301
/docs/concepts/cluster-administration/etcd-upgrade /docs/tasks/administer-cluster/configure-upgrade-etcd 301
/docs/concepts/cluster-administration/federation-service-discovery /docs/tasks/federation/federation-service-discovery 301
/docs/concepts/cluster-administration/guaranteed-scheduling-critical-addon-pods /docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods 301
/docs/concepts/cluster-administration/master-node-communication /docs/concepts/architecture/master-node-communication 301
/docs/concepts/cluster-administration/multiple-clusters /docs/concepts/cluster-administration/federation 301
/docs/concepts/cluster-administration/out-of-resource /docs/tasks/administer-cluster/out-of-resource 301
/docs/concepts/cluster-administration/resource-usage-monitoring /docs/tasks/debug-application-cluster/resource-usage-monitoring 301
/docs/concepts/cluster-administration/static-pod /docs/tasks/administer-cluster/static-pod 301
/docs/concepts/clusters/logging /docs/concepts/cluster-administration/logging 301
/docs/concepts/configuration/container-command-arg /docs/tasks/inject-data-application/define-command-argument-container 301
/docs/concepts/ecosystem/thirdpartyresource /docs/tasks/access-kubernetes-api/extend-api-third-party-resource 301
/docs/concepts/jobs/cron-jobs /docs/concepts/workloads/controllers/cron-jobs 301
/docs/concepts/jobs/run-to-completion-finite-workloads /docs/concepts/workloads/controllers/jobs-run-to-completion 301
/docs/concepts/nodes/node /docs/concepts/architecture/nodes 301
/docs/concepts/storage/etcd-store-api-object /docs/tasks/administer-cluster/configure-upgrade-etcd 301
/docs/concepts/tools/kubectl/object-management-overview /docs/tutorials/object-management-kubectl/object-management 301
/docs/concepts/tools/kubectl/object-management-using-declarative-config /docs/tutorials/object-management-kubectl/declarative-object-management-configuration 301
/docs/concepts/tools/kubectl/object-management-using-imperative-commands /docs/tutorials/object-management-kubectl/imperative-object-management-command 301
/docs/concepts/tools/kubectl/object-management-using-imperative-config /docs/tutorials/object-management-kubectl/imperative-object-management-configuration 301
/docs/getting-started-guides/ /docs/setup/pick-right-solution 301
/docs/getting-started-guides/kubeadm /docs/setup/independent/create-cluster-kubeadm 301
/docs/getting-started-guides/network-policy/calico /docs/tasks/administer-cluster/calico-network-policy 301
/docs/getting-started-guides/network-policy/romana /docs/tasks/administer-cluster/romana-network-policy 301
/docs/getting-started-guides/network-policy/walkthrough /docs/tasks/administer-cluster/declare-network-policy 301
/docs/getting-started-guides/network-policy/weave /docs/tasks/administer-cluster/weave-network-policy 301
/docs/getting-started-guides/running-cloud-controller /docs/tasks/administer-cluster/running-cloud-controller 301
/docs/getting-started-guides/ubuntu/calico /docs/getting-started-guides/ubuntu/ 301
/docs/hellonode /docs/tutorials/stateless-application/hello-minikube 301
/docs/ /docs/home/ 301
/docs/samples /docs/tutorials/ 301
/docs/tasks/administer-cluster/assign-pods-nodes /docs/tasks/configure-pod-container/assign-pods-nodes 301
/docs/tasks/administer-cluster/overview /docs/concepts/cluster-administration/cluster-administration-overview 301
/docs/tasks/configure-pod-container/apply-resource-quota-limit /docs/tasks/administer-cluster/apply-resource-quota-limit 301
/docs/tasks/configure-pod-container/calico-network-policy /docs/tasks/administer-cluster/calico-network-policy 301
/docs/tasks/configure-pod-container/communicate-containers-same-pod /docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume 301
/docs/tasks/configure-pod-container/declare-network-policy /docs/tasks/administer-cluster/declare-network-policy 301
/docs/tasks/configure-pod-container/define-environment-variable-container /docs/tasks/inject-data-application/define-environment-variable-container 301
/docs/tasks/configure-pod-container/distribute-credentials-secure /docs/tasks/inject-data-application/distribute-credentials-secure 301
/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information /docs/tasks/inject-data-application/downward-api-volume-expose-pod-information 301
/docs/tasks/configure-pod-container/environment-variable-expose-pod-information /docs/tasks/inject-data-application/environment-variable-expose-pod-information 301
/docs/tasks/configure-pod-container/limit-range /docs/tasks/administer-cluster/cpu-memory-limit 301
/docs/tasks/configure-pod-container/romana-network-policy /docs/tasks/administer-cluster/romana-network-policy 301
/docs/tasks/configure-pod-container/weave-network-policy /docs/tasks/administer-cluster/weave-network-policy 301
/docs/tasks/kubectl/get-shell-running-container /docs/tasks/debug-application-cluster/get-shell-running-container 301
/docs/tasks/kubectl/install /docs/tasks/tools/install-kubectl 301
/docs/tasks/kubectl/list-all-running-container-images /docs/tasks/access-application-cluster/list-all-running-container-images 301
/docs/tasks/manage-stateful-set/debugging-a-statefulset /docs/tasks/debug-application-cluster/debug-stateful-set 301
/docs/tasks/manage-stateful-set/deleting-a-statefulset /docs/tasks/run-application/delete-stateful-set 301
/docs/tasks/manage-stateful-set/scale-stateful-set /docs/tasks/run-application/scale-stateful-set 301
/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set /docs/tasks/run-application/upgrade-pet-set-to-stateful-set 301
/docs/tasks/run-application/podpreset /docs/tasks/inject-data-application/podpreset 301
/docs/tasks/troubleshoot/debug-init-containers /docs/tasks/debug-application-cluster/debug-init-containers 301
/docs/tasks/web-ui-dashboard /docs/tasks/access-application-cluster/web-ui-dashboard 301
/docs/templatedemos /docs/home/contribute/page-templates 301
/docs/tools/kompose/ /docs/tools/kompose/user-guide 301
/docs/tutorials/clusters/multiple-schedulers /docs/tasks/administer-cluster/configure-multiple-schedulers 301
/docs/tutorials/connecting-apps/connecting-frontend-backend /docs/tasks/access-application-cluster/connecting-frontend-backend 301
/docs/tutorials/federation/set-up-cluster-federation-kubefed /docs/tasks/federation/set-up-cluster-federation-kubefed 301
/docs/tutorials/federation/set-up-coredns-provider-federation /docs/tasks/federation/set-up-coredns-provider-federation 301
/docs/tutorials/federation/set-up-placement-policies-federation /docs/tasks/federation/set-up-placement-policies-federation 301
/docs/tutorials/getting-started/create-cluster /docs/tutorials/kubernetes-basics/cluster-intro 301
/docs/tutorials/stateful-application/run-replicated-stateful-application /docs/tasks/run-application/run-replicated-stateful-application 301
/docs/tutorials/stateful-application/run-stateful-application /docs/tasks/run-application/run-single-instance-stateful-application 301
/docs/tutorials/stateless-application/expose-external-ip-address-service /docs/tasks/access-application-cluster/service-access-application-cluster 301
/docs/tutorials/stateless-application/run-stateless-ap-replication-controller /docs/tasks/run-application/run-stateless-application-deployment 301
/docs/tutorials/stateless-application/run-stateless-application-deployment /docs/tasks/run-application/run-stateless-application-deployment 301
/docs/user-guide/accessing-the-cluster /docs/tasks/access-application-cluster/access-cluster 301
/docs/user-guide/add-entries-to-pod-etc-hosts-with-host-aliases/ /docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases 301
/docs/user-guide/annotations /docs/concepts/overview/working-with-objects/annotations 301
/docs/user-guide/application-troubleshooting /docs/tasks/debug-application-cluster/debug-application 301
/docs/user-guide/compute-resources /docs/concepts/configuration/manage-compute-resources-container 301
/docs/user-guide/config-best-practices /docs/concepts/configuration/overview 301
/docs/user-guide/configmap/ /docs/tasks/configure-pod-container/configmap 301
/docs/user-guide/configuring-containers /docs/tasks/ 301
/docs/user-guide/connecting-applications /docs/concepts/services-networking/connect-applications-service 301
/docs/user-guide/connecting-to-applications-port-forward /docs/tasks/access-application-cluster/port-forward-access-application-cluster 301
/docs/user-guide/connecting-to-applications-proxy /docs/tasks/access-kubernetes-api/http-proxy-access-api 301
/docs/user-guide/container-environment /docs/concepts/containers/container-lifecycle-hooks 301
/docs/user-guide/cron-jobs /docs/concepts/workloads/controllers/cron-jobs 301
/docs/user-guide/debugging-pods-and-replication-controllers /docs/tasks/debug-application-cluster/debug-pod-replication-controller 301
/docs/user-guide/debugging-services /docs/tasks/debug-application-cluster/debug-service 301
/docs/user-guide/deploying-applications /docs/tasks/run-application/run-stateless-application-deployment 301
/docs/user-guide/deployments /docs/concepts/workloads/controllers/deployment 301
/docs/user-guide/downward-api/ /docs/tasks/inject-data-application/downward-api-volume-expose-pod-information 301
/docs/user-guide/downward-api/volume/ /docs/tasks/inject-data-application/downward-api-volume-expose-pod-information 301
/docs/user-guide/environment-guide/ /docs/tasks/inject-data-application/environment-variable-expose-pod-information 301
/docs/user-guide/federation/cluster /docs/tasks/administer-federation/cluster 301
/docs/user-guide/federation/configmap /docs/tasks/administer-federation/configmap 301
/docs/user-guide/federation/daemonsets /docs/tasks/administer-federation/daemonset 301
/docs/user-guide/federation/deployment /docs/tasks/administer-federation/deployment 301
/docs/user-guide/federation/events /docs/tasks/administer-federation/events 301
/docs/user-guide/federation/federated-ingress /docs/tasks/administer-federation/ingress 301
/docs/user-guide/federation/federated-services /docs/tasks/federation/federation-service-discovery 301
/docs/user-guide/federation/ /docs/concepts/cluster-administration/federation 301
/docs/user-guide/federation/namespaces /docs/tasks/administer-federation/namespaces 301
/docs/user-guide/federation/replicasets /docs/tasks/administer-federation/replicaset 301
/docs/user-guide/federation/secrets /docs/tasks/administer-federation/secret 301
/docs/user-guide/garbage-collection /docs/concepts/workloads/controllers/garbage-collection 301
/docs/user-guide/getting-into-containers /docs/tasks/debug-application-cluster/get-shell-running-container 301
/docs/user-guide/gpus /docs/tasks/manage-gpus/scheduling-gpus 301
/docs/user-guide/horizontal-pod-autoscaling/ /docs/tasks/run-application/horizontal-pod-autoscale 301
/docs/user-guide/horizontal-pod-autoscaling/walkthrough /docs/tasks/run-application/horizontal-pod-autoscale-walkthrough 301
/docs/user-guide/identifiers /docs/concepts/overview/working-with-objects/names 301
/docs/user-guide/images /docs/concepts/containers/images 301
/docs/user-guide/ /docs/home/ 301
/docs/user-guide/ingress /docs/concepts/services-networking/ingress 301
/docs/user-guide/introspection-and-debugging /docs/tasks/debug-application-cluster/debug-application-introspection 301
/docs/user-guide/jobs /docs/concepts/workloads/controllers/jobs-run-to-completion 301
/docs/user-guide/jobs/expansions/ /docs/tasks/job/parallel-processing-expansion 301
/docs/user-guide/jobs/work-queue-1/ /docs/tasks/job/coarse-parallel-processing-work-queue/ 301
/docs/user-guide/jobs/work-queue-2/ /docs/tasks/job/fine-parallel-processing-work-queue/ 301
/docs/user-guide/kubeconfig-file /docs/tasks/access-application-cluster/authenticate-across-clusters-kubeconfig 301
/docs/user-guide/labels /docs/concepts/overview/working-with-objects/labels 301
/docs/user-guide/liveness /docs/tasks/configure-pod-container/configure-liveness-readiness-probes 301
/docs/user-guide/load-balancer /docs/tasks/access-application-cluster/create-external-load-balancer 301
/docs/user-guide/logging/elasticsearch /docs/tasks/debug-application-cluster/logging-elasticsearch-kibana 301
/docs/user-guide/logging/overview /docs/concepts/cluster-administration/logging 301
/docs/user-guide/logging/stackdriver /docs/tasks/debug-application-cluster/logging-stackdriver 301
/docs/user-guide/managing-deployments /docs/concepts/cluster-administration/manage-deployment 301
/docs/user-guide/monitoring /docs/tasks/debug-application-cluster/resource-usage-monitoring 301
/docs/user-guide/namespaces /docs/concepts/overview/working-with-objects/namespaces 301
/docs/user-guide/networkpolicies /docs/concepts/services-networking/network-policies 301
/docs/user-guide/node-selection/ /docs/concepts/configuration/assign-pod-node 301
/docs/user-guide/persistent-volumes/ /docs/concepts/storage/persistent-volumes 301
/docs/user-guide/persistent-volumes/walkthrough /docs/tasks/configure-pod-container/configure-persistent-volume-storage 301
/docs/user-guide/petset /docs/concepts/workloads/controllers/petset 301
/docs/user-guide/petset/bootstrapping/ /docs/concepts/workloads/controllers/petset 301
/docs/user-guide/pod-preset/ /docs/tasks/inject-data-application/podpreset 301
/docs/user-guide/pod-security-policy/ /docs/concepts/policy/pod-security-policy 301
/docs/user-guide/pod-states /docs/concepts/workloads/pods/pod-lifecycle 301
/docs/user-guide/pod-templates /docs/concepts/workloads/pods/pod-overview 301
/docs/user-guide/pods/ /docs/concepts/workloads/pods/pod 301
/docs/user-guide/pods/init-container /docs/concepts/workloads/pods/init-containers 301
/docs/user-guide/pods/multi-container /docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume 301
/docs/user-guide/pods/single-container /docs/tasks/run-application/run-stateless-application-deployment 301
/docs/user-guide/prereqs /docs/tasks/tools/install-kubectl 301
/docs/user-guide/production-pods /docs/tasks/ 301
/docs/user-guide/projected-volume/ /docs/tasks/configure-pod-container/configure-projected-volume-storage 301
/docs/user-guide/quick-start /docs/tasks/access-application-cluster/service-access-application-cluster 301
/docs/user-guide/replicasets /docs/concepts/workloads/controllers/replicaset 301
/docs/user-guide/replication-controller/ /docs/concepts/workloads/controllers/replicationcontroller 301
/docs/user-guide/rolling-updates /docs/tasks/run-application/rolling-update-replication-controller 301
/docs/user-guide/secrets/ /docs/concepts/configuration/secret 301
/docs/user-guide/secrets/walkthrough /docs/tasks/inject-data-application/distribute-credentials-secure 301
/docs/user-guide/service-accounts /docs/tasks/configure-pod-container/configure-service-account 301
/docs/user-guide/services-firewalls /docs/tasks/access-application-cluster/configure-cloud-provider-firewall 301
/docs/user-guide/services/ /docs/concepts/services-networking/service 301
/docs/user-guide/services/operations /docs/tasks/access-application-cluster/connecting-frontend-backend 301
/docs/user-guide/sharing-clusters /docs/tasks/administer-cluster/share-configuration 301
/docs/user-guide/simple-nginx /docs/tasks/run-application/run-stateless-application-deployment 301
/docs/user-guide/thirdpartyresources /docs/tasks/access-kubernetes-api/extend-api-third-party-resource 301
/docs/user-guide/ui /docs/tasks/access-application-cluster/web-ui-dashboard 301
/docs/user-guide/update-demo/ /docs/tasks/run-application/rolling-update-replication-controller 301
/docs/user-guide/volumes /docs/concepts/storage/volumes 301
/docs/user-guide/working-with-resources /docs/tutorials/object-management-kubectl/object-management 301
/docs/whatisk8s /docs/concepts/overview/what-is-kubernetes 301
#
# redirects from /js/redirects.js
#
/resource-quota /docs/concepts/policy/resource-quotas 301
/horizontal-pod-autoscaler /docs/tasks/run-application/horizontal-pod-autoscale 301
/docs/roadmap https://github.com/kubernetes/kubernetes/milestones/ 301
/api-ref https://github.com/kubernetes/kubernetes/milestones/ 301
/kubernetes/third_party/swagger-ui /docs/reference 301
/docs/user-guide/overview /docs/concepts/overview/what-is-kubernetes 301
/docs/troubleshooting /docs/tasks/debug-application-cluster/troubleshooting 301
/docs/concepts/services-networking/networkpolicies /docs/concepts/services-networking/network-policies 301
/docs/getting-started-guides/meanstack https://medium.com/google-cloud/running-a-mean-stack-on-google-cloud-platform-with-kubernetes-149ca81c2b5d 301
/docs/samples /docs/tutorials 301
/v1.1 / 301
/v1.0 / 301
#
# Redirect users with chinese language preference to /cn
#
#/ /cn 302 Language=zh

View File

@ -4,11 +4,6 @@ assignees:
- roberthbailey
- liggitt
title: Master-Node communication
redirect_from:
- "/docs/admin/master-node-communication/"
- "/docs/admin/master-node-communication.html"
- "/docs/concepts/cluster-administration/master-node-communication/"
- "/docs/concepts/cluster-administration/master-node-communication.html"
---
* TOC
@ -30,18 +25,18 @@ services). In a typical deployment, the apiserver is configured to listen for
remote connections on a secure HTTPS port (443) with one or more forms of
client [authentication](/docs/admin/authentication/) enabled. One or more forms
of [authorization](/docs/admin/authorization/) should be enabled, especially
if [anonymous requests](/docs/admin/authentication/#anonymous-requests) or
[service account tokens](/docs/admin/authentication/#service-account-tokens)
are allowed.
Nodes should be provisioned with the public root certificate for the cluster
such that they can connect securely to the apiserver along with valid client
credentials. For example, on a default GCE deployment, the client credentials
provided to the kubelet are in the form of a client certificate. See
[kubelet TLS bootstrapping](/docs/admin/kubelet-tls-bootstrapping/) for
automated provisioning of kubelet client certificates.
Pods that wish to connect to the apiserver can do so securely by leveraging a
service account so that Kubernetes will automatically inject the public root
certificate and a valid bearer token into the pod when it is instantiated.
The `kubernetes` service (in all namespaces) is configured with a virtual IP
@ -71,23 +66,23 @@ or service through the apiserver's proxy functionality.
The connections from the apiserver to the kubelet are used for fetching logs
for pods, attaching (through kubectl) to running pods, and using the kubelet's
port-forwarding functionality. These connections terminate at the kubelet's
HTTPS endpoint.
By default, the apiserver does not verify the kubelet's serving certificate,
which makes the connection subject to man-in-the-middle attacks, and
**unsafe** to run over untrusted and/or public networks.
To verify this connection, use the `--kubelet-certificate-authority` flag to
provide the apiserver with a root certificates bundle to use to verify the
kubelet's serving certificate.
If that is not possible, use [SSH tunneling](/docs/admin/master-node-communication/#ssh-tunnels)
between the apiserver and kubelet if required to avoid connecting over an
untrusted or public network.
Finally, [Kubelet authentication and/or authorization](/docs/admin/kubelet-authentication-authorization/)
should be enabled to secure the kubelet API.
### apiserver -> nodes, pods, and services

View File

@ -3,11 +3,6 @@ assignees:
- caesarxuchao
- dchen1107
title: Nodes
redirect_from:
- "/docs/admin/node/"
- "/docs/admin/node.html"
- "/docs/concepts/nodes/node/"
- "/docs/concepts/nodes/node.html"
---
* TOC
@ -68,7 +63,7 @@ The node condition is represented as a JSON object. For example, the following r
]
```
If the Status of the Ready condition is "Unknown" or "False" for longer than the `pod-eviction-timeout`, an argument passed to the [kube-controller-manager](/docs/admin/kube-controller-manager/), all of the Pods on the node are scheduled for deletion by the Node Controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the apiserver is unable to communicate with the kubelet on it. The decision to delete the pods cannot be communicated to the kubelet until it re-establishes communication with the apiserver. In the meantime, the pods which are scheduled for deletion may continue to run on the partitioned node.
In versions of Kubernetes prior to 1.5, the node controller would [force delete](/docs/concepts/workloads/pods/pod/#force-deletion-of-pods) these unreachable pods from the apiserver. However, in 1.5 and higher, the node controller does not force delete pods until it is confirmed that they have stopped running in the cluster. One can see these pods which may be running on an unreachable node as being in the "Terminating" or "Unknown" states. In cases where Kubernetes cannot deduce from the underlying infrastructure if a node has permanently left a cluster, the cluster administrator may need to delete the node object by hand. Deleting the node object from Kubernetes causes all the Pod objects running on it to be deleted from the apiserver, freeing up their names.

View File

@ -1,8 +1,5 @@
---
title: Installing Addons
redirect_from:
- "/docs/admin/addons/"
- "/docs/admin/addons.html"
---
## Overview

View File

@ -3,11 +3,6 @@ assignees:
- davidopp
- lavalamp
title: Cluster Administration Overview
redirect_from:
- "/docs/admin/"
- "/docs/admin/index.html"
- "/docs/tasks/administer-cluster/overview/"
- "/docs/tasks/administer-cluster/overview.html"
---
{% capture overview %}

View File

@ -1,12 +1,5 @@
---
title: Federation
redirect_from:
- "/docs/user-guide/federation/"
- "/docs/user-guide/federation/index.html"
- "/docs/concepts/cluster-administration/multiple-clusters/"
- "/docs/concepts/cluster-administration/multiple-clusters.html"
- "/docs/admin/multi-cluster/"
- "/docs/admin/multi-cluster.html"
---
{% capture overview %}
@ -48,7 +41,7 @@ why you might want multiple clusters are:
* [Hybrid cloud](###hybrid-cloud-capabilities): You can have multiple clusters on different cloud providers or
on-premises data centers.
### Caveats
While there are a lot of attractive use cases for federation, there are also
some caveats:

View File

@ -2,9 +2,6 @@
assignees:
- mikedanese
title: Configuring kubelet Garbage Collection
redirect_from:
- "/docs/admin/garbage-collection/"
- "/docs/admin/garbage-collection.html"
---
* TOC

View File

@ -3,12 +3,6 @@ assignees:
- crassirostris
- piosz
title: Logging Architecture
redirect_from:
- "/docs/concepts/clusters/logging/"
- "/docs/concepts/clusters/logging.html"
redirect_from:
- "/docs/user-guide/logging/overview/"
- "/docs/user-guide/logging/overview.html"
---
Application and systems logs can help you understand what is happening inside your cluster. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism; as such, most container engines are likewise designed to support some kind of logging. The easiest and most embraced logging method for containerized applications is to write to the standard output and standard error streams.

View File

@ -4,9 +4,6 @@ assignees:
- janetkuo
- mikedanese
title: Managing Resources
redirect_from:
- "/docs/user-guide/managing-deployments/"
- "/docs/user-guide/managing-deployments.html"
---
You've deployed your application and exposed it via a service. Now what? Kubernetes provides a number of tools to help you manage your application deployment, including scaling and updating. Among the features we'll discuss in more depth are [configuration files](/docs/user-guide/configuring-containers/#configuration-in-kubernetes) and [labels](/docs/user-guide/deploying-applications/#labels).

View File

@ -4,9 +4,6 @@ assignees:
- freehan
- thockin
title: Network Plugins
redirect_from:
- "/docs/admin/network-plugins/"
- "/docs/admin/network-plugins.html"
---
* TOC

View File

@ -2,9 +2,6 @@
assignees:
- thockin
title: Cluster Networking
redirect_from:
- "/docs/admin/networking/"
- "/docs/admin/networking.html"
---
Kubernetes approaches networking somewhat differently than Docker does by
@ -85,7 +82,7 @@ talk to other VMs in your project. This is the same basic model.
Until now this document has talked about containers. In reality, Kubernetes
applies IP addresses at the `Pod` scope - containers within a `Pod` share their
network namespaces - including their IP address. This means that containers
within a `Pod` can all reach each other's ports on `localhost`. This does imply
that containers within a `Pod` must coordinate port usage, but this is no
different than processes in a VM. We call this the "IP-per-pod" model. This
is implemented in Docker as a "pod container" which holds the network namespace
@ -217,9 +214,9 @@ Calico can also be run in policy enforcement mode in conjunction with other netw
### Weave Net from Weaveworks
[Weave Net](https://www.weave.works/products/weave-net/) is a
resilient and simple to use network for Kubernetes and its hosted applications.
Weave Net runs as a [CNI plug-in](https://www.weave.works/docs/net/latest/cni-plugin/)
or stand-alone. In either version, it doesn't require any configuration or extra code
to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes.

View File

@ -1,8 +1,5 @@
---
title: Proxies in Kubernetes
redirect_from:
- "/docs/user-guide/accessing-the-cluster/"
- "/docs/user-guide/accessing-the-cluster.html"
---
{% capture overview %}

View File

@ -2,9 +2,6 @@
assignees:
- sttts
title: Using Sysctls in a Kubernetes Cluster
redirect_from:
- "/docs/admin/sysctls/"
- "/docs/admin/sysctls.html"
---
* TOC

View File

@ -4,9 +4,6 @@ assignees:
- kevin-wangzefeng
- bsalamat
title: Assigning Pods to Nodes
redirect_from:
- "/docs/user-guide/node-selection/"
- "/docs/user-guide/node-selection/index.html"
---
You can constrain a [pod](/docs/concepts/workloads/pods/pod/) to only be able to run on particular [nodes](/docs/concepts/nodes/node/) or to prefer to
@ -205,7 +202,7 @@ If omitted, it defaults to the namespace of the pod where the affinity/anti-affi
If defined but empty, it means "all namespaces."
All `matchExpressions` associated with `requiredDuringSchedulingIgnoredDuringExecution` affinity and anti-affinity
must be satisfied for the pod to schedule onto a node.
For more information on inter-pod affinity/anti-affinity, see the design doc
[here](https://git.k8s.io/community/contributors/design-proposals/podaffinity.md).
@ -236,7 +233,7 @@ taint created by the `kubectl taint` line above, and thus a pod with either tole
to schedule onto `node1`:
```yaml
tolerations:
- key: "key"
operator: "Equal"
value: "value"
@ -244,7 +241,7 @@ tolerations:
```
```yaml
tolerations:
- key: "key"
operator: "Exists"
effect: "NoSchedule"
@ -304,7 +301,7 @@ kubectl taint nodes node1 key2=value2:NoSchedule
And a pod has two tolerations:
```yaml
tolerations:
- key: "key1"
operator: "Equal"
value: "value1"
@ -327,7 +324,7 @@ an optional `tolerationSeconds` field that dictates how long the pod will stay b
to the node after the taint is added. For example,
```yaml
tolerations:
- key: "key1"
operator: "Equal"
value: "value1"
@ -345,7 +342,7 @@ Taints and tolerations are a flexible way to steer pods away from nodes or evict
pods that shouldn't be running. A few of the use cases are
* **dedicated nodes**: If you want to dedicate a set of nodes for exclusive use by
a particular set of users, you can add a taint to those nodes (say,
`kubectl taint nodes nodename dedicated=groupName:NoSchedule`) and then add a corresponding
toleration to their pods (this would be done most easily by writing a custom
[admission controller](/docs/admin/admission-controllers/)).
@ -410,7 +407,7 @@ that the partition will recover and thus the pod eviction can be avoided.
The toleration the pod would use in that case would look like
```yaml
tolerations:
- key: "node.alpha.kubernetes.io/unreachable"
operator: "Exists"
effect: "NoExecute"

View File

@ -1,8 +1,5 @@
---
title: Managing Compute Resources for Containers
redirect_from:
- "/docs/user-guide/compute-resources/"
- "/docs/user-guide/compute-resources.html"
---
{% capture overview %}

View File

@ -2,9 +2,6 @@
assignees:
- mikedanese
title: Configuration Best Practices
redirect_from:
- "/docs/user-guide/config-best-practices/"
- "/docs/user-guide/config-best-practices.html"
---
{% capture overview %}

View File

@ -2,9 +2,6 @@
assignees:
- mikedanese
title: Secrets
redirect_from:
- "/docs/user-guide/secrets/index/"
- "/docs/user-guide/secrets/index.html"
---
Objects of type `secret` are intended to hold sensitive information, such as

View File

@ -3,15 +3,12 @@ assignees:
- mikedanese
- thockin
title: Container Lifecycle Hooks
redirect_from:
- "/docs/user-guide/container-environment/"
- "/docs/user-guide/container-environment.html"
---
{% capture overview %}
This page describes how kubelet managed Containers can use the Container lifecycle hook framework
to run code triggered by events during their management lifecycle.
{% endcapture %}
@ -34,14 +31,14 @@ There are two hooks that are exposed to Containers:
This hook executes immediately after a container is created.
However, there is no guarantee that the hook will execute before the container ENTRYPOINT.
No parameters are passed to the handler.
`PreStop`
This hook is called immediately before a container is terminated.
It is blocking, meaning it is synchronous,
so it must complete before the call to delete the container can be sent.
No parameters are passed to the handler.
A more detailed description of the termination behavior can be found in
[Termination of Pods](/docs/concepts/workloads/pods/pod/#termination-of-pods).
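
For orientation only, a minimal Pod sketch wiring up the two hooks described above (illustrative, not part of this diff; the name, image, and commands are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo            # hypothetical name
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx                  # placeholder image
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo postStart > /tmp/hook"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit"]
```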
@ -58,13 +55,13 @@ Resources consumed by the command are counted against the Container.
### Hook handler execution
When a Container lifecycle management hook is called,
the Kubernetes management system executes the handler in the Container registered for that hook. 
Hook handler calls are synchronous within the context of the Pod containing the Container.
This means that for a `PostStart` hook,
the Container ENTRYPOINT and hook fire asynchronously.
However, if the hook takes too long to run or hangs,
the Container cannot reach a `running` state.
The behavior is similar for a `PreStop` hook.
If the hook hangs during execution,
@ -87,16 +84,16 @@ Generally, only single deliveries are made.
If, for example, an HTTP hook receiver is down and is unable to take traffic,
there is no attempt to resend.
In some rare cases, however, double delivery may occur.
For instance, if a kubelet restarts in the middle of sending a hook,
the hook might be resent after the kubelet comes back up.
### Debugging Hook handlers
The logs for a Hook handler are not exposed in Pod events.
If a handler fails for some reason, it broadcasts an event.
For `PostStart`, this is the `FailedPostStartHook` event,
and for `PreStop`, this is the `FailedPreStopHook` event.
You can see these events by running `kubectl describe pod <pod_name>`.
Here is some example output of events from running this command:
```
@ -111,7 +108,7 @@ Events:
38s 38s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 5c6a256a2567: PostStart handler: Error executing in Docker Container: 1
37s 37s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 8df9fdfd7054: PostStart handler: Error executing in Docker Container: 1
38s 37s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "main" with RunContainerError: "PostStart handler: Error executing in Docker Container: 1"
1m 22s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Warning FailedPostStartHook
```
{% endcapture %}

View File

@ -3,9 +3,6 @@ assignees:
- erictune
- thockin
title: Images
redirect_from:
- "/docs/user-guide/images/"
- "/docs/user-guide/images.html"
---
{% capture overview %}
@ -83,7 +80,7 @@ images in the ECR registry.
The kubelet will fetch and periodically refresh ECR credentials. It needs the following permissions to do this:
- `ecr:GetAuthorizationToken`
- `ecr:BatchCheckLayerAvailability`
- `ecr:GetDownloadUrlForLayer`
- `ecr:GetRepositoryPolicy`

View File

@ -2,10 +2,8 @@
assignees:
- lavalamp
title: Kubernetes Components
redirect_from:
- "/docs/admin/cluster-components/"
- "/docs/admin/cluster-components.html"
---
{% capture overview %}
This document outlines the various binary components needed to
deliver a functioning Kubernetes cluster.
@ -15,7 +13,7 @@ deliver a functioning Kubernetes cluster.
## Master Components
Master components provide the cluster's control plane. Master components make global decisions about the
cluster (for example, scheduling), and detecting and responding to cluster events (starting up a new pod when a replication controller's 'replicas' field is unsatisfied).
Master components can be run on any node in the cluster. However,
for simplicity, set up scripts typically start all master components on
@ -28,7 +26,7 @@ Kubernetes control plane. It is designed to scale horizontally -- that is, it sc
### etcd
[etcd](/docs/admin/etcd) is used as Kubernetes' backing store. All cluster data is stored here. Always have a backup plan for etcd's data for your Kubernetes cluster.
### kube-controller-manager
@ -41,12 +39,12 @@ These controllers include:
controller object in the system.
* Endpoints Controller: Populates the Endpoints object (that is, joins Services & Pods).
* Service Account & Token Controllers: Create default accounts and API access tokens for new namespaces.
### cloud-controller-manager
cloud-controller-manager runs controllers that interact with the underlying cloud providers. The cloud-controller-manager binary is an alpha feature introduced in Kubernetes release 1.6.
cloud-controller-manager runs cloud-provider-specific controller loops only. You must disable these controller loops in the kube-controller-manager. You can disable the controller loops by setting the `--cloud-provider` flag to `external` when starting the kube-controller-manager.
cloud-controller-manager allows cloud vendors code and the Kubernetes core to evolve independent of each other. In prior releases, the core Kubernetes code was dependent upon cloud-provider-specific code for functionality. In future releases, code specific to cloud vendors should be maintained by the cloud vendor themselves, and linked to cloud-controller-manager while running Kubernetes.
@ -55,7 +53,7 @@ The following controllers have cloud provider dependencies:
* Node Controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding
* Route Controller: For setting up routes in the underlying cloud infrastructure
* Service Controller: For creating, updating and deleting cloud provider load balancers
* Volume Controller: For creating, attaching, and mounting volumes, and interacting with the cloud provider to orchestrate volumes
### kube-scheduler

View File

@ -2,9 +2,6 @@
assignees:
- chenopis
title: The Kubernetes API
redirect_from:
- "/docs/api/"
- "/docs/api.html"
---
Overall API conventions are described in the [API conventions doc](https://git.k8s.io/community/contributors/devel/api-conventions.md).

View File

@ -3,10 +3,8 @@ assignees:
- bgrant0607
- mikedanese
title: What is Kubernetes?
redirect_from:
- "/docs/whatisk8s/"
- "/docs/whatisk8s.html"
---
{% capture overview %}
This page is an overview of Kubernetes.
{% endcapture %}
@ -19,7 +17,7 @@ With Kubernetes, you are able to quickly and efficiently respond to customer dem
- Deploy your applications quickly and predictably.
- Scale your applications on the fly.
- Roll out new features seamlessly.
- Limit hardware usage to required resources only.
Our goal is to foster an ecosystem of components and tools that relieve the burden of running applications in public and private clouds.

View File

@ -1,8 +1,5 @@
---
title: Annotations
redirect_from:
- "/docs/user-guide/annotations/"
- "/docs/user-guide/annotations.html"
---
{% capture overview %}

View File

@ -1,9 +1,5 @@
---
title: Understanding Kubernetes Objects
redirect_from:
- "/docs/concepts/abstractions/overview/"
- "/docs/concepts/abstractions/overview.html"
---
{% capture overview %}

View File

@ -2,9 +2,6 @@
assignees:
- mikedanese
title: Labels and Selectors
redirect_from:
- "/docs/user-guide/labels/"
- "/docs/user-guide/labels.html"
---
_Labels_ are key/value pairs that are attached to objects, such as pods.
@ -60,7 +57,7 @@ An empty label selector (that is, one with zero requirements) selects every obje
A null label selector (which is only possible for optional selector fields) selects no objects.
**Note**: the label selectors of two controllers must not overlap within a namespace, otherwise they will fight with each other.
### _Equality-based_ requirement

View File

@ -3,9 +3,6 @@ assignees:
- mikedanese
- thockin
title: Names
redirect_from:
- "/docs/user-guide/identifiers/"
- "/docs/user-guide/identifiers.html"
---
All objects in the Kubernetes REST API are unambiguously identified by a Name and a UID.

View File

@ -4,9 +4,6 @@ assignees:
- mikedanese
- thockin
title: Namespaces
redirect_from:
- "/docs/user-guide/namespaces/"
- "/docs/user-guide/namespaces.html"
---
Kubernetes supports multiple virtual clusters backed by the same physical cluster.

View File

@ -2,13 +2,10 @@
assignees:
- pweil-
title: Pod Security Policies
redirect_from:
- "/docs/user-guide/pod-security-policy/"
- "/docs/user-guide/pod-security-policy/index.html"
---
Objects of type `PodSecurityPolicy` govern the ability
to make requests on a pod that affect the `SecurityContext` that will be
applied to a pod and container.
See [PodSecurityPolicy proposal](https://git.k8s.io/community/contributors/design-proposals/security-context-constraints.md) for more information.
@ -18,10 +15,10 @@ See [PodSecurityPolicy proposal](https://git.k8s.io/community/contributors/desig
## What is a Pod Security Policy?
A _Pod Security Policy_ is a cluster-level resource that controls the
actions that a pod can perform and what it has the ability to access. The
`PodSecurityPolicy` objects define a set of conditions that a pod must
run with in order to be accepted into the system. They allow an
administrator to control the following:
| Control Aspect | Field Name |
@ -41,16 +38,16 @@ administrator to control the following:
| Allocating an FSGroup that owns the pod's volumes | [`fsGroup`](#fsgroup) |
| Requiring the use of a read only root file system | `readOnlyRootFilesystem` |
_Pod Security Policies_ are comprised of settings and strategies that
control the security features a pod has access to. These settings fall
into three categories:
- *Controlled by a boolean*: Fields of this type default to the most
restrictive value.
- *Controlled by an allowable set*: Fields of this type are checked
against the set to ensure their value is allowed.
- *Controlled by a strategy*: Items that have a strategy to provide
a mechanism to generate the value and a mechanism to ensure that a
specified value falls into the set of allowable values.
@ -75,22 +72,22 @@ specified.
### SupplementalGroups
- *MustRunAs* - Requires at least one range to be specified. Uses the
minimum value of the first range as the default. Validates against all ranges.
- *RunAsAny* - No default provided. Allows any `supplementalGroups` to be
specified.
### FSGroup
- *MustRunAs* - Requires at least one range to be specified. Uses the
minimum value of the first range as the default. Validates against the
first ID in the first range.
- *RunAsAny* - No default provided. Allows any `fsGroup` ID to be specified.
### Controlling Volumes
The usage of specific volume types can be controlled by setting the
volumes field of the PSP. The allowable values of this field correspond
to the volume sources that are defined when creating a volume:
1. azureFile
@ -122,7 +119,7 @@ to the volume sources that are defined when creating a volume:
1. storageos
1. \* (allow all volumes)
The recommended minimum set of allowed volumes for new PSPs are
configMap, downwardAPI, emptyDir, persistentVolumeClaim, secret, and projected.
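
As a rough illustration of the strategies and volume list discussed above, a policy using that recommended minimum volume set might look like the sketch below (illustrative, not part of this diff; the metadata name and rule choices are invented):

```yaml
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example        # hypothetical name
spec:
  privileged: false               # boolean field: defaults to the most restrictive value
  seLinux:
    rule: RunAsAny                # strategy-controlled fields
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                        # allowable-set field: recommended minimum set
  - configMap
  - downwardAPI
  - emptyDir
  - persistentVolumeClaim
  - secret
  - projected
```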
### Host Network
@ -193,7 +190,7 @@ podsecuritypolicy "permissive" deleted
## Enabling Pod Security Policies
In order to use Pod Security Policies in your cluster you must ensure the
following
1. You have enabled the api type `extensions/v1beta1/podsecuritypolicy` (only for versions prior 1.6)

View File

@ -2,9 +2,6 @@
assignees:
- derekwaynecarr
title: Resource Quotas
redirect_from:
- "/docs/admin/resourcequota/"
- "/docs/admin/resourcequota/index.html"
---
When several users or teams share a cluster with a fixed number of nodes,
@ -56,7 +53,7 @@ Resource Quota is enforced in a particular namespace when there is a
## Compute Resource Quota
You can limit the total sum of [compute resources](/docs/user-guide/compute-resources) that can be requested in a given namespace.
The following resource types are supported:
| Resource Name | Description |
@ -70,7 +67,7 @@ The following resource types are supported:
## Storage Resource Quota
You can limit the total sum of [storage resources](/docs/user-guide/persistent-volumes) that can be requested in a given namespace.
In addition, you can limit consumption of storage resources based on associated storage-class.
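
To make the compute and storage quotas above concrete, a single ResourceQuota covering both might look roughly like this (illustrative sketch, not part of this diff; namespace, name, and figures are invented):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-storage-quota     # hypothetical name
  namespace: demo                 # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"             # total CPU requested across the namespace
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    requests.storage: 100Gi       # total storage requested via PVCs
```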

View File

@ -3,9 +3,6 @@ assignees:
- rickypai
- thockin
title: Adding entries to Pod /etc/hosts with HostAliases
redirect_from:
- "/docs/user-guide/add-entries-to-pod-etc-hosts-with-host-aliases/"
- "/docs/user-guide/add-entries-to-pod-etc-hosts-with-host-aliases.md"
---
* TOC

View File

@ -4,9 +4,6 @@ assignees:
- lavalamp
- thockin
title: Connecting Applications with Services
redirect_from:
- "/docs/user-guide/connecting-applications/"
- "/docs/user-guide/connecting-applications.html"
---
* TOC

View File

@ -3,9 +3,6 @@ assignees:
- davidopp
- thockin
title: DNS Pods and Services
redirect_from:
- "/docs/admin/dns/"
- "/docs/admin/dns.html"
---
## Introduction
@ -105,7 +102,7 @@ spec:
clusterIP: None
ports:
- name: foo # Actually, no port is needed.
port: 1234
targetPort: 1234
---
apiVersion: v1
@ -142,7 +139,7 @@ spec:
```
If there exists a headless service in the same namespace as the pod and with the same name as the subdomain, the cluster's KubeDNS Server also returns an A record for the Pod's fully qualified hostname.
Given a Pod with the hostname set to "busybox-1" and the subdomain set to "default-subdomain", and a headless Service named "default-subdomain" in the same namespace, the pod will see it's own FQDN as "busybox-1.default-subdomain.my-namespace.svc.cluster.local". DNS serves an A record at that name, pointing to the Pod's IP. Both pods "busybox1" and "busybox2" can have their distinct A records.
As of Kubernetes v1.2, the Endpoints object also has the annotation `endpoints.beta.kubernetes.io/hostnames-map`. Its value is the json representation of map[string(IP)][endpoints.HostRecord], for example: '{"10.245.1.6":{HostName: "my-webserver"}}'.
If the Endpoints are for a headless service, an A record is created with the format <hostname>.<service name>.<pod namespace>.svc.<cluster domain>

View File

@ -2,9 +2,6 @@
assignees:
- bprashanth
title: Ingress Resources
redirect_from:
- "/docs/user-guide/ingress/"
- "/docs/user-guide/ingress.html"
---
* TOC

View File

@ -4,9 +4,6 @@ assignees:
- caseydavenport
- danwinship
title: Network Policies
redirect_from:
- "/docs/user-guide/networkpolicies/"
- "/docs/user-guide/networkpolicies.html"
---
* TOC

View File

@ -2,9 +2,6 @@
assignees:
- bprashanth
title: Services
redirect_from:
- "/docs/user-guide/services/"
- "/docs/user-guide/services/index.html"
---
Kubernetes [`Pods`](/docs/user-guide/pods) are mortal. They are born and when they die, they
@ -319,9 +316,9 @@ Sometimes you don't need or want load-balancing and a single service IP. In
this case, you can create "headless" services by specifying `"None"` for the
cluster IP (`spec.clusterIP`).
This option allows developers to reduce coupling to the Kubernetes system by
allowing them freedom to do discovery their own way. Applications can still use
a self-registration pattern and adapters for other discovery systems could easily
be built upon this API.
For such `Services`, a cluster IP is not allocated, kube-proxy does not handle
@ -356,15 +353,15 @@ The default is `ClusterIP`.
`Type` values and their behaviors are:
* `ClusterIP`: Exposes the service on a cluster-internal IP. Choosing this value
makes the service only reachable from within the cluster. This is the
default `ServiceType`.
* `NodePort`: Exposes the service on each Node's IP at a static port (the `NodePort`).
A `ClusterIP` service, to which the NodePort service will route, is automatically
created. You'll be able to contact the `NodePort` service, from outside the cluster,
by requesting `<NodeIP>:<NodePort>`.
* `LoadBalancer`: Exposes the service externally using a cloud provider's load balancer.
`NodePort` and `ClusterIP` services, to which the external load balancer will route,
are automatically created.
* `ExternalName`: Maps the service to the contents of the `externalName` field
(e.g. `foo.bar.example.com`), by returning a `CNAME` record with its value.
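As a sketch of the `NodePort` type described in this list (the names and port numbers are illustrative), a Service manifest might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort            # a ClusterIP service is created automatically to route to
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80                # cluster-internal port
    targetPort: 9376        # port on the backing Pods
    nodePort: 30080         # optional; allocated from the node port range if omitted
```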
@ -441,9 +438,9 @@ This can be achieved by adding the following annotations to the service based on
For AWS:
```yaml
[...]
metadata:
name: my-service
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
[...]
```
@ -516,7 +513,7 @@ spec:
protocol: TCP
port: 80
targetPort: 9376
externalIPs:
- 80.11.12.10
```

View File

@ -5,9 +5,6 @@ assignees:
- saad-ali
- thockin
title: Persistent Volumes
redirect_from:
- "/docs/user-guide/persistent-volumes/"
- "/docs/user-guide/persistent-volumes/index.html"
---
This document describes the current state of `PersistentVolumes` in Kubernetes. Familiarity with [volumes](/docs/concepts/storage/volumes/) is suggested.
@ -265,7 +262,7 @@ spec:
pdName: "gce-disk-1"
```
A mount option is a string which will be cumulatively joined and used while mounting volume to the disk.
Note that not all Persistent volume types support mount options. In Kubernetes version 1.6, the following
volume types support mount options.
@ -734,7 +731,7 @@ parameters:
If storage account is not provided, all storage accounts associated with the resource group are searched to find one that matches `skuName` and `location`. If storage account is provided, it must reside in the same resource group as the cluster, and `skuName` and `location` are ignored.
During provision, a secret will be created for mounting credentials. If the cluster has enabled both [RBAC](/docs/admin/authorization/rbac/) and [Controller Roles](/docs/admin/authorization/rbac/#controller-roles), you will first need to add `create` permission of resource `secret` for clusterrole `system:controller:persistent-volume-binder`.
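One possible way to grant that permission is a ClusterRole plus ClusterRoleBinding along these lines; this is only a sketch, and the role name `system:azure-cloud-provider` is illustrative rather than a fixed convention:

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: system:azure-cloud-provider   # illustrative name
rules:
- apiGroups: ['']
  resources: ['secrets']
  verbs: ['get', 'create']            # allow the binder to create mount-credential secrets
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:azure-cloud-provider
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:azure-cloud-provider
subjects:
- kind: ServiceAccount
  name: persistent-volume-binder      # service account used by the controller
  namespace: kube-system
```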
#### Portworx Volume
```yaml
@ -786,7 +783,7 @@ parameters:
* `readOnly`: specifies the access mode to the mounted volume
* `fsType`: the file system to use for the volume
The ScaleIO Kubernetes volume plugin requires a configured Secret object.
The secret must be created with type `kubernetes.io/scaleio` and use the same namespace value as that of the PVC where it is referenced
as shown in the following command:

View File

@ -5,9 +5,6 @@ assignees:
- saad-ali
- thockin
title: Volumes
redirect_from:
- "/docs/user-guide/volumes/"
- "/docs/user-guide/volumes.html"
---
{% capture overview %}
@ -788,7 +785,7 @@ spec:
Note that local PersistentVolume cleanup and deletion requires manual
intervention without the external provisioner.
For details on the `local` volume type, see the [Local Persistent Storage
user guide](https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume)
## Using subPath

View File

@ -4,11 +4,6 @@ assignees:
- soltysh
- janetkuo
title: Cron Jobs
redirect_from:
- "/docs/concepts/jobs/cron-jobs/"
- "/docs/concepts/jobs/cron-jobs.html"
- "/docs/user-guide/cron-jobs/"
- "/docs/user-guide/cron-jobs.html"
---
* TOC

View File

@ -2,9 +2,6 @@
assignees:
- erictune
title: Daemon Sets
redirect_from:
- "/docs/admin/daemons/"
- "/docs/admin/daemons.html"
---
* TOC
@ -77,7 +74,7 @@ a node for testing.
If you specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will
create pods on nodes which match that [node
selector](/docs/concepts/configuration/assign-pod-node/). Likewise if you specify a `.spec.template.spec.affinity`
then DaemonSet controller will create pods on nodes which match that [node affinity](/docs/concepts/configuration/assign-pod-node/).
If you do not specify either, then the DaemonSet controller will create pods on all nodes.
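For instance, a DaemonSet constrained to SSD-labeled nodes via `.spec.template.spec.nodeSelector` could be sketched as follows (the label, name, and image are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: ssd-monitor                 # illustrative name
spec:
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:
        disktype: ssd               # only nodes carrying this label receive a daemon pod
      containers:
      - name: main
        image: busybox
        command: ["sleep", "3600"]
```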
@ -91,7 +88,7 @@ when the pod is created, so it is ignored by the scheduler). Therefore:
by the DaemonSet controller.
- DaemonSet controller can make pods even when the scheduler has not been started, which can help cluster
bootstrap.
Daemon pods do respect [taints and tolerations](/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature), but they are
created with `NoExecute` tolerations for the `node.alpha.kubernetes.io/notReady` and `node.alpha.kubernetes.io/unreachable`
taints with no `tolerationSeconds`. This ensures that when the `TaintBasedEvictions` alpha feature is enabled,

View File

@ -3,9 +3,6 @@ assignees:
- bgrant0607
- janetkuo
title: Deployments
redirect_from:
- "/docs/user-guide/deployments/"
- "/docs/user-guide/deployments.html"
---
{:toc}
@ -503,7 +500,7 @@ nginx-deployment-1989198191 7 7 0 7m
nginx-deployment-618515232 11 11 11 7m
```
## Pausing and Resuming a Deployment
You can pause a Deployment before triggering one or more updates and then resume it. This will allow you to
apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts.
@ -549,7 +546,7 @@ deployment "nginx" resource requirements updated
```
The initial state of the Deployment prior to pausing it will continue its function, but new updates to
the Deployment will not have any effect as long as the Deployment is paused.
Eventually, resume the Deployment and observe a new ReplicaSet coming up with all the new updates:
```shell
@ -754,13 +751,13 @@ to a previous revision, or even pause it if you need to apply multiple tweaks in
You can set `.spec.revisionHistoryLimit` field in a Deployment to specify how many old ReplicaSets for
this Deployment you want to retain. The rest will be garbage-collected in the background. By default,
all revision history will be kept. In a future version, the default will switch to 2.
**Note:** Explicitly setting this field to 0 will result in cleaning up all the history of your Deployment,
so that Deployment will not be able to roll back.
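For example, a Deployment that keeps only its ten most recent ReplicaSets might set the field like this (the value 10 and the rest of the spec are illustrative):

```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  revisionHistoryLimit: 10    # retain at most 10 old ReplicaSets for rollback
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
```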
## Use Cases
### Canary Deployment
@ -900,7 +897,7 @@ ReplicaSets will be kept by default, consuming resources in `etcd` and crowding
if this field is not set. The configuration of each Deployment revision is stored in its ReplicaSets;
therefore, once an old ReplicaSet is deleted, you lose the ability to rollback to that revision of Deployment.
More specifically, setting this field to zero means that all old ReplicaSets with 0 replicas will be cleaned up.
In this case, a new Deployment rollout cannot be undone, since its revision history is cleaned up.
### Paused

View File

@ -1,11 +1,5 @@
---
title: Garbage Collection
redirect_from:
- "/docs/concepts/abstractions/controllers/garbage-collection/"
- "/docs/concepts/abstractions/controllers/garbage-collection.html"
- "/docs/user-guide/garbage-collection/"
- "/docs/user-guide/garbage-collection.html"
---
{% capture overview %}
@ -70,15 +64,15 @@ metadata:
When you delete an object, you can specify whether the object's dependents are
also deleted automatically. Deleting dependents automatically is called *cascading
deletion*. There are two modes of *cascading deletion*: *background* and *foreground*.
If you delete an object without deleting its dependents
automatically, the dependents are said to be *orphaned*.
### Background cascading deletion
In *background cascading deletion*, Kubernetes deletes the owner object
immediately and the garbage collector then deletes the dependents in
the background.
### Foreground cascading deletion
@ -90,7 +84,7 @@ the following things are true:
* The object is still visible via the REST API
* The object's `deletionTimestamp` is set
* The object's `metadata.finalizers` contains the value "foregroundDeletion".
Once the "deletion in progress" state is set, the garbage
collector deletes the object's dependents. Once the garbage collector has deleted all
"blocking" dependents (objects with `ownerReference.blockOwnerDeletion=true`), it delete
@ -100,7 +94,7 @@ Note that in the "foregroundDeletion", only dependents with
`ownerReference.blockOwnerDeletion` block the deletion of the owner object.
Kubernetes version 1.7 will add an admission controller that controls user access to set
`blockOwnerDeletion` to true based on delete permissions on the owner object, so that
unauthorized dependents cannot delay deletion of an owner object.
If an object's `ownerReferences` field is set by a controller (such as Deployment or ReplicaSet),
blockOwnerDeletion is set automatically and you do not need to manually modify this field.
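To illustrate, a Pod owned by a ReplicaSet carries an `ownerReferences` entry roughly like the following sketch (the names and UID are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-repset-pod          # placeholder name
  ownerReferences:
  - apiVersion: extensions/v1beta1
    kind: ReplicaSet
    name: my-repset            # the owning object
    uid: d9607e19-f88f-11e6-a518-42010a800195   # placeholder UID
    controller: true
    blockOwnerDeletion: true   # blocks foreground deletion of the owner until this Pod is gone
```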

View File

@ -3,11 +3,6 @@ assignees:
- erictune
- soltysh
title: Jobs - Run to Completion
redirect_from:
- "/docs/concepts/jobs/run-to-completion-finite-workloads/"
- "/docs/concepts/jobs/run-to-completion-finite-workloads.html"
- "/docs/user-guide/jobs/"
- "/docs/user-guide/jobs.html"
---
* TOC

View File

@ -8,13 +8,6 @@ assignees:
- kow3ns
- smarterclayton
title: PetSets
redirect_from:
- "/docs/concepts/abstractions/controllers/petsets/"
- "/docs/concepts/abstractions/controllers/petsets.html"
- "/docs/user-guide/petset/bootstrapping/"
- "/docs/user-guide/petset/bootstrapping/index.html"
- "/docs/user-guide/petset/"
- "/docs/user-guide/petset.html"
---
__Warning:__ Starting in Kubernetes version 1.5, PetSet has been renamed to [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets). To use (or continue to use) PetSet in Kubernetes 1.5, you _must_ [migrate](/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set/) your existing PetSets to StatefulSets. For information on working with StatefulSet, see the tutorial on [how to run replicated stateful applications](/docs/tutorials/stateful-application/run-replicated-stateful-application).

View File

@ -4,9 +4,6 @@ assignees:
- bprashanth
- madhusudancs
title: Replica Sets
redirect_from:
- "/docs/user-guide/replicasets/"
- "/docs/user-guide/replicasets.html"
---
* TOC

View File

@ -3,9 +3,6 @@ assignees:
- bprashanth
- janetkuo
title: Replication Controller
redirect_from:
- "/docs/user-guide/replication-controller/"
- "/docs/user-guide/replication-controller/index.html"
---
* TOC
@ -194,7 +191,7 @@ Ideally, the rolling update controller would take application readiness into acc
The two ReplicationControllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates.
Rolling update is implemented in the client tool
[`kubectl rolling-update`](/docs/user-guide/kubectl/{{page.version}}/#rolling-update). Visit [`kubectl rolling-update` task](/docs/tasks/run-application/rolling-update-replication-controller/) for more concrete examples.
### Multiple release tracks
@ -240,7 +237,7 @@ Note that we recommend using Deployments instead of directly using Replica Sets,
### Deployment (Recommended)
[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is a higher-level API object that updates its underlying Replica Sets and their Pods
in a similar fashion as `kubectl rolling-update`. Deployments are recommended if you want this rolling update functionality,
because unlike `kubectl rolling-update`, they are declarative, server-side, and have additional features.
### Bare Pods

View File

@ -7,14 +7,11 @@ assignees:
- kow3ns
- smarterclayton
title: StatefulSets
redirect_from:
- "/docs/concepts/abstractions/controllers/statefulsets/"
- "/docs/concepts/abstractions/controllers/statefulsets.html"
---
{% capture overview %}
**StatefulSets are a beta feature in 1.7. This feature replaces the
PetSets feature from 1.4. Users of PetSets are referred to the 1.5
[Upgrade Guide](/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set/)
for further information on how to upgrade existing PetSets to StatefulSets.**
@ -26,7 +23,7 @@ guarantees about the ordering of deployment and scaling.
## Using StatefulSets
StatefulSets are valuable for applications that require one or more of the
following.
* Stable, unique network identifiers.
@ -36,10 +33,10 @@ following.
* Ordered, automated rolling updates.
In the above, stable is synonymous with persistence across Pod (re)scheduling.
If an application doesn't require any stable identifiers or ordered deployment,
deletion, or scaling, you should deploy your application with a controller that
provides a set of stateless replicas. Controllers such as
[Deployment](/docs/concepts/workloads/controllers/deployment/) or
[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) may be better suited to your stateless needs.
## Limitations
@ -50,11 +47,11 @@ provides a set of stateless replicas. Controllers such as
* StatefulSets currently require a [Headless Service](/docs/concepts/services-networking/service/#headless-services) to be responsible for the network identity of the Pods. You are responsible for creating this Service.
## Components
The example below demonstrates the components of a StatefulSet.
* A Headless Service, named nginx, is used to control the network domain.
* The StatefulSet, named web, has a Spec that indicates that 3 replicas of the nginx container will be launched in unique Pods.
* The volumeClaimTemplates will provide stable storage using [PersistentVolumes](/docs/concepts/storage/volumes/) provisioned by a
PersistentVolume Provisioner.
```yaml
@ -107,30 +104,30 @@ spec:
```
## Pod Identity
StatefulSet Pods have a unique identity that is comprised of an ordinal, a
stable network identity, and stable storage. The identity sticks to the Pod,
regardless of which node it's (re)scheduled on.
### Ordinal Index
For a StatefulSet with N replicas, each Pod in the StatefulSet will be
assigned an integer ordinal, in the range [0,N), that is unique over the Set.
### Stable Network ID
Each Pod in a StatefulSet derives its hostname from the name of the StatefulSet
and the ordinal of the Pod. The pattern for the constructed hostname
is `$(statefulset name)-$(ordinal)`. The example above will create three Pods
named `web-0,web-1,web-2`.
A StatefulSet can use a [Headless Service](/docs/concepts/services-networking/service/#headless-services)
to control the domain of its Pods. The domain managed by this Service takes the form:
`$(service name).$(namespace).svc.cluster.local`, where "cluster.local"
is the [cluster domain](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md).
As each Pod is created, it gets a matching DNS subdomain, taking the form:
`$(podname).$(governing service domain)`, where the governing service is defined
by the `serviceName` field on the StatefulSet.
Here are some examples of choices for Cluster Domain, Service name,
StatefulSet name, and how that affects the DNS names for the StatefulSet's Pods.
Cluster Domain | Service (ns/name) | StatefulSet (ns/name) | StatefulSet Domain | Pod DNS | Pod Hostname |
@ -139,96 +136,96 @@ Cluster Domain | Service (ns/name) | StatefulSet (ns/name) | StatefulSet Domain
cluster.local | foo/nginx | foo/web | nginx.foo.svc.cluster.local | web-{0..N-1}.nginx.foo.svc.cluster.local | web-{0..N-1} |
kube.local | foo/nginx | foo/web | nginx.foo.svc.kube.local | web-{0..N-1}.nginx.foo.svc.kube.local | web-{0..N-1} |
Note that Cluster Domain will be set to `cluster.local` unless
[otherwise configured](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md).
### Stable Storage
Kubernetes creates one [PersistentVolume](/docs/concepts/storage/volumes/) for each
VolumeClaimTemplate. In the nginx example above, each Pod will receive a single PersistentVolume
with a storage class of `anything` and 1 GiB of provisioned storage. When a Pod is (re)scheduled
onto a node, its `volumeMounts` mount the PersistentVolumes associated with its
PersistentVolume Claims. Note that the PersistentVolumes associated with the
Pods' PersistentVolume Claims are not deleted when the Pods or StatefulSet are deleted.
This must be done manually.
## Deployment and Scaling Guarantees
* For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.
* When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.
* Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready.
* Before a Pod is terminated, all of its successors must be completely shutdown.
The StatefulSet should not specify a `pod.Spec.TerminationGracePeriodSeconds` of 0. This practice is unsafe and strongly discouraged. For further explanation, please refer to [force deleting StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/).
When the nginx example above is created, three Pods will be deployed in the order
web-0, web-1, web-2. web-1 will not be deployed before web-0 is
[Running and Ready](/docs/user-guide/pod-states), and web-2 will not be deployed until
web-1 is Running and Ready. If web-0 should fail, after web-1 is Running and Ready, but before
web-2 is launched, web-2 will not be launched until web-0 is successfully relaunched and
becomes Running and Ready.
If a user were to scale the deployed example by patching the StatefulSet such that
`replicas=1`, web-2 would be terminated first. web-1 would not be terminated until web-2
is fully shutdown and deleted. If web-0 were to fail after web-2 has been terminated and
is completely shutdown, but prior to web-1's termination, web-1 would not be terminated
until web-0 is Running and Ready.
### Pod Management Policies
In Kubernetes 1.7 and later, StatefulSet allows you to relax its ordering guarantees while
preserving its uniqueness and identity guarantees via its `.spec.podManagementPolicy` field.
#### OrderedReady Pod Management
`OrderedReady` pod management is the default for StatefulSets. It implements the behavior
described [above](#deployment-and-scaling-guarantees).
#### Parallel Pod Management
`Parallel` pod management tells the StatefulSet controller to launch or
terminate all Pods in parallel, and to not wait for Pods to become Running
and Ready or completely terminated prior to launching or terminating another
Pod.
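A sketch of opting a StatefulSet into this behavior, reusing the nginx example above (only the policy field is new; the image is the one from that example):

```yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx
  replicas: 3
  podManagementPolicy: Parallel   # do not wait for ordering when launching or terminating Pods
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
```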
## Update Strategies
In Kubernetes 1.7 and later, StatefulSet's `.spec.updateStrategy` field allows you to configure
and disable automated rolling updates for containers, labels, resource request/limits, and
annotations for the Pods in a StatefulSet.
### On Delete
The `OnDelete` update strategy implements the legacy (1.6 and prior) behavior. It is the default
strategy when `spec.updateStrategy` is left unspecified. When a StatefulSet's
`.spec.updateStrategy.type` is set to `OnDelete`, the StatefulSet controller will not automatically
update the Pods in a StatefulSet. Users must manually delete Pods to cause the controller to
create new Pods that reflect modifications made to a StatefulSet's `.spec.template`.
### Rolling Updates
The `RollingUpdate` update strategy implements automated, rolling update for the Pods in a
StatefulSet. When a StatefulSet's `.spec.updateStrategy.type` is set to `RollingUpdate`, the
StatefulSet controller will delete and recreate each Pod in the StatefulSet. It will proceed
in the same order as Pod termination (from the largest ordinal to the smallest), updating
each Pod one at a time. It will wait until an updated Pod is Running and Ready prior to
updating its predecessor.
#### Partitions
The `RollingUpdate` update strategy can be partitioned by specifying a
`.spec.updateStrategy.rollingUpdate.partition`. If a partition is specified, all Pods with an
ordinal that is greater than or equal to the partition will be updated when the StatefulSet's
`.spec.template` is updated. All Pods with an ordinal that is less than the partition will not
be updated, and, even if they are deleted, they will be recreated at the previous version. If a
StatefulSet's `.spec.updateStrategy.rollingUpdate.partition` is greater than its `.spec.replicas`,
updates to its `.spec.template` will not be propagated to its Pods.
In most cases you will not need to use a partition, but they are useful if you want to stage an
update, roll out a canary, or perform a phased roll out.
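For example, a staged (canary-style) roll out of the web StatefulSet above could be sketched with a partition of 2, so that only `web-2` picks up template changes (the values are illustrative):

```yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx
  replicas: 3
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2      # only Pods with ordinal >= 2 are updated on template changes
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
```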
{% endcapture %}
{% capture whatsnext %}
* Follow an example of [deploying a stateful application](/docs/tutorials/stateful-application/basic-stateful-set).
{% endcapture %}
{% include templates/concept.md %}

View File

@ -4,11 +4,6 @@ assignees:
- foxish
- davidopp
title: Disruptions
redirect_from:
- "/docs/admin/disruptions/"
- "/docs/admin/disruptions.html"
- "/docs/tasks/configure-pod-container/configure-pod-disruption-budget/"
- "/docs/tasks/administer-cluster/configure-pod-disruption-budget/"
---
{% capture overview %}

View File

@ -2,11 +2,6 @@
assignees:
- erictune
title: Init Containers
redirect_from:
- "/docs/concepts/abstractions/init-containers/"
- "/docs/concepts/abstractions/init-containers.html"
- "/docs/user-guide/pods/init-container/"
- "/docs/user-guide/pods/init-container.html"
---
{% capture overview %}

View File

@ -1,8 +1,5 @@
---
title: Pod Lifecycle
redirect_from:
- "/docs/user-guide/pod-states/"
- "/docs/user-guide/pod-states.html"
---
{% capture overview %}

View File

@ -2,11 +2,6 @@
assignees:
- erictune
title: Pod Overview
redirect_from:
- "/docs/concepts/abstractions/pod/"
- "/docs/concepts/abstractions/pod.html"
- "/docs/user-guide/pod-templates/"
- "/docs/user-guide/pod-templates.html"
---
{% capture overview %}
@ -64,7 +59,7 @@ Pods do not, by themselves, self-heal. If a Pod is scheduled to a Node that fail
### Pods and Controllers
A Controller can create and manage multiple Pods for you, handling replication and rollout and providing self-healing capabilities at cluster scope. For example, if a Node fails, the Controller might automatically replace the Pod by scheduling an identical replacement on a different Node.
Some examples of Controllers that contain one or more pods include:

View File

@ -1,9 +1,6 @@
---
assignees:
title: Pods
redirect_from:
- "/docs/user-guide/pods/index/"
- "/docs/user-guide/pods/index.html"
---
* TOC

View File

@ -1,12 +1,9 @@
---
title: Kubernetes on Ubuntu
redirect_from:
- "/docs/getting-started-guides/ubuntu/calico/"
- "/docs/getting-started-guides/ubuntu/calico.html"
---
{% capture overview %}
There are multiple ways to run a Kubernetes cluster with Ubuntu. These pages explain how to deploy Kubernetes on Ubuntu on multiple public and private clouds, as well as bare metal.
{% endcapture %}
{% capture body %}
@ -20,7 +17,7 @@ Supports AWS, GCE, Azure, Joyent, OpenStack, VMWare, Bare Metal and localhost de
[conjure-up](http://conjure-up.io/) provides the quickest way to deploy Kubernetes on Ubuntu for multiple clouds and bare metal. It provides a user-friendly UI that prompts you for cloud credentials and configuration options
Available for Ubuntu 16.04 and newer:
```
sudo snap install conjure-up --classic
@ -37,7 +34,7 @@ conjure-up kubernetes
### Operational Guides
These are more in-depth guides for users choosing to run Kubernetes in production:
- [Installation](/docs/getting-started-guides/ubuntu/installation)
- [Validation](/docs/getting-started-guides/ubuntu/validation)

View File

@ -1,7 +1,4 @@
---
redirect_from:
- "/docs/templatedemos/"
- "/docs/templatedemos.html"
title: Using Page Templates
---

View File

@ -3,11 +3,6 @@ assignees:
- bgrant0607
- thockin
title: Kubernetes Documentation
redirect_from:
- "/docs/"
- "/docs/index.html"
- "/docs/user-guide/"
- "/docs/user-guide/index.html"
---
Kubernetes documentation can help you set up Kubernetes, learn about the system, or get your applications and workloads running on Kubernetes. To learn the basics of what Kubernetes is and how it works, read "[What is Kubernetes](/docs/concepts/overview/what-is-kubernetes/)".

View File

@ -1,8 +1,5 @@
---
title: Federation API Reference
redirect_from:
- "/docs/federation/api-reference/"
- "/docs/federation/api-reference/index.md"
---
# API Reference

View File

@ -5,9 +5,6 @@ assignees:
- errordeveloper
- jbeda
title: Using kubeadm to Create a Cluster
redirect_from:
- "/docs/getting-started-guides/kubeadm/"
- "/docs/getting-started-guides/kubeadm.html"
---
{% capture overview %}
@ -359,7 +356,7 @@ kubectl --kubeconfig ./admin.conf get nodes
**Note:** If you are using GCE, instances disable ssh access for root by default.
If that's the case you can log in to the machine, copy the file someplace that
can be accessed and then use
[`gcloud compute copy-files`](https://cloud.google.com/sdk/gcloud/reference/compute/copy-files)
### (Optional) Proxying API Server to localhost

View File

@ -4,9 +4,6 @@ assignees:
- erictune
- mikedanese
title: Picking the Right Solution
redirect_from:
- "/docs/getting-started-guides/index/"
- "/docs/getting-started-guides/index.html"
---
Kubernetes can run on various platforms: from your laptop, to VMs on a cloud provider, to rack of

View File

@ -1,10 +1,5 @@
---
title: Accessing Clusters
redirect_from:
- "/docs/user-guide/accessing-the-cluster/"
- "/docs/user-guide/accessing-the-cluster.html"
- "/docs/concepts/cluster-administration/access-cluster/"
- "/docs/concepts/cluster-administration/access-cluster.html"
---
* TOC

View File

@ -3,11 +3,6 @@ assignees:
- mikedanese
- thockin
title: Authenticate Across Clusters with kubeconfig
redirect_from:
- "/docs/user-guide/kubeconfig-file/"
- "/docs/user-guide/kubeconfig-file.html"
- "/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/"
- "/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig.html"
---
Authentication in Kubernetes can differ for different individuals.

View File

@ -1,10 +1,5 @@
---
title: Communicate Between Containers in the Same Pod Using a Shared Volume
redirect_from:
- "/docs/user-guide/pods/multi-container/"
- "/docs/user-guide/pods/multi-container.html"
- "docs/tasks/configure-pod-container/communicate-containers-same-pod/"
- "docs/tasks/configure-pod-container/communicate-containers-same-pod.html"
---
{% capture overview %}

View File

@ -3,9 +3,6 @@ assignees:
- bprashanth
- davidopp
title: Configure Your Cloud Provider's Firewalls
redirect_from:
- "/docs/user-guide/services-firewalls/"
- "/docs/user-guide/services-firewalls.html"
---
Many cloud providers (e.g. Google Compute Engine) define firewalls that help prevent inadvertent

View File

@ -1,10 +1,5 @@
---
title: Connect a Front End to a Back End Using a Service
redirect_from:
- "/docs/user-guide/services/operations/"
- "/docs/user-guide/services/operations.html"
- "/docs/tutorials/connecting-apps/connecting-frontend-backend/"
- "/docs/tutorials/connecting-apps/connecting-frontend-backend.html"
---
{% capture overview %}

View File

@ -1,8 +1,5 @@
---
title: Create an External Load Balancer
redirect_from:
- "/docs/user-guide/load-balancer/"
- "/docs/user-guide/load-balancer.html"
---

View File

@ -1,8 +1,5 @@
---
title: List All Container Images Running in a Cluster
redirect_from:
- "/docs/tasks/kubectl/list-all-running-container-images/"
- "/docs/tasks/kubectl/list-all-running-container-images.html"
---
{% capture overview %}

View File

@ -1,8 +1,5 @@
---
title: Use Port Forwarding to Access Applications in a Cluster
redirect_from:
- "/docs/user-guide/connecting-to-applications-port-forward/"
- "/docs/user-guide/connecting-to-applications-port-forward.html"
---
{% capture overview %}

View File

@ -1,10 +1,5 @@
---
title: Use a Service to Access an Application in a Cluster
redirect_from:
- "/docs/user-guide/quick-start/"
- "/docs/user-guide/quick-start.html"
- "/docs/tutorials/stateless-application/expose-external-ip-address-service/"
- "/docs/tutorials/stateless-application/expose-external-ip-address-service.html"
---
{% capture overview %}

View File

@ -4,11 +4,6 @@ assignees:
- mikedanese
- rf232
title: Web UI (Dashboard)
redirect_from:
- "/docs/user-guide/ui/"
- "/docs/user-guide/ui.html"
- "/docs/tasks/web-ui-dashboard/"
- "/docs/tasks/web-ui-dashboard.html"
---
Dashboard is a web-based Kubernetes user interface. You can use Dashboard to deploy containerized applications to a Kubernetes cluster, troubleshoot your containerized application, and manage the cluster itself along with its attendant resources. You can use Dashboard to get an overview of applications running on your cluster, as well as for creating or modifying individual Kubernetes resources (such as Deployments, Jobs, DaemonSets, etc). For example, you can scale a Deployment, initiate a rolling update, restart a pod or deploy new applications using a deploy wizard.

View File

@ -3,11 +3,6 @@ assignees:
- enisoc
- IanLewis
title: Extend the Kubernetes API with ThirdPartyResources
redirect_from:
- "/docs/user-guide/thirdpartyresources/"
- "/docs/user-guide/thirdpartyresources.html"
- "/docs/concepts/ecosystem/thirdpartyresource/"
- "/docs/concepts/ecosystem/thirdpartyresource.html"
---
{% assign for_k8s_version="1.7" %}{% include feature-state-deprecated.md %}

View File

@ -1,8 +1,5 @@
---
title: Use an HTTP Proxy to Access the Kubernetes API
redirect_from:
- "/docs/user-guide/connecting-to-applications-proxy/"
- "/docs/user-guide/connecting-to-applications-proxy.html"
---
{% capture overview %}

View File

@ -1,9 +1,5 @@
---
title: Access Clusters Using the Kubernetes API
redirect_from:
- "/docs/user-guide/accessing-the-cluster/"
- "/docs/user-guide/accessing-the-cluster.html"
- "/docs/concepts/cluster-administration/access-cluster/"
---
{% capture overview %}
@ -43,10 +39,10 @@ kubectl. Complete documentation is found in the [kubectl manual](/docs/user-gui
Kubectl handles locating and authenticating to the apiserver. If you want to directly access the REST API with an http client like
`curl` or `wget`, or a browser, there are multiple ways you can locate and authenticate against the apiserver:
1. Run kubectl in proxy mode (recommended). This method is recommended, since it uses the stored apiserver location and verifies the identity of the apiserver using a self-signed cert. No man-in-the-middle (MITM) attack is possible using this method.
1. Alternatively, you can provide the location and credentials directly to the http client. This works for client code that is confused by proxies. To protect against man-in-the-middle attacks, you'll need to import a root cert into your browser.
Using the Go or Python client libraries provides access equivalent to kubectl in proxy mode.
#### Using kubectl proxy

View File

@ -1,12 +1,9 @@
---
title: Access Services Running on Clusters
redirect_from:
- "/docs/user-guide/accessing-the-cluster/"
- "/docs/user-guide/accessing-the-cluster.html"
---
{% capture overview %}
This page shows how to connect to services running on the Kubernetes cluster.
{% endcapture %}
{% capture prerequisites %}

View File

@ -3,11 +3,6 @@ assignees:
- derekwaynecarr
- janetkuo
title: Apply Resource Quotas and Limits
redirect_from:
- "/docs/admin/resourcequota/walkthrough/"
- "/docs/admin/resourcequota/walkthrough.html"
- "/docs/tasks/configure-pod-container/apply-resource-quota-limit/"
- "/docs/tasks/configure-pod-container/apply-resource-quota-limit.html"
---
{% capture overview %}
@ -359,7 +354,7 @@ the 2 pods we created in the `not-best-effort-nginx` quota.
Scopes provide a mechanism to subdivide the set of resources that are tracked by
any quota document to allow greater flexibility in how operators deploy and track resource
consumption.
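For instance, a quota that only tracks `BestEffort` pods could be sketched as follows (the name and limit are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: best-effort-quota    # illustrative name
spec:
  hard:
    pods: "10"               # cap on pods that match the scope
  scopes:
  - BestEffort               # only pods with BestEffort QoS count against this quota
```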
In addition to `BestEffort` and `NotBestEffort` scopes, there are scopes to restrict
long-running versus time-bound pods. The `Terminating` scope will match any pod

View File

@ -2,11 +2,6 @@
assignees:
- caseydavenport
title: Use Calico for NetworkPolicy
redirect_from:
- "/docs/getting-started-guides/network-policy/calico/"
- "/docs/getting-started-guides/network-policy/calico.html"
- "/docs/tasks/configure-pod-container/calico-network-policy/"
- "/docs/tasks/configure-pod-container/calico-network-policy.html"
---
{% capture overview %}
@ -14,7 +9,7 @@ This page shows how to use Calico for NetworkPolicy.
{% endcapture %}
{% capture prerequisites %}
* Install Calico for Kubernetes.
{% endcapture %}
{% capture steps %}
@ -34,7 +29,7 @@ See the [Calico documentation](http://docs.projectcalico.org/) for more options
{% capture discussion %}
## Understanding Calico components
Deploying a cluster with Calico adds Pods that support Kubernetes NetworkPolicy. These Pods run in the `kube-system` Namespace.
To see this list of Pods run:

View File

@ -3,11 +3,6 @@ assignees:
- lavalamp
- thockin
title: Cluster Management
redirect_from:
- "/docs/admin/cluster-management/"
- "/docs/admin/cluster-management.html"
- "/docs/concepts/cluster-administration/cluster-management/"
- "/docs/concepts/cluster-administration/cluster-management.html"
---
* TOC

View File

@ -3,11 +3,6 @@ assignees:
- davidopp
- madhusudancs
title: Configure Multiple Schedulers
redirect_from:
- "/docs/admin/multiple-schedulers/"
- "/docs/admin/multiple-schedulers.html"
- "/docs/tutorials/clusters/multiple-schedulers/"
- "/docs/tutorials/clusters/multiple-schedulers.html"
---
Kubernetes ships with a default scheduler that is described [here](/docs/admin/kube-scheduler/).

View File

@ -3,17 +3,6 @@ assignees:
- mml
- wojtek-t
title: Operating etcd clusters for Kubernetes
redirect_from:
- "/docs/concepts/storage/etcd-store-api-object/"
- "/docs/concepts/storage/etcd-store-api-object.html"
- "/docs/admin/etcd/"
- "/docs/admin/etcd.html"
- "/docs/admin/etcd_upgrade/"
- "/docs/admin/etcd_upgrade.html"
- "/docs/concepts/cluster-administration/configure-etcd/"
- "/docs/concepts/cluster-administration/configure-etcd.html"
- "/docs/concepts/cluster-administration/etcd-upgrade/"
- "/docs/concepts/cluster-administration/etcd-upgrade.html"
---
etcd is a strong, consistent, and highly-available key value store which Kubernetes uses for persistent storage of all of its API objects. This documentation provides specific instruction on operating, upgrading, and rolling back etcd clusters for Kubernetes. For in-depth information on etcd, see [etcd documentation](https://github.com/coreos/etcd/blob/master/Documentation/docs.md).

View File

@ -3,11 +3,6 @@ assignees:
- derekwaynecarr
- janetkuo
title: Set Pod CPU and Memory Limits
redirect_from:
- "/docs/admin/limitrange/"
- "/docs/admin/limitrange/index.html"
- "/docs/tasks/configure-pod-container/limit-range/"
- "/docs/tasks/configure-pod-container/limit-range.html"
---
{% capture overview %}
@ -39,7 +34,7 @@ $ kubectl create namespace limit-example
namespace "limit-example" created
```
Note that `kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands:
```shell
$ kubectl get namespaces
@ -112,7 +107,7 @@ NAME READY STATUS RESTARTS AGE
nginx-2040093540-s8vzu 1/1 Running 0 11s
```
Let's print this Pod with yaml output format (using `-o yaml` flag), and then `grep` the `resources` field. Note that your pod name will be different.
```shell
$ kubectl get pods nginx-2040093540-s8vzu --namespace=limit-example -o yaml | grep resources -C 8
@ -151,7 +146,7 @@ $ kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/valid-pod.
pod "valid-pod" created
```
Now look at the Pod's resources field:
```shell
$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources

View File

@ -3,14 +3,9 @@ assignees:
- caseydavenport
- danwinship
title: Declare Network Policy
redirect_from:
- "/docs/getting-started-guides/network-policy/walkthrough/"
- "/docs/getting-started-guides/network-policy/walkthrough.html"
- "/docs/tasks/configure-pod-container/declare-network-policy/"
- "/docs/tasks/configure-pod-container/declare-network-policy.html"
---
{% capture overview %}
This document helps you get started using the Kubernetes [NetworkPolicy API](/docs/user-guide/network-policies) to declare network policies that govern how pods communicate with each other.
{% endcapture %}
{% capture prerequisites %}
@ -28,16 +23,16 @@ You'll need to have a Kubernetes cluster in place, with network policy support.
## Create an `nginx` deployment and expose it via a service
To see how Kubernetes network policy works, start off by creating an `nginx` deployment and exposing it via a service.
```console
$ kubectl run nginx --image=nginx --replicas=2
deployment "nginx" created
$ kubectl expose deployment nginx --port=80
service "nginx" exposed
```
This runs two `nginx` pods in the default namespace, and exposes them through a service called `nginx`.
```console
$ kubectl get svc,pod
@ -104,7 +99,7 @@ Waiting for pod default/busybox-472357175-y0m47 to be running, status is Pending
Hit enter for command prompt
/ # wget --spider --timeout=1 nginx
Connecting to nginx (10.100.0.16:80)
wget: download timed out
/ #

View File

@ -4,11 +4,6 @@ assignees:
- filipg
- piosz
title: Guaranteed Scheduling For Critical Add-On Pods
redirect_from:
- "/docs/admin/rescheduler/"
- "/docs/admin/rescheduler.html"
- "/docs/concepts/cluster-administration/guaranteed-scheduling-critical-addon-pods/"
- "/docs/concepts/cluster-administration/guaranteed-scheduling-critical-addon-pods.html"
---
* TOC

View File

@ -2,9 +2,6 @@
assignees:
- jszczepkowski
title: Set up High-Availability Kubernetes Masters
redirect_from:
- "/docs/admin/ha-master-gce/"
- "/docs/admin/ha-master-gce.html"
---
* TOC
@ -65,7 +62,7 @@ You can remove a master replica from an HA cluster by using a `kube-down` script
* `KUBE_DELETE_NODES=false` - to restrain deletion of kubelets.
* `KUBE_GCE_ZONE=zone` - the zone from where master replica will be removed.
* `KUBE_REPLICA_NAME=replica_name` - (optional) the name of master replica to remove.
If empty: any replica from the given zone will be removed.
@ -105,7 +102,7 @@ A two-replica cluster is thus inferior, in terms of HA, to a single replica clus
* When you add a master replica, cluster state (etcd) is copied to a new instance.
If the cluster is large, it may take a long time to duplicate its state.
This operation may be sped up by migrating etcd data directory, as described [here](https://coreos.com/etcd/docs/latest/admin_guide.html#member-migration)
(we are considering adding support for etcd data dir migration in future).
## Implementation notes

View File

@ -2,9 +2,6 @@
assignees:
- pipejakob
title: Upgrading kubeadm clusters from 1.6 to 1.7
redirect_from:
- "/docs/admin/kubeadm-upgrade-1-7/"
- "/docs/admin/kubeadm-upgrade-1-7.html"
---
{% capture overview %}
@ -95,4 +92,4 @@ You need to have a Kubernetes cluster running version 1.6.x.
{% endcapture %}
{% include templates/task.md %}

View File

@ -1,16 +1,13 @@
---
title: Limit Storage Consumption
redirect_from:
- "/docs/admin/resourcequota/limitstorageconsumption/"
- "/docs/admin/resourcequota/limitstorageconsumption.html"
---
{% capture overview %}
This example demonstrates an easy way to limit the amount of storage consumed in a namespace.
The following resources are used in the demonstration: [ResourceQuota](/docs/concepts/policy/resource-quotas/),
[LimitRange](/docs/tasks/configure-pod-container/limit-range/),
and [PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/).
{% endcapture %}
@ -56,17 +53,17 @@ spec:
storage: 1Gi
```
Minimum storage requests are used when the underlying storage provider requires certain minimums. For example,
AWS EBS volumes have a 1Gi minimum requirement.
## StorageQuota to limit PVC count and cumulative storage capacity
Admins can limit the number of PVCs in a namespace as well as the cumulative capacity of those PVCs. New PVCs that exceed
either maximum value will be rejected.
In this example, a 6th PVC in the namespace would be rejected because it exceeds the maximum count of 5. Alternatively,
a 5Gi maximum quota when combined with the 2Gi max limit above, cannot have 3 PVCs where each has 2Gi. That would be 6Gi requested
for a namespace capped at 5Gi.
```
apiVersion: v1
@ -83,10 +80,10 @@ spec:
{% capture discussion %}
## Summary
A limit range can put a ceiling on how much storage is requested while a resource quota can effectively cap the storage
consumed by a namespace through claim counts and cumulative storage capacity. This allows a cluster-admin to plan their
cluster's storage budget without risk of any one project going over their allotment.
{% endcapture %}

View File

@ -3,9 +3,6 @@ assignees:
- derekwaynecarr
- janetkuo
title: Namespaces Walkthrough
redirect_from:
- "/docs/admin/namespaces/walkthrough/"
- "/docs/admin/namespaces/walkthrough.html"
---
Kubernetes _namespaces_ help different projects, teams, or customers to share a Kubernetes cluster.
@ -153,9 +150,9 @@ Let's create some contents.
```shell
$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
```
We have just created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that just serves the hostname.
Note that `kubectl run` creates deployments only on Kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead.
If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/v1.6/#run) for more details.
```shell
$ kubectl get deployment

View File

@ -3,9 +3,6 @@ assignees:
- derekwaynecarr
- janetkuo
title: Share a Cluster with Namespaces
redirect_from:
- "/docs/admin/namespaces/"
- "/docs/admin/namespaces/index.html"
---
A Namespace is a mechanism to partition resources created by users into

View File

@ -4,11 +4,6 @@ assignees:
- vishh
- timstclair
title: Configure Out Of Resource Handling
redirect_from:
- "/docs/admin/out-of-resource/"
- "/docs/admin/out-of-resource.html"
- "/docs/concepts/cluster-administration/out-of-resource/"
- "/docs/concepts/cluster-administration/out-of-resource.html"
---
* TOC

View File

@ -4,9 +4,6 @@ assignees:
- derekwaynecarr
- dashpole
title: Reserve Compute Resources for System Daemons
redirect_from:
- "/docs/admin/node-allocatable/"
- "/docs/admin/node-allocatable.html"
---
* TOC

View File

@ -2,11 +2,6 @@
assignees:
- chrismarino
title: Romana for NetworkPolicy
redirect_from:
- "/docs/getting-started-guides/network-policy/romana/"
- "/docs/getting-started-guides/network-policy/romana.html"
- "/docs/tasks/configure-pod-container/romana-network-policy/"
- "/docs/tasks/configure-pod-container/romana-network-policy.html"
---
{% capture overview %}
@ -17,7 +12,7 @@ This page shows how to use Romana for NetworkPolicy.
{% capture prerequisites %}
Complete steps 1, 2, and 3 of the [kubeadm getting started guide](/docs/getting-started-guides/kubeadm/).
{% endcapture %}
@ -25,16 +20,16 @@ Complete steps 1, 2, and 3 of the [kubeadm getting started guide](/docs/getting
## Installing Romana with kubeadm
Follow the [containerized installation guide](https://github.com/romana/romana/tree/master/containerize) for kubeadm.
## Applying network policies
To apply network policies use one of the following:
* [Romana network policies](https://github.com/romana/romana/wiki/Romana-policies).
* [Example of Romana network policy](https://github.com/romana/core/tree/master/policy).
* The NetworkPolicy API.
{% endcapture %}
{% capture whatsnext %}

View File

@ -1,15 +1,12 @@
---
assignees:
- thockin
title: Build and Run cloud-controller-manager
redirect_from:
- "/docs/getting-started-guides/running-cloud-controller/"
- "/docs/getting-started-guides/running-cloud-controller.html"
---
Kubernetes version 1.6 contains a new binary called `cloud-controller-manager`. `cloud-controller-manager` is a daemon that embeds cloud-specific control loops in Kubernetes. These cloud-specific control loops were originally in the kube-controller-manager. However, cloud providers move at a different pace and schedule compared to the Kubernetes project, and abstracting the provider-specific code to the `cloud-controller-manager` binary allows cloud provider vendors to evolve independently from the core Kubernetes code.
The `cloud-controller-manager` can be linked to any cloud provider that satisfies the [cloudprovider.Interface](https://git.k8s.io/kubernetes/pkg/cloudprovider/cloud.go).
In future Kubernetes releases, cloud vendors should link code that satisfies the above interface to the `cloud-controller-manager` project and compile `cloud-controller-manager` for their own clouds. Cloud providers would also be responsible for maintaining and evolving their code.
* TOC

View File

@ -3,9 +3,6 @@ assignees:
- mikedanese
- thockin
title: Share Cluster Access with kubeconfig
redirect_from:
- "/docs/user-guide/sharing-clusters/"
- "/docs/user-guide/sharing-clusters.html"
---
Client access to a running Kubernetes cluster can be shared by copying

View File

@ -2,11 +2,6 @@
assignees:
- jsafrane
title: Static Pods
redirect_from:
- "/docs/admin/static-pods/"
- "/docs/admin/static-pods.html"
- "/docs/concepts/cluster-administration/static-pod/"
- "/docs/concepts/cluster-administration/static-pod.html"
---
**If you are running clustered Kubernetes and are using static pods to run a pod on every node, you should probably be using a [DaemonSet](/docs/concepts/workloads/controllers/daemonset/)!**

View File

@ -2,28 +2,25 @@
assignees:
- mml
title: Cluster Management Guide for Version 1.6
redirect_from:
- "/docs/admin/upgrade-1-6/"
- "/docs/admin/upgrade-1-6.html"
---
* TOC
{:toc}
This document outlines the potentially disruptive changes that exist in the 1.6 release cycle. Operators, administrators, and developers should
take note of the changes below in order to maintain continuity across their upgrade process.
## Cluster defaults set to etcd 3
In the 1.6 release cycle, the default backend storage layer has been upgraded to fully leverage [etcd 3 capabilities](https://coreos.com/blog/etcd3-a-new-etcd.html) by default.
For new clusters, there is nothing an operator will need to do, it should "just work". However, if you are upgrading from a 1.5 cluster, care should be taken to ensure
continuity.
It is possible to maintain v2 compatibility mode while running etcd 3 for an interim period of time. To do this, you will simply need to update an argument passed to your apiserver during
startup:
```
$ kube-apiserver --storage-backend='etcd2' $(EXISTING_ARGS)
```
However, for long-term maintenance of the cluster, we recommend that the operator plan an outage window in order to perform a [v2->v3 data upgrade](https://coreos.com/etcd/docs/latest/upgrades/upgrade_3_0.html).
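As a rough sketch of what that outage window might involve (the data directory path and arguments below are assumptions; follow the linked etcd documentation for the authoritative procedure), the upgrade amounts to stopping the apiserver, migrating the data, and restarting against the etcd3 backend:

```shell
# Hypothetical outline only; stop kube-apiserver first, and treat the
# data directory path as a placeholder for your own etcd data directory.
ETCDCTL_API=3 etcdctl migrate --data-dir=/var/lib/etcd
kube-apiserver --storage-backend='etcd3' $(EXISTING_ARGS)
```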

View File

@ -2,11 +2,6 @@
assignees:
- bboreham
title: Weave Net for NetworkPolicy
redirect_from:
- "/docs/getting-started-guides/network-policy/weave/"
- "/docs/getting-started-guides/network-policy/weave.html"
- "/docs/tasks/configure-pod-container/weave-network-policy/"
- "/docs/tasks/configure-pod-container/weave-network-policy.html"
---
{% capture overview %}
@ -17,13 +12,13 @@ This page shows how to use Weave Net for NetworkPolicy.
{% capture prerequisites %}
Complete steps 1, 2, and 3 of the [kubeadm getting started guide](/docs/getting-started-guides/kubeadm/).
{% endcapture %}
{% capture steps %}
## Installing Weave Net addon
Follow the [Integrating Kubernetes via the Addon](https://www.weave.works/docs/net/latest/kube-addon/) guide.
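In practice that guide amounts to applying the Weave Net manifest with kubectl. The URL below reflects the Weave documentation at the time of writing and may change, so prefer the command given in the linked guide:

```shell
# Apply the Weave Net addon manifest (URL taken from the Weave guide; may change).
kubectl apply -f https://git.io/weave-kube-1.6
```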

View File

@ -1,8 +1,5 @@
---
title: Federated Cluster
redirect_from:
- "/docs/user-guide/federation/cluster/"
- "/docs/user-guide/federation/cluster.html"
---
{% capture overview %}

View File

@ -1,8 +1,5 @@
---
title: Federated ConfigMap
redirect_from:
- "/docs/user-guide/federation/configmap/"
- "/docs/user-guide/federation/configmap.html"
---
{% capture overview %}
@ -75,7 +72,7 @@ the federation apiserver instead of sending it to a specific Kubernetes cluster.
For example, you can do that using kubectl by running:
```shell
kubectl --context=federation-cluster delete configmap
```
Note that at this point, deleting a Federated ConfigMap will not delete the

View File

@ -1,8 +1,5 @@
---
title: Federated DaemonSet
redirect_from:
- "/docs/user-guide/federation/daemonsets/"
- "/docs/user-guide/federation/daemonsets.html"
---
{% capture overview %}
@ -43,7 +40,7 @@ request to the Federation apiserver instead of sending it to a Kubernetes
cluster.
Once a Federated Daemonset is created, the federation control plane will create
a matching DaemonSet in all underlying Kubernetes clusters.
You can verify this by checking each of the underlying clusters, for example:
``` shell
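# Hypothetical check against one member cluster; the context name and
# DaemonSet name below are placeholders for your own cluster and object names.
kubectl --context=gce-asia-east1a get daemonset mydaemonset
```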

View File

@ -1,8 +1,5 @@
---
title: Federated Deployment
redirect_from:
- "/docs/user-guide/federation/deployment/"
- "/docs/user-guide/federation/deployment.html"
---
{% capture overview %}
@ -14,8 +11,8 @@ Deployment](/docs/concepts/workloads/controllers/deployment/) and provide the sa
Creating them in the federation control plane ensures that the desired number of
replicas exist across the registered clusters.
**As of Kubernetes version 1.5, Federated Deployment is an Alpha feature. The core
functionality of Deployment is present, but some features
(such as full rollout compatibility) are still in development.**
{% endcapture %}
@ -60,7 +57,7 @@ These Deployments in underlying clusters will match the federation Deployment
_except_ in the number of replicas and revision-related annotations.
Federation control plane ensures that the
sum of replicas in each cluster combined matches the desired number of replicas in the
Federated Deployment.
### Spreading Replicas in Underlying Clusters
@ -81,7 +78,7 @@ Deployment; however, for a Federated Deployment, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.
The federation control plane ensures that whenever the Federated Deployment is
updated, it updates the corresponding Deployments in all underlying clusters to
match it. So if the rolling update strategy is chosen, each underlying
cluster performs the rolling update independently, and `maxSurge` and `maxUnavailable`
will apply only to individual clusters. This behavior may change in the future.
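For example, a hypothetical update is sent through the federation context rather than to any one cluster (the manifest name here is a placeholder):

```shell
# Placeholder manifest name; the point is the --context=federation-cluster flag.
kubectl --context=federation-cluster apply -f my-federated-deployment.yaml
```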

View File

@ -1,8 +1,5 @@
---
title: Federated Events
redirect_from:
- "/docs/user-guide/federation/events/"
- "/docs/user-guide/federation/events.html"
---
This guide explains how to use events in the federation control plane to help with debugging.

View File

@ -1,9 +1,7 @@
---
title: Federated Ingress
redirect_from:
- "/docs/user-guide/federation/federated-ingress/"
- "/docs/user-guide/federation/federated-ingress.html"
---
{% capture overview %}
This page explains how to use Kubernetes Federated Ingress to deploy
a common HTTP(S) virtual IP load balancer across a federated service running in
@ -25,7 +23,7 @@ Federated Ingress is released as an alpha feature, and supports Google Cloud Pla
GCE and hybrid scenarios involving both) in Kubernetes v1.4. Work is under way to support other cloud
providers such as AWS, and other hybrid cloud scenarios (e.g. services
spanning private on-premise as well as public cloud Kubernetes
clusters).
You create Federated Ingresses in much the same way as traditional
[Kubernetes Ingresses](/docs/concepts/services-networking/ingress/): by making an API
@ -151,7 +149,7 @@ may take up to a few minutes).
the network traffic directed to this ingress (that is, 'Service
Endpoints' behind the service backing the Ingress), so the Federated Ingress does not yet consider these to
be healthy shards and will not direct traffic to any of these clusters.
* The federation control system
automatically reconfigures the load balancer controllers in all of the
clusters in your federation to make them consistent, and allows
them to share global load balancers. But this reconfiguration can
@ -202,7 +200,7 @@ nginx 10.63.250.98 104.199.136.89 80/TCP 9m
Federations of Kubernetes Clusters can include clusters running in
different cloud providers (for example, Google Cloud, AWS), and on-premises
(for example, on OpenStack). However, in Kubernetes v1.4, Federated Ingress is only
supported across Google Cloud clusters.
## Discovering a federated ingress
@ -301,11 +299,11 @@ Check that:
{% capture whatsnext %}
* If you need assistance, use one of the [support channels](/docs/tasks/debug-application-cluster/troubleshooting/) to seek assistance.
* For details about use cases that motivated this work, see
[Federation proposal](https://git.k8s.io/community/contributors/design-proposals/federation.md).
{% endcapture %}
{% include templates/task.md %}

View File

@ -1,8 +1,5 @@
---
title: Federated Namespaces
redirect_from:
- "/docs/user-guide/federation/namespaces/"
- "/docs/user-guide/federation/namespaces.html"
---
{% capture overview %}

Some files were not shown because too many files have changed in this diff.