removing toc shortcode. (#10720)

parent 8f2bb3d6ad
commit 04163e9a7c

@@ -18,7 +18,6 @@ Here's the architecture of a Kubernetes cluster without the cloud controller man

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -46,16 +45,16 @@ The CCM breaks away some of the functionality of Kubernetes controller manager (

In version 1.9, the CCM runs the following controllers from the preceding list:

* Node controller
* Route controller
* Service controller

Additionally, it runs another controller called the PersistentVolumeLabels controller. This controller is responsible for setting the zone and region labels on PersistentVolumes created in GCP and AWS clouds.

{{< note >}}
**Note:** Volume controller was deliberately chosen to not be a part of CCM. Due to the complexity involved and due to the existing efforts to abstract away vendor specific volume logic, it was decided that volume controller will not be moved to CCM.
{{< /note >}}

The original plan to support volumes using CCM was to use Flex volumes to support pluggable volumes. However, a competing effort known as CSI is being planned to replace Flex.

Considering these dynamics, we decided to have an intermediate stop gap measure until CSI becomes ready.

@@ -68,7 +67,7 @@ The CCM inherits its functions from components of Kubernetes that are dependent

The majority of the CCM's functions are derived from the KCM. As mentioned in the previous section, the CCM runs the following control loops:

* Node controller
* Route controller
* Service controller
* PersistentVolumeLabels controller

@@ -92,7 +91,7 @@ The Service controller is responsible for listening to service create, update, a

#### PersistentVolumeLabels controller

The PersistentVolumeLabels controller applies labels on AWS EBS/GCE PD volumes when they are created. This removes the need for users to manually set the labels on these volumes.

These labels are essential for the scheduling of pods as these volumes are constrained to work only within the region/zone that they are in. Any Pod using these volumes needs to be scheduled in the same region/zone.

@@ -100,7 +99,7 @@ The PersistentVolumeLabels controller was created specifically for the CCM; that

### 2. Kubelet

The Node controller contains the cloud-dependent functionality of the kubelet. Prior to the introduction of the CCM, the kubelet was responsible for initializing a node with cloud-specific details such as IP addresses, region/zone labels and instance type information. The introduction of the CCM has moved this initialization operation from the kubelet into the CCM.

In this new model, the kubelet initializes a node without cloud-specific information. However, it adds a taint to the newly created node that makes the node unschedulable until the CCM initializes the node with cloud-specific information. It then removes this taint.

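A minimal sketch of what this taint looks like on an uninitialized Node object (illustrative only; the taint key shown, `node.cloudprovider.kubernetes.io/uninitialized`, is the one commonly used for this purpose and is an assumption here, not an excerpt from this page):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: example-node                 # hypothetical node name
spec:
  taints:
  - key: node.cloudprovider.kubernetes.io/uninitialized
    value: "true"
    effect: NoSchedule               # keeps Pods off the node until the CCM initializes it and removes the taint
```
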
@@ -118,13 +117,13 @@ For more information about developing plugins, see [Developing Cloud Controller

## Authorization

This section breaks down the access required on various API objects by the CCM to perform its operations.

### Node Controller

The Node controller only works with Node objects. It requires full access to get, list, create, update, patch, watch, and delete Node objects.

v1/Node:

- Get
- List

@@ -136,17 +135,17 @@ v1/Node:

### Route controller

The route controller listens to Node object creation and configures routes appropriately. It requires get access to Node objects.

v1/Node:

- Get

### Service controller

The service controller listens to Service object create, update and delete events and then configures endpoints for those Services appropriately.

To access Services, it requires list and watch access. To update Services, it requires patch and update access.

To set up endpoints for the Services, it requires access to create, list, get, watch, and update.

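Taken together, these access requirements map naturally onto RBAC rules. A minimal, illustrative sketch of such rules (the role name and grouping are assumptions; the page's own complete example appears further down):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cloud-controller-manager-example   # hypothetical name
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "create", "update", "patch", "watch", "delete"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["list", "watch", "patch", "update"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["create", "get", "list", "watch", "update"]
```
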
@@ -249,7 +248,7 @@ rules:

## Vendor Implementations

The following cloud providers have implemented CCMs:

* [Digital Ocean](https://github.com/digitalocean/digitalocean-cloud-controller-manager)
* [Oracle](https://github.com/oracle/oci-cloud-controller-manager)

@@ -18,7 +18,6 @@ cloud provider).

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -67,9 +66,9 @@ The connections from the apiserver to the kubelet are used for:

* Fetching logs for pods.
* Attaching (through kubectl) to running pods.
* Providing the kubelet's port-forwarding functionality.

These connections terminate at the kubelet's HTTPS endpoint. By default,
the apiserver does not verify the kubelet's serving certificate,
which makes the connection subject to man-in-the-middle attacks, and
**unsafe** to run over untrusted and/or public networks.

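A minimal sketch of how the apiserver can be pointed at a CA bundle so that it does verify the kubelet's serving certificate (the file path is a hypothetical example):

```shell
# Illustrative flag only; the CA bundle path is an assumption.
kube-apiserver \
  --kubelet-certificate-authority=/etc/kubernetes/pki/kubelet-ca.crt \
  ...
```
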
@@ -18,7 +18,6 @@ architecture design doc for more details.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -76,18 +75,18 @@ the `Terminating` or `Unknown` state. In cases where Kubernetes cannot deduce fr
permanently left a cluster, the cluster administrator may need to delete the node object by hand. Deleting the node object from
Kubernetes causes all the Pod objects running on the node to be deleted from the apiserver, and frees up their names.

In version 1.12, the `TaintNodesByCondition` feature is promoted to beta, so the node lifecycle controller automatically creates
[taints](/docs/concepts/configuration/taint-and-toleration/) that represent conditions.
Similarly, the scheduler ignores conditions when considering a Node; instead
it looks at the Node's taints and a Pod's tolerations.

Now users can choose between the old scheduling model and a new, more flexible scheduling model.
A Pod that does not have any tolerations gets scheduled according to the old model. But a Pod that
tolerates the taints of a particular Node can be scheduled on that Node.

{{< caution >}}
**Caution:** Enabling this feature creates a small delay between the
time when a condition is observed and when a taint is created. This delay is usually less than one second, but it can increase the number of Pods that are successfully scheduled but rejected by the kubelet.
{{< /caution >}}

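A minimal sketch of what opting into the new model looks like from the Pod side, assuming a condition taint such as `node.kubernetes.io/memory-pressure` (the taint key is an assumption used for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: toleration-example             # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  - key: "node.kubernetes.io/memory-pressure"   # a taint created from a node condition
    operator: "Exists"
    effect: "NoSchedule"
```
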
### Capacity

@@ -127,7 +126,7 @@ a node from the following content:
Kubernetes creates a node object internally (the representation), and
validates the node by health checking based on the `metadata.name` field. If the node is valid -- that is, if all necessary
services are running -- it is eligible to run a pod. Otherwise, it is
ignored for any cluster activity until it becomes valid.

{{< note >}}
**Note:** Kubernetes keeps the object for the invalid node and keeps checking to see whether it becomes valid.

@@ -14,7 +14,6 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -30,7 +29,7 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md) is an overlay network provider that can be used with Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) is a network solution supporting multiple networking in Kubernetes.
* [Multus](https://github.com/Intel-Corp/multus-cni) is a Multi plugin for multiple network support in Kubernetes to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and Openshift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring.
* [Romana](http://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize).
* [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.

@@ -12,7 +12,6 @@ manually through `easyrsa`, `openssl` or `cfssl`.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -81,7 +80,7 @@ manually through `easyrsa`, `openssl` or `cfssl`.
default_md = sha256
req_extensions = req_ext
distinguished_name = dn

[ dn ]
C = <country>
ST = <state>

@@ -89,10 +88,10 @@ manually through `easyrsa`, `openssl` or `cfssl`.
O = <organization>
OU = <organization unit>
CN = <MASTER_IP>

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default

@@ -101,7 +100,7 @@ manually through `easyrsa`, `openssl` or `cfssl`.
DNS.5 = kubernetes.default.svc.cluster.local
IP.1 = <MASTER_IP>
IP.2 = <MASTER_CLUSTER_IP>

[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE

@@ -213,7 +212,7 @@ Finally, add the same parameters into the API server start parameters.
"O": "<organization>",
"OU": "<organization unit>"
}]
}
1. Generate the key and certificate for the API server, which are by default
saved into file `server-key.pem` and `server.pem` respectively:

@@ -9,12 +9,11 @@ This page explains how to manage Kubernetes running on a specific
cloud provider.
{{% /capture %}}

{{< toc >}}

{{% capture body %}}
### kubeadm
[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) is a popular option for creating Kubernetes clusters.
kubeadm has configuration options to specify configuration information for cloud providers. For example, a typical
in-tree cloud provider can be configured using kubeadm as shown below:

```yaml

@@ -214,7 +213,7 @@ file:
connotation, a deployment can use a geographical name for a region identifier
such as `us-east`. Available regions are found under the `/v3/regions`
endpoint of the Keystone API.
* `ca-file` (Optional): Used to specify the path to your custom CA file.

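As a rough sketch, the `ca-file` option above would typically appear in the OpenStack cloud config passed to Kubernetes via `--cloud-config`; the section name, surrounding keys, and paths here are assumptions for illustration:

```ini
[Global]
auth-url = https://keystone.example.com:5000/v3   ; hypothetical Keystone endpoint
ca-file = /etc/kubernetes/openstack-ca.crt        ; custom CA bundle used to verify the OpenStack API endpoints
```
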
When using Keystone V3 - which changes tenant to project - the `tenant-id` value

@@ -361,12 +360,12 @@ Note that the Kubernetes Node name must match the Photon VM name (or if `overrid

The VSphere cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object.

## IBM Cloud Kubernetes Service

### Compute nodes
By using the IBM Cloud Kubernetes Service provider, you can create clusters with a mixture of virtual and physical (bare metal) nodes in a single zone or across multiple zones in a region. For more information, see [Planning your cluster and worker node setup](https://console.bluemix.net/docs/containers/cs_clusters_planning.html#plan_clusters).

The name of the Kubernetes Node object is the private IP address of the IBM Cloud Kubernetes Service worker node instance.

### Networking
The IBM Cloud Kubernetes Service provider provides VLANs for quality network performance and network isolation for nodes. You can set up custom firewalls and Calico network policies to add an extra layer of security for your cluster, or connect your cluster to your on-prem data center via VPN. For more information, see [Planning in-cluster and private networking](https://console.bluemix.net/docs/containers/cs_network_cluster.html#planning).

@@ -14,7 +14,6 @@ External garbage collection tools are not recommended as these tools can potenti

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -15,7 +15,6 @@ However, the native functionality provided by a container engine or runtime is u

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -14,7 +14,6 @@ You've deployed your application and exposed it via a service. Now what? Kuberne

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -18,7 +18,6 @@ default. There are 4 distinct networking problems to solve:

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -123,10 +122,10 @@ AOS supports the use of common vendor equipment from manufacturers including Cis
Details on how the AOS system works can be accessed here: http://www.apstra.com/products/how-it-works/

### Big Cloud Fabric from Big Switch Networks

[Big Cloud Fabric](https://www.bigswitch.com/container-network-automation) is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premises environments. Using unified physical & virtual SDN, Big Cloud Fabric tackles inherent container networking problems such as load balancing, visibility, troubleshooting, security policies & container traffic monitoring.

With the help of the Big Cloud Fabric's virtual pod multi-tenant architecture, container orchestration systems such as Kubernetes, RedHat Openshift, Mesosphere DC/OS & Docker Swarm will be natively integrated alongside VM orchestration systems such as VMware, OpenStack & Nutanix. Customers will be able to securely inter-connect any number of these clusters and enable inter-tenant communication between them if needed.

BCF was recognized by Gartner as a visionary in the latest [Magic Quadrant](http://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html). One of the BCF Kubernetes on-premises deployments (which includes Kubernetes, DC/OS & VMware running on multiple DCs across different geographic regions) is also referenced [here](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/).

@@ -182,7 +181,7 @@ each other and `Nodes` over the `cbr0` bridge. Those IPs are all routable
within the GCE project network.

GCE itself does not know anything about these IPs, though, so it will not NAT
them for outbound internet traffic. To achieve that, an iptables rule is used
to masquerade (aka SNAT - to make it seem as if packets came from the `Node`
itself) traffic that is bound for IPs outside the GCE project network
(10.0.0.0/8).

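A rough, illustrative sketch of such a masquerade rule (the exact chain, match criteria, and rule on a real node are set up by the node configuration and will differ):

```shell
# SNAT traffic leaving the node for destinations outside the project network (10.0.0.0/8).
iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -m addrtype ! --dst-type LOCAL -j MASQUERADE
```
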
@@ -203,7 +202,7 @@ traffic to the internet.

### Jaguar

[Jaguar](https://gitlab.com/sdnlab/jaguar) is an open source solution for Kubernetes's network based on OpenDaylight. Jaguar provides overlay network using vxlan and Jaguar CNIPlugin provides one IP address per pod.

### Knitter

@@ -232,10 +231,10 @@ Lars Kellogg-Stedman.
Multus supports all [reference plugins](https://github.com/containernetworking/plugins) (eg. [Flannel](https://github.com/containernetworking/plugins/tree/master/plugins/meta/flannel), [DHCP](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp), [Macvlan](https://github.com/containernetworking/plugins/tree/master/plugins/main/macvlan)) that implement the CNI specification and 3rd party plugins (eg. [Calico](https://github.com/projectcalico/cni-plugin), [Weave](https://github.com/weaveworks/weave), [Cilium](https://github.com/cilium/cilium), [Contiv](https://github.com/contiv/netplugin)). In addition to it, Multus supports [SRIOV](https://github.com/hustcat/sriov-cni), [DPDK](https://github.com/Intel-Corp/sriov-cni), [OVS-DPDK & VPP](https://github.com/intel/vhost-user-net-plugin) workloads in Kubernetes with both cloud native and NFV based applications in Kubernetes.

### NSX-T

[VMware NSX-T](https://docs.vmware.com/en/VMware-NSX-T/index.html) is a network virtualization and security platform. NSX-T can provide network virtualization for a multi-cloud and multi-hypervisor environment and is focused on emerging application frameworks and architectures that have heterogeneous endpoints and technology stacks. In addition to vSphere hypervisors, these environments include other hypervisors such as KVM, containers, and bare metal.

[NSX-T Container Plug-in (NCP)](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) provides integration between NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and Openshift.

### Nuage Networks VCS (Virtualized Cloud Services)

@@ -8,7 +8,6 @@ content_template: templates/concept
weight: 30
---

{{< toc >}}

{{% capture overview %}}

@@ -144,7 +143,7 @@ among nodes that meet that criteria, nodes with a label whose key is `another-no
value is `another-node-label-value` should be preferred.

You can see the operator `In` being used in the example. The new node affinity syntax supports the following operators: `In`, `NotIn`, `Exists`, `DoesNotExist`, `Gt`, `Lt`.
You can use `NotIn` and `DoesNotExist` to achieve node anti-affinity behavior, or use
[node taints](/docs/concepts/configuration/taint-and-toleration/) to repel pods from specific nodes.

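As a small illustrative sketch (the label key and value are hypothetical), node anti-affinity with `NotIn` can be expressed like this:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disktype            # hypothetical node label
          operator: NotIn          # keep the pod off nodes whose label matches these values
          values:
          - hdd
```
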
If you specify both `nodeSelector` and `nodeAffinity`, *both* must be satisfied for the pod

@@ -158,7 +157,7 @@ If you remove or change the label of the node where the pod is scheduled, the po

The `weight` field in `preferredDuringSchedulingIgnoredDuringExecution` is in the range 1-100. For each node that meets all of the scheduling requirements (resource request, RequiredDuringScheduling affinity expressions, etc.), the scheduler will compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding MatchExpressions. This score is then combined with the scores of other priority functions for the node. The node(s) with the highest total score are the most preferred.

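A minimal sketch of how `weight` appears in a preferred rule (reusing the label key and value mentioned above):

```yaml
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 50                    # added to the node's score when the expression below matches
      preference:
        matchExpressions:
        - key: another-node-label-key
          operator: In
          values:
          - another-node-label-value
```
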
For more information on node affinity, see the
[design doc](https://git.k8s.io/community/contributors/design-proposals/scheduling/nodeaffinity.md).

### Inter-pod affinity and anti-affinity (beta feature)

@@ -174,7 +173,7 @@ like node, rack, cloud provider zone, cloud provider region, etc. You express it
key for the node label that the system uses to denote such a topology domain, e.g. see the label keys listed above
in the section [Interlude: built-in node labels](#interlude-built-in-node-labels).

**Note:** Inter-pod affinity and anti-affinity require a substantial amount of
processing which can slow down scheduling in large clusters significantly. We do
not recommend using them in clusters larger than several hundred nodes.

@@ -206,7 +205,7 @@ value V that is running a pod that has a label with key "security" and value "S1
rule says that the pod prefers not to be scheduled onto a node if that node is already running a pod with label
having key "security" and value "S2". (If the `topologyKey` were `failure-domain.beta.kubernetes.io/zone` then
it would mean that the pod cannot be scheduled onto a node if that node is in the same zone as a pod with
label having key "security" and value "S2".) See the
[design doc](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md)
for many more examples of pod affinity and anti-affinity, both the `requiredDuringSchedulingIgnoredDuringExecution`
flavor and the `preferredDuringSchedulingIgnoredDuringExecution` flavor.

@@ -333,12 +332,12 @@ web-server-1287567482-s330j 1/1 Running 0 7m 10.192.3

##### Never co-located in the same node

The above example uses `PodAntiAffinity` rule with `topologyKey: "kubernetes.io/hostname"` to deploy the redis cluster so that
no two instances are located on the same host.
See [ZooKeeper tutorial](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure)
for an example of a StatefulSet configured with anti-affinity for high availability, using the same technique.

For more information on inter-pod affinity/anti-affinity, see the
[design doc](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md).

You may want to check [Taints](/docs/concepts/configuration/taint-and-toleration/)

@@ -10,7 +10,6 @@ feature:
weight: 50
---

{{< toc >}}

{{% capture overview %}}

@@ -546,10 +545,10 @@ secret "test-db-secret" created
```
{{< note >}}
**Note:** Special characters such as `$`, `\*`, and `!` require escaping.
If the password you are using has special characters, you need to escape them using the `\\` character. For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way:

kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password=S\\!B\\\*d\\$zDsb

You do not need to escape special characters in passwords from files (`--from-file`).
{{< /note >}}

@@ -773,7 +772,7 @@ Pod level](#use-case-secret-visible-to-one-container-in-a-pod).
by impersonating the kubelet. It is a planned feature to only send secrets to
nodes that actually require them, to restrict the impact of a root exploit on a
single node.

{{< note >}}
**Note:** As of 1.7 [encryption of secret data at rest is supported](/docs/tasks/administer-cluster/encrypt-data/).
{{< /note >}}

@@ -8,7 +8,6 @@ content_template: templates/concept
weight: 40
---

{{< toc >}}

{{% capture overview %}}
Node affinity, described [here](/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature),

@@ -13,7 +13,6 @@ This page describes the resources available to Containers in the Container envir

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -63,5 +62,3 @@ if [DNS addon](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addon
[attaching handlers to Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).

{{% /capture %}}

@@ -14,7 +14,6 @@ to run code triggered by events during their management lifecycle.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -122,5 +121,3 @@ Events:
[attaching handlers to Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).

{{% /capture %}}

@@ -15,7 +15,6 @@ The `image` property of a container supports the same syntax as the `docker` com

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -15,7 +15,6 @@ This page describes the RuntimeClass resource and runtime selection mechanism.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -8,7 +8,6 @@ content_template: templates/concept
weight: 10
---

{{< toc >}}

{{% capture overview %}}

@@ -22,7 +22,6 @@ Kubernetes itself is decomposed into multiple components, which interact through

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -26,7 +26,6 @@ We'll eventually index and reverse-index labels for efficient queries and watche

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -202,4 +201,4 @@ selector:
One use case for selecting over labels is to constrain the set of nodes onto which a pod can schedule.
See the documentation on [node selection](/docs/concepts/configuration/assign-pod-node/) for more information.

{{% /capture %}}

@@ -17,7 +17,6 @@ See the [identifiers design doc](https://git.k8s.io/community/contributors/desig

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -31,4 +30,4 @@ By convention, the names of Kubernetes resources should be up to maximum length

{{< glossary_definition term_id="uid" length="all" >}}

{{% /capture %}}

@@ -15,7 +15,6 @@ These virtual clusters are called namespaces.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -16,7 +16,6 @@ updates.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -145,7 +144,7 @@ For a complete example of authorizing a PodSecurityPolicy, see
### Troubleshooting

- The [Controller Manager](/docs/admin/kube-controller-manager/) must be run
against [the secured API port](/docs/reference/access-authn-authz/controlling-access/),
and must not have superuser permissions. Otherwise requests would bypass
authentication and authorization modules, all PodSecurityPolicy objects would be
allowed, and users would be able to create privileged containers. For more details

@@ -375,7 +374,7 @@ several security mechanisms.
### Privileged

**Privileged** - determines if any container in a pod can enable privileged mode.
By default a container is not allowed to access any devices on the host, but a
"privileged" container is given access to all devices on the host. This allows
the container nearly all the same access as processes running on the host.
This is useful for containers that want to use linux capabilities like

@@ -443,11 +442,11 @@ allowedHostPaths:
readOnly: true # only allow read-only mounts
```

{{< warning >}}**Warning:** There are many ways a container with unrestricted access to the host
filesystem can escalate privileges, including reading data from other
containers, and abusing the credentials of system services, such as Kubelet.

Writeable hostPath directory volumes allow containers to write
to the filesystem in ways that let them traverse the host filesystem outside the `pathPrefix`.
`readOnly: true`, available in Kubernetes 1.11+, must be used on **all** `allowedHostPaths`
to effectively limit access to the specified `pathPrefix`.

@@ -474,7 +473,7 @@ spec:
# ... other spec fields
volumes:
- flexVolume
allowedFlexVolumes:
- driver: example/lvm
- driver: example/cifs
```

@@ -569,15 +568,15 @@ specified.
### AllowedProcMountTypes

`allowedProcMountTypes` is a whitelist of allowed ProcMountTypes.
Empty or nil indicates that only the `DefaultProcMountType` may be used.

`DefaultProcMount` uses the container runtime defaults for readonly and masked
paths for /proc. Most container runtimes mask certain paths in /proc to avoid
accidental security exposure of special devices or information. This is denoted
as the string `Default`.

The only other ProcMountType is `UnmaskedProcMount`, which bypasses the
default masking behavior of the container runtime and ensures the newly
created /proc of the container stays intact with no modifications. This is
denoted as the string `Unmasked`.

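A minimal sketch of how this field might be set in a PodSecurityPolicy (the policy name is hypothetical; only the relevant field is shown):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: proc-mount-example   # hypothetical name
spec:
  # ... other required spec fields
  allowedProcMountTypes:
  - Default                  # allow the runtime's default masked /proc
  - Unmasked                 # additionally allow an unmasked /proc
```
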
@@ -15,7 +15,6 @@ Resource quotas are a tool for administrators to address this concern.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -8,7 +8,6 @@ content_template: templates/concept
weight: 30
---

{{< toc >}}

{{% capture overview %}}

@@ -375,4 +374,3 @@ the [Federated Services User Guide](/docs/concepts/cluster-administration/federa
for further information.

{{% /capture %}}

@@ -11,7 +11,6 @@ content_template: templates/concept
weight: 10
---

{{< toc >}}

{{% capture overview %}}

@@ -88,7 +87,7 @@ Kubernetes `Services` support `TCP`, `UDP` and `SCTP` for protocols. The defaul
is `TCP`.

{{< note >}}
**Note:** SCTP support is an alpha feature since Kubernetes 1.12
{{< /note >}}

### Services without selectors

@@ -461,7 +460,7 @@ group of the other automatically created resources of the cluster. For example,

{{< note >}}
**Note:** The support of SCTP in the cloud provider's load balancer is up to the cloud provider's
load balancer implementation. If SCTP is not supported by the cloud provider's load balancer, the
Service creation request is accepted but the creation of the load balancer fails.
{{< /note >}}

@@ -938,7 +937,7 @@ Kubernetes supports SCTP as a `protocol` value in `Service`, `Endpoint`, `Networ

#### The support of multihomed SCTP associations

The support of multihomed SCTP associations requires that the CNI plugin can support the assignment of multiple interfaces and IP addresses to a `Pod`.

NAT for multihomed SCTP associations requires special logic in the corresponding kernel modules.

@@ -961,4 +960,3 @@ The kube-proxy does not support the management of SCTP associations when it is i
Read [Connecting a Front End to a Back End Using a Service](/docs/tasks/access-application-cluster/connecting-frontend-backend/).

{{% /capture %}}

@@ -21,7 +21,6 @@ automatically provisions storage when it is requested by users.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -132,5 +131,3 @@ Pods are scheduled. This can be accomplished by setting the [Volume Binding
Mode](/docs/concepts/storage/storage-classes/#volume-binding-mode).

{{% /capture %}}

@@ -20,7 +20,6 @@ This document describes the current state of `PersistentVolumes` in Kubernetes.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -674,7 +673,7 @@ and need persistent storage, we recommend that you use the following pattern:
`persistentVolumeClaim.storageClassName` field.
This will cause the PVC to match the right storage
class if the cluster has StorageClasses enabled by the admin.
- If the user does not provide a storage class name, leave the
`persistentVolumeClaim.storageClassName` field as nil.
- This will cause a PV to be automatically provisioned for the user with
the default StorageClass in the cluster. Many cluster environments have

@@ -4,7 +4,7 @@ reviewers:
- saad-ali
- thockin
- msau42
title: Volume Snapshot Classes
content_template: templates/concept
weight: 30
---

@@ -17,7 +17,6 @@ with [volume snapshots](/docs/concepts/storage/volume-snapshots/) and

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -15,7 +15,6 @@ This document describes the current state of `VolumeSnapshots` in Kubernetes. Fa

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -23,7 +22,7 @@ This document describes the current state of `VolumeSnapshots` in Kubernetes. Fa

Similar to how API resources `PersistentVolume` and `PersistentVolumeClaim` are used to provision volumes for users and administrators, `VolumeSnapshotContent` and `VolumeSnapshot` API resources are provided to create volume snapshots for users and administrators.

A `VolumeSnapshotContent` is a snapshot taken from a volume in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a PersistentVolume is a cluster resource.

A `VolumeSnapshot` is a request for a snapshot of a volume by a user. It is similar to a PersistentVolumeClaim.

@@ -81,7 +80,7 @@ metadata:
spec:
snapshotClassName: csi-hostpath-snapclass
source:
name: pvc-test
kind: PersistentVolumeClaim
volumeSnapshotSource:
csiVolumeSnapshotSource:

@@ -22,7 +22,6 @@ Familiarity with [Pods](/docs/user-guide/pods) is suggested.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -925,7 +924,7 @@ A `storageos` volume allows an existing [StorageOS](https://www.storageos.com)
volume to be mounted into your Pod.

StorageOS runs as a Container within your Kubernetes environment, making local
or attached storage accessible from any node within the Kubernetes cluster.
Data can be replicated to protect against node failure. Thin provisioning and
compression can improve utilization and reduce cost.

@@ -1053,7 +1052,7 @@ spec:
image: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: "rootpasswd"
volumeMounts:
- mountPath: /var/lib/mysql
name: site-data

@@ -1103,7 +1102,7 @@ spec:
restartPolicy: Never
volumes:
- name: workdir1
hostPath:
path: /var/log/pods
```

@@ -1123,7 +1122,7 @@ several media types.
## Out-of-Tree Volume Plugins
The Out-of-tree volume plugins include the Container Storage Interface (CSI)
and Flexvolume. They enable storage vendors to create custom storage plugins
without adding them to the Kubernetes repository.

Before the introduction of CSI and Flexvolume, all volume plugins (like
volume types listed above) were "in-tree" meaning they were built, linked,

@@ -1210,8 +1209,8 @@ persistent volume:
{{< feature-state for_k8s_version="v1.11" state="alpha" >}}

Starting with version 1.11, CSI introduced support for raw block volumes, which
relies on the raw block volume feature that was introduced in a previous version of
Kubernetes. This feature will make it possible for vendors with external CSI drivers to
implement raw block volumes support in Kubernetes workloads.

CSI block volume support is feature-gated and turned off by default. To run CSI with

@@ -1222,7 +1221,7 @@ Kubernetes component using the following feature gate flags:
--feature-gates=BlockVolume=true,CSIBlockVolume=true
```

Learn how to
[setup your PV/PVC with raw block volume support](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support).

### Flexvolume

@@ -1283,7 +1282,7 @@ In addition, any volume mounts created by Containers in Pods must be destroyed
{{< /caution >}}

### Configuration
Before mount propagation can work properly on some deployments (CoreOS,
RedHat/Centos, Ubuntu) mount share must be configured correctly in
Docker as shown below.

@@ -1302,5 +1301,3 @@ $ sudo systemctl restart docker
{{% capture whatsnext %}}
* Follow an example of [deploying WordPress and MySQL with Persistent Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/).
{{% /capture %}}

@@ -16,13 +16,12 @@ One CronJob object is like one line of a _crontab_ (cron table) file. It runs a
on a given schedule, written in [Cron](https://en.wikipedia.org/wiki/Cron) format.
{{< note >}}
**Note:** All **CronJob** `schedule:` times are denoted in UTC.
{{< /note >}}

For instructions on creating and working with cron jobs, and for an example of a spec file for a cron job, see [Running automated tasks with cron jobs](/docs/tasks/job/automated-tasks-with-cron-jobs).

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -42,7 +41,7 @@ For every CronJob, the CronJob controller checks how many schedules it missed in
Cannot determine if job needs to be started. Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew.
````

It is important to note that if the `startingDeadlineSeconds` field is set (not `nil`), the controller counts how many missed jobs occurred from the value of `startingDeadlineSeconds` until now rather than from the last scheduled time until now. For example, if `startingDeadlineSeconds` is `200`, the controller counts how many missed jobs occurred in the last 200 seconds.

A CronJob is counted as missed if it has failed to be created at its scheduled time. For example, if `concurrencyPolicy` is set to `Forbid` and a CronJob was attempted to be scheduled when there was a previous schedule still running, then it would count as missed.

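A minimal sketch of a CronJob that combines these fields (name, schedule, and image are hypothetical):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: report-job               # hypothetical name
spec:
  schedule: "*/5 * * * *"        # every five minutes, in UTC
  startingDeadlineSeconds: 200   # a run started more than 200s late counts as missed
  concurrencyPolicy: Forbid      # skip a run if the previous one is still going
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: busybox
            args: ["sh", "-c", "date; echo generating report"]
          restartPolicy: OnFailure
```
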
@@ -29,7 +29,6 @@ different flags and/or different memory and cpu requests for different hardware

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -130,7 +129,7 @@ That introduces the following issues:
* [Pod preemption](/docs/concepts/configuration/pod-priority-preemption/)
is handled by the default scheduler. When preemption is enabled, the DaemonSet controller
will make scheduling decisions without considering pod priority and preemption.

`ScheduleDaemonSetPods` allows you to schedule DaemonSets using the default
scheduler instead of the DaemonSet controller, by adding the `NodeAffinity` term
to the DaemonSet pods, instead of the `.spec.nodeName` term. The default

@@ -26,7 +26,6 @@ A Job can also be used to run multiple pods in parallel.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -223,7 +222,7 @@ Do this by setting the `.spec.activeDeadlineSeconds` field of the Job to a numbe

The `activeDeadlineSeconds` applies to the duration of the job, no matter how many Pods are created.
Once a Job reaches `activeDeadlineSeconds`, the Job and all of its Pods are terminated.
The result is that the job has a status with `reason: DeadlineExceeded`.

Note that a Job's `.spec.activeDeadlineSeconds` takes precedence over its `.spec.backoffLimit`. Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once it reaches the time limit specified by `activeDeadlineSeconds`, even if the `backoffLimit` is not yet reached.

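A minimal sketch of a Job spec that sets both fields (the name and image are hypothetical):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: timeout-example        # hypothetical name
spec:
  backoffLimit: 5              # retry failed Pods at most 5 times...
  activeDeadlineSeconds: 100   # ...but terminate the whole Job after 100 seconds regardless
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "sleep 30"]
      restartPolicy: Never
```
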
@@ -252,7 +251,7 @@ Note that both the Job Spec and the [Pod Template Spec](https://kubernetes.io/do

Finished Jobs are usually no longer needed in the system. Keeping them around in
the system will put pressure on the API server. If the Jobs are managed directly
by a higher level controller, such as
[CronJobs](/docs/concepts/workloads/controllers/cron-jobs/), the Jobs can be
cleaned up by CronJobs based on the specified capacity-based cleanup policy.

@@ -261,7 +260,7 @@ cleaned up by CronJobs based on the specified capacity-based cleanup policy.
{{< feature-state for_k8s_version="v1.12" state="alpha" >}}

Another way to clean up finished Jobs (either `Complete` or `Failed`)
automatically is to use a TTL mechanism provided by a
[TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) for
finished resources, by specifying the `.spec.ttlSecondsAfterFinished` field of
the Job.

@@ -290,16 +289,16 @@ spec:
```

The Job `pi-with-ttl` will be eligible to be automatically deleted `100`
seconds after it finishes.

If the field is set to `0`, the Job will be eligible to be automatically deleted
immediately after it finishes. If the field is unset, this Job won't be cleaned
up by the TTL controller after it finishes.

Note that this TTL mechanism is alpha, with feature gate `TTLAfterFinished`. For
more information, see the documentation for
[TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) for
finished resources.

## Job Patterns

@@ -24,7 +24,6 @@ Alpha Disclaimer: this feature is currently alpha, and can be enabled with
{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -34,7 +33,7 @@ Alpha Disclaimer: this feature is currently alpha, and can be enabled with
The TTL controller only supports Jobs for now. A cluster operator can use this feature to clean
up finished Jobs (either `Complete` or `Failed`) automatically by specifying the
`.spec.ttlSecondsAfterFinished` field of a Job, as in this
[example](/docs/concepts/workloads/controllers/jobs-run-to-completion/#clean-up-finished-jobs-automatically).
The TTL controller will assume that a resource is eligible to be cleaned up
TTL seconds after the resource has finished, in other words, when the TTL has expired. When the
TTL controller cleans up a resource, it will delete it cascadingly, i.e. delete

@@ -18,7 +18,6 @@ cluster actions, like upgrading and autoscaling clusters.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -266,6 +265,3 @@ the nodes in your cluster, such as a node or system software upgrade, here are s
* Learn more about [draining nodes](/docs/tasks/administer-cluster/safely-drain-node/)

{{% /capture %}}

@@ -12,7 +12,6 @@ Containers that run before app Containers and can contain utilities or setup
scripts not present in an app image.
{{% /capture %}}

{{< toc >}}

This feature has exited beta in 1.6. Init Containers can be specified in the PodSpec
alongside the app `containers` array. The beta annotation value will still be respected

@@ -329,6 +328,3 @@ is removed, requiring a conversion from the deprecated annotations to the
* [Creating a Pod that has an Init Container](/docs/tasks/configure-pod-container/configure-pod-initialization/#creating-a-pod-that-has-an-init-container)

{{% /capture %}}

@@ -10,7 +10,6 @@ weight: 10
This page provides an overview of `Pod`, the smallest deployable object in the Kubernetes object model.
{{% /capture %}}

{{< toc >}}

{{% capture body %}}
## Understanding Pods

@@ -104,5 +103,3 @@ Rather than specifying the current desired state of all replicas, pod templates
* [Pod Termination](/docs/concepts/workloads/pods/pod/#termination-of-pods)
* Other Pod Topics
{{% /capture %}}

@@ -12,7 +12,6 @@ managed in Kubernetes.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -12,7 +12,6 @@ certain information into pods at creation time. The information can include
secrets, volumes, volume mounts, and environment variables.
{{% /capture %}}

{{< toc >}}

{{% capture body %}}
## Understanding Pod Presets

@@ -22,7 +21,7 @@ into a Pod at creation time.
You use [label selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors)
to specify the Pods to which a given Pod Preset applies.

Using a Pod Preset allows pod template authors to not have to explicitly provide
all information for every pod. This way, authors of pod templates consuming a
specific service do not need to know all the details about that service.

@@ -53,7 +52,7 @@ the Pod; for changes to `Volume`, Kubernetes modifies the Pod Spec.

{{< note >}}
**Note:** A Pod Preset is capable of modifying the `.spec.containers` field in a
Pod spec when appropriate. *No* resource definition from the Pod Preset will be
applied to the `initContainers` field.
{{< /note >}}

@@ -69,7 +68,7 @@ In order to use Pod Presets in your cluster you must ensure the following:

1. You have enabled the API type `settings.k8s.io/v1alpha1/podpreset`. For
example, this can be done by including `settings.k8s.io/v1alpha1=true` in
the `--runtime-config` option for the API server.
1. You have enabled the admission controller `PodPreset`. One way of doing this
is to include `PodPreset` in the `--enable-admission-plugins` option value specified
for the API server.

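A minimal sketch of how these two API server options might look together on the kube-apiserver command line (flag placement and any other flags are deployment-specific assumptions):

```shell
# Illustrative only; in practice PodPreset is appended to the existing admission plugin list.
kube-apiserver \
  --runtime-config=settings.k8s.io/v1alpha1=true \
  --enable-admission-plugins=...,PodPreset \
  ...
```
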
@@ -83,5 +82,3 @@ In order to use Pod Presets in your cluster you must ensure the following:
* [Injecting data into a Pod using PodPreset](/docs/tasks/inject-data-application/podpreset/)

{{% /capture %}}

@@ -20,7 +20,6 @@ We encourage you to add new [localizations](https://blog.mozilla.org/l10n/2011/

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -4,7 +4,6 @@ content_template: templates/concept
weight: 40
---

{{< toc >}}

{{% capture overview %}}

@@ -62,7 +61,7 @@ linkTitle: Title used in links

### Documentation Side Menu

The documentation side-bar menu is built from the _current section tree_ starting below `docs/`.

It will show all sections and their pages.

@@ -86,7 +85,7 @@ toc_hide: true

### The Main Menu

The site links in the top-right menu -- and also in the footer -- are built by page-lookups. This is to make sure that the page actually exists. So, if the `case-studies` section does not exist in a site (language), it will not be linked to.

## Page Bundles

@@ -137,4 +136,3 @@ The `SASS` source of the stylesheets for this site is stored below `src/sass` an
* [Style guide](/docs/contribute/style/style-guide)

{{% /capture %}}

@ -23,7 +23,6 @@ template to use for a new topic, start with the
|
|||
|
||||
{{% /capture %}}
|
||||
|
||||
{{< toc >}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
|
@ -51,18 +50,18 @@ To write a new concept page, create a Markdown file in a subdirectory of the
|
|||
|
||||
The page's body will look like this (remove any optional captures you don't
|
||||
need):
|
||||
|
||||
|
||||
```
|
||||
{{%/* capture overview */%}}
|
||||
|
||||
|
||||
{{%/* /capture */%}}
|
||||
|
||||
|
||||
{{%/* capture body */%}}
|
||||
|
||||
|
||||
{{%/* /capture */%}}
|
||||
|
||||
|
||||
{{%/* capture whatsnext */%}}
|
||||
|
||||
|
||||
{{%/* /capture */%}}
|
||||
```
|
||||
|
||||
|
@ -101,28 +100,28 @@ To write a new task page, create a Markdown file in a subdirectory of the
|
|||
|
||||
The page's body will look like this (remove any optional captures you don't
|
||||
need):
|
||||
|
||||
|
||||
```
|
||||
{{%/* capture overview */%}}
|
||||
|
||||
|
||||
{{%/* /capture */%}}
|
||||
|
||||
|
||||
{{%/* capture prerequisites */%}}
|
||||
|
||||
|
||||
{{</* include "task-tutorial-prereqs.md" */>}} {{</* version-check */>}}
|
||||
|
||||
|
||||
{{%/* /capture */%}}
|
||||
|
||||
|
||||
{{%/* capture steps */%}}
|
||||
|
||||
|
||||
{{%/* /capture */%}}
|
||||
|
||||
|
||||
{{%/* capture discussion */%}}
|
||||
|
||||
|
||||
{{%/* /capture */%}}
|
||||
|
||||
|
||||
{{%/* capture whatsnext */%}}
|
||||
|
||||
|
||||
{{%/* /capture */%}}
|
||||
```
|
||||
|
||||
|
@ -167,32 +166,32 @@ To write a new tutorial page, create a Markdown file in a subdirectory of the
|
|||
|
||||
The page's body will look like this (remove any optional captures you don't
|
||||
need):
|
||||
|
||||
|
||||
```
|
||||
{{%/* capture overview */%}}
|
||||
|
||||
|
||||
{{%/* /capture */%}}
|
||||
|
||||
|
||||
{{%/* capture prerequisites */%}}
|
||||
|
||||
|
||||
{{</* include "task-tutorial-prereqs.md" */>}} {{</* version-check */>}}
|
||||
|
||||
|
||||
{{%/* /capture */%}}
|
||||
|
||||
|
||||
{{%/* capture objectives */%}}
|
||||
|
||||
|
||||
{{%/* /capture */%}}
|
||||
|
||||
|
||||
{{%/* capture lessoncontent */%}}
|
||||
|
||||
|
||||
{{%/* /capture */%}}
|
||||
|
||||
|
||||
{{%/* capture cleanup */%}}
|
||||
|
||||
|
||||
{{%/* /capture */%}}
|
||||
|
||||
|
||||
{{%/* capture whatsnext */%}}
|
||||
|
||||
|
||||
{{%/* /capture */%}}
|
||||
```
|
||||
|
||||
|
@ -221,4 +220,3 @@ An example of a published topic that uses the tutorial template is
|
|||
- Learn about [content organization](/docs/contribute/style/content-organization/)
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
|
|
@ -10,7 +10,6 @@ This topic discusses multiple ways to interact with clusters.
|
|||
|
||||
{{% /capture %}}
|
||||
|
||||
{{< toc >}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
|
@ -285,16 +284,16 @@ The redirect capabilities have been deprecated and removed. Please use a proxy
|
|||
There are several different proxies you may encounter when using Kubernetes:
|
||||
|
||||
1. The [kubectl proxy](#directly-accessing-the-rest-api):
|
||||
|
||||
|
||||
- runs on a user's desktop or in a pod
|
||||
- proxies from a localhost address to the Kubernetes apiserver
|
||||
- client to proxy uses HTTP
|
||||
- proxy to apiserver uses HTTPS
|
||||
- locates apiserver
|
||||
- adds authentication headers
|
||||
|
||||
|
||||
1. The [apiserver proxy](#discovering-builtin-services):
|
||||
|
||||
|
||||
- is a bastion built into the apiserver
|
||||
- connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
|
||||
- runs in the apiserver processes
|
||||
|
@ -302,23 +301,23 @@ There are several different proxies you may encounter when using Kubernetes:
|
|||
- proxy to target may use HTTP or HTTPS as chosen by proxy using available information
|
||||
- can be used to reach a Node, Pod, or Service
|
||||
- does load balancing when used to reach a Service
|
||||
|
||||
|
||||
1. The [kube proxy](/docs/concepts/services-networking/service/#ips-and-vips):
|
||||
|
||||
|
||||
- runs on each node
|
||||
- proxies UDP and TCP
|
||||
- does not understand HTTP
|
||||
- provides load balancing
|
||||
- is just used to reach services
|
||||
|
||||
|
||||
1. A Proxy/Load-balancer in front of apiserver(s):
|
||||
|
||||
|
||||
- existence and implementation varies from cluster to cluster (e.g. nginx)
|
||||
- sits between all clients and one or more apiservers
|
||||
- acts as load balancer if there are several apiservers.
|
||||
|
||||
|
||||
1. Cloud Load Balancers on external services:
|
||||
|
||||
|
||||
- are provided by some cloud providers (e.g. AWS ELB, Google Cloud Load Balancer)
|
||||
- are created automatically when the Kubernetes service has type `LoadBalancer`
|
||||
- use UDP/TCP only
|
||||
|
|
|
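As a quick, hedged illustration of the first proxy listed above, the kubectl proxy can be exercised from a workstation like this (the port and resource path are arbitrary examples):

```shell
# Start a local proxy to the apiserver, then query the API over plain HTTP.
kubectl proxy --port=8080 &
curl http://localhost:8080/api/v1/namespaces/default/pods
```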
@@ -16,7 +16,6 @@ well as any provider specific details that may be necessary.

{{% /capture %}}

{{< toc >}}

{{% capture prerequisites %}}

@@ -18,7 +18,6 @@ Dashboard also provides information on the state of Kubernetes resources in your

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -15,7 +15,6 @@ running cluster.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -21,7 +21,6 @@ in the Kubernetes source directory for a canonical example.

{{% /capture %}}

{{< toc >}}

{{% capture prerequisites %}}

@@ -12,7 +12,6 @@ content_template: templates/task

{{% /capture %}}

{{< toc >}}

{{% capture prerequisites %}}

@@ -20,7 +20,6 @@ directives.

{{% /capture %}}

{{< toc >}}

{{% capture prerequisites %}}

@@ -178,7 +177,7 @@ spec:
```

This pod runs in the `Guaranteed` QoS class because `requests` are equal to `limits`.
And the container's resource limit for the CPU resource is an integer greater than
And the container's resource limit for the CPU resource is an integer greater than
or equal to one. The `nginx` container is granted 2 exclusive CPUs.

@@ -213,8 +212,8 @@ spec:
```

This pod runs in the `Guaranteed` QoS class because only `limits` are specified
and `requests` are set equal to `limits` when not explicitly specified. And the
container's resource limit for the CPU resource is an integer greater than or
and `requests` are set equal to `limits` when not explicitly specified. And the
container's resource limit for the CPU resource is an integer greater than or
equal to one. The `nginx` container is granted 2 exclusive CPUs.

{{% /capture %}}
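For readers skimming this hunk, a minimal, hedged sketch of a pod that lands in the `Guaranteed` QoS class with an integer CPU limit (the pod name and image are illustrative, and exclusive CPUs are only assigned when the node's kubelet runs the static CPU manager policy discussed on that page):

```shell
# Sketch: a Guaranteed pod; with only limits set, requests default to the same values.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        cpu: "2"
        memory: "200Mi"
EOF
```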
@@ -22,7 +22,6 @@ To dive a little deeper into implementation details, all cloud controller manage

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -18,7 +18,6 @@ vacated by the evicted critical add-on pod or the amount of resources available

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -51,7 +50,7 @@ killed for this purpose. Please ensure that rescheduler is not enabled along wit

Rescheduler doesn't have any user facing configuration (component config) or API.

### Marking pod as critical when using Rescheduler.
### Marking pod as critical when using Rescheduler.

To be considered critical, the pod has to run in the `kube-system` namespace (configurable via flag) and

@@ -14,7 +14,6 @@ This document describes how to use kube-up/down scripts to manage highly availab

{{% /capture %}}

{{< toc >}}

{{% capture prerequisites %}}

@@ -20,7 +20,6 @@ This example demonstrates how to use Kubernetes namespaces to subdivide your clu

{{% /capture %}}

{{< toc >}}

{{% capture prerequisites %}}

@@ -18,7 +18,6 @@ nodes become unstable.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -205,7 +204,7 @@ If `nodefs` filesystem has met eviction thresholds, `kubelet` frees up disk spac

If the `kubelet` is unable to reclaim sufficient resource on the node, `kubelet` begins evicting Pods.

The `kubelet` ranks Pods for eviction first by whether or not their usage of the starved resource exceeds requests,
The `kubelet` ranks Pods for eviction first by whether or not their usage of the starved resource exceeds requests,
then by [Priority](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/), and then by the consumption of the starved compute resource relative to the Pods' scheduling requests.

As a result, `kubelet` ranks and evicts Pods in the following order:

@@ -213,15 +212,15 @@ As a result, `kubelet` ranks and evicts Pods in the following order:

* `BestEffort` or `Burstable` Pods whose usage of a starved resource exceeds its request.
Such pods are ranked by Priority, and then usage above request.
* `Guaranteed` pods and `Burstable` pods whose usage is beneath requests are evicted last.
`Guaranteed` Pods are guaranteed only when requests and limits are specified for all
the containers and they are equal. Such pods are guaranteed to never be evicted because
`Guaranteed` Pods are guaranteed only when requests and limits are specified for all
the containers and they are equal. Such pods are guaranteed to never be evicted because
of another Pod's resource consumption. If a system daemon (such as `kubelet`, `docker`,
and `journald`) is consuming more resources than were reserved via `system-reserved` or
`kube-reserved` allocations, and the node only has `Guaranteed` or `Burstable` Pods using
less than requests remaining, then the node must choose to evict such a Pod in order to
`kube-reserved` allocations, and the node only has `Guaranteed` or `Burstable` Pods using
less than requests remaining, then the node must choose to evict such a Pod in order to
preserve node stability and to limit the impact of the unexpected consumption to other Pods.
In this case, it will choose to evict pods of Lowest Priority first.

If necessary, `kubelet` evicts Pods one at a time to reclaim disk when `DiskPressure`
is encountered. If the `kubelet` is responding to `inode` starvation, it reclaims
`inodes` by evicting Pods with the lowest quality of service first. If the `kubelet`
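The ranking described in this hunk only kicks in once the kubelet's eviction thresholds are crossed. A hedged sketch of the kind of kubelet settings involved, plus a way to watch the resulting node conditions (the values are illustrative, not recommendations):

```shell
# Sketch: example kubelet eviction thresholds (illustrative values only).
#   --eviction-hard=memory.available<100Mi,nodefs.available<10%
#   --eviction-soft=memory.available<300Mi
#   --eviction-soft-grace-period=memory.available=1m30s

# Watch for MemoryPressure / DiskPressure conditions on a node:
kubectl describe node <node-name> | grep -A5 Conditions
```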
@@ -23,7 +23,6 @@ on each node.

{{% /capture %}}

{{< toc >}}

{{% capture prerequisites %}}

@@ -17,7 +17,6 @@ The `cloud-controller-manager` can be linked to any cloud provider that satisfie

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -16,7 +16,6 @@ This means that the pods are visible on the API server but cannot be controlled

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -103,7 +102,7 @@ Notice we cannot delete the pod with the API server (e.g. via [`kubectl`](/docs/

{{<note>}}
**Note**: Make sure the kubelet has permission to create the mirror pod in the API server.
If not, the creation request is rejected by the API server. See
If not, the creation request is rejected by the API server. See
[PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/).
{{</note>}}
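The static-pod page touched here is about pods the kubelet runs straight from manifests on disk. A hedged sketch of creating one (the directory is an assumption; it depends on the kubelet's configured manifest path, e.g. its `--pod-manifest-path` flag):

```shell
# Sketch: drop a manifest into the kubelet's static pod directory (path assumed).
cat <<EOF >/etc/kubernetes/manifests/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
EOF
# The kubelet then creates a read-only mirror pod for it on the API server.
```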
@@ -13,7 +13,6 @@ This guide explains how to use events in federation control plane to help in deb

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -18,7 +18,6 @@ Creating them in the federation control plane ensures that they are synchronized
across all the clusters in federation.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -30,7 +30,6 @@ When they do, they are authenticated as a particular Service Account (for exampl

{{% /capture %}}

{{< toc >}}

{{% capture prerequisites %}}

@@ -181,7 +180,7 @@ token: ...

## Add ImagePullSecrets to a service account

First, create an imagePullSecret, as described [here](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod).
First, create an imagePullSecret, as described [here](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod).
Next, verify it has been created. For example:

```shell

@@ -304,5 +303,3 @@ The application is responsible for reloading the token when it rotates. Periodic
reloading (e.g. once every 5 minutes) is sufficient for most usecases.

{{% /capture %}}
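Following on from the "Add ImagePullSecrets to a service account" hunk above, a hedged sketch of the usual next step (the secret name `myregistrykey` is a hypothetical example):

```shell
# Sketch: attach an existing image pull secret to the default service account.
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
```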
@@ -14,7 +14,6 @@ More information can be found on the Kompose website at [http://kompose.io](http

{{% /capture %}}

{{< toc >}}

{{% capture prerequisites %}}

@@ -33,7 +32,7 @@ We have multiple ways to install Kompose. Our preferred method is downloading th
Kompose is released via GitHub on a three-week cycle, you can see all current releases on the [GitHub release page](https://github.com/kubernetes/kompose/releases).

```sh
# Linux
# Linux
curl -L https://github.com/kubernetes/kompose/releases/download/v1.1.0/kompose-linux-amd64 -o kompose

# macOS

@@ -98,7 +97,7 @@ you need is an existing `docker-compose.yml` file.
services:

  redis-master:
    image: k8s.gcr.io/redis:e2e
    image: k8s.gcr.io/redis:e2e
    ports:
      - "6379"

@@ -124,8 +123,8 @@ you need is an existing `docker-compose.yml` file.

```bash
$ kompose up
We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application.
If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.
We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application.
If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.

INFO Successfully created Service: redis
INFO Successfully created Service: web

@@ -157,7 +156,7 @@ you need is an existing `docker-compose.yml` file.
deployment.apps/redis-master created
deployment.apps/redis-slave created
```

Your deployments are running in Kubernetes.

4. Access your application.

@@ -252,7 +251,7 @@ INFO Kubernetes file "redis-slave-service.yaml" created
INFO Kubernetes file "frontend-deployment.yaml" created
INFO Kubernetes file "mlbparks-deployment.yaml" created
INFO Kubernetes file "mongodb-deployment.yaml" created
INFO Kubernetes file "mongodb-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "mongodb-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "redis-master-deployment.yaml" created
INFO Kubernetes file "redis-slave-deployment.yaml" created

@@ -261,10 +260,10 @@ mlbparks-deployment.yaml mongodb-service.yaml redis-slave
frontend-deployment.yaml mongodb-claim0-persistentvolumeclaim.yaml redis-master-service.yaml
frontend-service.yaml mongodb-deployment.yaml redis-slave-deployment.yaml
redis-master-deployment.yaml
```
```

When multiple docker-compose files are provided the configuration is merged. Any configuration that is common will be over ridden by subsequent file.

### OpenShift

```sh

@@ -290,11 +289,11 @@ It also supports creating buildconfig for build directive in a service. By defau

```sh
$ kompose --provider openshift --file buildconfig/docker-compose.yml convert
WARN [foo] Service cannot be created because of missing port.
INFO OpenShift Buildconfig using git@github.com:rtnpro/kompose.git::master as source.
WARN [foo] Service cannot be created because of missing port.
INFO OpenShift Buildconfig using git@github.com:rtnpro/kompose.git::master as source.
INFO OpenShift file "foo-deploymentconfig.yaml" created
INFO OpenShift file "foo-imagestream.yaml" created
INFO OpenShift file "foo-buildconfig.yaml" created
INFO OpenShift file "foo-buildconfig.yaml" created
```

**Note**: If you are manually pushing the Openshift artifacts using ``oc create -f``, you need to ensure that you push the imagestream artifact before the buildconfig artifact, to workaround this Openshift issue: https://github.com/openshift/origin/issues/4518 .

@@ -418,15 +417,15 @@ Using `kompose up` with a `build` key:

```none
$ kompose up
INFO Build key detected. Attempting to build and push image 'docker.io/foo/bar'
INFO Building image 'docker.io/foo/bar' from directory 'build'
INFO Image 'docker.io/foo/bar' from directory 'build' built successfully
INFO Pushing image 'foo/bar:latest' to registry 'docker.io'
INFO Attempting authentication credentials 'https://index.docker.io/v1/
INFO Successfully pushed image 'foo/bar:latest' to registry 'docker.io'
INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.

INFO Deploying application in "default" namespace
INFO Build key detected. Attempting to build and push image 'docker.io/foo/bar'
INFO Building image 'docker.io/foo/bar' from directory 'build'
INFO Image 'docker.io/foo/bar' from directory 'build' built successfully
INFO Pushing image 'foo/bar:latest' to registry 'docker.io'
INFO Attempting authentication credentials 'https://index.docker.io/v1/
INFO Successfully pushed image 'foo/bar:latest' to registry 'docker.io'
INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.

INFO Deploying application in "default" namespace
INFO Successfully created Service: foo
INFO Successfully created Deployment: foo

@@ -479,7 +478,7 @@ The `*-daemonset.yaml` files contain the Daemon Set objects

If you want to generate a Chart to be used with [Helm](https://github.com/kubernetes/helm) simply do:

```sh
$ kompose convert -c
$ kompose convert -c
INFO Kubernetes file "web-svc.yaml" created
INFO Kubernetes file "redis-svc.yaml" created
INFO Kubernetes file "web-deployment.yaml" created

@@ -509,7 +508,7 @@ For example:

```yaml
version: "2"
services:
services:
  nginx:
    image: nginx
    dockerfile: foobar

@@ -517,7 +516,7 @@ services:
    cap_add:
      - ALL
    container_name: foobar
    labels:
    labels:
      kompose.service.type: nodeport
```
@@ -24,7 +24,6 @@ answer the following questions:

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -15,7 +15,6 @@ Horizontal Pod Autoscaler, to make decisions.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -7,7 +7,6 @@ title: Debugging Kubernetes nodes with crictl
content_template: templates/task
---

{{< toc >}}

{{% capture overview %}}

@@ -230,7 +229,7 @@ deleted by the Kubelet.
```bash
crictl runp pod-config.json
```

The ID of the sandbox is returned.

### Create a container

@@ -14,7 +14,6 @@ your pods. But there are a number of ways to get even more information about you

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -14,7 +14,6 @@ This is *not* a guide for people who want to debug their cluster. For that you

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -14,7 +14,6 @@ You may also visit [troubleshooting document](/docs/troubleshooting/) for more i

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -14,7 +14,6 @@ This document will hopefully help you to figure out what's going wrong.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -38,7 +38,6 @@ of the potential inaccuracy.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -20,7 +20,6 @@ in the Kubernetes logging overview.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -263,7 +262,7 @@ In this case you need to be able to change the parameters of `DaemonSet` and `Co

If you're using GKE and Stackdriver Logging is enabled in your cluster, you
cannot change its configuration, because it's managed and supported by GKE.
However, you can disable the default integration and deploy your own.
However, you can disable the default integration and deploy your own.
{{< note >}}**Note:** You will have to support and maintain a newly deployed configuration
yourself: update the image and configuration, adjust the resources and so on.{{< /note >}}
To disable the default logging integration, use the following command:

@@ -325,7 +324,7 @@ kubectl get cm fluentd-gcp-config --namespace kube-system -o yaml > fluentd-gcp-
```

Then in the value for the key `containers.input.conf` insert a new filter right after
the `source` section.
the `source` section.
{{< note >}}**Note:** Order is important.{{< /note >}}

Updating `ConfigMap` in the apiserver is more complicated than updating `DaemonSet`. It's better

@@ -19,7 +19,6 @@ you're using.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -103,4 +102,4 @@ problem, such as:

* Cloud provider, OS distro, network configuration, and Docker version
* Steps to reproduce the problem

{{% /capture %}}
{{% /capture %}}

@@ -23,7 +23,6 @@ using `kubefed`.

{{% /capture %}}

{{< toc >}}

{{% capture prerequisites %}}
@@ -41,14 +40,14 @@ for installation instructions for your platform.

## Getting `kubefed`

Download the client tarball corresponding to the particular release and
Download the client tarball corresponding to the particular release and
extract the binaries in the tarball:

{{< note >}}
**Note:** Until Kubernetes version `1.8.x` the federation project was
**Note:** Until Kubernetes version `1.8.x` the federation project was
maintained as part of the [core kubernetes repo](https://github.com/kubernetes/kubernetes).
Between Kubernetes releases `1.8` and `1.9`, the federation project moved into
a separate [federation repo](https://github.com/kubernetes/federation), where it is
Between Kubernetes releases `1.8` and `1.9`, the federation project moved into
a separate [federation repo](https://github.com/kubernetes/federation), where it is
now maintained. Consequently, the federation release information is available on the
[release page](https://github.com/kubernetes/federation/releases).
{{< /note >}}

@@ -60,7 +59,7 @@ curl -LO https://storage.googleapis.com/kubernetes-release/release/${RELEASE-VER
tar -xzvf kubernetes-client-linux-amd64.tar.gz
```
{{< note >}}
**Note:** The `RELEASE-VERSION` variable should either be set to or replaced with the actual version needed.
**Note:** The `RELEASE-VERSION` variable should either be set to or replaced with the actual version needed.
{{< /note >}}

Copy the extracted binary to one of the directories in your `$PATH`

@@ -79,7 +78,7 @@ tar -xzvf federation-client-linux-amd64.tar.gz
```

{{< note >}}
**Note:** The `RELEASE-VERSION` variable should be replaced with one of the release versions available at [federation release page](https://github.com/kubernetes/federation/releases).
**Note:** The `RELEASE-VERSION` variable should be replaced with one of the release versions available at [federation release page](https://github.com/kubernetes/federation/releases).
{{< /note >}}

Copy the extracted binary to one of the directories in your `$PATH`

@@ -92,7 +91,7 @@ sudo chmod +x /usr/local/bin/kubefed

### Install kubectl

You can install a matching version of kubectl using the instructions on
You can install a matching version of kubectl using the instructions on
the [kubectl install page](https://kubernetes.io/docs/tasks/tools/install-kubectl/).

## Choosing a host cluster.

@@ -177,7 +176,7 @@ without the Google Cloud DNS API scope by default. If you want to use a
Google Kubernetes Engine cluster as a Federation host, you must create it using the `gcloud`
command with the appropriate value in the `--scopes` field. You cannot
modify a Google Kubernetes Engine cluster directly to add this scope, but you can create a
new node pool for your cluster and delete the old one.
new node pool for your cluster and delete the old one.

{{< note >}}
**Note:** This will cause pods in the cluster to be rescheduled.

@@ -200,7 +199,7 @@ gcloud container node-pools delete default-pool --cluster gke-cluster

`kubefed init` sets up the federation control plane in the host
cluster and also adds an entry for the federation API server in your
local kubeconfig.
local kubeconfig.
{{< note >}}
**Note:** In the beta release of Kubernetes 1.6, `kubefed init` does not automatically set the current context to the
newly deployed federation. You can set the current context manually by running:

@@ -436,7 +435,7 @@ Where `<patch-file-name>` is the name of the file you created above.

## Adding a cluster to a federation

After you've deployed a federation control plane, you'll need to make that control plane aware of the clusters it should manage.
After you've deployed a federation control plane, you'll need to make that control plane aware of the clusters it should manage.

To join clusters into the federation:

@@ -463,7 +462,7 @@ To join clusters into the federation:
kubefed join gondor --host-cluster-context=rivendell
```

A new context has now been added to your kubeconfig named `fellowship` (after the name of your federation).
A new context has now been added to your kubeconfig named `fellowship` (after the name of your federation).

{{< note >}}
@@ -4,12 +4,11 @@ content_template: templates/task
weight: 30
---

{{< toc >}}

{{% capture overview %}}

In this example, we will run a Kubernetes Job with multiple parallel
worker processes.
worker processes.

In this example, as each pod is created, it picks up one unit of work
from a task queue, completes it, deletes it from the queue, and exits.

@@ -25,7 +24,6 @@ Here is an overview of the steps in this example:

{{% /capture %}}

{{< toc >}}

{{% capture prerequisites %}}

@@ -26,7 +26,6 @@ Here is an overview of the steps in this example:

{{% /capture %}}

{{< toc >}}

{{% capture prerequisites %}}

@@ -12,7 +12,6 @@ non-parallel, use of [Jobs](/docs/concepts/workloads/controllers/jobs-run-to-com

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -17,7 +17,6 @@ and the current limitations.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -33,7 +32,7 @@ from 1.10.

Then you have to install GPU drivers from the corresponding vendor on the nodes
and run the corresponding device plugin from the GPU vendor
([AMD](#deploying-amd-gpu-device-plugin), [NVIDIA](#deploying-nvidia-gpu-device-plugin)).
([AMD](#deploying-amd-gpu-device-plugin), [NVIDIA](#deploying-nvidia-gpu-device-plugin)).

When the above conditions are true, Kubernetes will expose `nvidia.com/gpu` or
`amd.com/gpu` as a schedulable resource.
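To illustrate the schedulable resource named in the hunk above, a hedged sketch of a pod requesting one NVIDIA GPU (the pod name, image, and command are placeholders; an equivalent request against `amd.com/gpu` would apply for AMD hardware):

```shell
# Sketch: request one GPU via the extended resource exposed by the device plugin.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: cuda-container
    image: nvidia/cuda:9.0-base
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
EOF
```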
@@ -19,7 +19,6 @@ This document walks you through an example of enabling Horizontal Pod Autoscaler

{{% /capture %}}

{{< toc >}}

{{% capture prerequisites %}}

@@ -28,7 +28,6 @@ to match the observed average CPU utilization to the target specified by user.

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -161,7 +160,7 @@ into a desired replica count (e.g. due to an error fetching the metrics
from the metrics APIs), scaling is skipped.

Finally, just before HPA scales the target, the scale reccomendation is recorded. The
controller considers all recommendations within a configurable window choosing the
controller considers all recommendations within a configurable window choosing the
highest recommendation from within that window. This value can be configured using the `--horizontal-pod-autoscaler-downscale-stabilization-window` flag, which defaults to 5 minutes.
This means that scaledowns will occur gradually, smoothing out the impact of rapidly
fluctuating metric values.

@@ -42,7 +42,6 @@ Rolling updates are initiated with the `kubectl rolling-update` command:

{{% /capture %}}

{{< toc >}}

{{% capture body %}}

@@ -21,7 +21,6 @@ protocol that is similar to the

{{% /capture %}}

{{< toc >}}

{{% capture prerequisites %}}