rename reference/kubernetes-api/index (#16947)

- renamed docs/reference/kubernetes-api/index.md
- added weight to files in docs/reference/kubernetes-api
- attempt to clean up labels-annotations-taints.md
- updated _redirects file
pull/17013/head^2
Karen Bradshaw 2019-10-23 11:48:16 -04:00 committed by Kubernetes Prow Robot
parent 452d5822d2
commit f29d3153eb
4 changed files with 27 additions and 41 deletions


@@ -182,11 +182,9 @@ static/docs/reference/generated/kubernetes-api/v1.15/scroll.js
## Updating the API reference index pages
-* Open `<web-base>/content/en/docs/reference/kubernetes-api/api-index.md` for editing, and update the API reference version number. For example:
+* Open `<web-base>/content/en/docs/reference/kubernetes-api/index.md` for editing, and update the API reference
+version number. For example:
-```
+```markdown
---
title: v1.15
---
@@ -198,7 +196,6 @@ static/docs/reference/generated/kubernetes-api/v1.15/scroll.js
new link for the latest API reference. Remove the oldest API reference version.
There should be five links to the most recent API references.
## Locally test the API reference
Publish a local version of the API reference.


@@ -1,5 +1,6 @@
---
title: v1.16
+weight: 50
---
[Kubernetes API v1.16](/docs/reference/generated/kubernetes-api/v1.16/)


@@ -1,24 +1,26 @@
---
title: Well-Known Labels, Annotations and Taints
content_template: templates/concept
-weight: 10
+weight: 60
---
{{% capture overview %}}
Kubernetes reserves all labels and annotations in the kubernetes.io namespace.
-This document serves both as a reference to the values, and as a coordination point for assigning values.
+This document serves both as a reference to the values and as a coordination point for assigning values.
{{% /capture %}}
{{% capture body %}}
## kubernetes.io/arch
Example: `kubernetes.io/arch=amd64`
Used on: Node
-Kubelet populates this with `runtime.GOARCH` as defined by Go. This can be handy if you are mixing arm and x86 nodes,
-for example.
+The Kubelet populates this with `runtime.GOARCH` as defined by Go. This can be handy if you are mixing arm and x86 nodes.
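A minimal sketch of how this label is typically consumed via a `nodeSelector` (the Pod name and image are placeholders, not part of this change):

```yaml
# Hypothetical Pod that schedules only onto amd64 nodes,
# selecting on the kubernetes.io/arch label described above.
apiVersion: v1
kind: Pod
metadata:
  name: amd64-only-pod    # placeholder name
spec:
  nodeSelector:
    kubernetes.io/arch: amd64
  containers:
  - name: app
    image: nginx          # placeholder image
```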
## kubernetes.io/os
@@ -26,8 +28,7 @@ Example: `kubernetes.io/os=linux`
Used on: Node
-Kubelet populates this with `runtime.GOOS` as defined by Go. This can be handy if you are mixing operating systems
-in your cluster (e.g., mixing Linux and Windows nodes).
+The Kubelet populates this with `runtime.GOOS` as defined by Go. This can be handy if you are mixing operating systems in your cluster (for example: mixing Linux and Windows nodes).
## beta.kubernetes.io/arch (deprecated)
@@ -37,15 +38,13 @@ This label has been deprecated. Please use `kubernetes.io/arch` instead.
This label has been deprecated. Please use `kubernetes.io/os` instead.
## kubernetes.io/hostname
Example: `kubernetes.io/hostname=ip-172-20-114-199.ec2.internal`
Used on: Node
-Kubelet populates this with the hostname. Note that the hostname can be changed from the "actual" hostname
-by passing the `--hostname-override` flag to kubelet.
+The Kubelet populates this label with the hostname. Note that the hostname can be changed from the "actual" hostname by passing the `--hostname-override` flag to the `kubelet`.
## beta.kubernetes.io/instance-type
@@ -53,12 +52,10 @@ Example: `beta.kubernetes.io/instance-type=m3.medium`
Used on: Node
-Kubelet populates this with the instance type as defined by the `cloudprovider`. It will not be set if
-not using a cloudprovider. This can be handy if you want to target certain workloads to certain instance
-types, but typically you want to rely on the Kubernetes scheduler to perform resource-based scheduling,
-and you should aim to schedule based on properties rather than on instance types (e.g. require a GPU, instead
-of requiring a `g2.2xlarge`)
+The Kubelet populates this with the instance type as defined by the `cloudprovider`.
+This will be set only if you are using a `cloudprovider`. This setting is handy
+if you want to target certain workloads to certain instance types, but typically you want
+to rely on the Kubernetes scheduler to perform resource-based scheduling. You should aim to schedule based on properties rather than on instance types (for example: require a GPU, instead of requiring a `g2.2xlarge`).
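To illustrate the "properties rather than instance types" guidance, a hedged sketch of requesting a GPU as a resource instead of pinning to an instance type (the Pod name, image, and the `nvidia.com/gpu` extended-resource name are illustrative; the actual resource name depends on your device plugin):

```yaml
# Hypothetical Pod that asks for a GPU via an extended resource,
# letting the scheduler pick any node with a free GPU, rather than
# using a nodeSelector on an instance type such as g2.2xlarge.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload      # placeholder name
spec:
  containers:
  - name: cuda-app
    image: nvidia/cuda    # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1
```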
## failure-domain.beta.kubernetes.io/region
@@ -74,32 +71,22 @@ Example:
Used on: Node, PersistentVolume
-On the Node: Kubelet populates this with the zone information as defined by the `cloudprovider`. It will not be set if
-not using a `cloudprovider`, but you should consider setting it on the nodes if it makes sense in your topology.
+On the Node: The `kubelet` populates this with the zone information as defined by the `cloudprovider`.
+This will be set only if you are using a `cloudprovider`. However, you should consider setting this
+on the nodes if it makes sense in your topology.
-On the PersistentVolume: The `PersistentVolumeLabel` admission controller will automatically add zone labels to PersistentVolumes,
-on GCE and AWS.
+On the PersistentVolume: The `PersistentVolumeLabel` admission controller will automatically add zone labels to PersistentVolumes on GCE and AWS.
-Kubernetes will automatically spread the pods in a replication controller or service across nodes in a single-zone
-cluster (to reduce the impact of failures). With multiple-zone clusters, this spreading behaviour is extended
-across zones (to reduce the impact of zone failures). This is achieved via SelectorSpreadPriority.
+Kubernetes will automatically spread the Pods in a replication controller or service across nodes in a single-zone cluster (to reduce the impact of failures). With multiple-zone clusters, this spreading behaviour is extended across zones (to reduce the impact of zone failures). This is achieved via _SelectorSpreadPriority_.
-This is a best-effort placement, and so if the zones in your cluster are heterogeneous (e.g. different numbers of nodes,
-different types of nodes, or different pod resource requirements), this might prevent equal spreading of
-your pods across zones. If desired, you can use homogenous zones (same number and types of nodes) to reduce
-the probability of unequal spreading.
+_SelectorSpreadPriority_ is a best-effort placement. If the zones in your cluster are heterogeneous (for example: different numbers of nodes, different types of nodes, or different Pod resource requirements), this placement might prevent equal spreading of your Pods across zones. If desired, you can use homogenous zones (same number and types of nodes) to reduce the probability of unequal spreading.
-The scheduler (via the VolumeZonePredicate predicate) will also ensure that pods that claim a given volume
-are only placed into the same zone as that volume, as volumes cannot be attached across zones.
+The scheduler (through the _VolumeZonePredicate_ predicate) also ensures that Pods that claim a given volume are only placed into the same zone as that volume. Volumes cannot be attached across zones.
-The actual values of zone and region don't matter, and nor is the meaning of the hierarchy rigidly defined. The expectation
-is that failures of nodes in different zones should be uncorrelated unless the entire region has failed. For example,
-zones should typically avoid sharing a single network switch. The exact mapping depends on your particular
-infrastructure - a three-rack installation will choose a very different setup to a multi-datacenter configuration.
+The actual values of zone and region don't matter. Nor is the node hierarchy rigidly defined.
+The expectation is that failures of nodes in different zones should be uncorrelated unless the entire region has failed. For example, zones should typically avoid sharing a single network switch. The exact mapping depends on your particular infrastructure - a three-rack installation will choose a very different setup to a multi-datacenter configuration.
If `PersistentVolumeLabel` does not support automatic labeling of your PersistentVolumes, you should consider
-adding the labels manually (or adding support to `PersistentVolumeLabel`), if you want the scheduler to prevent
-pods from mounting volumes in a different zone. If your infrastructure doesn't have this constraint, you don't
-need to add the zone labels to the volumes at all.
+adding the labels manually (or adding support for `PersistentVolumeLabel`). With `PersistentVolumeLabel`, the scheduler prevents Pods from mounting volumes in a different zone. If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.
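A hedged sketch of what manually labeling a PersistentVolume looks like (the volume name, zone, region, and EBS volume ID are all placeholders):

```yaml
# Hypothetical PersistentVolume labeled by hand with its zone and region,
# so the scheduler keeps Pods in the same zone as the volume.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-in-zone-a                                    # placeholder name
  labels:
    failure-domain.beta.kubernetes.io/zone: us-east-1a  # placeholder zone
    failure-domain.beta.kubernetes.io/region: us-east-1 # placeholder region
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0                     # placeholder EBS volume ID
```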
{{% /capture %}}


@@ -209,6 +209,7 @@
/docs/reference/glossary/maintainer/ /docs/reference/glossary/approver/ 301
/docs/reference/kubectl/kubectl/kubectl_*.md /docs/reference/generated/kubectl/kubectl-commands#:splat 301
/docs/reference/kubernetes-api/index/ /docs/reference/kubernetes-api/api-index/ 301
/docs/reference/workloads-18-19/ https://v1-9.docs.kubernetes.io/docs/reference/workloads-18-19/ 301
/docs/reporting-security-issues/ /security/ 301