parent 206a264c0e
commit 2e192598a0
@@ -6,7 +6,7 @@ toc:
 - docs/home/index.md

 - title: Release Notes
-  path: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md
+  path: https://git.k8s.io/kubernetes/CHANGELOG.md
 - title: Release Roadmap
   path: https://github.com/kubernetes/kubernetes/milestones/
@@ -31,9 +31,9 @@ toc:
 - title: OpenAPI and Swagger
   section:
   - title: OpenAPI Spec
-    path: https://github.com/kubernetes/kubernetes/blob/master/api/openapi-spec/
+    path: https://git.k8s.io/kubernetes/api/openapi-spec/
   - title: Swagger Spec
-    path: https://github.com/kubernetes/kubernetes/tree/master/api/swagger-spec/
+    path: https://git.k8s.io/kubernetes/api/swagger-spec/

 - title: Federation API
   section:
@@ -71,16 +71,16 @@ toc:
 - title: Kubernetes Design Docs
   section:
   - title: Kubernetes Architecture
-    path: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture.md
+    path: https://git.k8s.io/community/contributors/design-proposals/architecture.md
   - title: Kubernetes Design Overview
     path: https://github.com/kubernetes/kubernetes/tree/release-1.6/docs/design
   - title: Kubernetes Identity and Access Management
-    path: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/access.md
+    path: https://git.k8s.io/community/contributors/design-proposals/access.md
   - docs/admin/ovs-networking.md
   - title: Security Contexts
-    path: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/security_context.md
+    path: https://git.k8s.io/community/contributors/design-proposals/security_context.md
   - title: Security in Kubernetes
-    path: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/security.md
+    path: https://git.k8s.io/community/contributors/design-proposals/security.md

 - title: Kubernetes Issues and Security
   section:
@@ -25,7 +25,7 @@
 </div>
 </div>
 <div id="miceType" class="center">
-© {{ 'now' | date: "%Y" }} The Kubernetes Authors | Documentation Distributed under <a href="https://github.com/kubernetes/kubernetes.github.io/blob/master/LICENSE" class="light-text">CC BY 4.0</a>
+© {{ 'now' | date: "%Y" }} The Kubernetes Authors | Documentation Distributed under <a href="https://git.k8s.io/kubernetes.github.io/LICENSE" class="light-text">CC BY 4.0</a>
 </div>
 <div id="miceType" class="center">
 Copyright © {{ 'now' | date: "%Y" }} The Linux Foundation®. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page: <a href="https://www.linuxfoundation.org/trademark-usage" class="light-text">https://www.linuxfoundation.org/trademark-usage</a>
@@ -4,7 +4,7 @@
 <tr>
 <td>
 <p><b>NOTICE</b></p>
-<p>As of March 14, 2017, the Kubernetes SIG-Docs-Maintainers group has begun migration of the User Guide content, as announced previously, to the <a href="https://github.com/kubernetes/community/tree/master/sig-docs">SIG Docs community</a> through the <a href="https://groups.google.com/forum/#!forum/kubernetes-sig-docs">kubernetes-sig-docs</a> group and <a href="https://kubernetes.slack.com/messages/sig-docs/">kubernetes.slack.com #sig-docs</a> channel.</p>
+<p>As of March 14, 2017, the Kubernetes SIG-Docs-Maintainers group has begun migration of the User Guide content, as announced previously, to the <a href="https://git.k8s.io/community/sig-docs">SIG Docs community</a> through the <a href="https://groups.google.com/forum/#!forum/kubernetes-sig-docs">kubernetes-sig-docs</a> group and <a href="https://kubernetes.slack.com/messages/sig-docs/">kubernetes.slack.com #sig-docs</a> channel.</p>
 <p>The user guides within this section are being refactored into topics within Tutorials, Tasks, and Concepts. Anything that has been moved will have a notice placed in its previous location as well as a link to its new location. The reorganization implements a new table of contents and should improve the documentation's findability and readability for a wider range of audiences.</p>
 <p>For any questions, please contact: <a href="mailto:kubernetes-sig-docs@googlegroups.com">kubernetes-sig-docs@googlegroups.com</a></p>
 </td>
@@ -91,9 +91,9 @@ Kubernetes provides a great deal of functionality, and there are always new scenarios that would benefit from new features

 [Labels](/docs/user-guide/labels/) allow users to organize and manage their resources however they please. [Annotations](/docs/user-guide/annotations/) enable users to decorate resources with custom descriptive information to suit their workflows, and provide an easy way for management tools to checkpoint state.

-Additionally, the [Kubernetes control plane](/docs/admin/cluster-components) is built upon the same [APIs](/docs/api/) that are available to both developers and users. Users can write their own controllers, [schedulers](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/scheduler.md), and so on; if they do, they can extend the current general-purpose [CLI command-line tool](/docs/user-guide/kubectl-overview/) with newly added [custom APIs](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/extending-api.md).
+Additionally, the [Kubernetes control plane](/docs/admin/cluster-components) is built upon the same [APIs](/docs/api/) that are available to both developers and users. Users can write their own controllers, [schedulers](https://git.k8s.io/community/contributors/devel/scheduler.md), and so on; if they do, they can extend the current general-purpose [CLI command-line tool](/docs/user-guide/kubectl-overview/) with newly added [custom APIs](https://git.k8s.io/community/contributors/design-proposals/extending-api.md).

-This [design](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/principles.md) has enabled a number of other systems to build on top of Kubernetes.
+This [design](https://git.k8s.io/community/contributors/design-proposals/principles.md) has enabled a number of other systems to build on top of Kubernetes.

 #### What Kubernetes is not:
@@ -205,7 +205,7 @@ This plug-in will observe the incoming request and ensure that it does not viola
 enumerated in the `ResourceQuota` object in a `Namespace`. If you are using `ResourceQuota`
 objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints.

-See the [resourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_resource_quota.md) and the [example of Resource Quota](/docs/concepts/policy/resource-quotas/) for more details.
+See the [resourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/admission_control_resource_quota.md) and the [example of Resource Quota](/docs/concepts/policy/resource-quotas/) for more details.

 It is strongly encouraged that this plug-in is configured last in the sequence of admission control plug-ins. This is
 so that quota is not prematurely incremented only for the request to be rejected later in admission control.
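
For reference, a minimal `ResourceQuota` object of the kind this plug-in enforces might look like the following sketch; the name, namespace, and quota values are illustrative assumptions, not part of the change above.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota    # hypothetical name
  namespace: myspace     # hypothetical namespace
spec:
  hard:
    pods: "10"           # at most 10 Pods may exist in the namespace
    requests.cpu: "4"    # total CPU requests may not exceed 4 cores
    requests.memory: 8Gi # total memory requests may not exceed 8Gi
```

Once such an object exists in a namespace, the plug-in rejects requests that would push aggregate usage in that namespace past the listed limits.
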
@@ -218,7 +218,7 @@ your Kubernetes deployment, you MUST use this plug-in to enforce those constrain
 be used to apply default resource requests to Pods that don't specify any; currently, the default LimitRanger
 applies a 0.1 CPU requirement to all Pods in the `default` namespace.

-See the [limitRange design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_limit_range.md) and the [example of Limit Range](/docs/tasks/configure-pod-container/limit-range/) for more details.
+See the [limitRange design doc](https://git.k8s.io/community/contributors/design-proposals/admission_control_limit_range.md) and the [example of Limit Range](/docs/tasks/configure-pod-container/limit-range/) for more details.

 ### InitialResources (experimental)
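
As a sketch of the objects the LimitRanger plug-in acts on, a `LimitRange` supplying the default CPU request mentioned above could look like this (the object name is an assumption):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-defaults   # hypothetical name
  namespace: default
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 100m        # applied to Containers that specify no request (0.1 CPU)
```
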
@@ -227,7 +227,7 @@ then the plug-in auto-populates a compute resource request based on historical u
 If there is not enough data to make a decision the Request is left unchanged.
 When the plug-in sets a compute resource request, it annotates the pod with information on what compute resources it auto-populated.

-See the [InitialResources proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/initial-resources.md) for more details.
+See the [InitialResources proposal](https://git.k8s.io/community/contributors/design-proposals/initial-resources.md) for more details.

 ### NamespaceLifecycle
@@ -141,7 +141,7 @@ Access to other non-resource paths can be disallowed without restricting access
 to the REST API.

 For further documentation refer to the authorization.v1beta1 API objects and
-[webhook.go](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/plugin/pkg/authorizer/webhook/webhook.go).
+[webhook.go](https://git.k8s.io/kubernetes/staging/src/k8s.io/apiserver/plugin/pkg/authorizer/webhook/webhook.go).

 {% endcapture %}
@@ -58,7 +58,7 @@ Authorization: Bearer 07401b.f395accd246ae52d

 Each valid token is backed by a secret in the `kube-system` namespace. You can
 find the full design doc
-[here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/bootstrap-discovery.md).
+[here](https://git.k8s.io/community/contributors/design-proposals/bootstrap-discovery.md).

 Here is what the secret looks like. Note that `base64(string)` indicates the
 value should be base64 encoded. The undecoded version is provided here for
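
The hunk cuts off just before the document's own example, but based on the token `07401b.f395accd246ae52d` shown in the hunk header, a bootstrap token secret follows this general shape (a sketch using the document's `base64(string)` notation; the expiration value and exact field set are assumptions):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-07401b              # name embeds the public token ID
  namespace: kube-system
type: bootstrap.kubernetes.io/token
data:
  token-id: base64(07401b)                  # public part of the token
  token-secret: base64(f395accd246ae52d)    # private part of the token
  expiration: base64(2017-03-10T03:22:11Z)  # hypothetical expiry timestamp
```
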
@@ -385,4 +385,4 @@ if required.

 ## For more information

-* [Federation proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/federation.md) details use cases that motivated this work.
+* [Federation proposal](https://git.k8s.io/community/contributors/design-proposals/federation.md) details use cases that motivated this work.
@@ -448,7 +448,7 @@ Remember to change `proxy_ip` and add a kube master node IP address to

 ## Use Kubeadm with other CRI runtimes

-Since the [Kubernetes 1.6 release](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#node-components-1), Kubernetes container runtimes have moved to using the CRI by default. Currently, the built-in container runtime is Docker, which is enabled by the built-in `dockershim` in the `kubelet`.
+Since the [Kubernetes 1.6 release](https://git.k8s.io/kubernetes/CHANGELOG.md#node-components-1), Kubernetes container runtimes have moved to using the CRI by default. Currently, the built-in container runtime is Docker, which is enabled by the built-in `dockershim` in the `kubelet`.

 Using other CRI based runtimes with kubeadm is very simple, and currently supported runtimes are:
@@ -494,5 +494,5 @@ If you already have kubeadm installed and want to upgrade, run `apt-get update
 && apt-get upgrade` or `yum update` to get the latest version of kubeadm.

 Refer to the
-[CHANGELOG.md](https://github.com/kubernetes/kubeadm/blob/master/CHANGELOG.md)
+[CHANGELOG.md](https://git.k8s.io/kubeadm/CHANGELOG.md)
 for more information.
@@ -66,7 +66,7 @@ kubelet
 --enable-custom-metrics Support for gathering custom metrics.
 --enable-debugging-handlers Enables server endpoints for log collection and local running of containers and commands (default true)
 --enable-server Enable the Kubelet's server (default true)
---enforce-node-allocatable stringSlice A comma separated list of levels of node allocatable enforcement to be enforced by kubelet. Acceptable options are 'pods', 'system-reserved' & 'kube-reserved'. If the latter two options are specified, '--system-reserved-cgroup' & '--kube-reserved-cgroup' must also be set respectively. See https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node-allocatable.md for more details. [default='pods'] (default [pods])
+--enforce-node-allocatable stringSlice A comma separated list of levels of node allocatable enforcement to be enforced by kubelet. Acceptable options are 'pods', 'system-reserved' & 'kube-reserved'. If the latter two options are specified, '--system-reserved-cgroup' & '--kube-reserved-cgroup' must also be set respectively. See https://git.k8s.io/community/contributors/design-proposals/node-allocatable.md for more details. [default='pods'] (default [pods])
 --event-burst int32 Maximum size of a burst of event records; temporarily allows event records to burst to this number, while still not exceeding event-qps. Only used if --event-qps > 0 (default 10)
 --event-qps int32 If > 0, limit event creations per second to this value. If 0, unlimited. (default 5)
 --eviction-hard string A set of eviction thresholds (e.g. memory.available<1Gi) that if met would trigger a pod eviction. (default "memory.available<100Mi")
@@ -76,7 +76,7 @@ kubelet
 --eviction-soft string A set of eviction thresholds (e.g. memory.available<1.5Gi) that if met over a corresponding grace period would trigger a pod eviction.
 --eviction-soft-grace-period string A set of eviction grace periods (e.g. memory.available=1m30s) that correspond to how long a soft eviction threshold must hold before triggering a pod eviction.
 --exit-on-lock-contention Whether kubelet should exit upon lock-file contention.
---experimental-allocatable-ignore-eviction When set to 'true', Hard Eviction Thresholds will be ignored while calculating Node Allocatable. See https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node-allocatable.md for more details. [default=false]
+--experimental-allocatable-ignore-eviction When set to 'true', Hard Eviction Thresholds will be ignored while calculating Node Allocatable. See https://git.k8s.io/community/contributors/design-proposals/node-allocatable.md for more details. [default=false]
 --experimental-allowed-unsafe-sysctls stringSlice Comma-separated whitelist of unsafe sysctls or unsafe sysctl patterns (ending in *). Use these at your own risk.
 --experimental-bootstrap-kubeconfig string <Warning: Experimental feature> Path to a kubeconfig file that will be used to get client certificate for kubelet. If the file specified by --kubeconfig does not exist, the bootstrap kubeconfig is used to request a client certificate from the API server. On success, a kubeconfig file referencing the generated key and obtained certificate is written to the path specified by --kubeconfig. The certificate and key file will be stored in the directory pointed by --cert-dir.
 --experimental-check-node-capabilities-before-mount [Experimental] if set true, the kubelet will check the underlying node for required components (binaries, etc.) before performing the mount
@@ -11,7 +11,7 @@ title: Running in Multiple Zones
 Kubernetes 1.2 adds support for running a single cluster in multiple failure zones
 (GCE calls them simply "zones", AWS calls them "availability zones", here we'll refer to them as "zones").
 This is a lightweight version of a broader Cluster Federation feature (previously referred to by the affectionate
-nickname ["Ubernetes"](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/federation.md)).
+nickname ["Ubernetes"](https://git.k8s.io/community/contributors/design-proposals/federation.md)).
 Full Cluster Federation allows combining separate
 Kubernetes clusters running in different regions or cloud providers
 (or on-premise data centers). However, many
@@ -84,7 +84,7 @@ sudo docker run -it --rm --privileged --net=host \
   gcr.io/google_containers/node-test:0.2
 ```

-Node conformance test is a containerized version of [node e2e test](https://github.com/kubernetes/community/blob/master/contributors/devel/e2e-node-tests.md).
+Node conformance test is a containerized version of [node e2e test](https://git.k8s.io/community/contributors/devel/e2e-node-tests.md).
 By default, it runs all conformance tests.

 Theoretically, you can run any node e2e test if you configure the container and
@@ -7918,7 +7918,7 @@ Appears In <a href="#pod-v1-core">Pod</a> </aside>
 </tr>
 <tr>
 <td>qosClass <br /> <em>string</em></td>
-<td>The Quality of Service (QOS) classification assigned to the pod based on resource requirements See PodQOSClass type for available QOS classes More info: <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/design/resource-qos.md">https://github.com/kubernetes/kubernetes/blob/master/docs/design/resource-qos.md</a></td>
+<td>The Quality of Service (QOS) classification assigned to the pod based on resource requirements See PodQOSClass type for available QOS classes More info: <a href="https://git.k8s.io/community/contributors/design-proposals/resource-qos.md">https://git.k8s.io/community/contributors/design-proposals/resource-qos.md</a></td>
 </tr>
 <tr>
 <td>reason <br /> <em>string</em></td>
@@ -19,7 +19,7 @@ A `node` is a worker machine in Kubernetes, previously known as a `minion`. A no
 may be a VM or physical machine, depending on the cluster. Each node has
 the services necessary to run [pods](/docs/user-guide/pods) and is managed by the master
 components. The services on a node include Docker, kubelet and kube-proxy. See
-[The Kubernetes Node](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/architecture.md#the-kubernetes-node) section in the
+[The Kubernetes Node](https://git.k8s.io/community/contributors/design-proposals/architecture.md#the-kubernetes-node) section in the
 architecture design doc for more details.

 ## Node Status
@@ -31,6 +31,6 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply

 ## Legacy Add-ons

-There are several other add-ons documented in the deprecated [cluster/addons](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons) directory.
+There are several other add-ons documented in the deprecated [cluster/addons](https://git.k8s.io/kubernetes/cluster/addons) directory.

 Well-maintained ones should be linked to here. PRs welcome!
@@ -44,7 +44,7 @@ why you might want multiple clusters are:
   See [Multi cluster guide](/docs/admin/multi-cluster) for details.
 * Scalability: There are scalability limits to a single kubernetes cluster (this
   should not be the case for most users. For more details:
-  [Kubernetes Scaling and Performance Goals](https://github.com/kubernetes/community/blob/master/sig-scalability/goals.md)).
+  [Kubernetes Scaling and Performance Goals](https://git.k8s.io/community/sig-scalability/goals.md)).
 * [Hybrid cloud](#hybrid-cloud-capabilities): You can have multiple clusters on different cloud providers or
   on-premises data centers.
@@ -101,7 +101,7 @@ systemd is not present, they write to `.log` files in the `/var/log` directory.
 System components inside containers always write to the `/var/log` directory,
 bypassing the default logging mechanism. They use the [glog][glog]
 logging library. You can find the conventions for logging severity for those
-components in the [development docs on logging](https://github.com/kubernetes/community/blob/master/contributors/devel/logging.md).
+components in the [development docs on logging](https://git.k8s.io/community/contributors/devel/logging.md).

 Similarly to the container logs, system component logs in the `/var/log`
 directory should be rotated. In Kubernetes clusters brought up by
@@ -225,7 +225,7 @@ to run, and in both cases, the network provides one IP address per pod - as is s

 ### CNI-Genie from Huawei

-[CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](https://github.com/kubernetes/kubernetes.github.io/blob/master/docs/concepts/cluster-administration/networking.md#kubernetes-model) at runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/coreos/flannel#flannel), [Calico](http://docs.projectcalico.org/), [Romana](http://romana.io), [Weave-net](https://www.weave.works/products/weave-net/).
+[CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](https://git.k8s.io/kubernetes.github.io/docs/concepts/cluster-administration/networking.md#kubernetes-model) at runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/coreos/flannel#flannel), [Calico](http://docs.projectcalico.org/), [Romana](http://romana.io), [Weave-net](https://www.weave.works/products/weave-net/).

 CNI-Genie also supports [assigning multiple IP addresses to a pod](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-ips/README.md#feature-2-extension-cni-genie-multiple-ip-addresses-per-pod), each from a different CNI plugin.
@@ -233,4 +233,4 @@ CNI-Genie also supports [assigning multiple IP addresses to a pod](https://githu

 The early design of the networking model and its rationale, and some future
 plans are described in more detail in the [networking design
-document](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/networking.md).
+document](https://git.k8s.io/community/contributors/design-proposals/networking.md).
@@ -43,7 +43,7 @@ Run `kubectl get nodes` to get the names of your cluster's nodes. Pick out the o

 If this fails with an "invalid command" error, you're likely using an older version of kubectl that doesn't have the `label` command. In that case, see the [previous version](https://github.com/kubernetes/kubernetes/blob/a053dbc313572ed60d89dae9821ecab8bfd676dc/examples/node-selection/README.md) of this guide for instructions on how to manually set labels on a node.

-Also, note that label keys must be in the form of DNS labels (as described in the [identifiers doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/identifiers.md)), meaning that they are not allowed to contain any upper-case letters.
+Also, note that label keys must be in the form of DNS labels (as described in the [identifiers doc](https://git.k8s.io/community/contributors/design-proposals/identifiers.md)), meaning that they are not allowed to contain any upper-case letters.

 You can verify that it worked by re-running `kubectl get nodes --show-labels` and checking that the node now has a label.
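
To connect the labeling step above to scheduling, a pod that should land only on nodes carrying a given label adds a matching `nodeSelector` to its spec; a minimal sketch, assuming a hypothetical `disktype=ssd` label:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx       # hypothetical pod
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd   # schedule only onto nodes labeled disktype=ssd
```
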
@@ -143,7 +143,7 @@ If you specify multiple `nodeSelectorTerms` associated with `nodeAffinity` types
 If you specify multiple `matchExpressions` associated with `nodeSelectorTerms`, then the pod can be scheduled onto a node **only if all** `matchExpressions` can be satisfied.

 For more information on node affinity, see the design doc
-[here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/nodeaffinity.md).
+[here](https://git.k8s.io/community/contributors/design-proposals/nodeaffinity.md).

 ### Inter-pod affinity and anti-affinity (beta feature)
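
A minimal sketch of the `nodeSelectorTerms`/`matchExpressions` semantics described above (the label key, values, and pod name are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity   # hypothetical pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:   # multiple terms are ORed
        - matchExpressions:  # expressions within one term are ANDed
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
  containers:
  - name: main
    image: gcr.io/google_containers/pause:2.0
```
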
@@ -184,7 +184,7 @@ value V that is running a pod that has a label with key "security" and value "S1
 rule says that the pod prefers to not schedule onto a node if that node is already running a pod with label
 having key "security" and value "S2". (If the `topologyKey` were `failure-domain.beta.kubernetes.io/zone` then
 it would mean that the pod cannot schedule onto a node if that node is in the same zone as a pod with
-label having key "security" and value "S2".) See the [design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/podaffinity.md)
+label having key "security" and value "S2".) See the [design doc](https://git.k8s.io/community/contributors/design-proposals/podaffinity.md)
 for many more examples of pod affinity and anti-affinity, both the `requiredDuringSchedulingIgnoredDuringExecution`
 flavor and the `preferredDuringSchedulingIgnoredDuringExecution` flavor.
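
A sketch matching the prose above: require co-location (by zone) with pods labeled `security=S1`, and prefer to avoid nodes already running pods labeled `security=S2` (the pod name and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity   # hypothetical pod
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values: ["S1"]
        topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values: ["S2"]
          topologyKey: kubernetes.io/hostname
  containers:
  - name: main
    image: gcr.io/google_containers/pause:2.0
```
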
@@ -208,7 +208,7 @@ All `matchExpressions` associated with `requiredDuringSchedulingIgnoredDuringExe
 must be satisfied for the pod to schedule onto a node.

 For more information on inter-pod affinity/anti-affinity, see the design doc
-[here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/podaffinity.md).
+[here](https://git.k8s.io/community/contributors/design-proposals/podaffinity.md).

 ## Taints and tolerations (beta feature)
@@ -432,7 +432,7 @@ These automatically-added tolerations ensure that
 the default pod behavior of remaining bound for 5 minutes after one of these
 problems is detected is maintained.
 The two default tolerations are added by the [DefaultTolerationSeconds
-admission controller](https://github.com/kubernetes/kubernetes/tree/master/plugin/pkg/admission/defaulttolerationseconds).
+admission controller](https://git.k8s.io/kubernetes/plugin/pkg/admission/defaulttolerationseconds).

 [DaemonSet](https://kubernetes.io/docs/admin/daemons/) pods are created with
 `NoExecute` tolerations for `node.alpha.kubernetes.io/unreachable` and `node.alpha.kubernetes.io/notReady`
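
The automatically added tolerations described above take roughly this form in a pod spec; the 300-second value corresponds to the 5-minute default (a sketch, using the taint keys named in the hunk):

```yaml
tolerations:
- key: node.alpha.kubernetes.io/notReady
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300   # remain bound for 5 minutes after the condition appears
- key: node.alpha.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300
```
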
@@ -13,7 +13,7 @@ requests specified, the scheduler can make better decisions about which nodes to
 place Pods on. And when Containers have their limits specified, contention for
 resources on a node can be handled in a specified manner. For more details about
 the difference between requests and limits, see
-[Resource QoS](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-qos.md).
+[Resource QoS](https://git.k8s.io/community/contributors/design-proposals/resource-qos.md).

 {% endcapture %}
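
A container spec carrying both requests (used by the scheduler) and limits (enforced at runtime), the distinction discussed above; the values are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend    # hypothetical pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 250m   # what the scheduler reserves when placing the Pod
        memory: 64Mi
      limits:
        cpu: 500m   # hard cap applied when resources are contended
        memory: 128Mi
```
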
@@ -243,7 +243,7 @@ The amount of resources available to Pods is less than the node capacity, becaus
 system daemons use a portion of the available resources. The `allocatable` field of
 [NodeStatus](/docs/resources-reference/v1.6/#nodestatus-v1-core)
 gives the amount of resources that are available to Pods. For more information, see
-[Node Allocatable Resources](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node-allocatable.md).
+[Node Allocatable Resources](https://git.k8s.io/community/contributors/design-proposals/node-allocatable.md).

 The [resource quota](/docs/concepts/policy/resource-quotas/) feature can be configured
 to limit the total amount of resources that can be consumed. If used in conjunction
@@ -10,7 +10,7 @@ redirect_from:
 Objects of type `secret` are intended to hold sensitive information, such as
 passwords, OAuth tokens, and ssh keys. Putting this information in a `secret`
 is safer and more flexible than putting it verbatim in a `pod` definition or in
-a docker image. See [Secrets design document](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/secrets.md) for more information.
+a docker image. See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/secrets.md) for more information.

 * TOC
 {:toc}
@@ -121,7 +121,7 @@ data:
 ```

 The data field is a map. Its keys must match
-[`DNS_SUBDOMAIN`](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/design/identifiers.md), except that leading dots are also
+[`DNS_SUBDOMAIN`](https://git.k8s.io/community/contributors/design-proposals/identifiers.md), except that leading dots are also
 allowed. The values are arbitrary data, encoded using base64.

 Create the secret using [`kubectl create`](/docs/user-guide/kubectl/v1.6/#create):
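
A secret of the shape described above, with map keys matching `DNS_SUBDOMAIN` and base64-encoded values (the name and payload are assumptions):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret   # hypothetical name
type: Opaque
data:
  username: YWRtaW4=           # base64 of "admin"
  password: MWYyZDFlMmU2N2Rm   # base64 of "1f2d1e2e67df"
```
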
@@ -7,7 +7,7 @@ redirect_from:
 - "/docs/api.html"
 ---

-Overall API conventions are described in the [API conventions doc](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md).
+Overall API conventions are described in the [API conventions doc](https://git.k8s.io/community/contributors/devel/api-conventions.md).

 API endpoints, resource types and samples are described in [API Reference](/docs/reference).
@@ -23,13 +23,13 @@ Kubernetes itself is decomposed into multiple components, which interact through

 In our experience, any system that is successful needs to grow and change as new use cases emerge or existing ones change. Therefore, we expect the Kubernetes API to continuously change and grow. However, we intend to not break compatibility with existing clients, for an extended period of time. In general, new API resources and new resource fields can be expected to be added frequently. Elimination of resources or fields will require following a deprecation process. The precise deprecation policy for eliminating features is TBD, but once we reach our 1.0 milestone, there will be a specific policy.

-What constitutes a compatible change and how to change the API are detailed by the [API change document](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api_changes.md).
+What constitutes a compatible change and how to change the API are detailed by the [API change document](https://git.k8s.io/community/contributors/devel/api_changes.md).

 ## OpenAPI and Swagger definitions

 Complete API details are documented using [Swagger v1.2](http://swagger.io/) and [OpenAPI](https://www.openapis.org/). The Kubernetes apiserver (aka "master") exposes an API that can be used to retrieve the Swagger v1.2 Kubernetes API spec located at `/swaggerapi`. You can also enable a UI to browse the API documentation at `/swagger-ui` by passing the `--enable-swagger-ui=true` flag to apiserver.

-Starting with kubernetes 1.4, OpenAPI spec is also available at [`/swagger.json`](https://github.com/kubernetes/kubernetes/blob/master/api/openapi-spec/swagger.json). While we are transitioning from Swagger v1.2 to OpenAPI (aka Swagger v2.0), some of the tools such as kubectl and swagger-ui are still using v1.2 spec. OpenAPI spec is in Beta as of Kubernetes 1.5.
+Starting with kubernetes 1.4, OpenAPI spec is also available at [`/swagger.json`](https://git.k8s.io/kubernetes/api/openapi-spec/swagger.json). While we are transitioning from Swagger v1.2 to OpenAPI (aka Swagger v2.0), some of the tools such as kubectl and swagger-ui are still using v1.2 spec. OpenAPI spec is in Beta as of Kubernetes 1.5.

 Kubernetes implements an alternative Protobuf based serialization format for the API that is primarily intended for intra-cluster communication, documented in the [design proposal](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/proposals/protobuf.md) and the IDL files for each schema are located in the Go packages that define the API objects.
@@ -42,12 +42,12 @@ multiple API versions, each at a different API path, such as `/api/v1` or
 We chose to version at the API level rather than at the resource or field level to ensure that the API presents a clear, consistent view of system resources and behavior, and to enable controlling access to end-of-lifed and/or experimental APIs. The JSON and Protobuf serialization schemas follow the same guidelines for schema changes - all descriptions below cover both formats.

 Note that API versioning and Software versioning are only indirectly related. The [API and release
-versioning proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/versioning.md) describes the relationship between API versioning and
+versioning proposal](https://git.k8s.io/community/contributors/design-proposals/versioning.md) describes the relationship between API versioning and
 software versioning.


 Different API versions imply different levels of stability and support. The criteria for each level are described
-in more detail in the [API Changes documentation](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api_changes.md#alpha-beta-and-stable-versions). They are summarized here:
+in more detail in the [API Changes documentation](https://git.k8s.io/community/contributors/devel/api_changes.md#alpha-beta-and-stable-versions). They are summarized here:

 - Alpha level:
   - The version names contain `alpha` (e.g. `v1alpha1`).
@@ -71,7 +71,7 @@ in more detail in the [API Changes documentation](https://github.com/kubernetes/

 ## API groups

-To make it easier to extend the Kubernetes API, we implemented [*API groups*](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-group.md).
+To make it easier to extend the Kubernetes API, we implemented [*API groups*](https://git.k8s.io/community/contributors/design-proposals/api-group.md).
 The API group is specified in a REST path and in the `apiVersion` field of a serialized object.

 Currently there are several API groups in use:
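
For illustration, the group shows up in the `apiVersion` field as `group/version`, while core-group objects use the bare version (e.g. `apiVersion: v1`); a sketch using the `batch` group, with a hypothetical Job:

```yaml
apiVersion: batch/v1    # API group "batch", version "v1"
kind: Job
metadata:
  name: example-job     # hypothetical name
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["echo", "hello"]
      restartPolicy: Never
```
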
@@ -83,10 +83,10 @@ Currently there are several API groups in use:

 There are two supported paths to extending the API.
-1. [Third Party Resources](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/extending-api.md)
+1. [Third Party Resources](https://git.k8s.io/community/contributors/design-proposals/extending-api.md)
    are for users with very basic CRUD needs.
 1. Coming soon: users needing the full set of Kubernetes API semantics can implement their own apiserver
-   and use the [aggregator](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/aggregated-api-servers.md)
+   and use the [aggregator](https://git.k8s.io/community/contributors/design-proposals/aggregated-api-servers.md)
    to make it seamless for clients.
@@ -93,7 +93,7 @@ Even though Kubernetes provides a lot of functionality, there are always new sce

 [Labels](/docs/concepts/overview/working-with-objects/labels/) empower users to organize their resources however they please. [Annotations](/docs/concepts/overview/working-with-objects/annotations/) enable users to decorate resources with custom information to facilitate their workflows and provide an easy way for management tools to checkpoint state.

-Additionally, the [Kubernetes control plane](/docs/concepts/overview/components/) is built upon the same [APIs](/docs/reference/api-overview/) that are available to developers and users. Users can write their own controllers, such as [schedulers](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/scheduler.md), with [their own APIs](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/extending-api.md) that can be targeted by a general-purpose [command-line tool](/docs/user-guide/kubectl-overview/).
+Additionally, the [Kubernetes control plane](/docs/concepts/overview/components/) is built upon the same [APIs](/docs/reference/api-overview/) that are available to developers and users. Users can write their own controllers, such as [schedulers](https://git.k8s.io/community/contributors/devel/scheduler.md), with [their own APIs](https://git.k8s.io/community/contributors/design-proposals/extending-api.md) that can be targeted by a general-purpose [command-line tool](/docs/user-guide/kubectl-overview/).

 This [design](https://github.com/kubernetes/community/blob/{{page.githubbranch}}/contributors/design-proposals/principles.md) has enabled a number of other systems to build atop Kubernetes.
@@ -21,7 +21,7 @@ This page explains how Kubernetes objects are represented in the Kubernetes API,

 A Kubernetes object is a "record of intent"--once you create the object, the Kubernetes system will constantly work to ensure that object exists. By creating an object, you're effectively telling the Kubernetes system what you want your cluster's workload to look like; this is your cluster's **desired state**.

-To work with Kubernetes objects--whether to create, modify, or delete them--you'll need to use the [Kubernetes API](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api-conventions.md). When you use the `kubectl` command-line interface, for example, the CLI makes the necessary Kubernetes API calls for you; you can also use the Kubernetes API directly in your own programs. Kubernetes currently provides a `golang` [client library](https://github.com/kubernetes/client-go) for this purpose, and other language libraries (such as [Python](https://github.com/kubernetes-incubator/client-python)) are being developed.
+To work with Kubernetes objects--whether to create, modify, or delete them--you'll need to use the [Kubernetes API](https://git.k8s.io/community/contributors/devel/api-conventions.md). When you use the `kubectl` command-line interface, for example, the CLI makes the necessary Kubernetes API calls for you; you can also use the Kubernetes API directly in your own programs. Kubernetes currently provides a `golang` [client library](https://github.com/kubernetes/client-go) for this purpose, and other language libraries (such as [Python](https://github.com/kubernetes-incubator/client-python)) are being developed.

 ### Object Spec and Status
@@ -30,7 +30,7 @@ Every Kubernetes object includes two nested object fields that govern the object

 For example, a Kubernetes Deployment is an object that can represent an application running on your cluster. When you create the Deployment, you might set the Deployment spec to specify that you want three replicas of the application to be running. The Kubernetes system reads the Deployment spec and starts three instances of your desired application--updating the status to match your spec. If any of those instances should fail (a status change), the Kubernetes system responds to the difference between spec and status by making a correction--in this case, starting a replacement instance.

-For more information on the object spec, status, and metadata, see the [Kubernetes API Conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md).
+For more information on the object spec, status, and metadata, see the [Kubernetes API Conventions](https://git.k8s.io/community/contributors/devel/api-conventions.md).

 ### Describing a Kubernetes Object
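
The three-replica Deployment used as the running example above might be written like this; the name, labels, image, and the 1.6-era `apps/v1beta1` version are assumptions:

```yaml
apiVersion: apps/v1beta1   # Deployment API group/version as of Kubernetes 1.6
kind: Deployment
metadata:
  name: nginx-deployment   # hypothetical name
spec:
  replicas: 3              # desired state: three running instances
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
```
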
@@ -14,7 +14,7 @@ For non-unique user-provided attributes, Kubernetes provides [labels](/docs/user

 ## Names

-Names are generally client-provided. Only one object of a given kind can have a given name at a time (i.e., they are spatially unique). But if you delete an object, you can make a new object with the same name. Names are used to refer to an object in a resource URL, such as `/api/v1/pods/some-name`. By convention, the names of Kubernetes resources should be up to a maximum length of 253 characters and consist of lower-case alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions. See the [identifiers design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/identifiers.md) for the precise syntax rules for names.
+Names are generally client-provided. Only one object of a given kind can have a given name at a time (i.e., they are spatially unique). But if you delete an object, you can make a new object with the same name. Names are used to refer to an object in a resource URL, such as `/api/v1/pods/some-name`. By convention, the names of Kubernetes resources should be up to a maximum length of 253 characters and consist of lower-case alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions. See the [identifiers design doc](https://git.k8s.io/community/contributors/design-proposals/identifiers.md) for the precise syntax rules for names.

 ## UIDs
@@ -11,7 +11,7 @@ Objects of type `PodSecurityPolicy` govern the ability
 to make requests on a pod that affect the `SecurityContext` that will be
 applied to a pod and container.

-See [PodSecurityPolicy proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/security-context-constraints.md) for more information.
+See [PodSecurityPolicy proposal](https://git.k8s.io/community/contributors/design-proposals/security-context-constraints.md) for more information.

 * TOC
 {:toc}
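
A minimal `PodSecurityPolicy` sketch of the kind governed here; the name and the permissive rule values are assumptions, and the proposal defines many more controls:

```yaml
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted    # hypothetical name
spec:
  privileged: false   # disallow privileged containers
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
```
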
@@ -203,4 +203,4 @@ following

 In Kubernetes 1.5 and newer, you can use PodSecurityPolicy to control access to privileged containers based on user role and groups. Access to different PodSecurityPolicy objects can be controlled via authorization. To limit access to PodSecurityPolicy objects for pods created via a Deployment, ReplicaSet, etc., the [Controller Manager](/docs/admin/kube-controller-manager/) must be run against the secured API port, and must not have superuser permissions.

-PodSecurityPolicy authorization uses the union of all policies available to the user creating the pod and the service account specified on the pod. When pods are created via a Deployment, ReplicaSet, etc., it is the Controller Manager that creates the pod, so if it is running against the unsecured API port, all PodSecurityPolicy objects would be allowed, and you could not effectively subdivide access. Access to given PSP policies for a user will be effective only when deploying Pods directly. For more details, see the [PodSecurityPolicy RBAC example](https://github.com/kubernetes/kubernetes/blob/master/examples/podsecuritypolicy/rbac/README.md) of applying PodSecurityPolicy to control access to privileged containers based on role and groups when deploying Pods directly.
+PodSecurityPolicy authorization uses the union of all policies available to the user creating the pod and the service account specified on the pod. When pods are created via a Deployment, ReplicaSet, etc., it is the Controller Manager that creates the pod, so if it is running against the unsecured API port, all PodSecurityPolicy objects would be allowed, and you could not effectively subdivide access. Access to given PSP policies for a user will be effective only when deploying Pods directly. For more details, see the [PodSecurityPolicy RBAC example](https://git.k8s.io/kubernetes/examples/podsecuritypolicy/rbac/README.md) of applying PodSecurityPolicy to control access to privileged containers based on role and groups when deploying Pods directly.
@@ -240,4 +240,4 @@ See a [detailed example for how to use resource quota](/docs/tasks/configure-pod

 ## Read More

-See [ResourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_resource_quota.md) for more information.
+See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/admission_control_resource_quota.md) for more information.
@@ -325,7 +325,7 @@ kube-dns 10.180.3.17:53,10.180.3.17:53 1h

 If you do not see the endpoints, see the endpoints section in the [debugging services documentation](/docs/tasks/debug-application-cluster/debug-service/).

-For additional Kubernetes DNS examples, see the [cluster-dns examples](https://github.com/kubernetes/kubernetes/tree/master/examples/cluster-dns) in the Kubernetes GitHub repository.
+For additional Kubernetes DNS examples, see the [cluster-dns examples](https://git.k8s.io/kubernetes/examples/cluster-dns) in the Kubernetes GitHub repository.

 ## Kubernetes Federation (Multiple Zone support)
@@ -47,9 +47,9 @@ It can be configured to give services externally-reachable urls, load balance tr

 Before you start using the Ingress resource, there are a few things you should understand. The Ingress is a beta resource, not available in any Kubernetes release prior to 1.1. You need an Ingress controller to satisfy an Ingress; simply creating the resource will have no effect.

-GCE/GKE deploys an ingress controller on the master. You can deploy any number of custom ingress controllers in a pod. You must annotate each ingress with the appropriate class, as indicated [here](https://github.com/kubernetes/ingress/tree/master/controllers/nginx#running-multiple-ingress-controllers) and [here](https://github.com/kubernetes/ingress/blob/master/controllers/gce/BETA_LIMITATIONS.md#disabling-glbc).
+GCE/GKE deploys an ingress controller on the master. You can deploy any number of custom ingress controllers in a pod. You must annotate each ingress with the appropriate class, as indicated [here](https://git.k8s.io/ingress/controllers/nginx#running-multiple-ingress-controllers) and [here](https://git.k8s.io/ingress/controllers/gce/BETA_LIMITATIONS.md#disabling-glbc).

-Make sure you review the [beta limitations](https://github.com/kubernetes/ingress/blob/master/controllers/gce/BETA_LIMITATIONS.md) of this controller. In environments other than GCE/GKE, you need to [deploy a controller](https://github.com/kubernetes/ingress/tree/master/controllers) as a pod.
+Make sure you review the [beta limitations](https://git.k8s.io/ingress/controllers/gce/BETA_LIMITATIONS.md) of this controller. In environments other than GCE/GKE, you need to [deploy a controller](https://git.k8s.io/ingress/controllers) as a pod.

 ## The Ingress Resource
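
Annotating an Ingress with a class, as described above, is a plain metadata annotation; a sketch with assumed names and class value:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress      # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx   # route this Ingress to the nginx controller
spec:
  backend:
    serviceName: test   # hypothetical default backend
    servicePort: 80
```
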
@@ -74,7 +74,7 @@ spec:

 __Lines 1-4__: As with all other Kubernetes config, an Ingress needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see [here](/docs/user-guide/deploying-applications), [here](/docs/user-guide/configuring-containers), and [here](/docs/user-guide/working-with-resources).

-__Lines 5-7__: Ingress [spec](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#spec-and-status) has all the information needed to configure a loadbalancer or proxy server. Most importantly, it contains a list of rules matched against all incoming requests. Currently the Ingress resource only supports http rules.
+__Lines 5-7__: Ingress [spec](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status) has all the information needed to configure a loadbalancer or proxy server. Most importantly, it contains a list of rules matched against all incoming requests. Currently the Ingress resource only supports http rules.

 __Lines 8-9__: Each http rule contains the following information: A host (e.g.: foo.bar.com, defaults to * in this example), a list of paths (e.g.: /testpath) each of which has an associated backend (test:80). Both the host and path must match the content of an incoming request before the loadbalancer directs traffic to the backend.
|
|||
|
||||
## Ingress controllers
|
||||
|
||||
In order for the Ingress resource to work, the cluster must have an Ingress controller running. This is unlike other types of controllers, which typically run as part of the `kube-controller-manager` binary, and which are typically started automatically as part of cluster creation. You need to choose the ingress controller implementation that is the best fit for your cluster, or implement one. Examples and instructions can be found [here](https://github.com/kubernetes/ingress/tree/master/controllers).
|
||||
In order for the Ingress resource to work, the cluster must have an Ingress controller running. This is unlike other types of controllers, which typically run as part of the `kube-controller-manager` binary, and which are typically started automatically as part of cluster creation. You need to choose the ingress controller implementation that is the best fit for your cluster, or implement one. Examples and instructions can be found [here](https://git.k8s.io/ingress/controllers).
|
||||
|
||||
## Before you begin
|
||||
|
||||
The following document describes a set of cross platform features exposed through the Ingress resource. Ideally, all Ingress controllers should fulfill this specification, but we're not there yet. The docs for the GCE and nginx controllers are [here](https://github.com/kubernetes/ingress/blob/master/controllers/gce/README.md) and [here](https://github.com/kubernetes/ingress/blob/master/controllers/nginx/README.md) respectively. **Make sure you review controller specific docs so you understand the caveats of each one**.
|
||||
The following document describes a set of cross platform features exposed through the Ingress resource. Ideally, all Ingress controllers should fulfill this specification, but we're not there yet. The docs for the GCE and nginx controllers are [here](https://git.k8s.io/ingress/controllers/gce/README.md) and [here](https://git.k8s.io/ingress/controllers/nginx/README.md) respectively. **Make sure you review controller specific docs so you understand the caveats of each one**.
|
||||
|
||||
## Types of Ingress
|
||||
|
||||
|
@ -217,13 +217,13 @@ spec:
|
|||
servicePort: 80
|
||||
```
|
||||
|
||||
Note that there is a gap between TLS features supported by various Ingress controllers. Please refer to documentation on [nginx](https://github.com/kubernetes/ingress/blob/master/controllers/nginx/README.md#https), [GCE](https://github.com/kubernetes/ingress/blob/master/controllers/gce/README.md#tls), or any other platform specific Ingress controller to understand how TLS works in your environment.
|
||||
Note that there is a gap between TLS features supported by various Ingress controllers. Please refer to documentation on [nginx](https://git.k8s.io/ingress/controllers/nginx/README.md#https), [GCE](https://git.k8s.io/ingress/controllers/gce/README.md#tls), or any other platform specific Ingress controller to understand how TLS works in your environment.
|
||||
|
||||
### Loadbalancing
|
||||
|
||||
An Ingress controller is bootstrapped with some loadbalancing policy settings that it applies to all Ingress, such as the loadbalancing algorithm, backend weight scheme etc. More advanced loadbalancing concepts (e.g.: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the [service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). With time, we plan to distill loadbalancing patterns that are applicable cross platform into the Ingress resource.
|
||||
An Ingress controller is bootstrapped with some loadbalancing policy settings that it applies to all Ingress, such as the loadbalancing algorithm, backend weight scheme etc. More advanced loadbalancing concepts (e.g.: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the [service loadbalancer](https://git.k8s.io/contrib/service-loadbalancer). With time, we plan to distill loadbalancing patterns that are applicable cross platform into the Ingress resource.
|
||||
|
||||
It's also worth noting that even though health checks are not exposed directly through the Ingress, there exist parallel concepts in Kubernetes such as [readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) which allow you to achieve the same end result. Please review the controller specific docs to see how they handle health checks ([nginx](https://github.com/kubernetes/ingress/blob/master/controllers/nginx/README.md), [GCE](https://github.com/kubernetes/ingress/blob/master/controllers/gce/README.md#health-checks)).
|
||||
It's also worth noting that even though health checks are not exposed directly through the Ingress, there exist parallel concepts in Kubernetes such as [readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) which allow you to achieve the same end result. Please review the controller specific docs to see how they handle health checks ([nginx](https://git.k8s.io/ingress/controllers/nginx/README.md), [GCE](https://git.k8s.io/ingress/controllers/gce/README.md#health-checks)).
|
||||
|
||||
## Updating an Ingress
|
||||
|
||||
|
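
A sketch of TLS termination on an Ingress: a secret carrying the certificate and key, referenced from `spec.tls` (the names and placeholder data are assumptions):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: testsecret     # hypothetical name
type: Opaque
data:
  tls.crt: base64 encoded cert   # placeholder; supply real base64 data
  tls.key: base64 encoded key    # placeholder; supply real base64 data
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map   # hypothetical name
spec:
  tls:
  - secretName: testsecret       # terminate TLS using this secret
  backend:
    serviceName: s1
    servicePort: 80
```
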
@@ -293,5 +293,5 @@ You can expose a Service in multiple ways that don't directly involve the Ingres

 * Use [Service.Type=LoadBalancer](/docs/user-guide/services/#type-loadbalancer)
 * Use [Service.Type=NodePort](/docs/user-guide/services/#type-nodeport)
-* Use a [Port Proxy](https://github.com/kubernetes/contrib/tree/master/for-demos/proxy-to-service)
-* Deploy the [Service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). This allows you to share a single IP among multiple Services and achieve more advanced loadbalancing through Service Annotations.
+* Use a [Port Proxy](https://git.k8s.io/contrib/for-demos/proxy-to-service)
+* Deploy the [Service loadbalancer](https://git.k8s.io/contrib/service-loadbalancer). This allows you to share a single IP among multiple Services and achieve more advanced loadbalancing through Service Annotations.
@@ -85,7 +85,7 @@ spec:

 __Mandatory Fields__: As with all other Kubernetes config, a `NetworkPolicy` needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see [here](/docs/user-guide/simple-yaml), [here](/docs/user-guide/configuring-containers), and [here](/docs/user-guide/working-with-resources).

-__spec__: `NetworkPolicy` [spec](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status) has all the information needed to define a particular network policy in the given namespace.
+__spec__: `NetworkPolicy` [spec](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status) has all the information needed to define a particular network policy in the given namespace.

 __podSelector__: Each `NetworkPolicy` includes a `podSelector` which selects the grouping of pods to which the policy applies. Since `NetworkPolicy` currently only supports defining `ingress` rules, this `podSelector` essentially defines the "destination pods" for the policy. The example policy selects pods with the label "role=db". An empty `podSelector` selects all pods in the namespace.
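
A `NetworkPolicy` with the `podSelector` described above (`role=db`), here allowing ingress only from pods labeled `role=frontend`; the names, source label, and port are assumptions:

```yaml
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: test-network-policy   # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db                # the "destination pods" the policy applies to
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend      # allow traffic from these pods
    ports:
    - protocol: TCP
      port: 6379
```
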
@ -151,7 +151,7 @@ Each PV contains a spec and status, which is the specification and status of the
|
|||
|
||||
### Capacity
|
||||
|
||||
Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. See the Kubernetes [Resource Model](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/resources.md) to understand the units expected by `capacity`.
|
||||
Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. See the Kubernetes [Resource Model](https://git.k8s.io/community/contributors/design-proposals/resources.md) to understand the units expected by `capacity`.
|
||||
|
||||
Currently, storage size is the only resource that can be set or requested. Future attributes may include IOPS, throughput, etc.
|
||||
|
||||
|
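A sketch of a PV carrying such a `capacity` attribute; the NFS server and export path are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi              # the only capacity resource currently supported
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 10.0.0.1          # hypothetical NFS server
    path: /exports/data       # hypothetical export path
```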
@@ -304,7 +304,7 @@ Claims use the same conventions as volumes when requesting storage with specific access modes.

### Resources

Claims, like pods, can request specific quantities of a resource. In this case, the request is for storage. The same [resource model](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/resources.md) applies to both volumes and claims.
Claims, like pods, can request specific quantities of a resource. In this case, the request is for storage. The same [resource model](https://git.k8s.io/community/contributors/design-proposals/resources.md) applies to both volumes and claims.
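A minimal claim requesting storage, by analogy with a pod's resource request (the name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim               # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi            # the quantity of storage being requested
```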
### Selector
@@ -435,7 +435,7 @@ for provisioning PVs. This field must be specified.
You are not restricted to specifying the "internal" provisioners
listed here (whose names are prefixed with "kubernetes.io" and shipped
alongside Kubernetes). You can also run and specify external provisioners,
which are independent programs that follow a [specification](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/volume-provisioning.md)
which are independent programs that follow a [specification](https://git.k8s.io/community/contributors/design-proposals/volume-provisioning.md)
defined by Kubernetes. Authors of external provisioners have full discretion
over where their code lives, how the provisioner is shipped, how it needs to be
run, what volume plugin it uses (including Flex), etc. The repository [kubernetes-incubator/external-storage](https://github.com/kubernetes-incubator/external-storage)
@@ -519,7 +519,7 @@ parameters:
```
$ kubectl create secret generic heketi-secret --type="kubernetes.io/glusterfs" --from-literal=key='opensesame' --namespace=default
```

Example of a secret can be found in [glusterfs-provisioning-secret.yaml](https://github.com/kubernetes/kubernetes/blob/master/examples/persistent-volume-provisioning/glusterfs/glusterfs-secret.yaml).
Example of a secret can be found in [glusterfs-provisioning-secret.yaml](https://git.k8s.io/kubernetes/examples/persistent-volume-provisioning/glusterfs/glusterfs-secret.yaml).

* `clusterid`: `630372ccdc720a92c681fb928f27b53f` is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of clusterids, for example:
"8452344e2becec931ece4e33c4674e4e,42982310de6c63381718ccfa6d8cf397". This is an optional parameter.
* `gidMin`, `gidMax`: The minimum and maximum value of the GID range for the storage class. A unique value (GID) in this range (gidMin-gidMax) will be used for dynamically provisioned volumes. These are optional values. If not specified, the volume will be provisioned with a value between 2000 and 2147483647, which are the defaults for gidMin and gidMax respectively.
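Tying the pieces together, a sketch of a StorageClass referencing that secret; the endpoint and GID range are hypothetical, and `storage.k8s.io/v1` assumes Kubernetes 1.6 or later:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow                         # hypothetical name
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"   # hypothetical heketi REST endpoint
  restuser: "admin"                  # hypothetical heketi user
  secretNamespace: "default"
  secretName: "heketi-secret"        # the secret created above
  gidMin: "40000"                    # hypothetical GID range
  gidMax: "50000"
```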
@@ -614,7 +614,7 @@ parameters:

A vSphere Infrastructure (VI) administrator can specify storage requirements for applications in terms of storage capabilities while creating a storage class inside Kubernetes. Note that when creating a StorageClass, the administrator should use the storage capability names shown in the table above, as these names might differ from the ones used by VSAN. For example, the number of disk stripes per object is referred to as stripeWidth in the VSAN documentation, but the vSphere Cloud Provider uses the friendly name diskStripes.

You can see the [vSphere example](https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/vsphere) for more details.
You can see the [vSphere example](https://git.k8s.io/kubernetes/examples/volumes/vsphere) for more details.
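A sketch of a StorageClass using that friendly capability name; the values are hypothetical, and VSAN policy parameters assume a release that supports them:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                   # hypothetical name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  diskStripes: "2"             # friendly name for VSAN stripeWidth, per the table above
```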
#### Ceph RBD
@@ -616,7 +616,7 @@ spec:
volumePath: "[DatastoreName] volumes/myDisk"
fsType: ext4
```

More examples can be found [here](https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/vsphere).
More examples can be found [here](https://git.k8s.io/kubernetes/examples/volumes/vsphere).

### Quobyte
@@ -152,7 +152,7 @@ information about working with config files, see [deploying applications](/docs/user-guide/deploying-applications/),
[configuring containers](/docs/user-guide/configuring-containers), and
[using kubectl to manage resources](/docs/user-guide/working-with-resources) documents.

A cron job also needs a [`.spec` section](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status).
A cron job also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status).

**Note:** All modifications to a cron job, especially its `.spec`, will be applied only to the next run.
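For illustration, a minimal `.spec` sketch; `batch/v2alpha1` is assumed to be the cron job API group in this release, and the job body is hypothetical:

```yaml
apiVersion: batch/v2alpha1     # assumed API group for cron jobs at the time
kind: CronJob
metadata:
  name: hello                  # hypothetical name
spec:
  schedule: "*/1 * * * *"      # standard cron syntax: run once a minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo Hello"]
          restartPolicy: OnFailure
```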
@@ -35,7 +35,7 @@ As with all other Kubernetes config, a DaemonSet needs `apiVersion`, `kind`, and
general information about working with config files, see [deploying applications](/docs/user-guide/deploying-applications/),
[configuring containers](/docs/user-guide/configuring-containers/), and [working with resources](/docs/concepts/tools/kubectl/object-management-overview/) documents.

A DaemonSet also needs a [`.spec`](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status) section.
A DaemonSet also needs a [`.spec`](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status) section.
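As a rough sketch of such a `.spec` (the name, labels, and image are hypothetical):

```yaml
apiVersion: extensions/v1beta1  # assumed API group for DaemonSet in this release
kind: DaemonSet
metadata:
  name: log-agent               # hypothetical name
spec:
  template:                     # one copy of this pod runs on every node
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: agent
        image: fluentd:v0.14    # hypothetical log-collection image
```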
### Pod Template
@@ -648,7 +648,7 @@ attributes to the Deployment's `status.conditions`:
* Status=False
* Reason=ProgressDeadlineExceeded
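For illustration, such a condition might appear as follows in the Deployment's status (values are representative, not from a real cluster):

```yaml
status:
  conditions:
  - type: Progressing
    status: "False"
    reason: ProgressDeadlineExceeded
    message: ReplicaSet "web-12345" has timed out progressing.  # representative message
    lastUpdateTime: 2017-05-01T00:00:00Z
    lastTransitionTime: 2017-05-01T00:00:00Z
```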
See the [Kubernetes API conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#typical-status-properties) for more information on status conditions.
See the [Kubernetes API conventions](https://git.k8s.io/community/contributors/devel/api-conventions.md#typical-status-properties) for more information on status conditions.

**Note:** Kubernetes will take no action on a stalled Deployment other than to report a status condition with
`Reason=ProgressDeadlineExceeded`. Higher level orchestrators can take advantage of it and act accordingly, for
@@ -775,7 +775,7 @@ As with all other Kubernetes configs, a Deployment needs `apiVersion`, `kind`, and
For general information about working with config files, see [deploying applications](/docs/tutorials/stateless-application/run-stateless-application-deployment/),
configuring containers, and [using kubectl to manage resources](/docs/tutorials/object-management-kubectl/object-management/) documents.

A Deployment also needs a [`.spec` section](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#spec-and-status).
A Deployment also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status).

### Pod Template
@@ -167,9 +167,9 @@ kubectl delete replicaset my-repset --cascade=false

{% capture whatsnext %}

[Design Doc 1](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/garbage-collection.md)
[Design Doc 1](https://git.k8s.io/community/contributors/design-proposals/garbage-collection.md)

[Design Doc 2](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/synchronous-garbage-collection.md)
[Design Doc 2](https://git.k8s.io/community/contributors/design-proposals/synchronous-garbage-collection.md)

{% endcapture %}
@@ -86,7 +86,7 @@ As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields. For
general information about working with config files, see [here](/docs/user-guide/simple-yaml),
[here](/docs/user-guide/configuring-containers), and [here](/docs/user-guide/working-with-resources).

A Job also needs a [`.spec` section](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status).
A Job also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status).

### Pod Template
@@ -266,7 +266,7 @@ The pattern names are also links to examples and more detailed description.
| Single Job with Static Work Assignment | ✓ | | ✓ | |

When you specify completions with `.spec.completions`, each Pod created by the Job controller
has an identical [`spec`](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status). This means that
has an identical [`spec`](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status). This means that
all pods will have the same command line and the same
image, the same volumes, and (almost) the same environment variables. These patterns
are different ways to arrange for pods to work on different things.
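A sketch of a Job using `.spec.completions` for static work assignment; all names and counts are illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: process-items          # hypothetical name
spec:
  completions: 8               # run pods to 8 successful completions
  parallelism: 2               # at most 2 pods running at once
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing one work item"]
      restartPolicy: OnFailure
```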
@@ -90,7 +90,7 @@ As with all other Kubernetes config, a ReplicationController needs `apiVersion`,
general information about working with config files, see [here](/docs/user-guide/simple-yaml/),
[here](/docs/user-guide/configuring-containers/), and [here](/docs/concepts/tools/kubectl/object-management-overview/).

A ReplicationController also needs a [`.spec` section](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status).
A ReplicationController also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status).

### Pod Template
@@ -6,5 +6,5 @@ title: Deprecated Alternatives

# *Stop. These guides are superseded by [Minikube](../minikube/). They are only listed here for completeness.*

* [Using Vagrant](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/local-cluster/vagrant.md)
* *Advanced:* [Directly using Kubernetes raw binaries (Linux Only)](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/local-cluster/local.md)
* [Using Vagrant](https://git.k8s.io/community/contributors/devel/local-cluster/vagrant.md)
* *Advanced:* [Directly using Kubernetes raw binaries (Linux Only)](https://git.k8s.io/community/contributors/devel/local-cluster/local.md)
@@ -252,7 +252,7 @@ kubectl cluster-info

### Accessing the cluster programmatically

It's possible to use the locally stored client certificates to access the api server. For example, you may want to use any of the [Kubernetes API client libraries](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/client-libraries.md) to program against your Kubernetes cluster in the programming language of your choice.
It's possible to use the locally stored client certificates to access the api server. For example, you may want to use any of the [Kubernetes API client libraries](https://git.k8s.io/community/contributors/devel/client-libraries.md) to program against your Kubernetes cluster in the programming language of your choice.

To demonstrate how to use these locally stored certificates, we provide the following example of using `curl` to communicate with the master API server via HTTPS:
@@ -30,7 +30,7 @@ Further information is available in the Kubernetes on Mesos [contrib directory][
- A running [Mesos cluster on Google Compute Engine][5]
- A [VPN connection][10] to the cluster
- A machine in the cluster which should become the Kubernetes *master node* with:
  - Go (see [here](https://github.com/kubernetes/community/blob/master/contributors/devel/development.md) for required versions)
  - Go (see [here](https://git.k8s.io/community/contributors/devel/development.md) for required versions)
  - make (i.e. build-essential)
  - Docker
@@ -332,6 +332,6 @@ Future work will add instructions to this guide to enable support for Kubernetes
[8]: https://github.com/mesosphere/kubernetes-mesos/issues
[9]: https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples
[10]: http://open.mesosphere.com/getting-started/cloud/google/mesosphere/#vpn-setup
[11]: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/README.md#kube-dns
[12]: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/kubedns-controller.yaml.in
[11]: https://git.k8s.io/kubernetes/cluster/addons/dns/README.md#kube-dns
[12]: https://git.k8s.io/kubernetes/cluster/addons/dns/kubedns-controller.yaml.in
[13]: https://github.com/kubernetes-incubator/kube-mesos-framework/blob/master/README.md
@@ -34,8 +34,8 @@ the following drivers:

* virtualbox
* vmwarefusion
* kvm ([driver installation](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm-driver))
* xhyve ([driver installation](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#xhyve-driver))
* kvm ([driver installation](https://git.k8s.io/minikube/docs/drivers.md#kvm-driver))
* xhyve ([driver installation](https://git.k8s.io/minikube/docs/drivers.md#xhyve-driver))

Note that the IP below is dynamic and can change. It can be retrieved with `minikube ip`.
@@ -88,7 +88,7 @@ This will use an alternative minikube ISO image containing both rkt, and Docker,

### Driver plugins

See [DRIVERS](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md) for details on supported drivers and how to install
See [DRIVERS](https://git.k8s.io/minikube/docs/drivers.md) for details on supported drivers and how to install
plugins, if required.

### Reusing the Docker daemon
@@ -296,17 +296,17 @@ $ export no_proxy=$no_proxy,$(minikube ip)

## Design

Minikube uses [libmachine](https://github.com/docker/machine/tree/master/libmachine) for provisioning VMs, and [localkube](https://github.com/kubernetes/minikube/tree/master/pkg/localkube) (originally written and donated to this project by [RedSpread](https://redspread.com/)) for running the cluster.
Minikube uses [libmachine](https://github.com/docker/machine/tree/master/libmachine) for provisioning VMs, and [localkube](https://git.k8s.io/minikube/pkg/localkube) (originally written and donated to this project by [RedSpread](https://redspread.com/)) for running the cluster.

For more information about minikube, see the [proposal](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/local-cluster-ux.md).
For more information about minikube, see the [proposal](https://git.k8s.io/community/contributors/design-proposals/local-cluster-ux.md).
## Additional Links:
* **Goals and Non-Goals**: For the goals and non-goals of the minikube project, please see our [roadmap](https://github.com/kubernetes/minikube/blob/master/docs/contributors/roadmap.md).
* **Development Guide**: See [CONTRIBUTING.md](https://github.com/kubernetes/minikube/blob/master/CONTRIBUTING.md) for an overview of how to send pull requests.
* **Building Minikube**: For instructions on how to build/test minikube from source, see the [build guide](https://github.com/kubernetes/minikube/blob/master/docs/contributors/build_guide.md)
* **Adding a New Dependency**: For instructions on how to add a new dependency to minikube see the [adding dependencies guide](https://github.com/kubernetes/minikube/blob/master/docs/contributors/adding_a_dependency.md)
* **Adding a New Addon**: For instructions on how to add a new addon for minikube see the [adding an addon guide](https://github.com/kubernetes/minikube/blob/master/docs/contributors/adding_an_addon.md)
* **Updating Kubernetes**: For instructions on how to update kubernetes see the [updating Kubernetes guide](https://github.com/kubernetes/minikube/blob/master/docs/contributors/updating_kubernetes.md)
* **Goals and Non-Goals**: For the goals and non-goals of the minikube project, please see our [roadmap](https://git.k8s.io/minikube/docs/contributors/roadmap.md).
* **Development Guide**: See [CONTRIBUTING.md](https://git.k8s.io/minikube/CONTRIBUTING.md) for an overview of how to send pull requests.
* **Building Minikube**: For instructions on how to build/test minikube from source, see the [build guide](https://git.k8s.io/minikube/docs/contributors/build_guide.md)
* **Adding a New Dependency**: For instructions on how to add a new dependency to minikube see the [adding dependencies guide](https://git.k8s.io/minikube/docs/contributors/adding_a_dependency.md)
* **Adding a New Addon**: For instructions on how to add a new addon for minikube see the [adding an addon guide](https://git.k8s.io/minikube/docs/contributors/adding_an_addon.md)
* **Updating Kubernetes**: For instructions on how to update kubernetes see the [updating Kubernetes guide](https://git.k8s.io/minikube/docs/contributors/updating_kubernetes.md)

## Community
@@ -10,7 +10,7 @@ title: OpenStack Heat

## Getting started with OpenStack

This guide will take you through the steps of deploying Kubernetes to Openstack using `kube-up.sh`. The primary mechanisms for this are [OpenStack Heat](https://wiki.openstack.org/wiki/Heat) and the [SaltStack](https://github.com/kubernetes/kubernetes/tree/master/cluster/saltbase) distributed with Kubernetes.
This guide will take you through the steps of deploying Kubernetes to Openstack using `kube-up.sh`. The primary mechanisms for this are [OpenStack Heat](https://wiki.openstack.org/wiki/Heat) and the [SaltStack](https://git.k8s.io/kubernetes/cluster/saltbase) distributed with Kubernetes.

The default OS is CentOS 7; this has not been tested on other operating systems.
@@ -67,7 +67,7 @@ Exponential restart back-off for a failing container is currently not supported.

## Experimental NVIDIA GPU support

The `--experimental-nvidia-gpus` flag, and related [GPU features](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/gpu-support.md) are not supported.
The `--experimental-nvidia-gpus` flag, and related [GPU features](https://git.k8s.io/community/contributors/design-proposals/gpu-support.md) are not supported.

## QoS Classes
@@ -164,7 +164,7 @@ You will need binaries for:

A Kubernetes binary release includes all the Kubernetes binaries as well as the supported release of etcd.
You can use a Kubernetes binary release (recommended) or build your Kubernetes binaries following the instructions in the
[Developer Documentation](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/). Only using a binary release is covered in this guide.
[Developer Documentation](https://git.k8s.io/community/contributors/devel/). Only using a binary release is covered in this guide.

Download the [latest binary release](https://github.com/kubernetes/kubernetes/releases/latest) and unzip it.
Server binary tarballs are no longer included in the Kubernetes final tarball, so you will need to locate and run
@@ -47,33 +47,33 @@ so we can find them.

Deployment of the cluster is [supported on a wide variety of public clouds](#cloud-compatibility), private OpenStack clouds, or raw bare metal clusters. Bare metal deployments are supported via [MAAS](http://maas.io/).

After deciding which cloud to deploy to, follow the [cloud setup page](https://jujucharms.com/docs/devel/getting-started-general#2.-choose-a-cloud) to configure deploying to that cloud.

Load your [cloud credentials](https://jujucharms.com/docs/2.0/credentials) for each
cloud provider you would like to use.

In this example

```
juju add-credential aws
credential name: my_credentials
select auth-type [userpass, oauth, etc]: userpass
enter username: jorge
enter password: *******
```

You can also just auto load credentials for popular clouds with the `juju autoload-credentials` command, which will auto import your credentials from the default files and environment variables for each cloud.

Next we need to bootstrap a controller to manage the cluster. You need to define the cloud you want to bootstrap on, the region, and then any name for your controller node:

```
juju update-clouds # This command ensures all the latest regions are up to date on your client
juju bootstrap aws/us-east-2
```

or, another example, this time on Azure:

```
juju bootstrap azure/centralus
```

You will need a controller node for each cloud or region you are deploying to. See the [controller documentation](https://jujucharms.com/docs/2.0/controllers) for more information.
@@ -84,13 +84,13 @@ Note that each controller can host multiple Kubernetes clusters in a given cloud
{% capture steps %}
## Launch a Kubernetes cluster

The following command will deploy the initial 9-node starter cluster. The speed of execution is very dependent on the performance of the cloud you're deploying to:

```
juju deploy canonical-kubernetes
```

After this command executes the cloud will then launch instances and begin the deployment process.

## Monitor deployment
@@ -98,9 +98,9 @@ The `juju status` command provides information about each unit in the cluster.

juju status

Output:

```
Model    Controller     Cloud/Region   Version
default  aws-us-east-2  aws/us-east-2  2.0.1
@@ -159,8 +159,8 @@ juju scp kubernetes-master/0:config ~/.kube/config

Fetch a binary for the architecture you have deployed. If your client is a
different architecture you will need to get the appropriate `kubectl` binary
through other means. In this example we copy kubectl to `~/bin` for convenience;
by default this should be in your $PATH.

```
mkdir -p ~/bin
@@ -171,7 +171,7 @@ Query the cluster:

kubectl cluster-info

Output:

```
Kubernetes master is running at https://52.15.104.227:443
@@ -206,7 +206,7 @@ Or multiple units at one time:
```shell
juju add-unit -n3 kubernetes-worker
```
You can also ask for specific instance types or other machine-specific constraints. See the [constraints documentation](https://jujucharms.com/docs/stable/reference-constraints) for more information. Here are some examples; note that generic constraints such as `cores` and `mem` are more portable between clouds. In this case we'll ask for a specific instance type from AWS:

```shell
juju set-constraints kubernetes-worker instance-type=c4.large
@@ -218,7 +218,7 @@ You can also scale the etcd charm for more fault tolerant key/value storage:
```shell
juju add-unit -n3 etcd
```
It is strongly recommended to run an odd number of units for quorum.

## Tear down cluster
@@ -240,8 +240,8 @@ The Ubuntu Kubernetes deployment uses open-source operations, or operations as code.
The Kubernetes layer and bundles can be found in the `kubernetes`
project on github.com:

- [Bundle location](https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/bundles)
- [Kubernetes charm layer location](https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/layers/kubernetes)
- [Bundle location](https://git.k8s.io/kubernetes/cluster/juju/bundles)
- [Kubernetes charm layer location](https://git.k8s.io/kubernetes/cluster/juju/layers)
- [Canonical Kubernetes home](https://jujucharms.com/canonical-kubernetes/)

Feature requests, bug reports, pull requests or any feedback would be much appreciated.
@@ -264,4 +264,3 @@ Bare Metal (MAAS) | Juju | Ubuntu | flannel, calico | [doc

For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.

{% include templates/task.md %}
@@ -91,13 +91,13 @@ To see the different types of tests the Kubernetes end-to-end charm has access
to, we encourage you to see the upstream documentation on the different types
of tests, and to understand clearly which subsets of the tests you are running.

[Kinds of tests](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/e2e-tests.md#kinds-of-tests)
[Kinds of tests](https://git.k8s.io/community/contributors/devel/e2e-tests.md#kinds-of-tests)

### More information on end-to-end testing

Along with the above descriptions, end-to-end testing is a much larger subject
than this readme can encapsulate. There is far more information in the
[end-to-end testing guide](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/e2e-tests.md).
[end-to-end testing guide](https://git.k8s.io/community/contributors/devel/e2e-tests.md).

### Evaluating end-to-end results
@@ -20,7 +20,7 @@ This page also describes how to configure and get started with the cloud provider.

To start using Kubernetes on top of vSphere with the vSphere Cloud Provider, use Kubernetes-Anywhere. Kubernetes-Anywhere will deploy and configure a cluster from scratch.

Detailed steps can be found at the [getting started with Kubernetes-Anywhere on vSphere page](https://github.com/kubernetes/kubernetes-anywhere/blob/master/phase1/vsphere/README.md)
Detailed steps can be found at the [getting started with Kubernetes-Anywhere on vSphere page](https://git.k8s.io/kubernetes-anywhere/phase1/vsphere/README.md)

### vSphere Cloud Provider
@@ -38,7 +38,7 @@ guide](http://kubernetes.io/docs/user-guide/persistent-volumes/#vsphere) and the
guide](/docs/concepts/storage/volumes/#vspherevolume)

Examples can be found
[here](https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/vsphere)
[here](https://git.k8s.io/kubernetes/examples/volumes/vsphere)

#### Configuring vSphere Cloud Provider
@@ -82,7 +82,7 @@ Virtual machine > Configuration > Add new disk
Resource > Assign virtual machine to resource pool
```

* Provide the cloud config file to each instance of kubelet, apiserver and controller manager via the `--cloud-config=<path to file>` flag. Cloud config [template can be found at Kubernetes-Anywhere](https://github.com/kubernetes/kubernetes-anywhere/blob/master/phase1/vsphere/vsphere.conf)
* Provide the cloud config file to each instance of kubelet, apiserver and controller manager via the `--cloud-config=<path to file>` flag. Cloud config [template can be found at Kubernetes-Anywhere](https://git.k8s.io/kubernetes-anywhere/phase1/vsphere/vsphere.conf)

Sample Config:
@@ -119,7 +119,7 @@ Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This cluster
#### Prerequisites

* You need administrator credentials to an ESXi machine or vCenter instance with write-mode API access enabled (not available on the free ESXi license).
* You must have Go (see [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/development.md#go-versions) for supported versions) installed: [www.golang.org](http://www.golang.org).
* You must have Go (see [here](https://git.k8s.io/community/contributors/devel/development.md#go-versions) for supported versions) installed: [www.golang.org](http://www.golang.org).
* You must have your `GOPATH` set up and include `$GOPATH/bin` in your `PATH`.

```shell

@@ -54,7 +54,7 @@ Requirements
* Git
* Go 1.7.1+
* make (if using Linux or MacOS)
* Important notes and other dependencies are listed [here](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/development.md#building-kubernetes-on-a-local-osshell-environment)
* Important notes and other dependencies are listed [here](https://git.k8s.io/community/contributors/devel/development.md#building-kubernetes-on-a-local-osshell-environment)

**kubelet**
@@ -17,7 +17,7 @@ repository. This page shows how to create a pull request.
1. Sign the
[Linux Foundation Contributor License Agreement](https://identity.linuxfoundation.org/projects/cncf){: target="_blank"}.

Documentation will be published under the [CC BY SA 4.0](https://github.com/kubernetes/kubernetes.github.io/blob/master/LICENSE) license.
Documentation will be published under the [CC BY SA 4.0](https://git.k8s.io/kubernetes.github.io/LICENSE) license.

{% endcapture %}
@@ -16,7 +16,7 @@ title: Using Page Templates
<li><a href="#concept_template">Concept</a></li>
</ul>

<p>The page templates are in the <a href="https://github.com/kubernetes/kubernetes.github.io/tree/master/_includes/templates" target="_blank">_includes/templates</a> directory of the <a href="https://github.com/kubernetes/kubernetes.github.io">kubernetes.github.io</a> repository.
<p>The page templates are in the <a href="https://git.k8s.io/kubernetes.github.io/_includes/templates" target="_blank">_includes/templates</a> directory of the <a href="https://github.com/kubernetes/kubernetes.github.io">kubernetes.github.io</a> repository.

<h2 id="task_template">Task template</h2>
@@ -33,7 +33,7 @@ can see your changes.

You can use the k8sdocs Docker image to run a local staging server. If you're
interested, you can view the
[Dockerfile](https://github.com/kubernetes/kubernetes.github.io/blob/master/staging-container/Dockerfile){: target="_blank"}
[Dockerfile](https://git.k8s.io/kubernetes.github.io/staging-container/Dockerfile){: target="_blank"}
for this image.

1. Install Docker if you don't already have it.
@@ -30,10 +30,10 @@ multiple API versions, each at a different API path, such as `/api/v1` or
The version is set at the API level rather than at the resource or field level to ensure that the API presents a clear, consistent view of system resources and behavior, and to enable controlling access to end-of-life and/or experimental APIs. The JSON and Protobuf serialization schemas follow the same guidelines for schema changes; all descriptions below cover both formats.

Note that API versioning and software versioning are only indirectly related. The [API and release
versioning proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/versioning.md) describes the relationship between API versioning and software versioning.
versioning proposal](https://git.k8s.io/community/contributors/design-proposals/versioning.md) describes the relationship between API versioning and software versioning.

Different API versions imply different levels of stability and support. The criteria for each level are described
in more detail in the [API Changes documentation](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api_changes.md#alpha-beta-and-stable-versions).
in more detail in the [API Changes documentation](https://git.k8s.io/community/contributors/devel/api_changes.md#alpha-beta-and-stable-versions).

The criteria are summarized here:
@@ -57,7 +57,7 @@ The criteria are summarized here:

## API groups

[*API groups*](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-group.md) make it easier to extend the Kubernetes API. The API group is specified in a REST path and in the `apiVersion` field of a serialized object.
[*API groups*](https://git.k8s.io/community/contributors/design-proposals/api-group.md) make it easier to extend the Kubernetes API. The API group is specified in a REST path and in the `apiVersion` field of a serialized object.

Currently, there are several API groups in use:
|
|||
(for example, `apiVersion: batch/v1`). Full list of supported API groups can be seen in [Kubernetes API reference](/docs/reference/).
|
||||
|
||||
There is a supported path to extending the API:
|
||||
* [Third Party Resources](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/extending-api.md)
|
||||
* [Third Party Resources](https://git.k8s.io/community/contributors/design-proposals/extending-api.md)
|
||||
are for users with very basic CRUD needs.
|
||||
|
||||
|
||||
|
|
|
@@ -29,4 +29,4 @@ assignees:

## Design Docs

An archive of the design docs for Kubernetes functionality. Good starting points are [Kubernetes Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture.md) and [Kubernetes Design Overview](https://github.com/kubernetes/kubernetes/tree/{{page.fullversion}}/docs/design).
An archive of the design docs for Kubernetes functionality. Good starting points are [Kubernetes Architecture](https://git.k8s.io/community/contributors/design-proposals/architecture.md) and [Kubernetes Design Overview](https://github.com/kubernetes/kubernetes/tree/{{page.fullversion}}/docs/design).
@@ -17,9 +17,9 @@ Join the [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernete

We're extremely grateful for security researchers and users that report vulnerabilities to the Kubernetes Open Source Community. All reports are thoroughly investigated by a set of community volunteers.

To make a report, please email the private [kubernetes-security@googlegroups.com](mailto:kubernetes-security@googlegroups.com) list with the security details and the details expected for [all Kubernetes bug reports](https://github.com/kubernetes/kubernetes/blob/master/.github/ISSUE_TEMPLATE.md).
To make a report, please email the private [kubernetes-security@googlegroups.com](mailto:kubernetes-security@googlegroups.com) list with the security details and the details expected for [all Kubernetes bug reports](https://git.k8s.io/kubernetes/.github/ISSUE_TEMPLATE.md).

You may encrypt your email to this list using the GPG keys of the [Product Security Team members](https://github.com/kubernetes/community/blob/master/contributors/devel/security-release-process.md#product-security-team-pst). Encryption using GPG is NOT required to make a disclosure.
You may encrypt your email to this list using the GPG keys of the [Product Security Team members](https://git.k8s.io/community/contributors/devel/security-release-process.md#product-security-team-pst). Encryption using GPG is NOT required to make a disclosure.

### When Should I Report a Vulnerability?
@@ -35,7 +35,7 @@ You may encrypt your email to this list using the GPG keys of the [Product Secur

## Security Vulnerability Response

Each report is acknowledged and analyzed by Product Security Team members within 3 working days. This will set off the [Security Release Process](https://github.com/kubernetes/community/blob/master/contributors/devel/security-release-process.md#product-security-team-pst).
Each report is acknowledged and analyzed by Product Security Team members within 3 working days. This will set off the [Security Release Process](https://git.k8s.io/community/contributors/devel/security-release-process.md#product-security-team-pst).

Any vulnerability information shared with the Product Security Team stays within the Kubernetes project and will not be disseminated to other projects unless it is necessary to get the issue fixed.
@@ -1217,7 +1217,7 @@ Appears In <a href="#pod-v1-core">Pod</a> </aside>
</tr>
<tr>
<td>qosClass <br /> <em>string</em></td>
<td>The Quality of Service (QOS) classification assigned to the pod based on resource requirements. See PodQOSClass type for available QOS classes. More info: <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/design/resource-qos.md">https://github.com/kubernetes/kubernetes/blob/master/docs/design/resource-qos.md</a></td>
<td>The Quality of Service (QOS) classification assigned to the pod based on resource requirements. See PodQOSClass type for available QOS classes. More info: <a href="https://git.k8s.io/community/contributors/design-proposals/resource-qos.md">https://git.k8s.io/community/contributors/design-proposals/resource-qos.md</a></td>
</tr>
<tr>
<td>reason <br /> <em>string</em></td>
@@ -415,7 +415,7 @@ control of your Kubernetes cluster.

kubeadm deb/rpm packages and binaries are built for amd64, arm64, armhfp,
ppc64el, and s390x following the [multi-platform
proposal](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/multi-platform.md).
proposal](https://git.k8s.io/community/contributors/design-proposals/multi-platform.md).

Currently, only the pod networks flannel and Weave Net work on multiple architectures.
For Weave Net just use its [standard install](https://www.weave.works/docs/net/latest/kube-addon/).
@@ -46,7 +46,7 @@ You will install these packages on all of your machines:
**Note:** If you already have kubeadm installed, you should run `apt-get update &&
apt-get upgrade` or `yum update` to get the latest version of kubeadm. See the
kubeadm release notes if you want to read about the different [kubeadm
releases](https://github.com/kubernetes/kubeadm/blob/master/CHANGELOG.md)
releases](https://git.k8s.io/kubeadm/CHANGELOG.md)

For each machine:
@@ -88,7 +88,7 @@ have special requirements, or just because you want to understand what is underneath a Kubernetes
cluster, try the [Getting Started from Scratch](/docs/getting-started-guides/scratch) guide.

If you are interested in supporting Kubernetes on a new platform, see
[Writing a Getting Started Guide](https://github.com/kubernetes/community/blob/master/contributors/devel/writing-a-getting-started-guide.md).
[Writing a Getting Started Guide](https://git.k8s.io/community/contributors/devel/writing-a-getting-started-guide.md).

## Universal
@@ -143,7 +143,7 @@ Below is a table of all of the solutions listed above.

IaaS Provider | Config. Mgmt. | OS | Networking | Docs | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ----------------------------
any | any | multi-support | any CNI | [docs](https://kubernetes.io/docs/getting-started-guides/kubeadm/) | Project ([SIG-cluster-lifecycle](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle))
any | any | multi-support | any CNI | [docs](https://kubernetes.io/docs/getting-started-guides/kubeadm/) | Project ([SIG-cluster-lifecycle](https://git.k8s.io/community/sig-cluster-lifecycle))
GKE | | | GCE | [docs](https://cloud.google.com/container-engine) | Commercial
Stackpoint.io | | multi-support | multi-support | [docs](https://stackpoint.io/) | Commercial
AppsCode.com | Saltstack | Debian | multi-support | [docs](https://appscode.com/products/cloud-deployment/) | Commercial
@@ -43,7 +43,7 @@ kubectl apply -f "https://git.io/weave-kube"

## Example Liquid template code for tabs

Below is the [Liquid](https://shopify.github.io/liquid/) template code for the tabs demo above to illustrate how to specify the contents of each tab. The [`/_includes/tabs.md`](https://github.com/kubernetes/kubernetes.github.io/tree/master/_includes/tabs.md) file included at the end then uses those elements to render the actual tab set.
Below is the [Liquid](https://shopify.github.io/liquid/) template code for the tabs demo above to illustrate how to specify the contents of each tab. The [`/_includes/tabs.md`](https://git.k8s.io/kubernetes.github.io/_includes/tabs.md) file included at the end then uses those elements to render the actual tab set.

### The code
@@ -132,7 +132,7 @@ Kubernetes supports [Go](#go-client) and [Python](#python-client) client libraries.
* Write an application atop of the client-go clients. Note that client-go defines its own API objects, so if needed, please import API definitions from client-go rather than from the main repository, e.g., `import "k8s.io/client-go/1.4/pkg/api/v1"` is correct.

The Go client can use the same [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/)
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes/client-go/blob/master/examples/out-of-cluster-client-configuration/main.go).
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://git.k8s.io/client-go/examples/out-of-cluster-client-configuration/main.go).

If the application is deployed as a Pod in the cluster, please refer to the [next section](#accessing-the-api-from-a-pod).
@@ -145,7 +145,7 @@ as the kubectl CLI does to locate and authenticate to the apiserver. See this [example

#### Other languages

There are [client libraries](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/client-libraries.md) for accessing the API from other languages. See documentation for other libraries for how they authenticate.
There are [client libraries](https://git.k8s.io/community/contributors/devel/client-libraries.md) for accessing the API from other languages. See documentation for other libraries for how they authenticate.

### Accessing the API from a Pod
@@ -177,7 +177,7 @@ From within a pod the recommended ways to connect to API are:
in any container of the pod can access it. See this [example of using kubectl proxy
in a pod](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/kubectl-container/).
- use the Go client library, and create a client using the `rest.InClusterConfig()` and `kubernetes.NewForConfig()` functions.
They handle locating and authenticating to the apiserver. [example](https://github.com/kubernetes/client-go/blob/master/examples/in-cluster/main.go)
They handle locating and authenticating to the apiserver. [example](https://git.k8s.io/client-go/examples/in-cluster/main.go)

In each case, the credentials of the pod are used to communicate securely with the apiserver.
@@ -124,7 +124,7 @@ Kubernetes supports [Go](#go-client) and [Python](#python-client) client libraries.
* Write an application atop of the client-go clients. Note that client-go defines its own API objects, so if needed, please import API definitions from client-go rather than from the main repository, e.g., `import "k8s.io/client-go/1.4/pkg/api/v1"` is correct.

The Go client can use the same [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/)
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes/client-go/blob/master/examples/out-of-cluster/main.go):
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://git.k8s.io/client-go/examples/out-of-cluster/main.go):

```golang
import (
@@ -167,7 +167,7 @@ for i in ret.items:

#### Other languages

There are [client libraries](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/client-libraries.md) for accessing the API from other languages. See documentation for other libraries for how they authenticate.
There are [client libraries](https://git.k8s.io/community/contributors/devel/client-libraries.md) for accessing the API from other languages. See documentation for other libraries for how they authenticate.

### Accessing the API from a Pod
@@ -199,7 +199,7 @@ From within a pod the recommended ways to connect to API are:
in any container of the pod can access it. See this [example of using kubectl proxy
in a pod](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/kubectl-container/).
- use the Go client library, and create a client using the `rest.InClusterConfig()` and `kubernetes.NewForConfig()` functions.
They handle locating and authenticating to the apiserver. [example](https://github.com/kubernetes/client-go/blob/master/examples/in-cluster/main.go)
They handle locating and authenticating to the apiserver. [example](https://git.k8s.io/client-go/examples/in-cluster/main.go)

In each case, the credentials of the pod are used to communicate securely with the apiserver.
@@ -24,7 +24,7 @@ To install Kubernetes on a set of machines, consult one of the existing [Getting

## Upgrading a cluster

The current state of cluster upgrades is provider dependent, and some releases may require special care when upgrading. It is recommended that administrators consult both the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md), as well as the version specific upgrade notes prior to upgrading their clusters.
The current state of cluster upgrades is provider dependent, and some releases may require special care when upgrading. It is recommended that administrators consult both the [release notes](https://git.k8s.io/kubernetes/CHANGELOG.md), as well as the version specific upgrade notes prior to upgrading their clusters.

* [Upgrading to 1.6](/docs/admin/upgrade-1-6)
@@ -22,7 +22,7 @@ Various cluster management operations may voluntarily evict pods. "Voluntary"
means an eviction can be safely delayed for a reasonable period of time. The
principal examples today are draining a node for maintenance or upgrade
(`kubectl drain`), and cluster autoscaling down. In the future the
[rescheduler](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/rescheduling.md)
[rescheduler](https://git.k8s.io/community/contributors/design-proposals/rescheduling.md)
may also perform voluntary evictions. By contrast, something like evicting pods
because a node has become unreachable or reports `NotReady`, is not "voluntary."
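The object that bounds such voluntary evictions is the `PodDisruptionBudget`; a rough sketch, assuming the `policy/v1beta1` API group of this era and hypothetical names:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb                 # hypothetical name
spec:
  minAvailable: 2              # keep at least 2 matching pods up during voluntary disruptions
  selector:
    matchLabels:
      app: zookeeper           # hypothetical label on the protected pods
```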
@@ -247,7 +247,7 @@ it is enough to start etcd in 2.2.z version, wait until it is healthy, stop it,

Versions 3.0+ of etcd do not support general rollback. That is,
after migrating from M.N to M.N+1, there is no way to go back to M.N.
The etcd team has provided a [custom rollback tool](https://github.com/kubernetes/kubernetes/tree/master/cluster/images/etcd/rollback)
The etcd team has provided a [custom rollback tool](https://git.k8s.io/kubernetes/cluster/images/etcd/rollback)
but the rollback tool has these limitations:

* This custom rollback tool is not part of the etcd repo and does not receive the same
@@ -227,7 +227,7 @@ constrain the amount of resource a pod consumes on a node.
{% endcapture %}

{% capture whatsnext %}
* See [LimitRange design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/admission_control_limit_range.md) for more information.
* See [LimitRange design doc](https://git.k8s.io/community/contributors/design-proposals/admission_control_limit_range.md) for more information.
* See [Resources](/docs/concepts/configuration/manage-compute-resources-container/) for a detailed description of the Kubernetes resource model.
{% endcapture %}
@@ -179,7 +179,7 @@ The output is:
### Option 3: Delete the kube-dns-autoscaler manifest file from the master node

This option works if kube-dns-autoscaler is under the
[Addon Manager](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/README.md)'s
[Addon Manager](https://git.k8s.io/kubernetes/cluster/addons/README.md)'s
control, and you have write access to the master node.

Sign in to the master node and delete the corresponding manifest file.
@@ -34,7 +34,7 @@ the rescheduler tries to free up space for the add-on by evicting some pods; the

To avoid a situation where another pod is scheduled into the space prepared for the critical add-on,
the chosen node gets a temporary taint "CriticalAddonsOnly" before the eviction(s)
(see [more details](https://github.com/kubernetes/kubernetes/blob/master/docs/design/taint-toleration-dedicated.md)).
(see [more details](https://git.k8s.io/community/contributors/design-proposals/taint-toleration-dedicated.md)).
Each critical add-on has to tolerate it,
while the other pods shouldn't tolerate the taint. The taint is removed once the add-on is successfully scheduled.
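A sketch of what that toleration looks like on a critical add-on pod; the pod name and image are hypothetical, and `spec.tolerations` as a first-class field assumes Kubernetes 1.6 or later:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: critical-addon         # hypothetical add-on pod
spec:
  tolerations:
  - key: "CriticalAddonsOnly"  # matches the temporary taint described above
    operator: "Exists"
  containers:
  - name: addon
    image: gcr.io/google_containers/pause:3.0  # placeholder image for the sketch
```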
@@ -44,7 +44,7 @@ killed for this purpose.

## Config

Rescheduler should be [enabled by default as a static pod](https://github.com/kubernetes/kubernetes/blob/master/cluster/saltbase/salt/rescheduler/rescheduler.manifest).
Rescheduler should be [enabled by default as a static pod](https://git.k8s.io/kubernetes/cluster/saltbase/salt/rescheduler/rescheduler.manifest).
It doesn't have any user-facing configuration (component config) or API and can be disabled:

* during cluster setup by setting `ENABLE_RESCHEDULER` flag to `false`
@@ -157,5 +157,5 @@ To make such deployment secure, communication between etcd instances is authorized

## Additional reading

[Automated HA master deployment - design doc](https://github.com/kubernetes/kubernetes/blob/master/docs/design/ha_master.md)
[Automated HA master deployment - design doc](https://git.k8s.io/community/contributors/design-proposals/ha_master.md)
@@ -87,14 +87,14 @@ to define *Hard* resource usage limits that a *Namespace* may consume.
A limit range defines min/max constraints on the amount of resources a single entity can consume in
a *Namespace*.

See [Admission control: Limit Range](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_limit_range.md)
See [Admission control: Limit Range](https://git.k8s.io/community/contributors/design-proposals/admission_control_limit_range.md)
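For illustration, a minimal `LimitRange` sketch enforcing such per-container constraints; all values are hypothetical:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range        # hypothetical name
spec:
  limits:
  - type: Container
    min:
      memory: 64Mi             # smallest request a container may make
    max:
      memory: 512Mi            # largest limit a container may set
    defaultRequest:
      memory: 128Mi            # applied when a container specifies no request
    default:
      memory: 256Mi            # applied when a container specifies no limit
```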
A namespace can be in one of two phases:

* `Active` the namespace is in use
* `Terminating` the namespace is being deleted, and cannot be used for new objects

See the [design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#phases) for more details.
See the [design doc](https://git.k8s.io/community/contributors/design-proposals/namespaces.md#phases) for more details.

## Creating a new namespace
@@ -117,7 +117,7 @@ Note that the name of your namespace must be a DNS compatible label.

There's an optional field `finalizers`, which allows observables to purge resources whenever the namespace is deleted. Keep in mind that if you specify a nonexistent finalizer, the namespace will be created but will get stuck in the `Terminating` state if the user tries to delete it.

More information on `finalizers` can be found in the namespace [design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#finalizers).
More information on `finalizers` can be found in the namespace [design doc](https://git.k8s.io/community/contributors/design-proposals/namespaces.md#finalizers).
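A sketch of a namespace manifest carrying the `finalizers` field (the name is hypothetical; `kubernetes` is the built-in entry):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo-ns                # hypothetical name
spec:
  finalizers:
  - kubernetes                 # deletion blocks until every listed finalizer is removed
```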
### Working in namespaces

@@ -148,5 +148,5 @@ across namespaces, you need to use the fully qualified domain name (FQDN).

## Design

Details of the design of namespaces in Kubernetes, including a [detailed example](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#example-openshift-origin-managing-a-kubernetes-namespace)
can be found in the [namespaces design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md)
Details of the design of namespaces in Kubernetes, including a [detailed example](https://git.k8s.io/community/contributors/design-proposals/namespaces.md#example-openshift-origin-managing-a-kubernetes-namespace)
can be found in the [namespaces design doc](https://git.k8s.io/community/contributors/design-proposals/namespaces.md)
@@ -103,7 +103,7 @@ It is recommended that the kubernetes system daemons are placed under a top
level control group (`runtime.slice` on systemd machines for example). Each
system daemon should ideally run within its own child control group. Refer to
[this
doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node-allocatable.md#recommended-cgroups-setup)
doc](https://git.k8s.io/community/contributors/design-proposals/node-allocatable.md#recommended-cgroups-setup)
for more details on recommended control group hierarchy.

Note that Kubelet **does not** create `--kube-reserved-cgroup` if it doesn't
@@ -9,7 +9,7 @@ redirect_from:

Kubernetes version 1.6 contains a new binary called `cloud-controller-manager`. `cloud-controller-manager` is a daemon that embeds cloud-specific control loops in Kubernetes. These cloud-specific control loops were originally in the kube-controller-manager. However, cloud providers move at a different pace and schedule compared to the Kubernetes project, and abstracting the provider-specific code to the `cloud-controller-manager` binary allows cloud provider vendors to evolve independently from the core Kubernetes code.

The `cloud-controller-manager` can be linked to any cloud provider that satisfies the [cloudprovider.Interface](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/cloud.go).
The `cloud-controller-manager` can be linked to any cloud provider that satisfies the [cloudprovider.Interface](https://git.k8s.io/kubernetes/pkg/cloudprovider/cloud.go).
In future Kubernetes releases, cloud vendors should link code that satisfies the above interface to the `cloud-controller-manager` project and compile `cloud-controller-manager` for their own clouds. Cloud providers would also be responsible for maintaining and evolving their code.

* TOC
@ -19,11 +19,11 @@ In future Kubernetes releases, cloud vendors should link code that satisfies the
|
|||
|
||||
To build cloud-controller-manager for your cloud, follow these steps:
|
||||
|
||||
* Write a cloudprovider that satisfies the [cloudprovider.Interface](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/cloud.go).
|
||||
* Write a cloudprovider that satisfies the [cloudprovider.Interface](https://git.k8s.io/kubernetes/pkg/cloudprovider/cloud.go).
|
||||
* Link the cloudprovider to cloud-controller-manager
|
||||
|
||||
The methods in [cloudprovider.Interface](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/cloud.go) are self-explanatory. All of the
|
||||
[existing providers](https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider/providers) satisfy this interface. If your cloud is already a part
|
||||
The methods in [cloudprovider.Interface](https://git.k8s.io/kubernetes/pkg/cloudprovider/cloud.go) are self-explanatory. All of the
|
||||
[existing providers](https://git.k8s.io/kubernetes/pkg/cloudprovider/providers) satisfy this interface. If your cloud is already a part
|
||||
of the existing providers, you do not need to write a new provider; you can proceed directly with linking your cloud provider to the `cloud-controller-manager`.
|
||||
|
||||
Once your code is ready, you must import that code into `cloud-controller-manager`. See the [rancher cloud sample](https://github.com/rancher/rancher-cloud-controller-manager) for a reference example. The import step in the sample is the only step required to link your cloud provider to the `cloud-controller-manager`.
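
A rough sketch of the build and launch that follow, assuming your provider package is already imported in the manager's main package (the binary path, provider name, and flag values shown are illustrative):

```shell
# From the root of your cloud-controller-manager fork:
go build -o cloud-controller-manager .

# --cloud-provider must match the name your provider registers under.
./cloud-controller-manager --cloud-provider=mycloud \
  --kubeconfig=$HOME/.kube/config
```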

@@ -302,7 +302,7 @@ Check that:

{% capture whatsnext %}
* If you need assistance, use one of the [support channels](http://kubernetes.io/docs/troubleshooting/).
* For details about use cases that motivated this work, see
[Federation proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/federation.md).
[Federation proposal](https://git.k8s.io/community/contributors/design-proposals/federation.md).
{% endcapture %}
{% include templates/task.md %}

@@ -88,7 +88,7 @@ than what you expect to use.

If you specify a request, a Pod is guaranteed to be able to use that much
of the resource. See
[Resource QoS](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-qos.md) for the difference between resource limits and requests.
[Resource QoS](https://git.k8s.io/community/contributors/design-proposals/resource-qos.md) for the difference between resource limits and requests.
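
For example, one way to set both a request and a limit on a single-container pod from the command line (the image and values are arbitrary):

```shell
# Request 250m CPU (guaranteed) and cap usage at 500m.
kubectl run nginx --image=nginx \
  --requests='cpu=250m,memory=128Mi' \
  --limits='cpu=500m,memory=256Mi'
```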

## If you don't specify limits or requests

@@ -198,7 +198,7 @@ PersistentVolume are not present on the Pod resource itself.

{% capture whatsnext %}

* Learn more about [PersistentVolumes](/docs/concepts/storage/persistent-volumes/).
* Read the [Persistent Storage design document](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/persistent-storage.md).
* Read the [Persistent Storage design document](https://git.k8s.io/community/contributors/design-proposals/persistent-storage.md).

### Reference

@@ -329,7 +329,7 @@ applied to Volumes as follows:

* `fsGroup`: Volumes that support ownership management are modified to be owned
and writable by the GID specified in `fsGroup`. See the
[Ownership Management design document](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/volume-ownership-management.md)
[Ownership Management design document](https://git.k8s.io/community/contributors/design-proposals/volume-ownership-management.md)
for more details.

* `seLinuxOptions`: Volumes that support SELinux labeling are relabeled to be accessible

@@ -349,8 +349,8 @@ protection, you must ensure each Pod is assigned a unique MCS label.

* [PodSecurityContext](/docs/api-reference/v1.6/#podsecuritycontext-v1-core)
* [SecurityContext](/docs/api-reference/v1.6/#securitycontext-v1-core)
* [Tuning Docker with the newest security enhancements](https://opensource.com/business/15/3/docker-security-tuning)
* [Security Contexts design document](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/security_context.md)
* [Ownership Management design document](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/volume-ownership-management.md)
* [Security Contexts design document](https://git.k8s.io/community/contributors/design-proposals/security_context.md)
* [Ownership Management design document](https://git.k8s.io/community/contributors/design-proposals/volume-ownership-management.md)
* [Pod Security Policies](https://kubernetes.io/docs/concepts/policy/pod-security-policy/)

@@ -321,7 +321,7 @@ Fluentd is written in Ruby and allows to extend its capabilities using

[plugins](http://www.fluentd.org/plugins). If you want to use a plugin that is not included
in the default Stackdriver Logging container image, you have to build a custom image. Imagine
you want to add a Kafka sink for messages from a particular container for additional processing.
You can re-use the default [container image sources](https://github.com/kubernetes/contrib/tree/master/fluentd/fluentd-gcp-image)
You can re-use the default [container image sources](https://git.k8s.io/contrib/fluentd/fluentd-gcp-image)
with minor changes (a possible build invocation follows the list):

* Change the Makefile to point to your container repository, e.g. `PREFIX=gcr.io/<your-project-id>`.
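
Assuming the image sources keep conventional `build` and `push` targets (verify against the Makefile you actually check out), the rebuild might look like:

```shell
# Rebuild the customized image and publish it to your own repository.
make build push PREFIX=gcr.io/<your-project-id>
```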

@@ -13,7 +13,7 @@ Understanding how an application behaves when deployed is crucial to scaling the

## Overview

Heapster is a cluster-wide aggregator of monitoring and event data. It currently supports Kubernetes natively and works on all Kubernetes setups. Heapster runs as a pod in the cluster, similar to how any Kubernetes application would run. The Heapster pod discovers all nodes in the cluster and queries usage information from the nodes' [Kubelet](/docs/admin/kubelet/)s, the on-machine Kubernetes agent. The Kubelet itself fetches the data from [cAdvisor](https://github.com/google/cadvisor). Heapster groups the information by pod along with the relevant labels. This data is then pushed to a configurable backend for storage and visualization. Currently supported backends include [InfluxDB](http://influxdb.com/) (with [Grafana](http://grafana.org/) for visualization), [Google Cloud Monitoring](https://cloud.google.com/monitoring/) and many others described in more detail [here](https://github.com/kubernetes/heapster/blob/master/docs/sink-configuration.md). The overall architecture of the service can be seen below:
Heapster is a cluster-wide aggregator of monitoring and event data. It currently supports Kubernetes natively and works on all Kubernetes setups. Heapster runs as a pod in the cluster, similar to how any Kubernetes application would run. The Heapster pod discovers all nodes in the cluster and queries usage information from the nodes' [Kubelet](/docs/admin/kubelet/)s, the on-machine Kubernetes agent. The Kubelet itself fetches the data from [cAdvisor](https://github.com/google/cadvisor). Heapster groups the information by pod along with the relevant labels. This data is then pushed to a configurable backend for storage and visualization. Currently supported backends include [InfluxDB](http://influxdb.com/) (with [Grafana](http://grafana.org/) for visualization), [Google Cloud Monitoring](https://cloud.google.com/monitoring/) and many others described in more detail [here](https://git.k8s.io/heapster/docs/sink-configuration.md). The overall architecture of the service can be seen below:

![](/images/docs/monitoring-architecture.png)

@@ -384,4 +384,4 @@ Check that:

## For more information

* [Federation proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/federation.md) details use cases that motivated this work.
* [Federation proposal](https://git.k8s.io/community/contributors/design-proposals/federation.md) details use cases that motivated this work.

@@ -53,7 +53,7 @@ tar -xzvf kubernetes-client-windows-amd64.tar.gz

`amd64`. If you are on a different architecture, please use a URL
appropriate for your architecture. You can find the list of available
binaries on the
[release page](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#client-binaries-1).
[release page](https://git.k8s.io/kubernetes/CHANGELOG.md#client-binaries-1).

Copy the extracted binaries to one of the directories in your `$PATH`
and set the executable permission on those binaries.
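
On a Linux or macOS workstation that could look like the following (the extraction path and target directory are assumptions; adjust for your platform):

```shell
# Assumes the archive unpacked into ./kubernetes/client/bin and that
# /usr/local/bin is already in $PATH.
sudo cp kubernetes/client/bin/kubectl /usr/local/bin/
sudo chmod +x /usr/local/bin/kubectl
```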

@@ -13,7 +13,7 @@ You can use a `podpreset` object to inject certain information into pods at crea

time. This information can include secrets, volumes, volume mounts, and environment
variables.

See [PodPreset proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/pod-preset.md) for more information.
See [PodPreset proposal](https://git.k8s.io/community/contributors/design-proposals/pod-preset.md) for more information.
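
A minimal sketch of such an object, assuming the alpha API group of the time (`settings.k8s.io/v1alpha1`) and a made-up name and selector:

```shell
cat <<EOF | kubectl create -f -
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: inject-db-port
spec:
  selector:
    matchLabels:
      role: frontend
  env:
    - name: DB_PORT
      value: "6379"
EOF
```

Pods whose labels match the selector (here, `role=frontend`) get the listed environment variables injected at admission time.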

* TOC
{:toc}

@@ -34,7 +34,7 @@ Here is an overview of the steps in this example:

## Starting Redis

For this example, for simplicity, we will start a single instance of Redis.
See the [Redis Example](https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook) for an example
See the [Redis Example](https://git.k8s.io/kubernetes/examples/guestbook) for an example
of deploying Redis scalably and redundantly.

Start a temporary Pod running Redis and a service so we can find it.
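
One possible throwaway setup (the example ships its own manifests; the names and flags below are just illustrative):

```shell
# Run a single Redis pod via a deployment and give it a stable DNS name.
kubectl run redis --image=redis --port=6379
kubectl expose deployment redis --port=6379 --name=redis
```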

@@ -52,7 +52,7 @@ controlled by the php-apache deployment we created in the first step of these in

Roughly speaking, HPA will increase and decrease the number of replicas
(via the deployment) to maintain an average CPU utilization across all Pods of 50%
(since each pod requests 200 milli-cores by [kubectl run](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/user-guide/kubectl/kubectl_run.md), this means average CPU usage of 100 milli-cores).
See [here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#autoscaling-algorithm) for more details on the algorithm.
See [here](https://git.k8s.io/community/contributors/design-proposals/horizontal-pod-autoscaler.md#autoscaling-algorithm) for more details on the algorithm.

```shell
$ kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

@@ -46,7 +46,7 @@ or the custom metrics API (for all other metrics).

Please note that if some of the pod's containers do not have the relevant resource request set,
CPU utilization for the pod will not be defined and the autoscaler will not take any action
for that metric. See the [autoscaling algorithm design document](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#autoscaling-algorithm) for further
for that metric. See the [autoscaling algorithm design document](https://git.k8s.io/community/contributors/design-proposals/horizontal-pod-autoscaler.md#autoscaling-algorithm) for further
details about how the autoscaling algorithm works.

* For per-pod custom metrics, the controller functions similarly to per-pod resource metrics,

@@ -66,7 +66,7 @@ See [Support for custom metrics](#prerequisites) for more details on REST client

The autoscaler accesses the corresponding replication controller, deployment, or replica set through the scale sub-resource.
Scale is an interface that allows you to dynamically set the number of replicas and examine each of their current states.
More details on the scale sub-resource can be found [here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#scale-subresource).
More details on the scale sub-resource can be found [here](https://git.k8s.io/community/contributors/design-proposals/horizontal-pod-autoscaler.md#scale-subresource).
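
You can poke at the subresource directly through the API server; the group/version in the path matches deployments of that era and may differ on your cluster:

```shell
# Proxy the API locally, then read a deployment's scale subresource.
kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/apis/extensions/v1beta1/namespaces/default/deployments/php-apache/scale
```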

## API Object

@@ -80,7 +80,7 @@ can be found in `autoscaling/v2alpha1`. The new fields introduced in `autoscalin

are preserved as annotations when working with `autoscaling/v1`.

More details about the API object can be found at
[HorizontalPodAutoscaler Object](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
[HorizontalPodAutoscaler Object](https://git.k8s.io/community/contributors/design-proposals/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).

## Support for Horizontal Pod Autoscaler in kubectl

@@ -141,6 +141,6 @@ available at [the k8s.io/metrics repository](https://github.com/kubernetes/metri

## Further reading

* Design documentation: [Horizontal Pod Autoscaling](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md).
* Design documentation: [Horizontal Pod Autoscaling](https://git.k8s.io/community/contributors/design-proposals/horizontal-pod-autoscaler.md).
* kubectl autoscale command: [kubectl autoscale](/docs/user-guide/kubectl/v1.6/#autoscale).
* Usage example of [Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/).

@@ -21,7 +21,7 @@ which in turn uses a

For more information, see
[Running a Stateless Application Using a Deployment](/docs/tasks/run-application/run-stateless-application-deployment/).

To update a service without an outage, `kubectl` supports what is called ['rolling update'](/docs/user-guide/kubectl/v1.6/#rolling-update), which updates one pod at a time, rather than taking down the entire service at the same time. See the [rolling update design document](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/simple-rolling-update.md) and the [example of rolling update](/docs/tasks/run-application/rolling-update-replication-controller/) for more information.
To update a service without an outage, `kubectl` supports what is called ['rolling update'](/docs/user-guide/kubectl/v1.6/#rolling-update), which updates one pod at a time, rather than taking down the entire service at the same time. See the [rolling update design document](https://git.k8s.io/community/contributors/design-proposals/simple-rolling-update.md) and the [example of rolling update](/docs/tasks/run-application/rolling-update-replication-controller/) for more information.

Note that `kubectl rolling-update` only supports Replication Controllers. However, if you deploy applications with Replication Controllers,
consider switching them to [Deployments](/docs/concepts/workloads/controllers/deployment/). A Deployment is a higher-level controller that automates rolling updates

@@ -161,7 +161,7 @@ spec:

- containerPort: 80
```

To update to version 1.9.1, you can use [`kubectl rolling-update --image`](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/simple-rolling-update.md) to specify the new image:
To update to version 1.9.1, you can use [`kubectl rolling-update --image`](https://git.k8s.io/community/contributors/design-proposals/simple-rolling-update.md) to specify the new image:

```shell
$ kubectl rolling-update my-nginx --image=nginx:1.9.1

@@ -21,7 +21,7 @@ VT-x or AMD-v virtualization must be enabled in your computer's BIOS.

If you do not already have a hypervisor installed, install one now.

* For OS X, install
[xhyve driver](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#xhyve-driver),
[xhyve driver](https://git.k8s.io/minikube/docs/drivers.md#xhyve-driver),
[VirtualBox](https://www.virtualbox.org/wiki/Downloads), or
[VMware Fusion](https://www.vmware.com/products/fusion).
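
Once a driver other than the VirtualBox default is installed, minikube is pointed at it on startup, for example:

```shell
# Start a local cluster using the xhyve hypervisor driver.
minikube start --vm-driver=xhyve
```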

@@ -302,7 +302,7 @@ nodes. There are lots of ways to setup the profiles though, such as:

* Through a [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) that runs a Pod on each node to
ensure the correct profiles are loaded. An example implementation can be found
[here](https://github.com/kubernetes/contrib/tree/master/apparmor/loader).
[here](https://git.k8s.io/contrib/apparmor/loader).
* At node initialization time, using your node initialization scripts (e.g. Salt, Ansible, etc.) or
image.
* By copying the profiles to each node and loading them through SSH, as demonstrated in the

@@ -575,7 +575,7 @@ Add, delete, or update individual elements. This does not preserve ordering.

This merge strategy uses a special tag on each field called a `patchMergeKey`. The
`patchMergeKey` is defined for each field in the Kubernetes source code:
[types.go](https://github.com/kubernetes/kubernetes/blob/master/pkg/api/v1/types.go#L2119)
[types.go](https://git.k8s.io/kubernetes/pkg/api/v1/types.go#L2119)
When merging a list of maps, the field specified as the `patchMergeKey` for a given element
is used like a map key for that element.
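
For example, because the `containers` list uses `name` as its `patchMergeKey`, a strategic merge patch only needs that key plus the fields being changed (the deployment and container names below are placeholders):

```shell
# Only the container named "nginx" is updated; sibling containers
# in the list are left untouched.
kubectl patch deployment my-nginx --type=strategic \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image":"nginx:1.9.1"}]}}}}'
```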

@@ -649,7 +649,7 @@ by `name`.

As of Kubernetes 1.5, merging lists of primitive elements is not supported.

**Note:** Which of the above strategies is chosen for a given field is controlled by
the `patchStrategy` tag in [types.go](https://github.com/kubernetes/kubernetes/blob/master/pkg/api/v1/types.go#L2119).
the `patchStrategy` tag in [types.go](https://git.k8s.io/kubernetes/pkg/api/v1/types.go#L2119).
If no `patchStrategy` is specified for a field of type list, then
the list is replaced.

@@ -85,7 +85,7 @@ however they require a better understanding of the Kubernetes object schema.

- `edit`: Directly edit the raw configuration of a live object by opening its configuration in an editor.
- `patch`: Directly modify specific fields of a live object by using a patch string.
For more details on patch strings, see the patch section in
[API Conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#patch-operations).
[API Conventions](https://git.k8s.io/community/contributors/devel/api-conventions.md#patch-operations).
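
For instance (the object names are placeholders):

```shell
# Open the live object in your $EDITOR, or change one field in place.
kubectl edit deployment/my-nginx
kubectl patch deployment/my-nginx -p '{"spec":{"replicas":3}}'
```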

## How to delete objects

@@ -839,7 +839,7 @@ ring. The [`KubernetesSeedProvider`](java/src/main/java/io/k8s/cassandra/Kuberne

discovers Cassandra seed IP addresses via the Kubernetes API; those Cassandra
instances are defined within the Cassandra Service.

Refer to the custom seed provider [README](https://github.com/kubernetes/examples/blob/master/cassandra/java/README.md) for further
Refer to the custom seed provider [README](https://git.k8s.io/examples/cassandra/java/README.md) for further
`KubernetesSeedProvider` configurations. For this example you should not need
to customize the Seed Provider configurations.

@@ -122,7 +122,7 @@ chcon -Rt svirt_sandbox_file_t /tmp/data

```

Continuing with host path, create the persistent volume objects in Kubernetes using
[local-volumes.yaml](https://github.com/kubernetes/examples/blob/master/mysql-wordpress-pd/local-volumes.yaml):
[local-volumes.yaml](https://git.k8s.io/examples/mysql-wordpress-pd/local-volumes.yaml):

```shell
export KUBE_REPO=https://raw.githubusercontent.com/kubernetes/examples/master

@@ -139,10 +139,10 @@ Create two persistent disks. You will need to create the disks in the

same [GCE zone](https://cloud.google.com/compute/docs/zones) as the
Kubernetes cluster. The default setup script will create the cluster
in the `us-central1-b` zone, as seen in the
[config-default.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/config-default.sh) file. Replace
[config-default.sh](https://git.k8s.io/kubernetes/cluster/gce/config-default.sh) file. Replace
`<zone>` below with the appropriate zone. The names `wordpress-1` and
`wordpress-2` must match the `pdName` fields we have specified in
[gce-volumes.yaml](https://github.com/kubernetes/examples/blob/master/mysql-wordpress-pd/gce-volumes.yaml).
[gce-volumes.yaml](https://git.k8s.io/examples/mysql-wordpress-pd/gce-volumes.yaml).

```shell
gcloud compute disks create --size=20GB --zone=<zone> wordpress-1
@@ -180,13 +180,13 @@ access the database.

Now that the persistent disks and secrets are defined, the Kubernetes
pods can be launched. Start MySQL using
[mysql-deployment.yaml](https://github.com/kubernetes/examples/blob/master/mysql-wordpress-pd/mysql-deployment.yaml).
[mysql-deployment.yaml](https://git.k8s.io/examples/mysql-wordpress-pd/mysql-deployment.yaml).

```shell
kubectl create -f $KUBE_REPO/mysql-wordpress-pd/mysql-deployment.yaml
```

Take a look at [mysql-deployment.yaml](https://github.com/kubernetes/examples/blob/master/mysql-wordpress-pd/mysql-deployment.yaml), and
Take a look at [mysql-deployment.yaml](https://git.k8s.io/examples/mysql-wordpress-pd/mysql-deployment.yaml), and
note that we've defined a volume mount for `/var/lib/mysql`, and then
created a Persistent Volume Claim that looks for a 20G volume. This
claim is satisfied by any volume that meets the requirements, in our

@@ -235,7 +235,7 @@ kubectl logs <pod-name>

Version: '5.6.29' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL)
```

Also in [mysql-deployment.yaml](https://github.com/kubernetes/examples/blob/master/mysql-wordpress-pd/mysql-deployment.yaml) we created a
Also in [mysql-deployment.yaml](https://git.k8s.io/examples/mysql-wordpress-pd/mysql-deployment.yaml) we created a
service to allow other pods to reach this mysql instance. The name is
`wordpress-mysql`, which resolves to the pod IP.

@@ -269,7 +269,7 @@ local-pv-2 20Gi RWO Bound default/mysql-pv-claim

## Deploy WordPress

Next, deploy WordPress using
[wordpress-deployment.yaml](https://github.com/kubernetes/examples/blob/master/mysql-wordpress-pd/wordpress-deployment.yaml):
[wordpress-deployment.yaml](https://git.k8s.io/examples/mysql-wordpress-pd/wordpress-deployment.yaml):

```shell
kubectl create -f $KUBE_REPO/mysql-wordpress-pd/wordpress-deployment.yaml
@@ -52,7 +52,7 @@ $ kubectl cluster-info

If you see a URL response, you are ready to go. If not, read the [Getting Started guides](http://kubernetes.io/docs/getting-started-guides/) for how to get started, and follow the [prerequisites](http://kubernetes.io/docs/user-guide/prereqs/) to install and configure `kubectl`. As noted above, if you have a Google Container Engine cluster set up, read [this example](https://cloud.google.com/container-engine/docs/tutorials/guestbook) instead.

All the files referenced in this example can be downloaded [from GitHub](https://github.com/kubernetes/examples/tree/master/guestbook).
All the files referenced in this example can be downloaded [from GitHub](https://git.k8s.io/examples/guestbook).

### Quick Start

@@ -108,7 +108,7 @@ Before continuing to the gory details, we also recommend you to read Kubernetes

#### Define a Deployment

To start the redis master, use the file [redis-master-deployment.yaml](https://github.com/kubernetes/examples/tree/master/guestbook/redis-master-deployment.yaml), which describes a single [pod](http://kubernetes.io/docs/user-guide/pods/) running a redis key-value server in a container.
To start the redis master, use the file [redis-master-deployment.yaml](https://git.k8s.io/examples/guestbook/redis-master-deployment.yaml), which describes a single [pod](http://kubernetes.io/docs/user-guide/pods/) running a redis key-value server in a container.

Although we have a single instance of our redis master, we are using a [Deployment](http://kubernetes.io/docs/user-guide/deployments/) to enforce that exactly one pod keeps running. E.g., if the node were to go down, the Deployment will ensure that the redis master gets restarted on a healthy node. (In our simplified example, this could result in data loss.)

@@ -166,7 +166,7 @@ A Kubernetes [Service](http://kubernetes.io/docs/user-guide/services/) is a name

Services find the pods to load balance based on the pods' labels.
The selector field of the Service description determines which pods will receive the traffic sent to the Service, and the `port` and `targetPort` information defines what port the Service proxy will run at.

The file [redis-master-service.yaml](https://github.com/kubernetes/examples/tree/master/guestbook/redis-master-deployment.yaml) defines the redis master Service:
The file [redis-master-service.yaml](https://git.k8s.io/examples/guestbook/redis-master-deployment.yaml) defines the redis master Service:

<!-- BEGIN MUNGE: EXAMPLE redis-master-service.yaml -->

@@ -238,7 +238,7 @@ This example has been configured to use the DNS service by default.

If your cluster does not have the DNS service enabled, then you can use environment variables by setting the
`GET_HOSTS_FROM` env value in both
[redis-slave-deployment.yaml](https://github.com/kubernetes/examples/tree/master/guestbook/redis-slave-deployment.yaml) and [frontend-deployment.yaml](https://github.com/kubernetes/examples/tree/master/guestbook/frontend-deployment.yaml)
[redis-slave-deployment.yaml](https://git.k8s.io/examples/guestbook/redis-slave-deployment.yaml) and [frontend-deployment.yaml](https://git.k8s.io/examples/guestbook/frontend-deployment.yaml)
from `dns` to `env` before you start up the app.
(However, this is unlikely to be necessary. You can check for the DNS service in the list of the cluster's services by
running `kubectl --namespace=kube-system get rc -l k8s-app=kube-dns`.)

@@ -350,7 +350,7 @@ In Kubernetes, a Deployment is responsible for managing multiple instances of a

Just like the master, we want to have a Service to proxy connections to the redis slaves. In this case, in addition to discovery, the slave Service will provide transparent load balancing to web app clients.

This time we put the Service and Deployment into one [file](http://kubernetes.io/docs/user-guide/managing-deployments/#organizing-resource-configurations). Grouping related objects together in a single file is often better than having separate files.
The specification for the slaves is in [all-in-one/redis-slave.yaml](https://github.com/kubernetes/examples/tree/master/guestbook/all-in-one/redis-slave.yaml):
The specification for the slaves is in [all-in-one/redis-slave.yaml](https://git.k8s.io/examples/guestbook/all-in-one/redis-slave.yaml):

<!-- BEGIN MUNGE: EXAMPLE all-in-one/redis-slave.yaml -->

@@ -460,7 +460,7 @@ A frontend pod is a simple PHP server that is configured to talk to either the s

Again we'll create a set of replicated frontend pods instantiated by a Deployment, this time with three replicas.

As with the other pods, we now want to create a Service to group the frontend pods.
The Deployment and Service are described in the file [all-in-one/frontend.yaml](https://github.com/kubernetes/examples/tree/master/guestbook/all-in-one/frontend.yaml):
The Deployment and Service are described in the file [all-in-one/frontend.yaml](https://git.k8s.io/examples/guestbook/all-in-one/frontend.yaml):

<!-- BEGIN MUNGE: EXAMPLE all-in-one/frontend.yaml -->

@@ -534,7 +534,7 @@ spec:

For supported cloud providers, such as Google Compute Engine or Google Container Engine, you can specify the use of an external load balancer
in the service `spec`, to expose the service onto an external load balancer IP.
To do this, uncomment the `type: LoadBalancer` line in the [all-in-one/frontend.yaml](https://github.com/kubernetes/examples/tree/master/guestbook/all-in-one/frontend.yaml) file before you start the service.
To do this, uncomment the `type: LoadBalancer` line in the [all-in-one/frontend.yaml](https://git.k8s.io/examples/guestbook/all-in-one/frontend.yaml) file before you start the service.

[See the appendix below](#appendix-accessing-the-guestbook-site-externally) on accessing the guestbook site externally for more details.
@@ -4,7 +4,7 @@ assignees:

title: Rolling Update Demo
---

This example demonstrates the usage of Kubernetes to perform a [rolling update](/docs/user-guide/kubectl/kubectl_rolling-update/) on a running group of [pods](/docs/user-guide/pods/). See [here](/docs/concepts/cluster-administration/manage-deployment/#updating-your-application-without-a-service-outage) to understand why you need a rolling update. Also check the [rolling update design document](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/simple-rolling-update.md) for more information.
This example demonstrates the usage of Kubernetes to perform a [rolling update](/docs/user-guide/kubectl/kubectl_rolling-update/) on a running group of [pods](/docs/user-guide/pods/). See [here](/docs/concepts/cluster-administration/manage-deployment/#updating-your-application-without-a-service-outage) to understand why you need a rolling update. Also check the [rolling update design document](https://git.k8s.io/community/contributors/design-proposals/simple-rolling-update.md) for more information.

The files for this example are viewable in [our docs repo
here](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/user-guide/update-demo).

@@ -40,7 +40,7 @@ The simplest pod definition describes the deployment of a single container. For

A pod definition is a declaration of a _desired state_. Desired state is a very important concept in the Kubernetes model. Many things present a desired state to the system, and it is Kubernetes' responsibility to make sure that the current state matches the desired state. For example, when you create a Pod, you declare that you want the containers in it to be running. If the containers happen to not be running (e.g. program failure, ...), Kubernetes will continue to (re-)create them for you in order to drive them to the desired state. This process continues until the Pod is deleted.

See the [design document](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/README.md) for more details.
See the [design document](https://git.k8s.io/community/contributors/design-proposals/README.md) for more details.

#### Pod Management
@@ -14,7 +14,7 @@ $( document ).ready(function() {

$("#continueEditButton").text("Edit " + forwarding);
$("#continueEditButton").attr("href", "https://github.com/kubernetes/kubernetes.github.io/edit/master/" + forwarding)
$("#viewOnGithubButton").text("View " + forwarding + " on GitHub");
$("#viewOnGithubButton").attr("href", "https://github.com/kubernetes/kubernetes.github.io/tree/master/" + forwarding)
$("#viewOnGithubButton").attr("href", "https://git.k8s.io/kubernetes.github.io/" + forwarding)
} else {
$("#generalInstructions").show();
$("#continueEdit").hide();