Remaining links -> Absolute

pull/43/head
John Mulhausen 2016-02-22 20:44:13 -08:00
parent ebf5d38024
commit 0793b7ecb0
116 changed files with 397 additions and 415 deletions


@@ -75,7 +75,7 @@ This plug-in will observe the incoming request and ensure that it does not viola
enumerated in the `ResourceQuota` object in a `Namespace`. If you are using `ResourceQuota`
objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints.
See the [resourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/admission_control_resource_quota.md) and the [example of Resource Quota](resourcequota/) for more details.
See the [resourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_resource_quota.md) and the [example of Resource Quota](/{{page.version}}/docs/admin/resourcequota/) for more details.
It is strongly encouraged that this plug-in is configured last in the sequence of admission control plug-ins. This is
so that quota is not prematurely incremented only for the request to be rejected later in admission control.
@@ -88,7 +88,7 @@ your Kubernetes deployment, you MUST use this plug-in to enforce those constrain
be used to apply default resource requests to Pods that don't specify any; currently, the default LimitRanger
applies a 0.1 CPU requirement to all Pods in the `default` namespace.
See the [limitRange design doc](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/admission_control_limit_range.md) and the [example of Limit Range](limitrange/) for more details.
See the [limitRange design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_limit_range.md) and the [example of Limit Range](/{{page.version}}/docs/admin/limitrange/) for more details.
### InitialResources (experimental)
@@ -97,7 +97,7 @@ then the plug-in auto-populates a compute resource request based on historical u
If there is not enough data to make a decision, the Request is left unchanged.
When the plug-in sets a compute resource request, it annotates the pod with information on what compute resources it auto-populated.
See the [InitialResources proposal](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/proposals/initial-resources.md) for more details.
See the [InitialResources proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/initial-resources.md) for more details.
### NamespaceExists (deprecated)


@@ -85,7 +85,7 @@ To permit an action Policy with an unset namespace applies regardless of namespa
3. Kubelet can read and write events: `{"user":"kubelet", "resource": "events"}`
4. Bob can just read pods in namespace "projectCaribou": `{"user":"bob", "resource": "pods", "readonly": true, "namespace": "projectCaribou"}`
[Complete file example](http://releases.k8s.io/release-1.1/pkg/auth/authorizer/abac/example_policy_file.jsonl)
[Complete file example](http://releases.k8s.io/{{page.githubbranch}}/pkg/auth/authorizer/abac/example_policy_file.jsonl)
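For illustration, such a policy file is simply one JSON object per line. A minimal sketch, echoing the numbered examples above (the file path and exact contents here are hypothetical, not copied from the linked example file):

```shell
# Write a hypothetical ABAC policy file: one JSON object per line.
cat > /tmp/example_policy.jsonl <<'EOF'
{"user":"admin"}
{"user":"kubelet", "resource": "pods", "readonly": true}
{"user":"kubelet", "resource": "events"}
{"user":"bob", "resource": "pods", "readonly": true, "namespace": "projectCaribou"}
EOF
# Each line must parse as a standalone JSON object.
wc -l < /tmp/example_policy.jsonl
# → 4
```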
### A quick note on service accounts


@@ -15,7 +15,7 @@ unsatisfied).
Master components could in theory be run on any node in the cluster. However,
for simplicity, current setup scripts typically start all master components on
the same VM, and do not run user containers on this VM. See
[high-availability.md](high-availability) for an example multi-master-VM setup.
[high-availability.md](/{{page.version}}/docs/admin/high-availability) for an example multi-master-VM setup.
Even in the future, when Kubernetes is fully self-hosting, it will probably be
wise to only allow master components to schedule on a subset of nodes, to limit
@@ -24,19 +24,19 @@ node-compromising security exploit.
### kube-apiserver
[kube-apiserver](kube-apiserver) exposes the Kubernetes API; it is the front-end for the
[kube-apiserver](/{{page.version}}/docs/admin/kube-apiserver) exposes the Kubernetes API; it is the front-end for the
Kubernetes control plane. It is designed to scale horizontally (i.e., one scales
it by running more of them-- [high-availability.md](high-availability)).
it by running more of them-- [high-availability.md](/{{page.version}}/docs/admin/high-availability)).
### etcd
[etcd](etcd) is used as Kubernetes' backing store. All cluster data is stored here.
[etcd](/{{page.version}}/docs/admin/etcd) is used as Kubernetes' backing store. All cluster data is stored here.
Proper administration of a Kubernetes cluster includes a backup plan for etcd's
data.
### kube-controller-manager
[kube-controller-manager](kube-controller-manager) is a binary that runs controllers, which are the
[kube-controller-manager](/{{page.version}}/docs/admin/kube-controller-manager) is a binary that runs controllers, which are the
background threads that handle routine tasks in the cluster. Logically, each
controller is a separate process, but to reduce the number of moving pieces in
the system, they are all compiled into a single binary and run in a single
@@ -57,7 +57,7 @@ These controllers include:
### kube-scheduler
[kube-scheduler](kube-scheduler) watches newly created pods that have no node assigned, and
[kube-scheduler](/{{page.version}}/docs/admin/kube-scheduler) watches newly created pods that have no node assigned, and
selects a node for them to run on.
### addons
@@ -65,17 +65,17 @@ selects a node for them to run on.
Addons are pods and services that implement cluster features. They don't run on
the master VM, but currently the default setup scripts that make the API calls
to create these pods and services do run on the master VM. See:
[kube-master-addons](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/kube-master-addons/kube-master-addons.sh)
[kube-master-addons](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/salt/kube-master-addons/kube-master-addons.sh)
Addon objects are created in the "kube-system" namespace.
Example addons are:
* [DNS](http://releases.k8s.io/release-1.1/cluster/addons/dns/) provides cluster local DNS.
* [kube-ui](http://releases.k8s.io/release-1.1/cluster/addons/kube-ui/) provides a graphical UI for the
* [DNS](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/) provides cluster local DNS.
* [kube-ui](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/kube-ui/) provides a graphical UI for the
cluster.
* [fluentd-elasticsearch](http://releases.k8s.io/release-1.1/cluster/addons/fluentd-elasticsearch/) provides
log storage. Also see the [gcp version](http://releases.k8s.io/release-1.1/cluster/addons/fluentd-gcp/).
* [cluster-monitoring](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/) provides
* [fluentd-elasticsearch](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/fluentd-elasticsearch/) provides
log storage. Also see the [gcp version](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/fluentd-gcp/).
* [cluster-monitoring](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/) provides
monitoring for the cluster.
## Node components
@@ -85,7 +85,7 @@ the Kubernetes runtime environment.
### kubelet
[kubelet](kubelet) is the primary node agent. It:
[kubelet](/{{page.version}}/docs/admin/kubelet) is the primary node agent. It:
* Watches for pods that have been assigned to its node (either by apiserver
or via local configuration file) and:
* Mounts the pod's required volumes
@@ -98,7 +98,7 @@ the Kubernetes runtime environment.
### kube-proxy
[kube-proxy](kube-proxy) enables the Kubernetes service abstraction by maintaining
[kube-proxy](/{{page.version}}/docs/admin/kube-proxy) enables the Kubernetes service abstraction by maintaining
network rules on the host and performing connection forwarding.
### docker


@@ -13,7 +13,7 @@ At v1.0, Kubernetes supports clusters up to 100 nodes with 30 pods per node and
A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by a "master" (the cluster-level control plane).
Normally the number of nodes in a cluster is controlled by the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/release-1.1/cluster/gce/config-default.sh)).
Normally the number of nodes in a cluster is controlled by the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/{{page.githubbranch}}/cluster/gce/config-default.sh)).
Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run into quota issues and fail to bring the cluster up.
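As a sketch of the mechanism (the variable name comes from the text above; the default value here is purely illustrative), `config-default.sh` just sets a shell variable that the setup scripts read, so it can also be overridden from the environment:

```shell
# Hypothetical excerpt in the style of config-default.sh: the node count
# is a plain shell variable, overridable from the environment.
NUM_MINIONS=${NUM_MINIONS:-4}
echo "cluster will be brought up with ${NUM_MINIONS} nodes"
```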
@@ -56,14 +56,14 @@ These limits, however, are based on data collected from addons running on 4-node
To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following:
- Scale memory and CPU limits for each of the following addons, if used, along with the size of cluster (there is one replica of each handling the entire cluster so memory and CPU usage tends to grow proportionally with size/load on cluster):
- Heapster ([GCM/GCL backed](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/google/heapster-controller.yaml), [InfluxDB backed](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/influxdb/heapster-controller.yaml), [InfluxDB/GCL backed](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/googleinfluxdb/heapster-controller-combined.yaml), [standalone](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/standalone/heapster-controller.yaml))
* [InfluxDB and Grafana](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
* [skydns, kube2sky, and dns etcd](http://releases.k8s.io/release-1.1/cluster/addons/dns/skydns-rc.yaml.in)
* [Kibana](http://releases.k8s.io/release-1.1/cluster/addons/fluentd-elasticsearch/kibana-controller.yaml)
- Heapster ([GCM/GCL backed](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/google/heapster-controller.yaml), [InfluxDB backed](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/influxdb/heapster-controller.yaml), [InfluxDB/GCL backed](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/googleinfluxdb/heapster-controller-combined.yaml), [standalone](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/standalone/heapster-controller.yaml))
* [InfluxDB and Grafana](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
* [skydns, kube2sky, and dns etcd](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/skydns-rc.yaml.in)
* [Kibana](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/fluentd-elasticsearch/kibana-controller.yaml)
* Scale number of replicas for the following addons, if used, along with the size of cluster (there are multiple replicas of each so increasing replicas should help handle increased load, but, since load per replica also increases slightly, also consider increasing CPU/memory limits):
* [elasticsearch](http://releases.k8s.io/release-1.1/cluster/addons/fluentd-elasticsearch/es-controller.yaml)
* [elasticsearch](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/fluentd-elasticsearch/es-controller.yaml)
* Increase memory and CPU limits slightly for each of the following addons, if used, along with the size of cluster (there is one replica per node but CPU/memory usage increases slightly along with cluster load/size as well):
* [FluentD with ElasticSearch Plugin](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/fluentd-es/fluentd-es.yaml)
* [FluentD with GCP Plugin](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)
* [FluentD with ElasticSearch Plugin](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/salt/fluentd-es/fluentd-es.yaml)
* [FluentD with GCP Plugin](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)
For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](/{{page.version}}/docs/user-guide/compute-resources/#troubleshooting).
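The proportional-scaling advice above can be sketched numerically. The baseline and target values below are purely illustrative, not taken from any manifest:

```shell
# Illustrative only: scale an addon's memory limit linearly with node
# count, relative to the 4-node baseline the defaults were tuned for.
NODES=40
BASE_NODES=4
BASE_MEM_MI=200
SCALED=$(( BASE_MEM_MI * NODES / BASE_NODES ))
echo "suggested memory limit: ${SCALED}Mi"
# → suggested memory limit: 2000Mi
```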


@@ -63,7 +63,7 @@ recommend testing the upgrade on an experimental cluster before performing the u
## Resizing a cluster
If your cluster runs short on resources, you can easily add more machines if it is running in [Node self-registration mode](node/#self-registration-of-nodes).
If your cluster runs short on resources, you can easily add more machines if it is running in [Node self-registration mode](/{{page.version}}/docs/admin/node/#self-registration-of-nodes).
If you're using GCE or GKE, this is done by resizing the Instance Group managing your Nodes. You can do this by modifying the number of instances on the `Compute > Compute Engine > Instance groups > your group > Edit group` [Google Cloud Console page](https://console.developers.google.com) or by using the gcloud CLI:
```shell
@@ -145,7 +145,7 @@ kubectl replace nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedu
If you deleted the node's VM instance and created a new one, then a new schedulable node resource will
be created automatically when you create a new VM instance (if you're using a cloud provider that supports
node discovery; currently this is only Google Compute Engine, not including CoreOS on Google Compute Engine using kube-register). See [Node](node) for more details.
node discovery; currently this is only Google Compute Engine, not including CoreOS on Google Compute Engine using kube-register). See [Node](/{{page.version}}/docs/admin/node) for more details.
## Advanced Topics


@@ -89,7 +89,7 @@ Mitigations:
- Action: use IaaS provider's reliable storage (e.g., GCE PD or AWS EBS volume) for VMs with apiserver+etcd
- Mitigates: Apiserver backing storage lost
- Action: Use (experimental) [high-availability](high-availability) configuration
- Action: Use (experimental) [high-availability](/{{page.version}}/docs/admin/high-availability) configuration
- Mitigates: Master VM shutdown or master components (scheduler, API server, controller manager) crashing
- Will tolerate one or more simultaneous node or component failures
- Mitigates: Apiserver backing storage (i.e., etcd's data directory) lost
@@ -108,5 +108,5 @@ Mitigations:
- Mitigates: Node shutdown
- Mitigates: Kubelet software fault
- Action: [Multiple independent clusters](multi-cluster) (and avoid making risky changes to all clusters at once)
- Action: [Multiple independent clusters](/{{page.version}}/docs/admin/multi-cluster) (and avoid making risky changes to all clusters at once)
- Mitigates: Everything listed above.


@@ -75,7 +75,7 @@ Normally, the machine that a pod runs on is selected by the Kubernetes scheduler
created by the Daemon controller have the machine already selected (`.spec.nodeName` is specified
when the pod is created, so it is ignored by the scheduler). Therefore:
- the [`unschedulable`](node/#manual-node-administration) field of a node is not respected
- the [`unschedulable`](/{{page.version}}/docs/admin/node/#manual-node-administration) field of a node is not respected
by the daemon set controller.
- the daemon set controller can create pods even when the scheduler has not been started, which can help with cluster
bootstrap.
@@ -140,7 +140,7 @@ use a Daemon Set rather than creating individual pods.
### Static Pods
It is possible to create pods by writing a file to a certain directory watched by Kubelet. These
are called [static pods](static-pods).
are called [static pods](/{{page.version}}/docs/admin/static-pods).
Unlike DaemonSet, static pods cannot be managed with kubectl
or other Kubernetes API clients. Static pods do not depend on the apiserver, making them useful
in cluster bootstrapping cases. Also, static pods may be deprecated in the future.
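A minimal sketch of the mechanism described above. The directory and pod spec here are illustrative (the kubelet's watched directory is configurable; `/tmp` is used only so the sketch runs anywhere):

```shell
# A static pod is just a manifest file dropped into the directory the
# kubelet watches (commonly /etc/kubernetes/manifests; /tmp used here
# purely for illustration).
MANIFEST_DIR=/tmp/manifests
mkdir -p "$MANIFEST_DIR"
cat > "$MANIFEST_DIR/static-web.yaml" <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
EOF
echo "manifest written to $MANIFEST_DIR/static-web.yaml"
```

The kubelet would pick up such a file on its own, without any apiserver involvement.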


@@ -1,7 +1,7 @@
---
title: "DNS Integration with Kubernetes"
---
As of Kubernetes 0.8, DNS is offered as a [cluster add-on](http://releases.k8s.io/release-1.1/cluster/addons/README.md).
As of Kubernetes 0.8, DNS is offered as a [cluster add-on](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/README.md).
If enabled, a DNS Pod and Service will be scheduled on the cluster, and the kubelets will be
configured to tell individual containers to use the DNS Service's IP to resolve DNS names.
@@ -36,4 +36,4 @@ time.
## For more information
See [the docs for the DNS cluster addon](http://releases.k8s.io/release-1.1/cluster/addons/dns/README.md).
See [the docs for the DNS cluster addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md).


@@ -13,7 +13,7 @@ internet at large), because access to etcd is equivalent to root in your
cluster.
Data Reliability: for reasonable safety, either etcd needs to be run as a
[cluster](high-availability/#clustering-etcd) (multiple machines each running
[cluster](/{{page.version}}/docs/admin/high-availability/#clustering-etcd) (multiple machines each running
etcd) or etcd's data directory should be located on durable storage (e.g., GCE's
persistent disk). In either case, if high availability is required--as it might
be in a production cluster--the data directory ought to be [backed up
@@ -23,14 +23,14 @@ to reduce downtime in case of corruption.
## Default configuration
The default setup scripts use kubelet's file-based static pods feature to run etcd in a
[pod](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/etcd/etcd.manifest). This manifest should only
[pod](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/salt/etcd/etcd.manifest). This manifest should only
be run on master VMs. The default location that kubelet scans for manifests is
`/etc/kubernetes/manifests/`.
## Kubernetes's usage of etcd
By default, Kubernetes objects are stored under the `/registry` key in etcd.
This path can be prefixed by using the [kube-apiserver](kube-apiserver) flag
This path can be prefixed by using the [kube-apiserver](/{{page.version}}/docs/admin/kube-apiserver) flag
`--etcd-prefix="/foo"`.
`etcd` is the only place that Kubernetes keeps state.


@@ -53,11 +53,11 @@ choices. For example, on systemd-based systems (e.g. RHEL, CentOS), you can run
If you are extending from a standard Kubernetes installation, the `kubelet` binary should already be present on your system. You can run
`which kubelet` to determine if the binary is in fact installed. If it is not installed,
you should install the [kubelet binary](https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/linux/amd64/kubelet), the
[kubelet init file](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/kubelet/initd) and [high-availability/default-kubelet](high-availability/default-kubelet)
[kubelet init file](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/salt/kubelet/initd) and [high-availability/default-kubelet](/{{page.version}}/docs/admin/high-availability/default-kubelet)
scripts.
If you are using monit, you should also install the monit daemon (`apt-get install monit`) and the [high-availability/monit-kubelet](high-availability/monit-kubelet) and
[high-availability/monit-docker](high-availability/monit-docker) configs.
If you are using monit, you should also install the monit daemon (`apt-get install monit`) and the [high-availability/monit-kubelet](/{{page.version}}/docs/admin/high-availability/monit-kubelet) and
[high-availability/monit-docker](/{{page.version}}/docs/admin/high-availability/monit-docker) configs.
On systemd systems, run `systemctl enable kubelet` and `systemctl enable docker`.
@@ -86,7 +86,7 @@ First, hit the etcd discovery service to create a new token:
curl https://discovery.etcd.io/new?size=3
```
On each node, copy the [etcd.yaml](high-availability/etcd.yaml) file into `/etc/kubernetes/manifests/etcd.yaml`
On each node, copy the [etcd.yaml](/{{page.version}}/docs/admin/high-availability/etcd.yaml) file into `/etc/kubernetes/manifests/etcd.yaml`
The kubelet on each node actively monitors the contents of that directory, and it will create an instance of the `etcd`
server from the definition of the pod specified in `etcd.yaml`.
@@ -156,7 +156,7 @@ The easiest way to create this directory, may be to copy it from the master node
### Starting the API Server
Once these files exist, copy the [kube-apiserver.yaml](high-availability/kube-apiserver.yaml) into `/etc/kubernetes/manifests/` on each master node.
Once these files exist, copy the [kube-apiserver.yaml](/{{page.version}}/docs/admin/high-availability/kube-apiserver.yaml) into `/etc/kubernetes/manifests/` on each master node.
The kubelet monitors this directory, and will automatically create an instance of the `kube-apiserver` container using the pod definition specified
in the file.
@@ -185,7 +185,7 @@ master election. On each of the three apiserver nodes, we run a small utility a
election protocol using etcd "compare and swap". If the apiserver node wins the election, it starts the master component it is managing (e.g. the scheduler), if it
loses the election, it ensures that any master components running on the node (e.g. the scheduler) are stopped.
In the future, we expect to more tightly integrate this lease-locking into the scheduler and controller-manager binaries directly, as described in the [high availability design proposal](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/proposals/high-availability.md)
In the future, we expect to more tightly integrate this lease-locking into the scheduler and controller-manager binaries directly, as described in the [high availability design proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/high-availability.md)
### Installing configuration files
@@ -197,11 +197,11 @@ touch /var/log/kube-controller-manager.log
```
Next, set up the descriptions of the scheduler and controller manager pods on each node
by copying [kube-scheduler.yaml](high-availability/kube-scheduler.yaml) and [kube-controller-manager.yaml](high-availability/kube-controller-manager.yaml) into the `/srv/kubernetes/` directory.
by copying [kube-scheduler.yaml](/{{page.version}}/docs/admin/high-availability/kube-scheduler.yaml) and [kube-controller-manager.yaml](/{{page.version}}/docs/admin/high-availability/kube-controller-manager.yaml) into the `/srv/kubernetes/` directory.
### Running the podmaster
Now that the configuration files are in place, copy the [podmaster.yaml](high-availability/podmaster.yaml) config file into `/etc/kubernetes/manifests/`
Now that the configuration files are in place, copy the [podmaster.yaml](/{{page.version}}/docs/admin/high-availability/podmaster.yaml) config file into `/etc/kubernetes/manifests/`
As before, the kubelet on the node monitors this directory, and will start an instance of the podmaster using the pod specification provided in `podmaster.yaml`.


@@ -26,7 +26,7 @@ This example demonstrates how limits can be applied to a Kubernetes namespace to
min/max resource limits per pod. In addition, this example demonstrates how you can
apply default resource limits to pods in the absence of an end-user specified value.
See [LimitRange design doc](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/admission_control_limit_range.md) for more information. For a detailed description of the Kubernetes resource model, see [Resources](/{{page.version}}/docs/user-guide/compute-resources)
See [LimitRange design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_limit_range.md) for more information. For a detailed description of the Kubernetes resource model, see [Resources](/{{page.version}}/docs/user-guide/compute-resources)
## Step 0: Prerequisites


@@ -7,7 +7,7 @@ This document describes some of the issues to consider when making a decision ab
Note that at present,
Kubernetes does not offer a mechanism to aggregate multiple clusters into a single virtual cluster. However,
we [plan to do this in the future](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/proposals/federation.md).
we [plan to do this in the future](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/federation.md).
## Scope of a single cluster


@@ -37,7 +37,7 @@ The Namespace provides a unique scope for:
## Usage
Look [here](namespaces/) for an in-depth example of namespaces.
Look [here](/{{page.version}}/docs/admin/namespaces/) for an in-depth example of namespaces.
### Viewing namespaces
@@ -84,13 +84,13 @@ to define *Hard* resource usage limits that a *Namespace* may consume.
A limit range defines min/max constraints on the amount of resources a single entity can consume in
a *Namespace*.
See [Admission control: Limit Range](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/admission_control_limit_range.md)
See [Admission control: Limit Range](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_limit_range.md)
A namespace can be in one of two phases:
* `Active` the namespace is in use
* `Terminating` the namespace is being deleted, and cannot be used for new objects
See the [design doc](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/namespaces.md#phases) for more details.
See the [design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#phases) for more details.
### Creating a new namespace
@@ -105,7 +105,7 @@ metadata:
Note that the name of your namespace must be a DNS-compatible label.
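That constraint can be sketched as a quick shell check (the helper name and the sample names below are hypothetical):

```shell
# Rough check that a namespace name is a DNS-compatible label:
# lowercase alphanumerics and '-', starting/ending with an alphanumeric.
is_dns_label() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$'
}
is_dns_label "development" && echo "development: ok"
is_dns_label "Dev_Team"    || echo "Dev_Team: invalid"
# → development: ok
# → Dev_Team: invalid
```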
More information on the `finalizers` field can be found in the namespace [design doc](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/namespaces.md#finalizers).
More information on the `finalizers` field can be found in the namespace [design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#finalizers).
Then run:
@@ -132,7 +132,7 @@ This delete is asynchronous, so for a time you will see the namespace in the `Te
## Namespaces and DNS
When you create a [Service](/{{page.version}}/docs/user-guide/services), it creates a corresponding [DNS entry](dns).
When you create a [Service](/{{page.version}}/docs/user-guide/services), it creates a corresponding [DNS entry](/{{page.version}}/docs/admin/dns).
This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means
that if a container just uses `<service-name>` it will resolve to the service which
is local to a namespace. This is useful for using the same configuration across
@@ -141,5 +141,5 @@ across namespaces, you need to use the fully qualified domain name (FQDN).
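The naming scheme above can be illustrated with a tiny sketch (the service and namespace names are made up; `cluster.local` is the conventional default cluster domain):

```shell
# How a Service's cluster-local DNS name is composed from its service
# name and namespace (illustrative values).
service=my-service
namespace=development
echo "${service}.${namespace}.svc.cluster.local"
# → my-service.development.svc.cluster.local
```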
## Design
Details of the design of namespaces in Kubernetes, including a [detailed example](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/namespaces.md#example-openshift-origin-managing-a-kubernetes-namespace)
can be found in the [namespaces design doc](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/namespaces.md)
Details of the design of namespaces in Kubernetes, including a [detailed example](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#example-openshift-origin-managing-a-kubernetes-namespace)
can be found in the [namespaces design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md)


@@ -1,7 +1,7 @@
---
title: "Kubernetes Namespaces"
---
Kubernetes _[namespaces](/{{page.version}}/docs/admin/namespaces)_ help different projects, teams, or customers to share a Kubernetes cluster.
Kubernetes _namespaces_ help different projects, teams, or customers to share a Kubernetes cluster.
It does this by providing the following:
@@ -49,7 +49,7 @@ One pattern this organization could follow is to partition the Kubernetes cluste
Let's create two new namespaces to hold our work.
Use the file [`namespace-dev.json`](namespace-dev.json) which describes a development namespace:
Use the file [`namespace-dev.json`](/{{page.version}}/docs/admin/namespaces/namespace-dev.json) which describes a development namespace:
<!-- BEGIN MUNGE: EXAMPLE namespace-dev.json -->
@@ -66,7 +66,7 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo
}
```
[Download example](namespace-dev.json)
[Download example](/{{page.version}}/docs/admin/namespaces/namespace-dev.json)
<!-- END MUNGE: EXAMPLE namespace-dev.json -->
Create the development namespace using kubectl.


@@ -163,7 +163,7 @@ people have reported success with Flannel and Kubernetes.
### OpenVSwitch
[OpenVSwitch](ovs-networking) is a somewhat more mature but also
[OpenVSwitch](/{{page.version}}/docs/admin/ovs-networking) is a somewhat more mature but also
complicated way to build an overlay network. This is endorsed by several of the
"Big Shops" for networking.
@@ -181,4 +181,4 @@ IPs.
The early design of the networking model and its rationale, and some future
plans are described in more detail in the [networking design
document](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/networking.md).
document](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/networking.md).


@@ -10,7 +10,7 @@ title: "Node"
may be a VM or physical machine, depending on the cluster. Each node has
the services necessary to run [Pods](/{{page.version}}/docs/user-guide/pods) and is managed by the master
components. The services on a node include docker, kubelet and network proxy. See
[The Kubernetes Node](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/architecture.md#the-kubernetes-node) section in the
[The Kubernetes Node](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/architecture.md#the-kubernetes-node) section in the
architecture design doc for more details.
## Node Status


@@ -147,8 +147,8 @@ restrictions around nodes: pods from several namespaces may run on the same node
## Example
See a [detailed example for how to use resource quota](resourcequota/).
See a [detailed example for how to use resource quota](/{{page.version}}/docs/admin/resourcequota/).
## Read More
See [ResourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/admission_control_resource_quota.md) for more information.
See [ResourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_resource_quota.md) for more information.


@@ -3,7 +3,7 @@ title: "Resource Quota"
---
This example demonstrates how [resource quota](/{{page.version}}/docs/admin/admission-controllers/#resourcequota) and
[limitranger](/{{page.version}}/docs/admin/admission-controllers/#limitranger) can be applied to a Kubernetes namespace.
See [ResourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/admission_control_resource_quota.md) for more information.
See [ResourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_resource_quota.md) for more information.
This example assumes you have a functional Kubernetes setup.


@@ -99,4 +99,4 @@ We should define a grains.conf key that captures more specifically what network
## Further reading
The [cluster/saltbase](http://releases.k8s.io/release-1.1/cluster/saltbase/) tree has more details on the current SaltStack configuration.
The [cluster/saltbase](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/) tree has more details on the current SaltStack configuration.

View File

@ -30,7 +30,7 @@ multiple API versions, each at a different API path, such as `/api/v1` or
We chose to version at the API level rather than at the resource or field level to ensure that the API presents a clear, consistent view of system resources and behavior, and to enable controlling access to end-of-lifed and/or experimental APIs.
Note that API versioning and Software versioning are only indirectly related. The [API and release
versioning proposal](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/versioning.md) describes the relationship between API versioning and
versioning proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/versioning.md) describes the relationship between API versioning and
software versioning.
@ -60,7 +60,7 @@ in more detail in the [API Changes documentation](/{{page.version}}/docs/devel/a
## API groups
To make it easier to extend the Kubernetes API, we are in the process of implementing [*API
groups*](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/proposals/api-group.md). These are simply different interfaces to read and/or modify the
groups*](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/api-group.md). These are simply different interfaces to read and/or modify the
same underlying resources. The API group is specified in a REST path and in the `apiVersion` field
of a serialized object.
@ -73,7 +73,7 @@ Currently there are two API groups in use:
In the future we expect that there will be more API groups, all at REST path `/apis/$API_GROUP` and
using `apiVersion: $API_GROUP/$VERSION`. We expect that there will be a way for [third parties to
create their own API groups](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/extending-api.md), and to avoid naming collisions.
create their own API groups](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/extending-api.md), and to avoid naming collisions.
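As a sketch of how the group name surfaces in serialized objects, compare the core ("legacy") group with a named group (the `DaemonSet` kind in `extensions/v1beta1` is used here as an assumed illustration):

```yaml
# Core group: REST path /api/v1, no group name in apiVersion
apiVersion: v1
kind: Pod
---
# Named group: REST path /apis/extensions/v1beta1
apiVersion: extensions/v1beta1
kind: DaemonSet
```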
## Enabling resources in the extensions group

View File

@ -139,13 +139,13 @@ In general, condition values may change back and forth, but some condition trans
A typical oscillating condition type is `Ready`, which indicates the object was believed to be fully operational at the time it was last probed. A possible monotonic condition could be `Succeeded`. A `False` status for `Succeeded` would imply failure. An object that was still active would not have a `Succeeded` condition, or its status would be `Unknown`.
Some resources in the v1 API contain fields called **`phase`**, and associated `message`, `reason`, and other status fields. The pattern of using `phase` is deprecated. Newer API types should use conditions instead. Phase was essentially a state-machine enumeration field, that contradicted [system-design principles](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/principles.md#control-logic) and hampered evolution, since [adding new enum values breaks backward compatibility](/{{page.version}}/docs/devel/api_changes). Rather than encouraging clients to infer implicit properties from phases, we intend to explicitly expose the conditions that clients need to monitor. Conditions also have the benefit that it is possible to create some conditions with uniform meaning across all resource types, while still exposing others that are unique to specific resource types. See [#7856](http://issues.k8s.io/7856) for more details and discussion.
Some resources in the v1 API contain fields called **`phase`**, and associated `message`, `reason`, and other status fields. The pattern of using `phase` is deprecated. Newer API types should use conditions instead. Phase was essentially a state-machine enumeration field, that contradicted [system-design principles](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/principles.md#control-logic) and hampered evolution, since [adding new enum values breaks backward compatibility](/{{page.version}}/docs/devel/api_changes). Rather than encouraging clients to infer implicit properties from phases, we intend to explicitly expose the conditions that clients need to monitor. Conditions also have the benefit that it is possible to create some conditions with uniform meaning across all resource types, while still exposing others that are unique to specific resource types. See [#7856](http://issues.k8s.io/7856) for more details and discussion.
In condition types, and everywhere else they appear in the API, **`Reason`** is intended to be a one-word, CamelCase representation of the category of cause of the current status, and **`Message`** is intended to be a human-readable phrase or sentence, which may contain specific details of the individual occurrence. `Reason` is intended to be used in concise output, such as one-line `kubectl get` output, and in summarizing occurrences of causes, whereas `Message` is intended to be presented to users in detailed status explanations, such as `kubectl describe` output.
Historical information status (e.g., last transition time, failure counts) is only provided with reasonable effort, and is not guaranteed to not be lost.
Status information that may be large (especially proportional in size to collections of other resources, such as lists of references to other objects -- see below) and/or rapidly changing, such as [resource usage](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/resources.md#usage-data), should be put into separate objects, with possibly a reference from the original object. This helps to ensure that GETs and watch remain reasonably efficient for the majority of clients, which may not need that data.
Status information that may be large (especially proportional in size to collections of other resources, such as lists of references to other objects -- see below) and/or rapidly changing, such as [resource usage](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/resources.md#usage-data), should be put into separate objects, with possibly a reference from the original object. This helps to ensure that GETs and watch remain reasonably efficient for the majority of clients, which may not need that data.
Some resources report the `observedGeneration`, which is the `generation` most recently observed by the component responsible for acting upon changes to the desired state of the resource. This can be used, for instance, to ensure that the reported status reflects the most recent desired status.
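A hypothetical status stanza sketching these conventions (the condition values, timestamp, and `reason`/`message` strings below are illustrative, not taken from any specific resource):

```yaml
status:
  conditions:
  - type: Ready
    status: "False"
    lastTransitionTime: "2016-02-22T19:00:00Z"
    reason: KubeletNotReady            # one-word, CamelCase category of cause
    message: "container runtime is down"  # human-readable detail for kubectl describe
  observedGeneration: 3                # generation most recently acted upon
```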

View File

@ -251,7 +251,7 @@ Breaking compatibility of a beta or stable API version, such as v1, is unaccepta
Compatibility for experimental or alpha APIs is not strictly required, but
breaking compatibility should not be done lightly, as it disrupts all users of the
feature. Experimental APIs may be removed. Alpha and beta API versions may be deprecated
and eventually removed wholesale, as described in the [versioning document](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/versioning.md).
and eventually removed wholesale, as described in the [versioning document](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/versioning.md).
Document incompatible changes across API versions under the [conversion tips](/{{page.version}}/docs/api/).
If your change is going to be backward incompatible or might be a breaking change for API
@ -494,7 +494,7 @@ doing!
## Write end-to-end tests
Check out the [E2E docs](e2e-tests) for detailed information about how to write end-to-end
Check out the [E2E docs](/{{page.version}}/docs/devel/e2e-tests) for detailed information about how to write end-to-end
tests for your feature.
## Examples and docs

View File

@ -22,7 +22,7 @@ particular, they may be self-merged by the release branch owner without fanfare,
in the case the release branch owner knows the cherry pick was already
requested - this should not be the norm, but it may happen.
[Contributor License Agreements](http://releases.k8s.io/release-1.1/CONTRIBUTING.md) is considered implicit
A [Contributor License Agreement](http://releases.k8s.io/{{page.githubbranch}}/CONTRIBUTING.md) is considered implicit
for all code within cherry-pick pull requests, ***unless there is a large
conflict***.

View File

@ -3,7 +3,7 @@ title: "Kubernetes API client libraries"
---
### Supported
* [Go](http://releases.k8s.io/release-1.1/pkg/client/)
* [Go](http://releases.k8s.io/{{page.githubbranch}}/pkg/client/)
### User Contributed

View File

@ -2,13 +2,11 @@
title: "devel/coding-conventions"
---
Code conventions
- Bash
- https://google-styleguide.googlecode.com/svn/trunk/shell.xml
- Ensure that build, release, test, and cluster-management scripts run on OS X
- Go
- Ensure your code passes the [presubmit checks](development/#hooks)
- Ensure your code passes the [presubmit checks](/{{page.version}}/docs/devel/development/#hooks)
- [Go Code Review Comments](https://github.com/golang/go/wiki/CodeReviewComments)
- [Effective Go](https://golang.org/doc/effective_go)
- Comment your code.
@ -27,8 +25,8 @@ Code conventions
- API conventions
- [API changes](/{{page.version}}/docs/devel/api_changes)
- [API conventions](/{{page.version}}/docs/devel/api-conventions)
- [Kubectl conventions](kubectl-conventions)
- [Logging conventions](logging)
- [Kubectl conventions](/{{page.version}}/docs/devel/kubectl-conventions)
- [Logging conventions](/{{page.version}}/docs/devel/logging)
Testing conventions
@ -58,6 +56,3 @@ Coding advice
- Go
- [Go landmines](https://gist.github.com/lavalamp/4bd23295a9f32706a48f)

View File

@ -38,7 +38,4 @@ PRs that are incorrectly judged to be merge-able, may be reverted and subject to
## Holds
Any maintainer or core contributor who wants to review a PR but does not have time immediately may put a hold on a PR simply by saying so on the PR discussion and offering an ETA measured in single-digit days at most. Any PR that has a hold shall not be merged until the person who requested the hold acks the review, withdraws their hold, or is overruled by a preponderance of maintainers.
Any maintainer or core contributor who wants to review a PR but does not have time immediately may put a hold on a PR simply by saying so on the PR discussion and offering an ETA measured in single-digit days at most. Any PR that has a hold shall not be merged until the person who requested the hold acks the review, withdraws their hold, or is overruled by a preponderance of maintainers.

View File

@ -251,7 +251,7 @@ my-nginx nginx run=my-nginx 3
```
We did not start any services, hence there are none listed. But we see three replicas displayed properly.
Check the [guestbook](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/guestbook/) application to learn how to create a service.
Check the [guestbook](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) application to learn how to create a service.
You can already play with scaling the replicas with:
```shell

View File

@ -3,7 +3,7 @@ title: "Development Guide"
---
# Releases and Official Builds
Official releases are built in Docker containers. Details are [here](http://releases.k8s.io/release-1.1/build/README.md). You can do simple builds and development with just a local Docker installation. If want to build go locally outside of docker, please continue below.
Official releases are built in Docker containers. Details are [here](http://releases.k8s.io/{{page.githubbranch}}/build/README.md). You can do simple builds and development with just a local Docker installation. If you want to build Go locally outside of Docker, please continue below.
## Go development environment
@ -66,7 +66,7 @@ git push -f origin myfeature
1. Visit https://github.com/$YOUR_GITHUB_USERNAME/kubernetes
2. Click the "Compare and pull request" button next to your "myfeature" branch.
3. Check out the pull request [process](pull-requests) for more details
3. Check out the pull request [process](/{{page.version}}/docs/devel/pull-requests) for more details
### When to retain commits and when to squash
@ -80,7 +80,7 @@ fixups (e.g. automated doc formatting), use one or more commits for the
changes to tooling and a final commit to apply the fixup en masse. This makes
reviews much easier.
See [Faster Reviews](faster_reviews) for more details.
See [Faster Reviews](/{{page.version}}/docs/devel/faster_reviews) for more details.
## godep and dependency management
@ -297,18 +297,18 @@ go run hack/e2e.go -v -ctl='delete pod foobar'
## Conformance testing
End-to-end testing, as described above, is for [development
distributions](writing-a-getting-started-guide). A conformance test is used on
a [versioned distro](writing-a-getting-started-guide).
distributions](/{{page.version}}/docs/devel/writing-a-getting-started-guide). A conformance test is used on
a [versioned distro](/{{page.version}}/docs/devel/writing-a-getting-started-guide).
The conformance test runs a subset of the e2e-tests against a manually-created cluster. It does not
require support for up/push/down and other operations. To run a conformance test, you need to know the
IP of the master for your cluster and the authorization arguments to use. The conformance test is
intended to run against a cluster at a specific binary release of Kubernetes.
See [conformance-test.sh](http://releases.k8s.io/release-1.1/hack/conformance-test.sh).
See [conformance-test.sh](http://releases.k8s.io/{{page.githubbranch}}/hack/conformance-test.sh).
## Testing out flaky tests
[Instructions here](flaky-tests)
[Instructions here](/{{page.version}}/docs/devel/flaky-tests)
## Regenerating the CLI documentation

View File

@ -1,7 +1,7 @@
---
title: "Getting Kubernetes Builds"
---
You can use [hack/get-build.sh](http://releases.k8s.io/release-1.1/hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build).
You can use [hack/get-build.sh](http://releases.k8s.io/{{page.githubbranch}}/hack/get-build.sh), or use it as a reference for how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our CI and GCE e2e tests (essentially a nightly build).
Run `./hack/get-build.sh -h` for its usage.

View File

@ -64,7 +64,7 @@ Guide](/{{page.version}}/docs/admin/).
Authorization applies to all HTTP requests on the main apiserver port.
This doc explains the available authorization implementations.
* **Admission Control Plugins** ([admission_control](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/admission_control.md))
* **Admission Control Plugins** ([admission_control](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control.md))
## Building releases

View File

@ -21,30 +21,30 @@ divided by the node's capacity).
Finally, the node with the highest priority is chosen
(or, if there are multiple such nodes, then one of them is chosen at random). The code
for this main scheduling loop is in the function `Schedule()` in
[plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/release-1.1/plugin/pkg/scheduler/generic_scheduler.go)
[plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/{{page.githubbranch}}/plugin/pkg/scheduler/generic_scheduler.go)
## Scheduler extensibility
The scheduler is extensible: the cluster administrator can choose which of the pre-defined
scheduling policies to apply, and can add new ones. The built-in predicates and priorities are
defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/release-1.1/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and
[plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/release-1.1/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively.
defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/{{page.githubbranch}}/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and
[plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/{{page.githubbranch}}/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively.
The policies that are applied when scheduling can be chosen in one of two ways. Normally,
the policies used are selected by the functions `defaultPredicates()` and `defaultPriorities()` in
[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/release-1.1/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/{{page.githubbranch}}/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
However, the choice of policies
can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON
file specifying which scheduling policies to use. See
[examples/scheduler-policy-config.json](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/scheduler-policy-config.json) for an example
[examples/scheduler-policy-config.json](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/scheduler-policy-config.json) for an example
config file. (Note that the config file format is versioned; the API is defined in
[plugin/pkg/scheduler/api](http://releases.k8s.io/release-1.1/plugin/pkg/scheduler/api/)).
[plugin/pkg/scheduler/api](http://releases.k8s.io/{{page.githubbranch}}/plugin/pkg/scheduler/api/)).
Thus to add a new scheduling policy, you should modify predicates.go or priorities.go,
and either register the policy in `defaultPredicates()` or `defaultPriorities()`, or use a policy config file.
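As a sketch, a policy file of the kind passed via `--policy-config-file` might look like this (the predicate and priority names shown are illustrative; consult the linked example file for the actual list in your release):

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsResources"},
    {"name": "NoDiskConflict"},
    {"name": "MatchNodeSelector"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1},
    {"name": "BalancedResourceAllocation", "weight": 1}
  ]
}
```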
## Exploring the code
If you want to get a global picture of how the scheduler works, you can start in
[plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/release-1.1/plugin/cmd/kube-scheduler/app/server.go)
[plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/{{page.githubbranch}}/plugin/cmd/kube-scheduler/app/server.go)

View File

@ -1,20 +1,20 @@
---
title: "Scheduler Algorithm in Kubernetes"
---
For each unscheduled Pod, the Kubernetes scheduler tries to find a node across the cluster according to a set of rules. A general introduction to the Kubernetes scheduler can be found at [scheduler.md](scheduler). In this document, the algorithm of how to select a node for the Pod is explained. There are two steps before a destination node of a Pod is chosen. The first step is filtering all the nodes and the second is ranking the remaining nodes to find a best fit for the Pod.
For each unscheduled Pod, the Kubernetes scheduler tries to find a node across the cluster according to a set of rules. A general introduction to the Kubernetes scheduler can be found at [scheduler.md](/{{page.version}}/docs/devel/scheduler). This document explains how a node is selected for a Pod. Choosing a destination node takes two steps: the first filters all the nodes, and the second ranks the remaining nodes to find the best fit for the Pod.
## Filtering the nodes
The purpose of filtering the nodes is to filter out the nodes that do not meet certain requirements of the Pod. For example, if the free resource on a node (measured by the capacity minus the sum of the resource requests of all the Pods that already run on the node) is less than the Pod's required resource, the node should not be considered in the ranking phase so it is filtered out. Currently, there are several "predicates" implementing different filtering policies, including:
- `NoDiskConflict`: Evaluate if a pod can fit due to the volumes it requests, and those that are already mounted.
- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of requests of all Pods on the node. To learn more about the resource QoS in Kubernetes, please check [QoS proposal](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/proposals/resource-qos.md).
- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of requests of all Pods on the node. To learn more about the resource QoS in Kubernetes, please check [QoS proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/resource-qos.md).
- `PodFitsHostPorts`: Check if any HostPort required by the Pod is already occupied on the node.
- `PodFitsHost`: Filter out all nodes except the one specified in the PodSpec's NodeName field.
- `PodSelectorMatches`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field ([Here](/{{page.version}}/docs/user-guide/node-selection/) is an example of how to use `nodeSelector` field).
- `CheckNodeLabelPresence`: Check if all the specified labels exist on a node or not, regardless of the value.
The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/release-1.1/plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/release-1.1/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/{{page.githubbranch}}/plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/{{page.githubbranch}}/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
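For example, the `PodSelectorMatches` predicate acts on a pod's `nodeSelector` field, sketched here with a hypothetical label:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeSelector:
    disktype: ssd      # only nodes carrying this label pass the predicate
  containers:
  - name: nginx
    image: nginx
```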
## Ranking the nodes
@ -32,7 +32,7 @@ Currently, Kubernetes scheduler provides some practical priority functions, incl
- `CalculateSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on the same node.
- `CalculateAntiAffinityPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on nodes with the same value for a particular label.
The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/release-1.1/plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/release-1.1/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). Similar as predicates, you can combine the above priority functions and assign weight factors (positive number) to them as you want (check [scheduler.md](scheduler) for how to customize).
The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/{{page.githubbranch}}/plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/{{page.githubbranch}}/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). As with predicates, you can combine the above priority functions and assign weight factors (positive numbers) to them as you wish (check [scheduler.md](/{{page.version}}/docs/devel/scheduler) for how to customize).
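The scores from the enabled priority functions are combined as a weighted sum over the chosen functions and their weights, roughly:

```
finalScoreNode = (weight1 * priorityFunc1) + (weight2 * priorityFunc2) + ...
```

The node with the highest combined score wins; ties are broken at random.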

View File

@ -37,7 +37,7 @@ These guidelines say *what* to do. See the Rationale section for *why*.
own repo.
- Add or update a row in [The Matrix](/{{page.version}}/docs/getting-started-guides/).
- State the binary version of Kubernetes that you tested clearly in your Guide doc.
- Setup a cluster and run the [conformance test](development/#conformance-testing) against it, and report the
- Setup a cluster and run the [conformance test](/{{page.version}}/docs/devel/development/#conformance-testing) against it, and report the
results in your PR.
- Versioned distros should typically not modify or add code in `cluster/`. That is just scripts for developer
distros.

View File

@ -28,16 +28,16 @@ export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash
export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash
```
NOTE: This script calls [cluster/kube-up.sh](http://releases.k8s.io/release-1.1/cluster/kube-up.sh)
which in turn calls [cluster/aws/util.sh](http://releases.k8s.io/release-1.1/cluster/aws/util.sh)
using [cluster/aws/config-default.sh](http://releases.k8s.io/release-1.1/cluster/aws/config-default.sh).
NOTE: This script calls [cluster/kube-up.sh](http://releases.k8s.io/{{page.githubbranch}}/cluster/kube-up.sh)
which in turn calls [cluster/aws/util.sh](http://releases.k8s.io/{{page.githubbranch}}/cluster/aws/util.sh)
using [cluster/aws/config-default.sh](http://releases.k8s.io/{{page.githubbranch}}/cluster/aws/config-default.sh).
This process takes about 5 to 10 minutes. Once the cluster is up, the IP addresses of your master and node(s) will be printed,
as well as information about the default services running in the cluster (monitoring, logging, dns). User credentials and security
tokens are written in `~/.kube/config`; they will be necessary to use the CLI or the HTTP Basic Auth.
By default, the script will provision a new VPC and a 4 node k8s cluster in us-west-2a (Oregon) with `t2.micro` instances running on Ubuntu.
You can override the variables defined in [config-default.sh](http://releases.k8s.io/release-1.1/cluster/aws/config-default.sh) to change this behavior as follows:
You can override the variables defined in [config-default.sh](http://releases.k8s.io/{{page.githubbranch}}/cluster/aws/config-default.sh) to change this behavior as follows:
```shell
export KUBE_AWS_ZONE=eu-west-1c
@ -84,9 +84,9 @@ For more information, please read [kubeconfig files](/{{page.version}}/docs/user
See [a simple nginx example](/{{page.version}}/docs/user-guide/simple-nginx) to try out your new cluster.
The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/guestbook/)
The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/)
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/)
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/)
## Tearing down the cluster

View File

@ -21,7 +21,7 @@ cd kubernetes
make release
```
For more details on the release process see the [`build/` directory](http://releases.k8s.io/release-1.1/build/)
For more details on the release process see the [`build/` directory](http://releases.k8s.io/{{page.githubbranch}}/build/)

View File

@ -1,9 +1,6 @@
---
title: "Getting Started on CoreOS"
---
* TOC
{:toc}

View File

@ -212,7 +212,7 @@ You then should be able to access it from anywhere via the Azure virtual IP for
You now have a full-blown cluster running in Azure, congrats!
You should probably try deploy other [example apps](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/) or write your own ;)
You should probably try deploying other [example apps](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) or write your own ;)
## Tear down...

View File

@ -117,4 +117,4 @@ Once complete, restart the server. When it comes back up, you should have SSH a
## Testing the Cluster
You should now have a functional bare-metal Kubernetes cluster with one master and two compute hosts.
Try running the [guestbook demo](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/guestbook/) to test out your new cluster!
Try running the [guestbook demo](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) to test out your new cluster!

View File

@ -648,7 +648,7 @@ Now that the CoreOS with Kubernetes installed is up and running lets spin up som
See [a simple nginx example](/{{page.version}}/docs/user-guide/simple-nginx) to try out your new cluster.
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/).
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/).
## Helping commands for debugging

View File

@ -28,7 +28,7 @@ Explore the following resources for more information about Kubernetes, Kubernete
- [DCOS Documentation](https://docs.mesosphere.com/)
- [Managing DCOS Services](https://docs.mesosphere.com/services/kubernetes/)
- [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/)
- [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/)
- [Kubernetes on Mesos Documentation](https://releases.k8s.io/release-1.1/contrib/mesos/README.md)
- [Kubernetes on Mesos Release Notes](https://github.com/mesosphere/kubernetes-mesos/releases)
- [Kubernetes on DCOS Package Source](https://github.com/mesosphere/kubernetes-mesos)
@ -105,7 +105,7 @@ $ dcos kubectl get pods --namespace=kube-system
Names and ages may vary.
Now that Kubernetes is installed on DCOS, you may wish to explore the [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/) or the [Kubernetes User Guide](/{{page.version}}/docs/user-guide/).
Now that Kubernetes is installed on DCOS, you may wish to explore the [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) or the [Kubernetes User Guide](/{{page.version}}/docs/user-guide/).
## Uninstall

View File

@ -10,7 +10,7 @@ Here's a diagram of what the final result will look like:
![Kubernetes Single Node on Docker](/images/docs/k8s-docker.png)
_Note_:
These instructions are somewhat significantly more advanced than the [single node](docker) instructions. If you are
These instructions are significantly more advanced than the [single node](/{{page.version}}/docs/getting-started-guides/docker) instructions. If you are
interested in just starting to explore Kubernetes, we recommend that you start there.
_Note_:
@ -81,4 +81,4 @@ See [here](/{{page.version}}/docs/getting-started-guides/docker-multinode/deploy
Once your cluster has been created you can [test it out](/{{page.version}}/docs/getting-started-guides/docker-multinode/testing)
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/)
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/)


@ -5,9 +5,9 @@ title: "Deploy DNS"
First of all, download the template dns rc and svc file from
-[skydns-rc template](skydns-rc.yaml.in)
+[skydns-rc template](/{{page.version}}/docs/getting-started-guides/docker-multinode/skydns-rc.yaml.in)
-[skydns-svc template](skydns-svc.yaml.in)
+[skydns-svc template](/{{page.version}}/docs/getting-started-guides/docker-multinode/skydns-svc.yaml.in)
### Set env


@ -176,4 +176,4 @@ If all else fails, ask questions on [Slack](/{{page.version}}/docs/troubleshooti
### Next steps
-Move on to [adding one or more workers](worker) or [deploy a dns](deployDNS)
+Move on to [adding one or more workers](/{{page.version}}/docs/getting-started-guides/docker-multinode/worker) or [deploy a dns](/{{page.version}}/docs/getting-started-guides/docker-multinode/deployDNS)


@ -3,7 +3,7 @@ title: "Adding a Kubernetes worker node via Docker."
---
These instructions are very similar to the master set-up above, but they are duplicated for clarity.
You need to repeat these instructions for each node you want to join the cluster.
-We will assume that the IP address of this node is `${NODE_IP}` and you have the IP address of the master in `${MASTER_IP}` that you created in the [master instructions](master).
+We will assume that the IP address of this node is `${NODE_IP}` and you have the IP address of the master in `${MASTER_IP}` that you created in the [master instructions](/{{page.version}}/docs/getting-started-guides/docker-multinode/master).
For each worker node, there are three steps:
@ -136,4 +136,4 @@ sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1
### Next steps
-Move on to [testing your cluster](testing) or add another node](#).
+Move on to [testing your cluster](/{{page.version}}/docs/getting-started-guides/docker-multinode/testing) or add another node](#).


@ -1,7 +1,7 @@
---
title: "Kubernetes multiple nodes cluster with flannel on Fedora"
---
-This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](fedora_manual_config) to setup 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on Kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network.
+This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](/{{page.version}}/docs/getting-started-guides/fedora/fedora_manual_config) to setup 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on Kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network.
* TOC
{:toc}


@ -40,7 +40,7 @@ wget -q -O - https://get.k8s.io | bash
Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster.
-By default, some containers will already be running on your cluster. Containers like `kibana` and `elasticsearch` provide [logging](logging), while `heapster` provides [monitoring](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/README.md) services.
+By default, some containers will already be running on your cluster. Containers like `kibana` and `elasticsearch` provide [logging](/{{page.version}}/docs/getting-started-guides/logging), while `heapster` provides [monitoring](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/README.md) services.
The script run by the commands above creates a cluster with the name/prefix "kubernetes". It defines one specific cluster config, so you can't run it more than once.
@ -53,7 +53,7 @@ cluster/kube-up.sh
If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `<kubernetes>/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.
-If you run into trouble, please see the section on [troubleshooting](gce/#troubleshooting), post to the
+If you run into trouble, please see the section on [troubleshooting](/{{page.version}}/docs/getting-started-guides/gce/#troubleshooting), post to the
[google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on [Slack](/{{page.version}}/docs/troubleshooting/#slack).
The next few steps will show you:
@ -152,7 +152,7 @@ Some of the pods may take a few seconds to start up (during this time they'll sh
Then, see [a simple nginx example](/{{page.version}}/docs/user-guide/simple-nginx) to try out your new cluster.
-For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/). The [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/guestbook/) is a good "getting started" walkthrough.
+For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/). The [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) is a good "getting started" walkthrough.
### Tearing down the cluster


@ -5,7 +5,7 @@ Kubernetes can run on a range of platforms, from your laptop, to VMs on a cloud
bare metal servers. The effort required to set up a cluster varies from running a single command to
crafting your own customized cluster. We'll guide you in picking a solution that fits for your needs.
-If you just want to "kick the tires" on Kubernetes, we recommend the [local Docker-based](docker) solution.
+If you just want to "kick the tires" on Kubernetes, we recommend the [local Docker-based](/{{page.version}}/docs/getting-started-guides/docker) solution.
The local Docker-based solution is one of several [Local cluster](#local-machine-solutions) solutions
that are quick to set up, but are limited to running on one machine.
@ -31,9 +31,9 @@ But their size and availability is limited to that of a single machine.
The local-machine solutions are:
-- [Local Docker-based](docker) (recommended starting point)
-- [Vagrant](vagrant) (works on any platform with Vagrant: Linux, MacOS, or Windows.)
-- [No-VM local cluster](locally) (Linux only)
+- [Local Docker-based](/{{page.version}}/docs/getting-started-guides/docker) (recommended starting point)
+- [Vagrant](/{{page.version}}/docs/getting-started-guides/vagrant) (works on any platform with Vagrant: Linux, MacOS, or Windows.)
+- [No-VM local cluster](/{{page.version}}/docs/getting-started-guides/locally) (Linux only)
### Hosted Solutions
@ -58,7 +58,7 @@ base operating systems.
If you can find a guide below that matches your needs, use it. It may be a little out of date, but
it will be easier than starting from scratch. If you do want to start from scratch because you
have special requirements or just because you want to understand what is underneath a Kubernetes
-cluster, try the [Getting Started from Scratch](scratch) guide.
+cluster, try the [Getting Started from Scratch](/{{page.version}}/docs/getting-started-guides/scratch) guide.
If you are interested in supporting Kubernetes on a new platform, check out our [advice for
writing a new solution](/{{page.version}}/docs/devel/writing-a-getting-started-guide).


@ -192,7 +192,7 @@ juju add-unit docker # creates unit docker/2, kubernetes/2, docker-flannel/2
## Launch the "k8petstore" example app
-The [k8petstore example](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/k8petstore/) is available as a
+The [k8petstore example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/k8petstore/) is available as a
[juju action](https://jujucharms.com/docs/devel/actions).
```shell
@ -221,7 +221,7 @@ juju destroy-environment --force `juju env`
The Kubernetes charms and bundles can be found in the `kubernetes` project on
github.com:
-- [Bundle Repository](http://releases.k8s.io/release-1.1/cluster/juju/bundles)
+- [Bundle Repository](http://releases.k8s.io/{{page.githubbranch}}/cluster/juju/bundles)
* [Kubernetes master charm](https://releases.k8s.io/release-1.1/cluster/juju/charms/trusty/kubernetes-master)
* [Kubernetes node charm](https://releases.k8s.io/release-1.1/cluster/juju/charms/trusty/kubernetes)
- [More about Juju](https://jujucharms.com)


@ -8,7 +8,7 @@ title: "Getting started locally"
#### Linux
-Not running Linux? Consider running Linux in a local virtual machine with [Vagrant](vagrant), or on a cloud provider like [Google Compute Engine](gce)
+Not running Linux? Consider running Linux in a local virtual machine with [Vagrant](/{{page.version}}/docs/getting-started-guides/vagrant), or on a cloud provider like [Google Compute Engine](/{{page.version}}/docs/getting-started-guides/gce)
#### Docker


@ -2,7 +2,7 @@
title: "Cluster Level Logging with Elasticsearch and Kibana"
---
On the Google Compute Engine (GCE) platform the default cluster level logging support targets
-[Google Cloud Logging](https://cloud.google.com/logging/docs/) as described at the [Logging](logging) getting
+[Google Cloud Logging](https://cloud.google.com/logging/docs/) as described at the [Logging](/{{page.version}}/docs/getting-started-guides/logging) getting
started page. Here we describe how to set up a cluster to ingest logs into Elasticsearch and view them using Kibana as an
alternative to Google Cloud Logging.


@ -19,12 +19,12 @@ monitoring-heapster-v1-20ej 0/1 Running 9 32
Here is the same information in a picture which shows how the pods might be placed on specific nodes.
-![Cluster](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/blog-logging/diagrams/cloud-logging.png)
+![Cluster](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/blog-logging/diagrams/cloud-logging.png)
This diagram shows four nodes created on a Google Compute Engine cluster with the name of each VM node on a purple background. The internal and public IPs of each node are shown on gray boxes and the pods running in each node are shown in green boxes. Each pod box shows the name of the pod and the namespace it runs in, the IP address of the pod and the images which are run as part of the pod's execution. Here we see that every node is running a fluentd-cloud-logging pod which is collecting the log output of the containers running on the same node and sending them to Google Cloud Logging. A pod which provides the
[cluster DNS service](/{{page.version}}/docs/admin/dns) runs on one of the nodes and a pod which provides monitoring support runs on another node.
-To help explain how cluster level logging works let's start off with a synthetic log generator pod specification [counter-pod.yaml](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/blog-logging/counter-pod.yaml):
+To help explain how cluster level logging works let's start off with a synthetic log generator pod specification [counter-pod.yaml](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/blog-logging/counter-pod.yaml):
<!-- BEGIN MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->
@ -41,7 +41,7 @@ spec:
'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
```
-[Download example](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/blog-logging/counter-pod.yaml)
+[Download example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/blog-logging/counter-pod.yaml)
<!-- END MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->
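The counter command in that spec is plain bash and can be tried outside the pod; a bounded variant (the in-pod version loops forever) looks like:

```shell
# Bounded variant of the counter-pod loop; the pod runs it with no upper limit.
for ((i = 0; i < 3; i++)); do
  echo "$i: $(date)"
done
```

Each line pairs the counter value with a timestamp, which is exactly what shows up later via `kubectl logs` and in Google Cloud Logging.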
This pod specification has one container which runs a bash script when the container is born. This script simply writes out the value of a counter and the date once per second and runs indefinitely. Let's create the pod in the default
@ -64,7 +64,7 @@ This step may take a few minutes to download the ubuntu:14.04 image during which
One of the nodes is now running the counter pod:
-![Counter Pod](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/blog-logging/diagrams/27gf-counter.png)
+![Counter Pod](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/blog-logging/diagrams/27gf-counter.png)
When the pod status changes to `Running` we can use the kubectl logs command to view the output of this counter pod.
@ -213,6 +213,6 @@ $ cat 21\:00\:00_21\:59\:59_S0.json | jq '.structPayload.log'
...
```
-This page has touched briefly on the underlying mechanisms that support gathering cluster level logs on a Kubernetes deployment. The approach here only works for gathering the standard output and standard error output of the processes running in the pod's containers. To gather other logs that are stored in files one can use a sidecar container to gather the required files as described at the page [Collecting log files within containers with Fluentd](http://releases.k8s.io/release-1.1/contrib/logging/fluentd-sidecar-gcp/README.md) and sending them to the Google Cloud Logging service.
+This page has touched briefly on the underlying mechanisms that support gathering cluster level logs on a Kubernetes deployment. The approach here only works for gathering the standard output and standard error output of the processes running in the pod's containers. To gather other logs that are stored in files one can use a sidecar container to gather the required files as described at the page [Collecting log files within containers with Fluentd](http://releases.k8s.io/{{page.githubbranch}}/contrib/logging/fluentd-sidecar-gcp/README.md) and sending them to the Google Cloud Logging service.
Some of the material in this section also appears in the blog article [Cluster Level Logging with Kubernetes](http://blog.kubernetes.io/2015/06/cluster-level-logging-with-kubernetes)


@ -184,7 +184,7 @@ host machine (mac).
To learn more about Pods, Volumes, Labels, Services, and Replication Controllers, start with the
[Kubernetes Walkthrough](/{{page.version}}/docs/user-guide/walkthrough/).
-To skip to a more advanced example, see the [Guestbook Example](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/guestbook/)
+To skip to a more advanced example, see the [Guestbook Example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/)
1. Destroy cluster


@ -94,7 +94,7 @@ scripts. The master node is always Ubuntu.
See [a simple nginx example](/{{page.version}}/docs/user-guide/simple-nginx) to try out your new cluster.
-For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/).
+For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/).
### Debugging


@ -63,7 +63,7 @@ accomplished in two ways:
- Configure network to route Pod IPs
- Harder to setup from scratch.
-  - Google Compute Engine ([GCE](gce)) and [AWS](/{{page.version}}/docs/getting-started-guides/aws) guides use this approach.
+  - Google Compute Engine ([GCE](/{{page.version}}/docs/getting-started-guides/gce)) and [AWS](/{{page.version}}/docs/getting-started-guides/aws) guides use this approach.
- Need to make the Pod IPs routable by programming routers, switches, etc.
- Can be configured external to Kubernetes, or can implement in the "Routes" interface of a Cloud Provider module.
- Generally highest performance.
@ -815,7 +815,7 @@ At this point you should be able to run through one of the basic examples, such
### Running the Conformance Test
-You may want to try to run the [Conformance test](http://releases.k8s.io/release-1.1/hack/conformance-test.sh). Any failures may give a hint as to areas that need more attention.
+You may want to try to run the [Conformance test](http://releases.k8s.io/{{page.githubbranch}}/hack/conformance-test.sh). Any failures may give a hint as to areas that need more attention.
### Networking


@ -242,7 +242,7 @@ Replace `<MASTER_IP>` in `calico-kubernetes-ubuntu-demo-master/dns/skydns-rc.yam
## Launch other Services With Calico-Kubernetes
-At this point, you have a fully functioning cluster running on kubernetes with a master and 2 nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/) to set up other services on your cluster.
+At this point, you have a fully functioning cluster running on kubernetes with a master and 2 nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) to set up other services on your cluster.
## Connectivity to outside the cluster


@ -139,7 +139,7 @@ NAME LABELS STATUS
10.10.103.250 kubernetes.io/hostname=10.10.103.250 Ready
```
-Also you can run Kubernetes [guest-example](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/guestbook/) to build a redis backend cluster on the k8s
+Also you can run Kubernetes [guest-example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) to build a redis backend cluster on the k8s
### Deploy addons
@ -258,4 +258,4 @@ Some examples are as follows:
The script will not delete any resources of your cluster, it just replaces the binaries.
You can use `kubectl` command to check if the newly upgraded k8s is working correctly.
-For example, use `$ kubectl get nodes` to see if all of your nodes are ready.Or refer to [test-it-out](ubuntu/#test-it-out)
+For example, use `$ kubectl get nodes` to see if all of your nodes are ready.Or refer to [test-it-out](/{{page.version}}/docs/getting-started-guides/ubuntu/#test-it-out)


@ -240,7 +240,7 @@ my-nginx 10.0.0.1 <none> 80/TCP run=my-nginx
```
We did not start any services, hence there are none listed. But we see three replicas displayed properly.
-Check the [guestbook](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/guestbook/) application to learn how to create a service.
+Check the [guestbook](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) application to learn how to create a service.
You can already play with scaling the replicas with:
```shell


@ -17,15 +17,15 @@ title: "Kubernetes Documentation: releases.k8s.io/release-1.1"
* The [API object documentation](http://kubernetes.io/third_party/swagger-ui/)
is a detailed description of all fields found in core API objects.
-* An overview of the [Design of Kubernetes](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/)
+* An overview of the [Design of Kubernetes](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/)
-* There are example files and walkthroughs in the [examples](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples)
+* There are example files and walkthroughs in the [examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples)
folder.
-* If something went wrong, see the [troubleshooting](troubleshooting) document for how to debug.
+* If something went wrong, see the [troubleshooting](/{{page.version}}/docs/troubleshooting) document for how to debug.
You should also check the [known issues](/{{page.version}}/docs/user-guide/known-issues) for the release you're using.
-* To report a security issue, see [Reporting a Security Issue](reporting-security-issues).
+* To report a security issue, see [Reporting a Security Issue](/{{page.version}}/docs/reporting-security-issues).


@ -22,8 +22,8 @@ Check the location and credentials that kubectl knows about with this command:
$ kubectl config view
```
-Many of the [examples](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/) provide an introduction to using
-kubectl and complete documentation is found in the [kubectl manual](kubectl/kubectl).
+Many of the [examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) provide an introduction to using
+kubectl and complete documentation is found in the [kubectl manual](/{{page.version}}/docs/user-guide/kubectl/kubectl).
### Directly accessing the REST API
@ -52,7 +52,7 @@ Run it like this:
$ kubectl proxy --port=8080 &
```
-See [kubectl proxy](kubectl/kubectl_proxy) for more details.
+See [kubectl proxy](/{{page.version}}/docs/user-guide/kubectl/kubectl_proxy) for more details.
Then you can explore the API with curl, wget, or a browser, like so:
@ -98,8 +98,8 @@ with future high-availability support.
There are [client libraries](/{{page.version}}/docs/devel/client-libraries) for accessing the API
from several languages. The Kubernetes project-supported
-[Go](http://releases.k8s.io/release-1.1/pkg/client/)
-client library can use the same [kubeconfig file](kubeconfig-file)
+[Go](http://releases.k8s.io/{{page.githubbranch}}/pkg/client/)
+client library can use the same [kubeconfig file](/{{page.version}}/docs/user-guide/kubeconfig-file)
as the kubectl CLI does to locate and authenticate to the apiserver.
See documentation for other libraries for how they authenticate.
@ -114,7 +114,7 @@ the `kubernetes` DNS name, which resolves to a Service IP which in turn
will be routed to an apiserver.
The recommended way to authenticate to the apiserver is with a
-[service account](service-accounts) credential. By default, a pod
+[service account](/{{page.version}}/docs/user-guide/service-accounts) credential. By default, a pod
is associated with a service account, and a credential (token) for that
service account is placed into the filesystem tree of each container in that pod,
at `/var/run/secrets/kubernetes.io/serviceaccount/token`.
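In shell terms, the in-pod flow looks roughly like this (a sketch: the real mount only exists inside a pod, so the snippet falls back to a fake token file when run elsewhere, and the `curl` call is shown as a comment rather than executed):

```shell
# Standard mount path for a pod's service-account credential:
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
# Fall back to a fake token when not actually running inside a pod,
# so the snippet is self-contained:
if [ ! -r "$SA_DIR/token" ]; then
  SA_DIR=$(mktemp -d)
  echo "fake-token" > "$SA_DIR/token"
fi
TOKEN=$(cat "$SA_DIR/token")
echo "Authorization: Bearer $TOKEN"
# Inside a cluster you would then call the apiserver with that header, e.g.:
# curl -sS --cacert "$SA_DIR/ca.crt" -H "Authorization: Bearer $TOKEN" https://kubernetes/api
```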
@ -125,7 +125,7 @@ From within a pod the recommended ways to connect to API are:
process within a container. This proxies the
Kubernetes API to the localhost interface of the pod, so that other processes
in any container of the pod can access it. See this [example of using kubectl proxy
-  in a pod](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/kubectl-container/).
+  in a pod](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/kubectl-container/).
- use the Go client library, and create a client using the `client.NewInCluster()` factory.
This handles locating and authenticating to the apiserver.
@ -136,7 +136,7 @@ In each case, the credentials of the pod are used to communicate securely with t
The previous section was about connecting the Kubernetes API server. This section is about
connecting to other services running on Kubernetes cluster. In Kubernetes, the
-[nodes](/{{page.version}}/docs/admin/node), [pods](pods) and [services](services) all have
+[nodes](/{{page.version}}/docs/admin/node), [pods](/{{page.version}}/docs/user-guide/pods) and [services](/{{page.version}}/docs/user-guide/services) all have
their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be
routable, so they will not be reachable from a machine outside the cluster,
such as your desktop machine.
@ -147,8 +147,8 @@ You have several options for connecting to nodes, pods and services from outside
- Access services through public IPs.
- Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside
-  the cluster. See the [services](services) and
-  [kubectl expose](kubectl/kubectl_expose) documentation.
+  the cluster. See the [services](/{{page.version}}/docs/user-guide/services) and
+  [kubectl expose](/{{page.version}}/docs/user-guide/kubectl/kubectl_expose) documentation.
- Depending on your cluster environment, this may just expose the service to your corporate network,
or it may expose it to the internet. Think about whether the service being exposed is secure.
Does it do its own authentication?
@ -164,7 +164,7 @@ You have several options for connecting to nodes, pods and services from outside
- Only works for HTTP/HTTPS.
- Described [here](#discovering-builtin-services).
- Access from a node or pod in the cluster.
-  - Run a pod, and then connect to a shell in it using [kubectl exec](kubectl/kubectl_exec).
+  - Run a pod, and then connect to a shell in it using [kubectl exec](/{{page.version}}/docs/user-guide/kubectl/kubectl_exec).
Connect to other nodes, pods, and services from that shell.
- Some clusters may allow you to ssh to a node in the cluster. From there you may be able to
access cluster services. This is a non-standard method, and will work on some clusters but
@ -251,7 +251,7 @@ There are several different proxies you may encounter when using Kubernetes:
- proxy to target may use HTTP or HTTPS as chosen by proxy using available information
- can be used to reach a Node, Pod, or Service
- does load balancing when used to reach a Service
-1. The [kube proxy](services/#ips-and-vips):
+1. The [kube proxy](/{{page.version}}/docs/user-guide/services/#ips-and-vips):
- runs on each node
- proxies UDP and TCP
- does not understand HTTP
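The apiserver proxy mentioned above reaches in-cluster services through a predictable URL scheme; a sketch of how such a URL is assembled (names are illustrative, and the URL is only printed here since it needs `kubectl proxy` and a live cluster to actually resolve):

```shell
# Build the 1.x apiserver-proxy URL for a service, assuming
# `kubectl proxy --port=8080 &` is running locally:
SVC=elasticsearch-logging
NS=kube-system
echo "http://localhost:8080/api/v1/proxy/namespaces/$NS/services/$SVC/"
```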


@ -1,7 +1,7 @@
---
title: "Annotations"
---
-We have [labels](labels) for identifying metadata.
+We have [labels](/{{page.version}}/docs/user-guide/labels) for identifying metadata.
It is also useful to be able to attach arbitrary non-identifying metadata, for retrieval by API clients such as tools, libraries, etc. This information may be large, may be structured or unstructured, may include characters not permitted by labels, etc. Such information would not be used for object selection and therefore doesn't belong in labels.


@ -186,6 +186,6 @@ check:
#### More information
-If none of the above solves your problem, follow the instructions in [Debugging Service document](debugging-services) to make sure that your `Service` is running, has `Endpoints`, and your `Pods` are actually serving; you have DNS working, iptables rules installed, and kube-proxy does not seem to be misbehaving.
+If none of the above solves your problem, follow the instructions in [Debugging Service document](/{{page.version}}/docs/user-guide/debugging-services) to make sure that your `Service` is running, has `Endpoints`, and your `Pods` are actually serving; you have DNS working, iptables rules installed, and kube-proxy does not seem to be misbehaving.
You may also visit [troubleshooting document](/{{page.version}}/docs/troubleshooting/) for more information.


@ -4,20 +4,20 @@ title: "Compute Resources"
* TOC
{:toc}
-When specifying a [pod](pods), you can optionally specify how much CPU and memory (RAM) each
+When specifying a [pod](/{{page.version}}/docs/user-guide/pods), you can optionally specify how much CPU and memory (RAM) each
container needs. When containers have their resource requests specified, the scheduler is
able to make better decisions about which nodes to place pods on; and when containers have their
limits specified, contention for resources on a node can be handled in a specified manner. For
more details about the difference between requests and limits, please refer to
-[Resource QoS](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/proposals/resource-qos.md).
+[Resource QoS](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/resource-qos.md).
*CPU* and *memory* are each a *resource type*. A resource type has a base unit. CPU is specified
in units of cores. Memory is specified in units of bytes.
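Concretely, the suffixed quantities used in these fields decompose with simple arithmetic (a sketch; the specific values are illustrative):

```shell
# Memory quantities use binary suffixes; these conversions are exact:
echo "64Mi = $(( 64 * 1024 * 1024 )) bytes"        # Mi = 2^20 bytes
echo "1Gi  = $(( 1024 * 1024 * 1024 )) bytes"      # Gi = 2^30 bytes
# CPU quantities may use the "m" (milli) suffix: 250m CPU means 0.25 of a core.
echo "250m CPU = 0.250 core"
```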
CPU and RAM are collectively referred to as *compute resources*, or just *resources*. Compute
resources are measureable quantities which can be requested, allocated, and consumed. They are
-distinct from [API resources](working-with-resources). API resources, such as pods and
-[services](services) are objects that can be written to and retrieved from the Kubernetes API
+distinct from [API resources](/{{page.version}}/docs/user-guide/working-with-resources). API resources, such as pods and
+[services](/{{page.version}}/docs/user-guide/services) are objects that can be written to and retrieved from the Kubernetes API
server.
## Resource Requests and Limits of Pod and Container
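A minimal manifest with both fields set might look like the following sketch (the name, image, and values are illustrative; the file is written to a temp path only so the snippet is self-contained):

```shell
# Sketch: a pod that requests 0.1 CPU / 64Mi and is limited to 0.25 CPU / 128Mi.
cat > /tmp/resource-demo.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 250m
        memory: 128Mi
EOF
# One request and one limit per resource type for this container:
grep -c 'cpu:' /tmp/resource-demo.yaml
```

You would then submit it with `kubectl create -f /tmp/resource-demo.yaml`.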
@ -111,7 +111,7 @@ To determine if a container cannot be scheduled or is being killed due to resour
The resource usage of a pod is reported as part of the Pod status.
-If [optional monitoring](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/README.md) is configured for your cluster,
+If [optional monitoring](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/README.md) is configured for your cluster,
then pod resource usage can be retrieved from the monitoring system.
## Troubleshooting
@ -234,11 +234,11 @@ We can see that this container was terminated because `reason:OOM Killed`, where
The current system only allows resource quantities to be specified on a container.
It is planned to improve accounting for resources which are shared by all containers in a pod,
-such as [EmptyDir volumes](volumes/#emptydir).
+such as [EmptyDir volumes](/{{page.version}}/docs/user-guide/volumes/#emptydir).
The current system only supports container requests and limits for CPU and Memory.
It is planned to add new resource types, including a node disk space
-resource, and a framework for adding custom [resource types](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/resources.md#resource-types).
+resource, and a framework for adding custom [resource types](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/resources.md#resource-types).
Kubernetes supports overcommitment of resources by supporting multiple levels of [Quality of Service](http://issue.k8s.io/168).


@ -9,16 +9,16 @@ This document is meant to highlight and consolidate in one place configuration b
1. Group related objects together in a single file. This is often better than separate files.
1. Use `kubectl create -f <directory>` where possible. This looks for config objects in all `.yaml`, `.yml`, and `.json` files in `<directory>` and passes them to create.
1. Create a service before corresponding replication controllers so that the scheduler can spread the pods comprising the service. You can also create the replication controller without specifying replicas, create the service, then scale up the replication controller, which may work better in an example using progressive disclosure and may have benefits in real scenarios also, such as ensuring one replica works before creating lots of them)
-1. Don't use `hostPort` unless absolutely necessary (e.g., for a node daemon) as it will prevent certain scheduling configurations due to port conflicts. Use the apiserver proxying or port forwarding for debug/admin access, or a service for external service access. If you need to expose a pod's port on the host machine, consider using a [NodePort](services/#type--loadbalancer) service before resorting to `hostPort`. If you only need access to the port for debugging purposes, you can also use the [kubectl proxy and apiserver proxy](/{{page.version}}/docs/user-guide/connecting-to-applications-proxy) or [kubectl port-forward](/{{page.version}}/docs/user-guide/connecting-to-applications-port-forward).
+1. Don't use `hostPort` unless absolutely necessary (e.g., for a node daemon) as it will prevent certain scheduling configurations due to port conflicts. Use the apiserver proxying or port forwarding for debug/admin access, or a service for external service access. If you need to expose a pod's port on the host machine, consider using a [NodePort](/{{page.version}}/docs/user-guide/services/#type--loadbalancer) service before resorting to `hostPort`. If you only need access to the port for debugging purposes, you can also use the [kubectl proxy and apiserver proxy](/{{page.version}}/docs/user-guide/connecting-to-applications-proxy) or [kubectl port-forward](/{{page.version}}/docs/user-guide/connecting-to-applications-port-forward).
1. Don't use `hostNetwork` for the same reasons as `hostPort`.
1. Don't specify default values unnecessarily, to simplify and minimize configs. For example, omit the selector and labels in ReplicationController if you want them to be the same as the labels in its podTemplate, since those fields are populated from the podTemplate labels by default.
1. Instead of attaching one label to a set of pods to represent a service (e.g., `service: myservice`) and another to represent the replication controller managing the pods (e.g., `controller: mycontroller`), attach labels that identify semantic attributes of your application or deployment and select the appropriate subsets in your service and replication controller, such as `{ app: myapp, tier: frontend, deployment: v3 }`. A service can be made to span multiple deployments, such as across rolling updates, by simply omitting release-specific labels from its selector, rather than updating a service's selector to match the replication controller's selector fully.
1. Use kubectl bulk operations (via files and/or labels) for get and delete. See [label selectors](labels/#label-selectors) and [using labels effectively](managing-deployments/#using-labels-effectively).
1. Use kubectl run and expose to quickly create and expose single container replication controllers. See the [quick start guide](quick-start) for an example.
1. Use headless services for easy service discovery when you don't need kube-proxy load balancing. See [headless services](services/#headless-services).
1. Use kubectl bulk operations (via files and/or labels) for get and delete. See [label selectors](/{{page.version}}/docs/user-guide/labels/#label-selectors) and [using labels effectively](/{{page.version}}/docs/user-guide/managing-deployments/#using-labels-effectively).
1. Use kubectl run and expose to quickly create and expose single container replication controllers. See the [quick start guide](/{{page.version}}/docs/user-guide/quick-start) for an example.
1. Use headless services for easy service discovery when you don't need kube-proxy load balancing. See [headless services](/{{page.version}}/docs/user-guide/services/#headless-services).
1. Use kubectl delete rather than stop. Delete has a superset of the functionality of stop, and stop is deprecated.
1. If there is a viable alternative to naked pods (i.e. pods not bound to a controller), go with the alternative. Controllers are almost always preferable to creating pods (except for some `restartPolicy: Never` scenarios). A minimal Job is coming. See [#1624](http://issue.k8s.io/1624). Naked pods will not be rescheduled in the event of node failure.
1. Put a version number or hash as a suffix to the name and in a label on a replication controller to facilitate rolling update, as we do for [--image](kubectl/kubectl_rolling-update). This is necessary because rolling-update actually creates a new controller as opposed to modifying the existing controller. This does not play well with version agnostic controller names.
1. Put a version number or hash as a suffix to the name and in a label on a replication controller to facilitate rolling update, as we do for [--image](/{{page.version}}/docs/user-guide/kubectl/kubectl_rolling-update). This is necessary because rolling-update actually creates a new controller as opposed to modifying the existing controller. This does not play well with version agnostic controller names.
1. Put an object description in an annotation to allow better introspection.
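The label guidance above can be sketched with a minimal manifest (the `myapp` and `frontend` names and the service itself are illustrative, not taken from any example in these docs):

```yaml
# Hypothetical service that selects pods by semantic labels only.
# The release-specific "deployment" label is deliberately left out of
# the selector, so the service keeps matching across rolling updates.
apiVersion: v1
kind: Service
metadata:
  name: myapp-frontend
  labels:
    app: myapp
    tier: frontend
spec:
  selector:
    app: myapp
    tier: frontend
  ports:
  - port: 80
```

A replication controller for each release would then carry the same `app` and `tier` labels plus a release-specific one such as `deployment: v3`.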


@ -89,7 +89,7 @@ spec: # specification of the pod's contents
args: ["/bin/echo \"${MESSAGE}\""]
```
However, a shell isn't necessary just to expand environment variables. Kubernetes will do it for you if you use [`$(ENVVAR)` syntax](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/expansion):
However, a shell isn't necessary just to expand environment variables. Kubernetes will do it for you if you use [`$(ENVVAR)` syntax](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/expansion):
```yaml
command: ["/bin/echo"]


@ -113,11 +113,11 @@ NAME ENDPOINTS
nginxsvc 10.245.0.14:80,10.245.0.15:80
```
You should now be able to curl the nginx Service on `10.0.116.146:80` from any node in your cluster. Note that the Service IP is completely virtual; it never hits the wire. If you're curious about how this works, you can read more about the [service proxy](services/#virtual-ips-and-service-proxies).
You should now be able to curl the nginx Service on `10.0.116.146:80` from any node in your cluster. Note that the Service IP is completely virtual; it never hits the wire. If you're curious about how this works, you can read more about the [service proxy](/{{page.version}}/docs/user-guide/services/#virtual-ips-and-service-proxies).
## Accessing the Service
Kubernetes supports two primary modes of finding a Service - environment variables and DNS. The former works out of the box while the latter requires the [kube-dns cluster addon](http://releases.k8s.io/release-1.1/cluster/addons/dns/README.md).
Kubernetes supports two primary modes of finding a Service - environment variables and DNS. The former works out of the box while the latter requires the [kube-dns cluster addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md).
### Environment Variables
@ -155,7 +155,7 @@ NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kube-dns 10.179.240.10 <none> 53/UDP,53/TCP k8s-app=kube-dns 8d
```
If it isn't running, you can [enable it](http://releases.k8s.io/release-1.1/cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long-lived IP (nginxsvc), and a DNS server that has assigned a name to that IP (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let's create another pod to test this:
If it isn't running, you can [enable it](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long-lived IP (nginxsvc), and a DNS server that has assigned a name to that IP (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let's create another pod to test this:
```yaml
$ cat curlpod.yaml
@ -196,9 +196,9 @@ Till now we have only accessed the nginx server from within the cluster. Before
* Self signed certificates for https (unless you already have an identity certificate)
* An nginx server configured to use the certificates
* A [secret](secrets) that makes the certificates accessible to pods
* A [secret](/{{page.version}}/docs/user-guide/secrets) that makes the certificates accessible to pods
You can acquire all these from the [nginx https example](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/https-nginx/), in short:
You can acquire all these from the [nginx https example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/https-nginx/), in short:
```shell
$ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json
@ -262,7 +262,7 @@ spec:
Noteworthy points about the nginx-app manifest:
- It contains both rc and service specification in the same file
- The [nginx server](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/https-nginx/default.conf) serves http traffic on port 80 and https traffic on 443, and nginx Service exposes both ports.
- The [nginx server](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/https-nginx/default.conf) serves http traffic on port 80 and https traffic on 443, and nginx Service exposes both ports.
- Each container has access to the keys through a volume mounted at /etc/nginx/ssl. This is set up *before* the nginx server is started.
```shell
@ -383,4 +383,4 @@ cluster/private cloud network.
## What's next?
[Learn about more Kubernetes features that will help you run containers reliably in production.](production-pods)
[Learn about more Kubernetes features that will help you run containers reliably in production.](/{{page.version}}/docs/user-guide/production-pods)


@ -1,7 +1,7 @@
---
title: "Connecting to applications: kubectl port-forward"
---
kubectl port-forward forwards connections to a local port to a port on a pod. Its man page is available [here](kubectl/kubectl_port-forward). Compared to [kubectl proxy](/{{page.version}}/docs/user-guide/accessing-the-cluster/#using-kubectl-proxy), `kubectl port-forward` is more generic as it can forward TCP traffic while `kubectl proxy` can only forward HTTP traffic. This guide demonstrates how to use `kubectl port-forward` to connect to a Redis database, which may be useful for database debugging.
kubectl port-forward forwards connections to a local port to a port on a pod. Its man page is available [here](/{{page.version}}/docs/user-guide/kubectl/kubectl_port-forward). Compared to [kubectl proxy](/{{page.version}}/docs/user-guide/accessing-the-cluster/#using-kubectl-proxy), `kubectl port-forward` is more generic as it can forward TCP traffic while `kubectl proxy` can only forward HTTP traffic. This guide demonstrates how to use `kubectl port-forward` to connect to a Redis database, which may be useful for database debugging.
## Creating a Redis master


@ -1,7 +1,7 @@
---
title: "Connecting to applications: kubectl proxy and apiserver proxy"
---
You have seen the [basics](/{{page.version}}/docs/user-guide/accessing-the-cluster) about `kubectl proxy` and `apiserver proxy`. This guide shows how to use them together to access a service ([kube-ui](ui)) running on the Kubernetes cluster from your workstation.
You have seen the [basics](/{{page.version}}/docs/user-guide/accessing-the-cluster) about `kubectl proxy` and `apiserver proxy`. This guide shows how to use them together to access a service ([kube-ui](/{{page.version}}/docs/user-guide/ui)) running on the Kubernetes cluster from your workstation.
## Getting the apiserver proxy URL of kube-ui
@ -13,7 +13,7 @@ $ kubectl cluster-info | grep "KubeUI"
KubeUI is running at https://173.255.119.104/api/v1/proxy/namespaces/kube-system/services/kube-ui
```
If this command does not find the URL, try the steps [here](ui/#accessing-the-ui).
If this command does not find the URL, try the steps [here](/{{page.version}}/docs/user-guide/ui/#accessing-the-ui).
## Connecting to the kube-ui service from your local workstation


@ -6,7 +6,7 @@ This document describes the environment for Kubelet managed containers on a Kube
This cluster information makes it possible to build applications that are *cluster aware*.
Additionally, the Kubernetes container environment defines a series of hooks that are surfaced to optional hook handlers defined as part of individual containers. Container hooks are somewhat analogous to operating system signals in a traditional process model. However, these hooks are designed to make it easier to build reliable, scalable cloud applications in the Kubernetes cluster. Containers that participate in this cluster lifecycle become *cluster native*.
Another important part of the container environment is the file system that is available to the container. In Kubernetes, the filesystem is a combination of an [image](images) and one or more [volumes](volumes).
Another important part of the container environment is the file system that is available to the container. In Kubernetes, the filesystem is a combination of an [image](/{{page.version}}/docs/user-guide/images) and one or more [volumes](/{{page.version}}/docs/user-guide/volumes).
The following sections describe both the cluster information provided to containers, as well as the hooks and life-cycle that allows containers to interact with the management system.
@ -21,7 +21,7 @@ There are two types of information that are available within the container envir
Currently, the Pod name for the pod in which the container is running is set as the hostname of the container, and is accessible through all calls to access the hostname within the container (e.g. the hostname command, or the [gethostname][1] function call in libc), but this is planned to change in the future and should not be used.
The Pod name and namespace are also available as environment variables via the [downward API](downward-api). Additionally, user-defined environment variables from the pod definition are also available to the container, as are any environment variables specified statically in the Docker image.
The Pod name and namespace are also available as environment variables via the [downward API](/{{page.version}}/docs/user-guide/downward-api). Additionally, user-defined environment variables from the pod definition are also available to the container, as are any environment variables specified statically in the Docker image.
In the future, we anticipate expanding this information with richer detail about the container. Examples include available memory, number of restarts, and in general any state that you could get from the call to GET /pods on the API server.
@ -36,7 +36,7 @@ FOO_SERVICE_HOST=<the host the service is running on>
FOO_SERVICE_PORT=<the port the service is running on>
```
Services have a dedicated IP address, and are also surfaced to the container via DNS (if the [DNS addon](http://releases.k8s.io/release-1.1/cluster/addons/dns/) is enabled). Of course DNS is still not an enumerable protocol, so we will continue to provide environment variables so that containers can do discovery.
Services have a dedicated IP address, and are also surfaced to the container via DNS (if the [DNS addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/) is enabled). Of course DNS is still not an enumerable protocol, so we will continue to provide environment variables so that containers can do discovery.
## Container Hooks
@ -52,7 +52,7 @@ This hook is sent immediately after a container is created.  It notifies the co
*PreStop*
This hook is called immediately before a container is terminated. No parameters are passed to the handler. This event handler is blocking, and must complete before the call to delete the container is sent to the Docker daemon. The SIGTERM notification sent by Docker is also still sent. A more complete description of termination behavior can be found in [Termination of Pods](pods/#termination-of-pods).
This hook is called immediately before a container is terminated. No parameters are passed to the handler. This event handler is blocking, and must complete before the call to delete the container is sent to the Docker daemon. The SIGTERM notification sent by Docker is also still sent. A more complete description of termination behavior can be found in [Termination of Pods](/{{page.version}}/docs/user-guide/pods/#termination-of-pods).
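A minimal sketch of both hook handlers in a v1 pod spec (the image and the commands are placeholders, not taken from the examples in these docs):

```yaml
# Hypothetical pod wiring up postStart and preStop exec handlers.
# The preStop handler runs, and must finish, before the delete call
# is sent to the Docker daemon; SIGTERM is still delivered afterwards.
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: web
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        exec:
          command: ["/usr/sbin/nginx", "-s", "quit"]
```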
### Hook Handler Execution


@ -18,11 +18,11 @@ we can use:
Docker images have metadata associated with them that is used to store information about the image.
The image author may use this to define defaults for the command and arguments to run a container
when the user does not supply values. Docker calls the fields for commands and arguments
`Entrypoint` and `Cmd` respectively. The full details for this feature are too complicated to
describe here, mostly due to the fact that the docker API allows users to specify both of these
when the user does not supply values. Docker calls the fields for commands and arguments
`Entrypoint` and `Cmd` respectively. The full details for this feature are too complicated to
describe here, mostly due to the fact that the Docker API allows users to specify both of these
fields as either a string array or a string and there are subtle differences in how those cases are
handled. We encourage the curious to check out [docker's documentation]() for this feature.
handled. We encourage the curious to check out Docker's documentation for this feature.
Kubernetes allows you to override both the image's default command (docker `Entrypoint`) and args
(docker `Cmd`) with the `Command` and `Args` fields of `Container`. The rules are:
@ -90,7 +90,4 @@ The relationship between Docker's capabilities and [Linux capabilities](http://m
| LEASE | CAP_LEASE |
| SETFCAP | CAP_SETFCAP |
| WAKE_ALARM | CAP_WAKE_ALARM |
| BLOCK_SUSPEND | CAP_BLOCK_SUSPEND |
| BLOCK_SUSPEND | CAP_BLOCK_SUSPEND |


@ -1,18 +1,18 @@
---
title: "Kubernetes User Guide: Managing Applications: Deploying continuously running applications"
---
You previously read about how to quickly deploy a simple replicated application using [`kubectl run`](quick-start) and how to configure and launch single-run containers using pods ([Configuring containers](/{{page.version}}/docs/user-guide/configuring-containers)). Here you'll use the configuration-based approach to deploy a continuously running, replicated application.
You previously read about how to quickly deploy a simple replicated application using [`kubectl run`](/{{page.version}}/docs/user-guide/quick-start) and how to configure and launch single-run containers using pods ([Configuring containers](/{{page.version}}/docs/user-guide/configuring-containers)). Here you'll use the configuration-based approach to deploy a continuously running, replicated application.
* TOC
{:toc}
## Launching a set of replicas using a configuration file
Kubernetes creates and manages sets of replicated containers (actually, replicated [Pods](pods)) using [*Replication Controllers*](replication-controller).
Kubernetes creates and manages sets of replicated containers (actually, replicated [Pods](/{{page.version}}/docs/user-guide/pods)) using [*Replication Controllers*](/{{page.version}}/docs/user-guide/replication-controller).
A replication controller simply ensures that a specified number of pod "replicas" are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. It's analogous to Google Compute Engine's [Instance Group Manager](https://cloud.google.com/compute/docs/instance-groups/manager/) or AWS's [Auto-scaling Group](http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroup) (with no scaling policies).
The replication controller created to run nginx by `kubectl run` in the [Quick start](quick-start) could be specified using YAML as follows:
The replication controller created to run nginx by `kubectl run` in the [Quick start](/{{page.version}}/docs/user-guide/quick-start) could be specified using YAML as follows:
```yaml
apiVersion: v1
@ -70,7 +70,7 @@ my-nginx-buaiq 1/1 Running 0 51s
## Deleting replication controllers
When you want to kill your application, delete your replication controller, as in the [Quick start](quick-start):
When you want to kill your application, delete your replication controller, as in the [Quick start](/{{page.version}}/docs/user-guide/quick-start):
```shell
$ kubectl delete rc my-nginx
@ -83,7 +83,7 @@ If you try to delete the pods before deleting the replication controller, it wil
## Labels
Kubernetes uses user-defined key-value attributes called [*labels*](labels) to categorize and identify sets of resources, such as pods and replication controllers. The example above specified a single label in the pod template, with key `app` and value `nginx`. All pods created carry that label, which can be viewed using `-L`:
Kubernetes uses user-defined key-value attributes called [*labels*](/{{page.version}}/docs/user-guide/labels) to categorize and identify sets of resources, such as pods and replication controllers. The example above specified a single label in the pod template, with key `app` and value `nginx`. All pods created carry that label, which can be viewed using `-L`:
```shell
$ kubectl get pods -L app
@ -100,7 +100,7 @@ CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS APP
my-nginx nginx nginx app=nginx 2 nginx
```
More importantly, the pod template's labels are used to create a [`selector`](labels/#label-selectors) that will match pods carrying those labels. You can see this field by requesting it using the [Go template output format of `kubectl get`](kubectl/kubectl_get):
More importantly, the pod template's labels are used to create a [`selector`](/{{page.version}}/docs/user-guide/labels/#label-selectors) that will match pods carrying those labels. You can see this field by requesting it using the [Go template output format of `kubectl get`](/{{page.version}}/docs/user-guide/kubectl/kubectl_get):
```shell
$ kubectl get rc my-nginx -o template --template="{{.spec.selector}}"


@ -57,7 +57,7 @@ spec:
- containerPort: 80
```
[Download example](nginx-deployment.yaml)
[Download example](/{{page.version}}/docs/user-guide/nginx-deployment.yaml)
<!-- END MUNGE: EXAMPLE nginx-deployment.yaml -->
Run the example by downloading the example file and then running this command:
@ -135,7 +135,7 @@ spec:
- containerPort: 80
```
[Download example](new-nginx-deployment.yaml)
[Download example](/{{page.version}}/docs/user-guide/new-nginx-deployment.yaml)
<!-- END MUNGE: EXAMPLE new-nginx-deployment.yaml -->
@ -250,7 +250,7 @@ before changing course.
As with all other Kubernetes configs, a Deployment needs `apiVersion`, `kind`, and
`metadata` fields. For general information about working with config files,
see [here](deploying-applications), [here](/{{page.version}}/docs/user-guide/configuring-containers), and [here](working-with-resources).
see [here](/{{page.version}}/docs/user-guide/deploying-applications), [here](/{{page.version}}/docs/user-guide/configuring-containers), and [here](/{{page.version}}/docs/user-guide/working-with-resources).
A Deployment also needs a [`.spec` section](/{{page.version}}/docs/devel/api-conventions/#spec-and-status).
@ -258,8 +258,8 @@ A Deployment also needs a [`.spec` section](/{{page.version}}/docs/devel/api-con
The `.spec.template` is the only required field of the `.spec`.
The `.spec.template` is a [pod template](replication-controller/#pod-template). It has exactly
the same schema as a [pod](pods), except it is nested and does not have an
The `.spec.template` is a [pod template](/{{page.version}}/docs/user-guide/replication-controller/#pod-template). It has exactly
the same schema as a [pod](/{{page.version}}/docs/user-guide/pods), except it is nested and does not have an
`apiVersion` or `kind`.
### Replicas
@ -347,5 +347,5 @@ Note: This is not implemented yet.
### kubectl rolling update
[Kubectl rolling update](kubectl/kubectl_rolling-update) also updates pods and replication controllers in a similar fashion.
[Kubectl rolling update](/{{page.version}}/docs/user-guide/kubectl/kubectl_rolling-update) also updates pods and replication controllers in a similar fashion.
But Deployments are declarative and server-side.


@ -8,7 +8,7 @@ In this doc, we introduce the Kubernetes command line for interacting with the a
#### docker run
How do I run an nginx container and expose it to the world? Check out [kubectl run](kubectl/kubectl_run).
How do I run an nginx container and expose it to the world? Check out [kubectl run](/{{page.version}}/docs/user-guide/kubectl/kubectl_run).
With docker:
@ -30,7 +30,7 @@ replicationcontroller "nginx-app" created
$ kubectl expose rc nginx-app --port=80 --name=nginx-http
```
With kubectl, we create a [replication controller](replication-controller) which will make sure that N pods are running nginx (where N is the number of replicas stated in the spec, which defaults to 1). We also create a [service](services) with a selector that matches the replication controller's selector. See the [Quick start](quick-start) for more information.
With kubectl, we create a [replication controller](/{{page.version}}/docs/user-guide/replication-controller) which will make sure that N pods are running nginx (where N is the number of replicas stated in the spec, which defaults to 1). We also create a [service](/{{page.version}}/docs/user-guide/services) with a selector that matches the replication controller's selector. See the [Quick start](/{{page.version}}/docs/user-guide/quick-start) for more information.
By default, images are run in the background, similar to `docker run -d ...`. If you want to run things in the foreground, use:
@ -45,7 +45,7 @@ To destroy the replication controller (and it's pods) you need to run `kubectl
#### docker ps
How do I list what is currently running? Check out [kubectl get](kubectl/kubectl_get).
How do I list what is currently running? Check out [kubectl get](/{{page.version}}/docs/user-guide/kubectl/kubectl_get).
With docker:
@ -65,7 +65,7 @@ nginx-app-5jyvm 1/1 Running 0 1h
#### docker attach
How do I attach to a process that is already running in a container? Check out [kubectl attach](kubectl/kubectl_attach)
How do I attach to a process that is already running in a container? Check out [kubectl attach](/{{page.version}}/docs/user-guide/kubectl/kubectl_attach)
With docker:
@ -89,7 +89,7 @@ $ kubectl attach -it nginx-app-5jyvm
#### docker exec
How do I execute a command in a container? Check out [kubectl exec](kubectl/kubectl_exec).
How do I execute a command in a container? Check out [kubectl exec](/{{page.version}}/docs/user-guide/kubectl/kubectl_exec).
With docker:
@ -128,11 +128,11 @@ $ kubectl exec -ti nginx-app-5jyvm -- /bin/sh
# exit
```
For more information see [Getting into containers](getting-into-containers).
For more information see [Getting into containers](/{{page.version}}/docs/user-guide/getting-into-containers).
#### docker logs
How do I follow stdout/stderr of a running process? Check out [kubectl logs](kubectl/kubectl_logs).
How do I follow stdout/stderr of a running process? Check out [kubectl logs](/{{page.version}}/docs/user-guide/kubectl/kubectl_logs).
With docker:
@ -159,11 +159,11 @@ $ kubectl logs --previous nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
```
See [Logging](logging) for more information.
See [Logging](/{{page.version}}/docs/user-guide/logging) for more information.
#### docker stop and docker rm
How do I stop and delete a running process? Check out [kubectl delete](kubectl/kubectl_delete).
How do I stop and delete a running process? Check out [kubectl delete](/{{page.version}}/docs/user-guide/kubectl/kubectl_delete).
With docker:
@ -197,11 +197,11 @@ Notice that we don't delete the pod directly. With kubectl we want to delete the
#### docker login
There is no direct analog of `docker login` in kubectl. If you are interested in using Kubernetes with a private registry, see [Using a Private Registry](images/#using-a-private-registry).
There is no direct analog of `docker login` in kubectl. If you are interested in using Kubernetes with a private registry, see [Using a Private Registry](/{{page.version}}/docs/user-guide/images/#using-a-private-registry).
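As a sketch, once registry credentials have been stored in a secret (the `registry-cred` name and the image below are illustrative, not from any example in these docs), a pod can reference them through `imagePullSecrets`:

```yaml
# Hypothetical pod pulling from a private registry; "registry-cred"
# is assumed to be an already-created docker-registry secret in the
# same namespace.
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo
spec:
  containers:
  - name: app
    image: registry.example.com/myapp:v1
  imagePullSecrets:
  - name: registry-cred
```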
#### docker version
How do I get the version of my client and server? Check out [kubectl version](kubectl/kubectl_version).
How do I get the version of my client and server? Check out [kubectl version](/{{page.version}}/docs/user-guide/kubectl/kubectl_version).
With docker:
@ -229,7 +229,7 @@ Server Version: version.Info{Major:"0", Minor:"21+", GitVersion:"v0.21.1-411-g32
#### docker info
How do I get miscellaneous info about my environment and configuration? Check out [kubectl cluster-info](kubectl/kubectl_cluster-info).
How do I get miscellaneous info about my environment and configuration? Check out [kubectl cluster-info](/{{page.version}}/docs/user-guide/kubectl/kubectl_cluster-info).
With docker:


@ -76,7 +76,7 @@ spec:
restartPolicy: Never
```
[Download example](downward-api/dapi-pod.yaml)
[Download example](/{{page.version}}/docs/user-guide/downward-api/dapi-pod.yaml)
<!-- END MUNGE: EXAMPLE downward-api/dapi-pod.yaml -->
@ -86,7 +86,7 @@ Using a similar syntax it's possible to expose pod information to containers usi
Downward API are dumped to a mounted volume. This is achieved using a `downwardAPI`
volume type and the different items represent the files to be created. `fieldPath` references the field to be exposed.
The Downward API volume makes it possible to store more complex data like [`metadata.labels`](labels) and [`metadata.annotations`](/{{page.version}}/docs/user-guide/annotations). Currently key/value pair set fields are saved using `key="value"` format:
The Downward API volume makes it possible to store more complex data like [`metadata.labels`](/{{page.version}}/docs/user-guide/labels) and [`metadata.annotations`](/{{page.version}}/docs/user-guide/annotations). Currently key/value pair set fields are saved using `key="value"` format:
```conf
key1="value1"
@ -145,10 +145,10 @@ spec:
fieldPath: metadata.annotations
```
[Download example](downward-api/volume/dapi-volume.yaml)
[Download example](/{{page.version}}/docs/user-guide/downward-api/volume/dapi-volume.yaml)
<!-- END MUNGE: EXAMPLE downward-api/volume/dapi-volume.yaml -->
Some more thorough examples:
* [environment variables](environment-guide/)
* [downward API](downward-api/)
* [environment variables](/{{page.version}}/docs/user-guide/environment-guide/)
* [downward API](/{{page.version}}/docs/user-guide/downward-api/)


@ -15,7 +15,7 @@ started](/{{page.version}}/docs/getting-started-guides/) for installation instru
Containers consume the downward API using environment variables. The downward API allows
containers to be injected with the name and namespace of the pod the container is in.
Use the [`examples/downward-api/dapi-pod.yaml`](dapi-pod.yaml) file to create a Pod with a container that consumes the
Use the [`dapi-pod.yaml`](/{{page.version}}/docs/user-guide/downward-api/dapi-pod.yaml) file to create a Pod with a container that consumes the
downward API.
```shell


@ -69,7 +69,7 @@ Backend Namespace: default
```
First the frontend pod's information is printed. The pod name and
[namespace](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/namespaces.md) are retrieved from the
[namespace](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md) are retrieved from the
[Downward API](/{{page.version}}/docs/user-guide/downward-api). Next, `USER_VAR` is the name of
an environment variable set in the [pod
definition](/{{page.version}}/docs/user-guide/environment-guide/show-rc.yaml). Then, the dynamic Kubernetes environment


@ -5,7 +5,7 @@ Developers can use `kubectl exec` to run commands in a container. This guide dem
## Using kubectl exec to check the environment variables of a container
Kubernetes exposes [services](services/#environment-variables) through environment variables. It is convenient to check these environment variables using `kubectl exec`.
Kubernetes exposes [services](/{{page.version}}/docs/user-guide/services/#environment-variables) through environment variables. It is convenient to check these environment variables using `kubectl exec`.
We first create a pod and a service,


@ -28,21 +28,21 @@ Then, it compares the arithmetic mean of the pods' CPU utilization with the targ
CPU utilization is the recent CPU usage of a pod divided by the sum of CPU requested by the pod's containers.
Please note that if some of the pod's containers do not have CPU request set,
CPU utilization for the pod will not be defined and the autoscaler will not take any action.
Further details of the autoscaling algorithm are given [here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#autoscaling-algorithm).
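Because utilization is computed against requests, every container the autoscaler should act on needs an explicit CPU request. A minimal pod-template sketch (the names and values here are illustrative, not from these docs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-worker            # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 200m             # utilization = recent usage / this request
```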
The autoscaler uses Heapster to collect CPU utilization, so Heapster monitoring must be deployed in your cluster for autoscaling to work.
The autoscaler accesses the corresponding replication controller or deployment through the scale sub-resource.
Scale is an interface that allows you to dynamically set the number of replicas and to read their current state.
More details on scale sub-resource can be found [here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#scale-subresource).
## API Object
Horizontal pod autoscaler is a top-level resource in the Kubernetes REST API (currently in [beta](/{{page.version}}/docs/api/#api-versioning)).
More details about the API object can be found at
[HorizontalPodAutoscaler Object](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
## Support for horizontal pod autoscaler in kubectl
@ -55,7 +55,7 @@ In addition, there is a special `kubectl autoscale` command that allows for easy
For instance, executing `kubectl autoscale rc foo --min=2 --max=5 --cpu-percent=80`
will create an autoscaler for replication controller *foo*, with target CPU utilization set to `80%`
and the number of replicas between 2 and 5.
The detailed documentation of `kubectl autoscale` can be found [here](/{{page.version}}/docs/user-guide/kubectl/kubectl_autoscale).
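The same autoscaler could also be written out as an object rather than created with `kubectl autoscale`. A sketch, assuming the `extensions/v1beta1` schema used elsewhere in these docs (names mirror the command above):

```yaml
apiVersion: extensions/v1beta1
kind: HorizontalPodAutoscaler
metadata:
  name: foo
spec:
  scaleRef:
    kind: ReplicationController
    name: foo
    subresource: scale
  minReplicas: 2
  maxReplicas: 5
  cpuUtilization:
    targetPercentage: 80      # target average CPU utilization
```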
## Autoscaling during rolling update
@ -73,11 +73,6 @@ the horizontal pod autoscaler will not be bound to the new replication controlle
## Further reading
* Design documentation: [Horizontal Pod Autoscaling](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md).
* Manual of autoscale command in kubectl: [kubectl autoscale](/{{page.version}}/docs/user-guide/kubectl/kubectl_autoscale).
* Usage example of [Horizontal Pod Autoscaler](/{{page.version}}/docs/user-guide/horizontal-pod-autoscaling/).

View File

@ -20,7 +20,7 @@ heapster monitoring will be turned-on by default).
To demonstrate horizontal pod autoscaler we will use a custom docker image based on php-apache server.
The image can be found [here](https://releases.k8s.io/release-1.1/docs/user-guide/horizontal-pod-autoscaling/image).
It defines [index.php](/{{page.version}}/docs/user-guide/horizontal-pod-autoscaling/image/index.php) page which performs some CPU intensive computations.
First, we will start a replication controller running the image and expose it as an external service:
@ -69,7 +69,7 @@ OK!
## Step Two: Create horizontal pod autoscaler
Now that the server is running, we will create a horizontal pod autoscaler for it.
To create it, we will use the [hpa-php-apache.yaml](/{{page.version}}/docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml) file, which looks like this:
```yaml
apiVersion: extensions/v1beta1
@ -93,7 +93,7 @@ controlled by the php-apache replication controller we created in the first step
Roughly speaking, the horizontal autoscaler will increase and decrease the number of replicas
(via the replication controller) so as to maintain an average CPU utilization across all Pods of 50%
(since each pod requests 200 milli-cores by [kubectl run](#kubectl-run), this means average CPU utilization of 100 milli-cores).
See [here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#autoscaling-algorithm) for more details on the algorithm.
We will create the autoscaler by executing the following command:
@ -102,8 +102,8 @@ $ kubectl create -f docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.ya
horizontalpodautoscaler "php-apache" created
```
Alternatively, we can create the autoscaler using [kubectl autoscale](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/user-guide/kubectl/kubectl_autoscale.md).
The following command will create the equivalent autoscaler as defined in the [hpa-php-apache.yaml](/{{page.version}}/docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml) file:
```shell
$ kubectl autoscale rc php-apache --cpu-percent=50 --min=1 --max=10

View File

@ -3,11 +3,11 @@ title: "Identifiers"
---
All objects in the Kubernetes REST API are unambiguously identified by a Name and a UID.
For non-unique user-provided attributes, Kubernetes provides [labels](/{{page.version}}/docs/user-guide/labels) and [annotations](/{{page.version}}/docs/user-guide/annotations).
## Names
Names are generally client-provided. Only one object of a given kind can have a given name at a time (i.e., they are spatially unique). But if you delete an object, you can make a new object with the same name. Names are used to refer to an object in a resource URL, such as `/api/v1/pods/some-name`. By convention, the names of Kubernetes resources should be at most 253 characters long and consist of lower case alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions. See the [identifiers design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/identifiers.md) for the precise syntax rules for names.
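For instance, a conforming name in an object's metadata (the value is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend-1   # lower-case alphanumerics, '-' and '.', at most 253 characters
```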
## UIDs

View File

@ -146,7 +146,7 @@ where node creation is automated.
Kubernetes supports specifying registry keys on a pod.
First, create a `.dockercfg`, such as running `docker login <registry.domain>`.
Then put the resulting `.dockercfg` file into a [secret resource](/{{page.version}}/docs/user-guide/secrets). For example:
```shell
$ docker login
@ -201,7 +201,7 @@ spec:
This needs to be done for each pod that is using a private registry.
However, setting of this field can be automated by setting the imagePullSecrets
in a [serviceAccount](/{{page.version}}/docs/user-guide/service-accounts) resource.
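As a sketch, a service account that carries the pull secret so pods using it inherit the credentials automatically (the secret name is hypothetical):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
imagePullSecrets:
- name: myregistrykey   # secret created from the .dockercfg above
```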
Currently, all pods will potentially have read access to any images which were
pulled using imagePullSecrets. That is, imagePullSecrets does *NOT* protect your

View File

@ -67,13 +67,13 @@ rules:
*POSTing this to the API server will have no effect if you have not configured an [Ingress controller](#ingress-controllers).*
__Lines 1-4__: As with all other Kubernetes config, an Ingress needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see [here](/{{page.version}}/docs/user-guide/simple-yaml), [here](/{{page.version}}/docs/user-guide/configuring-containers), and [here](/{{page.version}}/docs/user-guide/working-with-resources).
__Lines 5-7__: Ingress [spec](/{{page.version}}/docs/devel/api-conventions/#spec-and-status) has all the information needed to configure a loadbalancer or proxy server. Most importantly, it contains a list of rules matched against all incoming requests. Currently the Ingress resource only supports http rules.
__Lines 8-9__: Each http rule contains the following information: a host (e.g. foo.bar.com; defaults to * in this example) and a list of paths (e.g. /testpath), each of which has an associated backend (test:80). Both the host and path must match the content of an incoming request before the loadbalancer directs traffic to the backend.
__Lines 10-12__: A backend is a service:port combination as described in the [services doc](/{{page.version}}/docs/user-guide/services). Ingress traffic is typically sent directly to the endpoints matching a backend.
__Global Parameters__: For the sake of simplicity the example Ingress has no global parameters, see the [api-reference](https://releases.k8s.io/release-1.1/pkg/apis/extensions/v1beta1/types.go) for a full definition of the resource. One can specify a global default backend in the absence of which requests that don't match a path in the spec are sent to the default backend of the Ingress controller. Though the Ingress resource doesn't support HTTPS yet, security configs would also be global.
@ -100,7 +100,7 @@ spec:
servicePort: 80
```
[Download example](/{{page.version}}/docs/user-guide/ingress.yaml)
<!-- END MUNGE: EXAMPLE ingress.yaml -->
If you create it using `kubectl -f` you should see:

View File

@ -308,9 +308,9 @@ status:
Learn about additional debugging tools, including:
* [Logging](/{{page.version}}/docs/user-guide/logging)
* [Monitoring](/{{page.version}}/docs/user-guide/monitoring)
* [Getting into containers via `exec`](/{{page.version}}/docs/user-guide/getting-into-containers)
* [Connecting to containers via proxies](/{{page.version}}/docs/user-guide/connecting-to-applications-proxy)
* [Connecting to containers via port forwarding](/{{page.version}}/docs/user-guide/connecting-to-applications-port-forward)

View File

@ -42,7 +42,7 @@ spec:
restartPolicy: Never
```
[Download example](/{{page.version}}/docs/user-guide/job.yaml)
<!-- END MUNGE: EXAMPLE job.yaml -->
Run the example job by downloading the example file and then running this command:
@ -93,7 +93,7 @@ $ kubectl logs pi-aiw0a
## Writing a Job Spec
As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields. For
general information about working with config files, see [here](/{{page.version}}/docs/user-guide/simple-yaml),
[here](/{{page.version}}/docs/user-guide/configuring-containers), and [here](/{{page.version}}/docs/user-guide/working-with-resources).
A Job also needs a [`.spec` section](/{{page.version}}/docs/devel/api-conventions/#spec-and-status).
@ -102,14 +102,14 @@ A Job also needs a [`.spec` section](/{{page.version}}/docs/devel/api-convention
The `.spec.template` is the only required field of the `.spec`.
The `.spec.template` is a [pod template](/{{page.version}}/docs/user-guide/replication-controller/#pod-template). It has exactly
the same schema as a [pod](/{{page.version}}/docs/user-guide/pods), except it is nested and does not have an `apiVersion` or
`kind`.
In addition to the required fields for a Pod, a pod template in a job must specify appropriate
labels (see [pod selector](#pod-selector)) and an appropriate restart policy.
Only a [`RestartPolicy`](/{{page.version}}/docs/user-guide/pod-states) equal to `Never` or `OnFailure` are allowed.
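Putting those requirements together, a minimal job `.spec.template` might look like this (labels, names, and the image are illustrative):

```yaml
spec:
  template:
    metadata:
      labels:
        app: example-batch        # must be matched by the job's pod selector
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo done"]
      restartPolicy: Never        # only Never or OnFailure are allowed
```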
### Pod Selector
@ -117,7 +117,7 @@ The `.spec.selector` field is a label query over a set of pods.
The `.spec.selector` is an object consisting of two fields:
* `matchLabels` - works the same as the `.spec.selector` of a [ReplicationController](/{{page.version}}/docs/user-guide/replication-controller)
* `matchExpressions` - allows building more sophisticated selectors by specifying a key,
a list of values, and an operator that relates the key and values.
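A selector combining both fields could be sketched as follows (keys and values are hypothetical):

```yaml
selector:
  matchLabels:
    app: example-batch
  matchExpressions:
  - {key: tier, operator: In, values: [batch, worker]}
  - {key: environment, operator: NotIn, values: [production]}
```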
@ -161,7 +161,7 @@ a non-zero exit code, or the Container was killed for exceeding a memory limit,
happens, and the `.spec.template.containers[].restartPolicy = "OnFailure"`, then the Pod stays
on the node, but the Container is re-run. Therefore, your program needs to handle the case where it is
restarted locally, or else specify `.spec.template.containers[].restartPolicy = "Never"`.
See [pods-states](/{{page.version}}/docs/user-guide/pod-states) for more information on `restartPolicy`.
An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node
(node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the
@ -188,11 +188,11 @@ requires only a single pod.
### Replication Controller
Jobs are complementary to [Replication Controllers](/{{page.version}}/docs/user-guide/replication-controller).
A Replication Controller manages pods which are not expected to terminate (e.g. web servers), and a Job
manages pods that are expected to terminate (e.g. batch jobs).
As discussed in [life of a pod](/{{page.version}}/docs/user-guide/pod-states), `Job` is *only* appropriate for pods with
`RestartPolicy` equal to `OnFailure` or `Never`. (Note: If `RestartPolicy` is not set, the default
value is `Always`.)

View File

@ -122,7 +122,7 @@ The rules for loading and merging the kubeconfig files are straightforward, but
## Manipulation of kubeconfig via `kubectl config <subcommand>`
In order to more easily manipulate kubeconfig files, there are a series of subcommands to `kubectl config` to help.
See [kubectl/kubectl_config.md](/{{page.version}}/docs/user-guide/kubectl/kubectl_config) for help.
### Example

View File

@ -1,7 +1,7 @@
---
title: "kubectl overview"
---
Use this overview of the `kubectl` command line interface to help you start running commands against Kubernetes clusters. This overview quickly covers `kubectl` syntax, describes the command operations, and provides common examples. For details about each command, including all the supported flags and subcommands, see the [kubectl](/{{page.version}}/docs/user-guide/kubectl/kubectl) reference documentation.
TODO: Auto-generate this file to ensure it's always in sync with any `kubectl` changes, see [#14177](http://pr.k8s.io/14177).
@ -74,7 +74,7 @@ Operation | Syntax | Description
`stop` | `kubectl stop` | Deprecated: Instead, see `kubectl delete`.
`version` | `kubectl version [--client] [flags]` | Display the Kubernetes version running on the client and server.
Remember: For more about command operations, see the [kubectl](/{{page.version}}/docs/user-guide/kubectl/kubectl) reference documentation.
## Resource types
@ -101,7 +101,7 @@ Resource type | Abbreviated alias
## Output options
Use the following sections for information about how you can format or sort the output of certain commands. For details about which commands support the various output options, see the [kubectl](/{{page.version}}/docs/user-guide/kubectl/kubectl) reference documentation.
### Formatting output
@ -120,8 +120,8 @@ Output format | Description
`-o=custom-columns=<spec>` | Print a table using a comma separated list of [custom columns](#custom-columns).
`-o=custom-columns-file=<filename>` | Print a table using the [custom columns](#custom-columns) template in the `<filename>` file.
`-o=json` | Output a JSON formatted API object.
`-o=jsonpath=<template>` | Print the fields defined in a [jsonpath](/{{page.version}}/docs/user-guide/jsonpath) expression.
`-o=jsonpath-file=<filename>` | Print the fields defined by the [jsonpath](/{{page.version}}/docs/user-guide/jsonpath) expression in the `<filename>` file.
`-o=name` | Print only the resource name and nothing else.
`-o=wide` | Output in the plain-text format with any additional information. For pods, the node name is included.
`-o=yaml` | Output a YAML formatted API object.
@ -132,7 +132,7 @@ In this example, the following command outputs the details for a single pod as a
`$ kubectl get pod web-pod-13je7 -o=yaml`
Remember: See the [kubectl](/{{page.version}}/docs/user-guide/kubectl/kubectl) reference documentation for details about which output format is supported by each command.
#### Custom columns
@ -167,7 +167,7 @@ submit-queue 610995
### Sorting list objects
To output objects to a sorted list in your terminal window, you can add the `--sort-by` flag to a supported `kubectl` command. Sort your objects by specifying any numeric or string field with the `--sort-by` flag. To specify a field, use a [jsonpath](/{{page.version}}/docs/user-guide/jsonpath) expression.
#### Syntax
@ -267,4 +267,4 @@ $ kubectl logs -f <pod-name>
## Next steps
Start using the [kubectl](/{{page.version}}/docs/user-guide/kubectl/kubectl) commands.

View File

@ -43,7 +43,7 @@ Valid label values must be 63 characters or less and must be empty or begin and
## Label selectors
Unlike [names and UIDs](/{{page.version}}/docs/user-guide/identifiers), labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).
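For example, many objects can carry the same label pairs in their metadata (the values are illustrative):

```yaml
metadata:
  labels:
    app: guestbook    # shared by every object in the application
    tier: frontend    # distinguishes this tier; still not unique per object
```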
Via a _label selector_, the client/user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes.
@ -125,7 +125,7 @@ $ kubectl get pods -l 'environment,environment notin (frontend)'
### Set references in API objects
Some Kubernetes objects, such as [`service`s](/{{page.version}}/docs/user-guide/services) and [`replicationcontroller`s](/{{page.version}}/docs/user-guide/replication-controller), also use label selectors to specify sets of other resources, such as [pods](/{{page.version}}/docs/user-guide/pods).
#### Service and ReplicationController
@ -149,7 +149,7 @@ this selector (respectively in `json` or `yaml` format) is equivalent to `compon
#### Job and other new resources
Newer resources, such as [job](/{{page.version}}/docs/user-guide/jobs), support _set-based_ requirements as well.
```yaml
selector:

View File

@ -3,7 +3,7 @@ title: "Overview"
---
This example shows two types of pod [health checks](/{{page.version}}/docs/user-guide/production-pods/#liveness-and-readiness-probes-aka-health-checks): HTTP checks and container execution checks.
The [exec-liveness.yaml](/{{page.version}}/docs/user-guide/liveness/exec-liveness.yaml) demonstrates the container execution check.
```yaml
livenessProbe:
@ -26,7 +26,7 @@ echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
so when Kubelet executes the health check 15 seconds (defined by initialDelaySeconds) after the container started, the check would fail.
The [http-liveness.yaml](/{{page.version}}/docs/user-guide/liveness/http-liveness.yaml) demonstrates the HTTP check.
```yaml
livenessProbe:

View File

@ -2,9 +2,9 @@
title: "Elasticsearch/Kibana Logging Demonstration"
---
This directory contains two [pod](/{{page.version}}/docs/user-guide/pods) specifications which can be used as synthetic
logging sources. The pod specification in [synthetic_0_25lps.yaml](/{{page.version}}/docs/user-guide/logging-demo/synthetic_0_25lps.yaml)
describes a pod that just emits a log message once every 4 seconds. The pod specification in
[synthetic_10lps.yaml](/{{page.version}}/docs/user-guide/logging-demo/synthetic_10lps.yaml)
describes a pod that just emits 10 log lines per second.
See [logging document](/{{page.version}}/docs/user-guide/logging/) for more details about logging. To observe the ingested log lines when using Google Cloud Logging please see the getting

View File

@ -10,8 +10,8 @@ Kubernetes components, such as kubelet and apiserver, use the [glog](https://god
## Examining the logs of running containers
The logs of a running container may be fetched using the command `kubectl logs`. For example, given
this pod specification [counter-pod.yaml](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/blog-logging/counter-pod.yaml), which has a container that writes some text to standard
output every second. (You can find different pod specifications [here](/{{page.version}}/docs/user-guide/logging-demo/).)
<!-- BEGIN MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->
@ -28,7 +28,7 @@ spec:
'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
```
[Download example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/blog-logging/counter-pod.yaml)
<!-- END MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->
we can run the pod:
@ -86,7 +86,7 @@ describes how to ingest cluster level logs into Elasticsearch and view them usin
## Ingesting Application Log Files
Cluster level logging only collects the standard output and standard error output of the applications
running in containers. The guide [Collecting log files within containers with Fluentd](http://releases.k8s.io/{{page.githubbranch}}/contrib/logging/fluentd-sidecar-gcp/README.md) explains how the log files of applications can also be ingested into Google Cloud logging.
## Known issues

View File

@ -1,7 +1,7 @@
---
title: "Kubernetes User Guide: Managing Applications: Managing deployments"
---
You've deployed your application and exposed it via a service. Now what? Kubernetes provides a number of tools to help you manage your application deployment, including scaling and updating. Among the features we'll discuss in more depth are [configuration files](/{{page.version}}/docs/user-guide/configuring-containers/#configuration-in-kubernetes) and [labels](/{{page.version}}/docs/user-guide/deploying-applications/#labels).
* TOC
{:toc}
@ -113,7 +113,7 @@ my-nginx-svc app=nginx app=nginx 10.0.152.174 80/TCP
The examples we've used so far apply at most a single label to any resource. There are many scenarios where multiple labels should be used to distinguish sets from one another.
For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels:
```yaml
labels:
@ -233,7 +233,7 @@ my-nginx-o0ef1 1/1 Running 0 1h
At some point, you'll eventually need to update your deployed application, typically by specifying a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several update operations, each of which is applicable to different scenarios.
To update a service without an outage, `kubectl` supports what is called ['rolling update'?](/{{page.version}}/docs/user-guide/kubectl/kubectl_rolling-update), which updates one pod at a time, rather than taking down the entire service at the same time. See the [rolling update design document](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/simple-rolling-update.md) and the [example of rolling update](/{{page.version}}/docs/user-guide/update-demo/) for more information.
Let's say you were running version 1.7.9 of nginx:
@ -256,7 +256,7 @@ spec:
- containerPort: 80
```
To update to version 1.9.1, you can use [`kubectl rolling-update --image`](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/simple-rolling-update.md):
```shell
$ kubectl rolling-update my-nginx --image=nginx:1.9.1
@ -361,7 +361,7 @@ Update succeeded. Deleting my-nginx
my-nginx-v4
```
You can also run the [update demo](/{{page.version}}/docs/user-guide/update-demo/) to see a visual representation of the rolling update process.
## In-place updates of resources
@ -405,5 +405,5 @@ replicationcontrollers/my-nginx-v4
## What's next?
- [Learn about how to use `kubectl` for application introspection and debugging.](/{{page.version}}/docs/user-guide/introspection-and-debugging)
- [Tips and tricks when working with config](/{{page.version}}/docs/user-guide/config-best-practices)

View File

@ -1,7 +1,7 @@
---
title: "Resource Usage Monitoring in Kubernetes"
---
Understanding how an application behaves when deployed is crucial to scaling the application and providing a reliable service. In a Kubernetes cluster, application performance can be examined at many different levels: containers, [pods](/{{page.version}}/docs/user-guide/pods), [services](/{{page.version}}/docs/user-guide/services), and whole clusters. As part of Kubernetes we want to provide users with detailed resource usage information about their running applications at all these levels. This will give users deep insights into how their applications are performing and where possible application bottlenecks may be found. In comes [Heapster](https://github.com/GoogleCloudPlatform/heapster), a project meant to provide a base monitoring platform on Kubernetes.
### Overview

View File

@ -19,7 +19,7 @@ In future versions of Kubernetes, objects in the same namespace will have the sa
access control policies by default.
It is not necessary to use multiple namespaces just to separate slightly different
resources, such as different versions of the same software: use [labels](labels) to distinguish
resources, such as different versions of the same software: use [labels](/{{page.version}}/docs/user-guide/labels) to distinguish
resources within the same namespace.
## Working with Namespaces
@ -73,7 +73,7 @@ $ kubectl config set-context $(CONTEXT) --namespace=<insert-namespace-name-here>
## Namespaces and DNS
When you create a [Service](services), it creates a corresponding [DNS entry](/{{page.version}}/docs/admin/dns).
When you create a [Service](/{{page.version}}/docs/user-guide/services), it creates a corresponding [DNS entry](/{{page.version}}/docs/admin/dns).
This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means
that if a container just uses `<service-name>` it will resolve to the service which
is local to a namespace. This is useful for using the same configuration across
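As a concrete sketch of that naming scheme (the service name and namespace below are illustrative, not from this page), a Service named `nginx` in namespace `dev` resolves as shown in the comments:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx        # <service-name>
  namespace: dev     # <namespace-name>
spec:
  selector:
    app: nginx
  ports:
  - port: 80
# Pods in namespace "dev" can reach this service as just "nginx";
# pods in other namespaces use "nginx.dev.svc.cluster.local".
```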

View File

@ -15,7 +15,7 @@ Then, to add a label to the node you've chosen, run `kubectl label nodes <node-n
If this fails with an "invalid command" error, you're likely using an older version of kubectl that doesn't have the `label` command. In that case, see the [previous version](https://github.com/kubernetes/kubernetes/blob/a053dbc313572ed60d89dae9821ecab8bfd676dc/examples/node-selection/README.md) of this guide for instructions on how to manually set labels on a node.
Also, note that label keys must be in the form of DNS labels (as described in the [identifiers doc](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/identifiers.md)), meaning that they are not allowed to contain any upper-case letters.
Also, note that label keys must be in the form of DNS labels (as described in the [identifiers doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/identifiers.md)), meaning that they are not allowed to contain any upper-case letters.
You can verify that it worked by re-running `kubectl get nodes` and checking that the node now has a label.
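Tying the label to scheduling, a hedged sketch (the node name and `disktype=ssd` label are examples): after labeling a node, a pod can target it with `nodeSelector`:

```yaml
# Assumes you have already labeled a node, e.g.:
#   kubectl label nodes worker-node-1 disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd   # schedules only onto nodes carrying this label
```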

View File

@ -9,17 +9,17 @@ Kubernetes supports [Docker](http://www.docker.io) and [Rocket](https://coreos.c
While Kubernetes currently focuses on continuously-running stateless (e.g. web server or in-memory object cache) and "cloud native" stateful applications (e.g. NoSQL datastores), in the near future it will support all the other workload types commonly found in production cluster environments, such as batch, stream processing, and traditional databases.
In Kubernetes, all containers run inside [pods](pods). A pod can host a single container, or multiple cooperating containers; in the latter case, the containers in the pod are guaranteed to be co-located on the same machine and can share resources. A pod can also contain zero or more [volumes](volumes), which are directories that are private to a container or shared across containers in a pod. For each pod the user creates, the system finds a machine that is healthy and that has sufficient available capacity, and starts up the corresponding container(s) there. If a container fails it can be automatically restarted by Kubernetes' node agent, called the Kubelet. But if the pod or its machine fails, it is not automatically moved or restarted unless the user also defines a [replication controller](replication-controller), which we discuss next.
In Kubernetes, all containers run inside [pods](/{{page.version}}/docs/user-guide/pods). A pod can host a single container, or multiple cooperating containers; in the latter case, the containers in the pod are guaranteed to be co-located on the same machine and can share resources. A pod can also contain zero or more [volumes](/{{page.version}}/docs/user-guide/volumes), which are directories that are private to a container or shared across containers in a pod. For each pod the user creates, the system finds a machine that is healthy and that has sufficient available capacity, and starts up the corresponding container(s) there. If a container fails it can be automatically restarted by Kubernetes' node agent, called the Kubelet. But if the pod or its machine fails, it is not automatically moved or restarted unless the user also defines a [replication controller](/{{page.version}}/docs/user-guide/replication-controller), which we discuss next.
Users can create and manage pods themselves, but Kubernetes drastically simplifies system management by allowing users to delegate two common pod-related activities: deploying multiple pod replicas based on the same pod configuration, and creating replacement pods when a pod or its machine fails. The Kubernetes API object that manages these behaviors is called a [replication controller](replication-controller). It defines a pod in terms of a template, that the system then instantiates as some number of pods (specified by the user). The replicated set of pods might constitute an entire application, a micro-service, or one layer in a multi-tier application. Once the pods are created, the system continually monitors their health and that of the machines they are running on; if a pod fails due to a software problem or machine failure, the replication controller automatically creates a new pod on a healthy machine, to maintain the set of pods at the desired replication level. Multiple pods from the same or different applications can share the same machine. Note that a replication controller is needed even in the case of a single non-replicated pod if the user wants it to be re-created when it or its machine fails.
Users can create and manage pods themselves, but Kubernetes drastically simplifies system management by allowing users to delegate two common pod-related activities: deploying multiple pod replicas based on the same pod configuration, and creating replacement pods when a pod or its machine fails. The Kubernetes API object that manages these behaviors is called a [replication controller](/{{page.version}}/docs/user-guide/replication-controller). It defines a pod in terms of a template, that the system then instantiates as some number of pods (specified by the user). The replicated set of pods might constitute an entire application, a micro-service, or one layer in a multi-tier application. Once the pods are created, the system continually monitors their health and that of the machines they are running on; if a pod fails due to a software problem or machine failure, the replication controller automatically creates a new pod on a healthy machine, to maintain the set of pods at the desired replication level. Multiple pods from the same or different applications can share the same machine. Note that a replication controller is needed even in the case of a single non-replicated pod if the user wants it to be re-created when it or its machine fails.
Frequently it is useful to refer to a set of pods, for example to limit the set of pods on which a mutating operation should be performed, or that should be queried for status. As a general mechanism, users can attach to most Kubernetes API objects arbitrary key-value pairs called [labels](labels), and then use a set of label selectors (key-value queries over labels) to constrain the target of API operations. Each resource also has a map of string keys and values that can be used by external tooling to store and retrieve arbitrary metadata about this object, called [annotations](/{{page.version}}/docs/user-guide/annotations).
Frequently it is useful to refer to a set of pods, for example to limit the set of pods on which a mutating operation should be performed, or that should be queried for status. As a general mechanism, users can attach to most Kubernetes API objects arbitrary key-value pairs called [labels](/{{page.version}}/docs/user-guide/labels), and then use a set of label selectors (key-value queries over labels) to constrain the target of API operations. Each resource also has a map of string keys and values that can be used by external tooling to store and retrieve arbitrary metadata about this object, called [annotations](/{{page.version}}/docs/user-guide/annotations).
Kubernetes supports a unique [networking model](/{{page.version}}/docs/admin/networking). Kubernetes encourages a flat address space and does not dynamically allocate ports, instead allowing users to select whichever ports are convenient for them. To achieve this, it allocates an IP address for each pod.
Modern Internet applications are commonly built by layering micro-services, for example a set of web front-ends talking to a distributed in-memory key-value store talking to a replicated storage service. To facilitate this architecture, Kubernetes offers the [service](services) abstraction, which provides a stable IP address and [DNS name](/{{page.version}}/docs/admin/dns) that corresponds to a dynamic set of pods such as the set of pods constituting a micro-service. The set is defined using a label selector and thus can refer to any set of pods. When a container running in a Kubernetes pod connects to this address, the connection is forwarded by a local agent (called the kube proxy) running on the source machine, to one of the corresponding back-end containers. The exact back-end is chosen using a round-robin policy to balance load. The kube proxy takes care of tracking the dynamic set of back-ends as pods are replaced by new pods on new hosts, so that the service IP address (and DNS name) never changes.
Modern Internet applications are commonly built by layering micro-services, for example a set of web front-ends talking to a distributed in-memory key-value store talking to a replicated storage service. To facilitate this architecture, Kubernetes offers the [service](/{{page.version}}/docs/user-guide/services) abstraction, which provides a stable IP address and [DNS name](/{{page.version}}/docs/admin/dns) that corresponds to a dynamic set of pods such as the set of pods constituting a micro-service. The set is defined using a label selector and thus can refer to any set of pods. When a container running in a Kubernetes pod connects to this address, the connection is forwarded by a local agent (called the kube proxy) running on the source machine, to one of the corresponding back-end containers. The exact back-end is chosen using a round-robin policy to balance load. The kube proxy takes care of tracking the dynamic set of back-ends as pods are replaced by new pods on new hosts, so that the service IP address (and DNS name) never changes.
Every resource in Kubernetes, such as a pod, is identified by a URI and has a UID. Important components of the URI are the kind of object (e.g. pod), the object's name, and the object's [namespace](namespaces). For a certain object kind, every name is unique within its namespace. In contexts where an object name is provided without a namespace, it is assumed to be in the default namespace. UID is unique across time and space.
Every resource in Kubernetes, such as a pod, is identified by a URI and has a UID. Important components of the URI are the kind of object (e.g. pod), the object's name, and the object's [namespace](/{{page.version}}/docs/user-guide/namespaces). For a certain object kind, every name is unique within its namespace. In contexts where an object name is provided without a namespace, it is assumed to be in the default namespace. UID is unique across time and space.

View File

@ -1,7 +1,7 @@
---
title: "Persistent Volumes and Claims"
---
This document describes the current state of `PersistentVolumes` in Kubernetes. Familiarity with [volumes](volumes) is suggested.
This document describes the current state of `PersistentVolumes` in Kubernetes. Familiarity with [volumes](/{{page.version}}/docs/user-guide/volumes) is suggested.
* TOC
{:toc}
@ -14,7 +14,7 @@ A `PersistentVolume` (PV) is a piece of networked storage in the cluster that ha
A `PersistentVolumeClaim` (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).
Please see the [detailed walkthrough with working examples](persistent-volumes/).
Please see the [detailed walkthrough with working examples](/{{page.version}}/docs/user-guide/persistent-volumes/).
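For orientation, a minimal claim illustrating the size and access-mode vocabulary above (the name and quantity are illustrative):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce     # mounted read/write by a single node
  resources:
    requests:
      storage: 8Gi    # claims, like pods, request a quantity of a resource
```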
## Lifecycle of a volume and claim
@ -80,7 +80,7 @@ apiVersion: v1
### Capacity
Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. See the Kubernetes [Resource Model](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/resources.md) to understand the units expected by `capacity`.
Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. See the Kubernetes [Resource Model](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/resources.md) to understand the units expected by `capacity`.
Currently, storage size is the only resource that can be set or requested. Future attributes may include IOPS, throughput, etc.
@ -146,7 +146,7 @@ Claims use the same conventions as volumes when requesting storage with specific
### Resources
Claims, like pods, can request specific quantities of a resource. In this case, the request is for storage. The same [resource model](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/resources.md) applies to both volumes and claims.
Claims, like pods, can request specific quantities of a resource. In this case, the request is for storage. The same [resource model](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/resources.md) applies to both volumes and claims.
## Claims As Volumes

View File

@ -6,7 +6,7 @@ nginx serving content from your persistent volume.
This guide assumes knowledge of Kubernetes fundamentals and that you have a cluster up and running.
See [Persistent Storage design document](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/persistent-storage.md) for more information.
See [Persistent Storage design document](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/persistent-storage.md) for more information.
## Provisioning

View File

@ -46,12 +46,13 @@ More detailed information about the current (and previous) container statuses ca
## RestartPolicy
The possible values for RestartPolicy are `Always`, `OnFailure`, or `Never`. If RestartPolicy is not set, the default value is `Always`. RestartPolicy applies to all containers in the pod. RestartPolicy only refers to restarts of the containers by the Kubelet on the same node. Failed containers that are restarted by Kubelet, are restarted with an exponential back-off delay, the delay is in multiples of sync-frequency 0, 1x, 2x, 4x, 8x ... capped at 5 minutes and is reset after 10 minutes of successful execution. As discussed in the [pods document](pods/#durability-of-pods-or-lack-thereof), once bound to a node, a pod will never be rebound to another node. This means that some kind of controller is necessary in order for a pod to survive node failure, even if just a single pod at a time is desired.
The possible values for RestartPolicy are `Always`, `OnFailure`, or `Never`. If RestartPolicy is not set, the default value is `Always`. RestartPolicy applies to all containers in the pod. RestartPolicy only refers to restarts of the containers by the Kubelet on the same node. Failed containers that are restarted by the Kubelet are restarted with an exponential back-off delay (in multiples of sync-frequency: 0, 1x, 2x, 4x, 8x ...), capped at 5 minutes and reset after 10 minutes of successful execution. As discussed in the [pods document](/{{page.version}}/docs/user-guide/pods/#durability-of-pods-or-lack-thereof), once bound to a node, a pod will never be rebound to another node. This means that some kind of controller is necessary in order for a pod to survive node failure, even if just a single pod at a time is desired.
Three types of controllers are currently available:
- Use a [`Job`](jobs) for pods which are expected to terminate (e.g. batch computations).
- Use a [`ReplicationController`](replication-controller) for pods which are not expected to
- Use a [`Job`](/{{page.version}}/docs/user-guide/jobs) for pods which are expected to terminate (e.g. batch computations).
- Use a [`ReplicationController`](/{{page.version}}/docs/user-guide/replication-controller) for pods which are not expected to
terminate (e.g. web servers).
- Use a [`DaemonSet`](/{{page.version}}/docs/admin/daemons): Use for pods which need to run 1 per machine because they provide a
machine-specific system service.
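For example, a pod intended to run to completion rather than serve indefinitely would set `restartPolicy` accordingly (a sketch; the image and command are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pi
spec:
  restartPolicy: OnFailure   # applies to all containers in the pod
  containers:
  - name: pi
    image: perl
    command: ["perl", "-e", "print 3.14"]
```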

View File

@ -19,9 +19,9 @@ The context of the pod can be defined as the conjunction of several Linux namesp
Applications within a pod also have access to shared volumes, which are defined at the pod level and made available in each application's filesystem. Additionally, a pod may define top-level cgroup isolations which form an outer bound to any individual isolation applied to constituent applications.
In terms of [Docker](https://www.docker.com/) constructs, a pod consists of a colocated group of Docker containers with shared [volumes](volumes). PID namespace sharing is not yet implemented with Docker.
In terms of [Docker](https://www.docker.com/) constructs, a pod consists of a colocated group of Docker containers with shared [volumes](/{{page.version}}/docs/user-guide/volumes). PID namespace sharing is not yet implemented with Docker.
Like individual application containers, pods are considered to be relatively ephemeral rather than durable entities. As discussed in [life of a pod](pod-states), pods are scheduled to nodes and remain there until termination (according to restart policy) or deletion. When a node dies, the pods scheduled to that node are deleted. Specific pods are never rescheduled to new nodes; instead, they must be replaced (see [replication controller](replication-controller) for more details). (In the future, a higher-level API may support pod migration.)
Like individual application containers, pods are considered to be relatively ephemeral rather than durable entities. As discussed in [life of a pod](/{{page.version}}/docs/user-guide/pod-states), pods are scheduled to nodes and remain there until termination (according to restart policy) or deletion. When a node dies, the pods scheduled to that node are deleted. Specific pods are never rescheduled to new nodes; instead, they must be replaced (see [replication controller](/{{page.version}}/docs/user-guide/replication-controller) for more details). (In the future, a higher-level API may support pod migration.)
## Motivation for pods
@ -68,7 +68,7 @@ That approach would provide co-location, but would not provide most of the benef
Pods aren't intended to be treated as durable [pets](https://blog.engineyard.com/2014/pets-vs-cattle). They won't survive scheduling failures, node failures, or other evictions, such as due to lack of resources, or in the case of node maintenance.
In general, users shouldn't need to create pods directly. They should almost always use controllers (e.g., [replication controller](replication-controller)), even for singletons. Controllers provide self-healing with a cluster scope, as well as replication and rollout management.
In general, users shouldn't need to create pods directly. They should almost always use controllers (e.g., [replication controller](/{{page.version}}/docs/user-guide/replication-controller)), even for singletons. Controllers provide self-healing with a cluster scope, as well as replication and rollout management.
The use of collective APIs as the primary user-facing primitive is relatively common among cluster scheduling systems, including [Borg](https://research.google.com/pubs/pub43438), [Marathon](https://mesosphere.github.io/marathon/docs/rest-api), [Aurora](http://aurora.apache.org/documentation/latest/configuration-reference/#job-schema), and [Tupperware](http://www.slideshare.net/Docker/aravindnarayanan-facebook140613153626phpapp02-37588997).

View File

@ -1,7 +1,7 @@
---
title: "Kubernetes User Guide: Managing Applications: Prerequisites"
---
To deploy and manage applications on Kubernetes, you'll use the Kubernetes command-line tool, [kubectl](kubectl/kubectl). It lets you inspect your cluster resources, create, delete, and update components, and much more. You will use it to look at your new cluster and bring up example apps.
To deploy and manage applications on Kubernetes, you'll use the Kubernetes command-line tool, [kubectl](/{{page.version}}/docs/user-guide/kubectl/kubectl). It lets you inspect your cluster resources, create, delete, and update components, and much more. You will use it to look at your new cluster and bring up example apps.
## Installing kubectl
@ -38,7 +38,7 @@ export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
## Configuring kubectl
In order for kubectl to find and access the Kubernetes cluster, it needs a [kubeconfig file](kubeconfig-file), which is created automatically when creating a cluster using kube-up.sh (see the [getting started guides](/{{page.version}}/docs/getting-started-guides/) for more about creating clusters). If you need access to a cluster you didn't create, see the [Sharing Cluster Access document](sharing-clusters).
In order for kubectl to find and access the Kubernetes cluster, it needs a [kubeconfig file](/{{page.version}}/docs/user-guide/kubeconfig-file), which is created automatically when creating a cluster using kube-up.sh (see the [getting started guides](/{{page.version}}/docs/getting-started-guides/) for more about creating clusters). If you need access to a cluster you didn't create, see the [Sharing Cluster Access document](/{{page.version}}/docs/user-guide/sharing-clusters).
By default, kubectl configuration lives at `~/.kube/config`.
#### Making sure you're ready
@ -53,4 +53,4 @@ If you see a url response, you are ready to go.
## What's next?
[Learn how to launch and expose your application.](quick-start)
[Learn how to launch and expose your application.](/{{page.version}}/docs/user-guide/quick-start)

View File

@ -8,9 +8,9 @@ You've seen [how to configure and deploy pods and containers](/{{page.version}}/
## Persistent storage
The container file system only lives as long as the container does, so when a container crashes and restarts, changes to the filesystem will be lost and the container will restart from a clean slate. To access more-persistent storage, outside the container file system, you need a [*volume*](volumes). This is especially important to stateful applications, such as key-value stores and databases.
The container file system only lives as long as the container does, so when a container crashes and restarts, changes to the filesystem will be lost and the container will restart from a clean slate. To access more-persistent storage, outside the container file system, you need a [*volume*](/{{page.version}}/docs/user-guide/volumes). This is especially important to stateful applications, such as key-value stores and databases.
For example, [Redis](http://redis.io/) is a key-value cache and store, which we use in the [guestbook](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/guestbook/) and other examples. We can add a volume to it to store persistent data as follows:
For example, [Redis](http://redis.io/) is a key-value cache and store, which we use in the [guestbook](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) and other examples. We can add a volume to it to store persistent data as follows:
```yaml
apiVersion: v1
@ -39,15 +39,15 @@ spec:
name: data # must match the name of the volume, above
```
`emptyDir` volumes live for the lifespan of the [pod](pods), which is longer than the lifespan of any one container, so if the container fails and is restarted, our storage will live on.
`emptyDir` volumes live for the lifespan of the [pod](/{{page.version}}/docs/user-guide/pods), which is longer than the lifespan of any one container, so if the container fails and is restarted, our storage will live on.
In addition to the local disk storage provided by `emptyDir`, Kubernetes supports many different network-attached storage solutions, including PD on GCE and EBS on EC2, which are preferred for critical data, and will handle details such as mounting and unmounting the devices on the nodes. See [the volumes doc](volumes) for more details.
In addition to the local disk storage provided by `emptyDir`, Kubernetes supports many different network-attached storage solutions, including PD on GCE and EBS on EC2, which are preferred for critical data, and will handle details such as mounting and unmounting the devices on the nodes. See [the volumes doc](/{{page.version}}/docs/user-guide/volumes) for more details.
## Distributing credentials
Many applications need credentials, such as passwords, OAuth tokens, and TLS keys, to authenticate with other applications, databases, and services. Storing these credentials in container images or environment variables is less than ideal, since the credentials can then be copied by anyone with access to the image, pod/container specification, host file system, or host Docker daemon.
Kubernetes provides a mechanism, called [*secrets*](secrets), that facilitates delivery of sensitive credentials to applications. A `Secret` is a simple resource containing a map of data. For instance, a simple secret with a username and password might look as follows:
Kubernetes provides a mechanism, called [*secrets*](/{{page.version}}/docs/user-guide/secrets), that facilitates delivery of sensitive credentials to applications. A `Secret` is a simple resource containing a map of data. For instance, a simple secret with a username and password might look as follows:
```yaml
apiVersion: v1
@ -104,14 +104,14 @@ spec:
name: supersecret
```
For more details, see the [secrets document](secrets), [example](secrets/) and [design doc](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/secrets.md).
For more details, see the [secrets document](/{{page.version}}/docs/user-guide/secrets), [example](/{{page.version}}/docs/user-guide/secrets/) and [design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/secrets.md).
## Authenticating with a private image registry
Secrets can also be used to pass [image registry credentials](images/#using-a-private-registry).
Secrets can also be used to pass [image registry credentials](/{{page.version}}/docs/user-guide/images/#using-a-private-registry).
First, create a `.dockercfg` file, such as running `docker login <registry.domain>`.
Then put the resulting `.dockercfg` file into a [secret resource](secrets). For example:
Then put the resulting `.dockercfg` file into a [secret resource](/{{page.version}}/docs/user-guide/secrets). For example:
```shell
$ docker login
```
@ -159,9 +159,9 @@ spec:
## Helper containers
[Pods](pods) support running multiple containers co-located together. They can be used to host vertically integrated application stacks, but their primary motivation is to support auxiliary helper programs that assist the primary application. Typical examples are data pullers, data pushers, and proxies.
[Pods](/{{page.version}}/docs/user-guide/pods) support running multiple containers co-located together. They can be used to host vertically integrated application stacks, but their primary motivation is to support auxiliary helper programs that assist the primary application. Typical examples are data pullers, data pushers, and proxies.
Such containers typically need to communicate with one another, often through the file system. This can be achieved by mounting the same volume into both containers. An example of this pattern would be a web server with a [program that polls a git repository](http://releases.k8s.io/release-1.1/contrib/git-sync/) for new updates:
Such containers typically need to communicate with one another, often through the file system. This can be achieved by mounting the same volume into both containers. An example of this pattern would be a web server with a [program that polls a git repository](http://releases.k8s.io/{{page.githubbranch}}/contrib/git-sync/) for new updates:
```yaml
apiVersion: v1
```
@ -202,7 +202,7 @@ More examples can be found in our [blog article](http://blog.kubernetes.io/2015/
Kubernetes's scheduler will place applications only where they have adequate CPU and memory, but it can only do so if it knows how much [resources they require](/{{page.version}}/docs/user-guide/compute-resources). The consequence of specifying too little CPU is that the containers could be starved of CPU if too many other containers were scheduled onto the same node. Similarly, containers could die unpredictably due to running out of memory if no memory were requested, which can be especially likely for large-memory applications.
If no resource requirements are specified, a nominal amount of resources is assumed. (This default is applied via a [LimitRange](/{{page.version}}/docs/admin/limitrange/) for the default [Namespace](namespaces). It can be viewed with `kubectl describe limitrange limits`.) You may explicitly specify the amount of resources required as follows:
If no resource requirements are specified, a nominal amount of resources is assumed. (This default is applied via a [LimitRange](/{{page.version}}/docs/admin/limitrange/) for the default [Namespace](/{{page.version}}/docs/user-guide/namespaces). It can be viewed with `kubectl describe limitrange limits`.) You may explicitly specify the amount of resources required as follows:
```yaml
apiVersion: v1
@ -234,13 +234,13 @@ spec:
memory: 64Mi
```
The container will die due to OOM (out of memory) if it exceeds its specified limit, so specifying a value a little higher than expected generally improves reliability. By specifying request, pod is guaranteed to be able to use that much of resource when needed. See [Resource QoS](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/proposals/resource-qos.md) for the difference between resource limits and requests.
The container will die due to OOM (out of memory) if it exceeds its specified limit, so specifying a value a little higher than expected generally improves reliability. By specifying request, pod is guaranteed to be able to use that much of resource when needed. See [Resource QoS](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/resource-qos.md) for the difference between resource limits and requests.
If you're not sure how much resources to request, you can first launch the application without specifying resources, and use [resource usage monitoring](monitoring) to determine appropriate values.
If you're not sure how much resources to request, you can first launch the application without specifying resources, and use [resource usage monitoring](/{{page.version}}/docs/user-guide/monitoring) to determine appropriate values.
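A complete requests-plus-limits stanza looks like the following sketch (the quantities are illustrative, not recommendations):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wp
spec:
  containers:
  - name: wordpress
    image: wordpress
    resources:
      requests:          # the scheduler guarantees this much is available
        cpu: 100m
        memory: 64Mi
      limits:            # exceeding the memory limit triggers an OOM kill
        cpu: 500m
        memory: 128Mi
```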
## Liveness and readiness probes (aka health checks)
Many applications running for long periods of time eventually transition to broken states, and cannot recover except by restarting them. Kubernetes provides [*liveness probes*](pod-states/#container-probes) to detect and remedy such situations.
Many applications running for long periods of time eventually transition to broken states, and cannot recover except by restarting them. Kubernetes provides [*liveness probes*](/{{page.version}}/docs/user-guide/pod-states/#container-probes) to detect and remedy such situations.
A common way to probe an application is using HTTP, which can be specified as follows:
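A sketch of such an HTTP probe (the path, port, and timings here are assumptions; adjust them for your service):

```yaml
livenessProbe:
  httpGet:
    # The kubelet GETs this path; a 2xx/3xx response counts as healthy.
    path: /healthz
    port: 8080
  initialDelaySeconds: 30   # wait before the first probe
  timeoutSeconds: 1
```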
Other times, applications are only temporarily unable to serve, and will recover on their own. Typically in such cases you'd prefer not to kill the application, but don't want to send it requests, either, since the application won't respond correctly or at all. A common such scenario is loading large data or configuration files during application startup. Kubernetes provides *readiness probes* to detect and mitigate such situations. Readiness probes are configured similarly to liveness probes, just using the `readinessProbe` field. A pod with containers reporting that they are not ready will not receive traffic through Kubernetes [services](/{{page.version}}/docs/user-guide/connecting-applications).
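A readiness probe can be sketched the same way (path and port are assumptions); while it fails, the pod is simply removed from service endpoints rather than restarted:

```yaml
readinessProbe:
  httpGet:
    path: /index.html   # assumed: often a cheaper check than full health
    port: 80
  initialDelaySeconds: 5
  timeoutSeconds: 1
```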
For more details (e.g., how to specify command-based probes), see the [example in the walkthrough](/{{page.version}}/docs/user-guide/walkthrough/k8s201/#health-checking), the [standalone example](/{{page.version}}/docs/user-guide/liveness/), and the [documentation](/{{page.version}}/docs/user-guide/pod-states/#container-probes).
## Lifecycle hooks and termination notice
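Containers can register handlers that run at lifecycle events; as a sketch, a `preStop` hook runs just before the container receives SIGTERM during graceful termination (the command below is a placeholder):

```yaml
lifecycle:
  preStop:
    exec:
      # Placeholder: give the application a moment to drain in-flight requests.
      command: ["/bin/sh", "-c", "sleep 5"]
```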
## Termination message
In order to achieve a reasonably high level of availability, especially for actively developed applications, it's important to debug failures quickly. Kubernetes can speed debugging by surfacing causes of fatal errors in a way that can be displayed using [`kubectl`](/{{page.version}}/docs/user-guide/kubectl/kubectl) or the [UI](/{{page.version}}/docs/user-guide/ui), in addition to general [log collection](/{{page.version}}/docs/user-guide/logging). It is possible to specify a `terminationMessagePath` where a container will write its 'death rattle', such as assertion failure messages, stack traces, exceptions, and so on. The default path is `/dev/termination-log`.
Here is a toy example:
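A sketch of such a pod (the image and message are illustrative): it sleeps for a minute, then writes its last words to the default termination-log path before exiting:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-w-message
spec:
  containers:
  - name: messager
    image: "ubuntu:14.04"
    command: ["/bin/sh", "-c"]
    # Write a final message to the default terminationMessagePath before exiting.
    args: ["sleep 60 && /bin/echo Sleep expired > /dev/termination-log"]
```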
Once the container has terminated, the message can be retrieved from the pod's status:

```shell
$ kubectl get pods/pod-w-message -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}"
```
## What's next?
[Learn more about managing deployments.](/{{page.version}}/docs/user-guide/managing-deployments)