diff --git a/v1.1/docs/admin/admission-controllers.md b/v1.1/docs/admin/admission-controllers.md index 8a2c3329303..eeda423a337 100644 --- a/v1.1/docs/admin/admission-controllers.md +++ b/v1.1/docs/admin/admission-controllers.md @@ -75,7 +75,7 @@ This plug-in will observe the incoming request and ensure that it does not viola enumerated in the `ResourceQuota` object in a `Namespace`. If you are using `ResourceQuota` objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints. -See the [resourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/admission_control_resource_quota.md) and the [example of Resource Quota](resourcequota/) for more details. +See the [resourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_resource_quota.md) and the [example of Resource Quota](/{{page.version}}/docs/admin/resourcequota/) for more details. It is strongly encouraged that this plug-in is configured last in the sequence of admission control plug-ins. This is so that quota is not prematurely incremented only for the request to be rejected later in admission control. @@ -88,7 +88,7 @@ your Kubernetes deployment, you MUST use this plug-in to enforce those constrain be used to apply default resource requests to Pods that don't specify any; currently, the default LimitRanger applies a 0.1 CPU requirement to all Pods in the `default` namespace. -See the [limitRange design doc](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/admission_control_limit_range.md) and the [example of Limit Range](limitrange/) for more details. +See the [limitRange design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_limit_range.md) and the [example of Limit Range](/{{page.version}}/docs/admin/limitrange/) for more details. ### InitialResources (experimental) @@ -97,7 +97,7 @@ then the plug-in auto-populates a compute resource request based on historical u If there is not enough data to make a decision the Request is left unchanged. When the plug-in sets a compute resource request, it annotates the pod with information on what compute resources it auto-populated. -See the [InitialResouces proposal](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/proposals/initial-resources.md) for more details. +See the [InitialResources proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/initial-resources.md) for more details. ### NamespaceExists (deprecated) diff --git a/v1.1/docs/admin/authorization.md b/v1.1/docs/admin/authorization.md index 8336438fa71..909de53838f 100644 --- a/v1.1/docs/admin/authorization.md +++ b/v1.1/docs/admin/authorization.md @@ -85,7 +85,7 @@ To permit an action Policy with an unset namespace applies regardless of namespa 3. Kubelet can read and write events: `{"user":"kubelet", "resource": "events"}` 4.
Bob can just read pods in namespace "projectCaribou": `{"user":"bob", "resource": "pods", "readonly": true, "namespace": "projectCaribou"}` -[Complete file example](http://releases.k8s.io/release-1.1/pkg/auth/authorizer/abac/example_policy_file.jsonl) +[Complete file example](http://releases.k8s.io/{{page.githubbranch}}/pkg/auth/authorizer/abac/example_policy_file.jsonl) ### A quick note on service accounts diff --git a/v1.1/docs/admin/cluster-components.md b/v1.1/docs/admin/cluster-components.md index 389e4e19b2f..711b52046f3 100644 --- a/v1.1/docs/admin/cluster-components.md +++ b/v1.1/docs/admin/cluster-components.md @@ -15,7 +15,7 @@ unsatisfied). Master components could in theory be run on any node in the cluster. However, for simplicity, current set up scripts typically start all master components on the same VM, and does not run user containers on this VM. See -[high-availability.md](high-availability) for an example multi-master-VM setup. +[high-availability.md](/{{page.version}}/docs/admin/high-availability) for an example multi-master-VM setup. Even in the future, when Kubernetes is fully self-hosting, it will probably be wise to only allow master components to schedule on a subset of nodes, to limit @@ -24,19 +24,19 @@ node-compromising security exploit. ### kube-apiserver -[kube-apiserver](kube-apiserver) exposes the Kubernetes API; it is the front-end for the +[kube-apiserver](/{{page.version}}/docs/admin/kube-apiserver) exposes the Kubernetes API; it is the front-end for the Kubernetes control plane. It is designed to scale horizontally (i.e., one scales -it by running more of them-- [high-availability.md](high-availability)). +it by running more of them-- [high-availability.md](/{{page.version}}/docs/admin/high-availability)). ### etcd -[etcd](etcd) is used as Kubernetes' backing store. All cluster data is stored here. +[etcd](/{{page.version}}/docs/admin/etcd) is used as Kubernetes' backing store. All cluster data is stored here. Proper administration of a Kubernetes cluster includes a backup plan for etcd's data. ### kube-controller-manager -[kube-controller-manager](kube-controller-manager) is a binary that runs controllers, which are the +[kube-controller-manager](/{{page.version}}/docs/admin/kube-controller-manager) is a binary that runs controllers, which are the background threads that handle routine tasks in the cluster. Logically, each controller is a separate process, but to reduce the number of moving pieces in the system, they are all compiled into a single binary and run in a single @@ -57,7 +57,7 @@ These controllers include: ### kube-scheduler -[kube-scheduler](kube-scheduler) watches newly created pods that have no node assigned, and +[kube-scheduler](/{{page.version}}/docs/admin/kube-scheduler) watches newly created pods that have no node assigned, and selects a node for them to run on. ### addons @@ -65,17 +65,17 @@ selects a node for them to run on. Addons are pods and services that implement cluster features. They don't run on the master VM, but currently the default setup scripts that make the API calls to create these pods and services does run on the master VM. See: -[kube-master-addons](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/kube-master-addons/kube-master-addons.sh) +[kube-master-addons](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/salt/kube-master-addons/kube-master-addons.sh) Addon objects are created in the "kube-system" namespace. 
Example addons are: -* [DNS](http://releases.k8s.io/release-1.1/cluster/addons/dns/) provides cluster local DNS. -* [kube-ui](http://releases.k8s.io/release-1.1/cluster/addons/kube-ui/) provides a graphical UI for the cluster. -* [fluentd-elasticsearch](http://releases.k8s.io/release-1.1/cluster/addons/fluentd-elasticsearch/) provides - log storage. Also see the [gcp version](http://releases.k8s.io/release-1.1/cluster/addons/fluentd-gcp/). -* [cluster-monitoring](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/) provides +* [DNS](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/) provides cluster local DNS. +* [kube-ui](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/kube-ui/) provides a graphical UI for the +cluster. +* [fluentd-elasticsearch](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/fluentd-elasticsearch/) provides + log storage. Also see the [gcp version](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/fluentd-gcp/). +* [cluster-monitoring](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/) provides monitoring for the cluster. ## Node components @@ -85,7 +85,7 @@ the Kubernetes runtime environment. ### kubelet -[kubelet](kubelet) is the primary node agent. It: +[kubelet](/{{page.version}}/docs/admin/kubelet) is the primary node agent. It: * Watches for pods that have been assigned to its node (either by apiserver or via local configuration file) and: * Mounts the pod's required volumes @@ -98,7 +98,7 @@ the Kubernetes runtime environment. ### kube-proxy -[kube-proxy](kube-proxy) enables the Kubernetes service abstraction by maintaining +[kube-proxy](/{{page.version}}/docs/admin/kube-proxy) enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding. ### docker diff --git a/v1.1/docs/admin/cluster-large.md b/v1.1/docs/admin/cluster-large.md index 206b59456e2..a4499434b49 100644 --- a/v1.1/docs/admin/cluster-large.md +++ b/v1.1/docs/admin/cluster-large.md @@ -13,7 +13,7 @@ At v1.0, Kubernetes supports clusters up to 100 nodes with 30 pods per node and A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by a "master" (the cluster-level control plane). -Normally the number of nodes in a cluster is controlled by the the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/release-1.1/cluster/gce/config-default.sh)). +Normally the number of nodes in a cluster is controlled by the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/{{page.githubbranch}}/cluster/gce/config-default.sh)). Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run in to quota issues and fail to bring the cluster up.
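For illustration, the node count can be raised in one shot at bring-up time. This sketch assumes `config-default.sh` honors an environment override of the form `NUM_MINIONS=${NUM_MINIONS:-...}`, as the GCE scripts do; if your provider's script does not, edit the file directly instead:

```shell
# A minimal sketch of requesting a larger cluster at bring-up time.
# Assumes config-default.sh reads NUM_MINIONS from the environment;
# cloud provider quota limits may still cause kube-up.sh to fail.
export KUBERNETES_PROVIDER=gce
export NUM_MINIONS=50
cluster/kube-up.sh
```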
@@ -56,14 +56,14 @@ These limits, however, are based on data collected from addons running on 4-node To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following: - Scale memory and CPU limits for each of the following addons, if used, along with the size of cluster (there is one replica of each handling the entire cluster so memory and CPU usage tends to grow proportionally with size/load on cluster): - - Heapster ([GCM/GCL backed](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/google/heapster-controller.yaml), [InfluxDB backed](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/influxdb/heapster-controller.yaml), [InfluxDB/GCL backed](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/googleinfluxdb/heapster-controller-combined.yaml), [standalone](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/standalone/heapster-controller.yaml)) - * [InfluxDB and Grafana](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml) - * [skydns, kube2sky, and dns etcd](http://releases.k8s.io/release-1.1/cluster/addons/dns/skydns-rc.yaml.in) - * [Kibana](http://releases.k8s.io/release-1.1/cluster/addons/fluentd-elasticsearch/kibana-controller.yaml) + - Heapster ([GCM/GCL backed](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/google/heapster-controller.yaml), [InfluxDB backed](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/influxdb/heapster-controller.yaml), [InfluxDB/GCL backed](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/googleinfluxdb/heapster-controller-combined.yaml), [standalone](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/standalone/heapster-controller.yaml)) + * [InfluxDB and Grafana](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml) + * [skydns, kube2sky, and dns etcd](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/skydns-rc.yaml.in) + * [Kibana](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/fluentd-elasticsearch/kibana-controller.yaml) * Scale number of replicas for the following addons, if used, along with the size of cluster (there are multiple replicas of each so increasing replicas should help handle increased load, but, since load per replica also increases slightly, also consider increasing CPU/memory limits): - * [elasticsearch](http://releases.k8s.io/release-1.1/cluster/addons/fluentd-elasticsearch/es-controller.yaml) + * [elasticsearch](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/fluentd-elasticsearch/es-controller.yaml) * Increase memory and CPU limits slightly for each of the following addons, if used, along with the size of cluster (there is one replica per node but CPU/memory usage increases slightly along with cluster load/size as well): - * [FluentD with ElasticSearch Plugin](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/fluentd-es/fluentd-es.yaml) - * [FluentD with GCP Plugin](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml) + * [FluentD with ElasticSearch Plugin](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/salt/fluentd-es/fluentd-es.yaml) + * [FluentD with GCP Plugin](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml) For directions on 
how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](/{{page.version}}/docs/user-guide/compute-resources/#troubleshooting). \ No newline at end of file diff --git a/v1.1/docs/admin/cluster-management.md b/v1.1/docs/admin/cluster-management.md index b4b7a69664f..2d36198b0fd 100644 --- a/v1.1/docs/admin/cluster-management.md +++ b/v1.1/docs/admin/cluster-management.md @@ -63,7 +63,7 @@ recommend testing the upgrade on an experimental cluster before performing the u ## Resizing a cluster -If your cluster runs short on resources you can easily add more machines to it if your cluster is running in [Node self-registration mode](node/#self-registration-of-nodes). +If your cluster runs short on resources you can easily add more machines to it if your cluster is running in [Node self-registration mode](/{{page.version}}/docs/admin/node/#self-registration-of-nodes). If you're using GCE or GKE it's done by resizing Instance Group managing your Nodes. It can be accomplished by modifying number of instances on `Compute > Compute Engine > Instance groups > your group > Edit group` [Google Cloud Console page](https://console.developers.google.com) or using gcloud CLI: ```shell @@ -145,7 +145,7 @@ kubectl replace nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedu If you deleted the node's VM instance and created a new one, then a new schedulable node resource will be created automatically when you create a new VM instance (if you're using a cloud provider that supports -node discovery; currently this is only Google Compute Engine, not including CoreOS on Google Compute Engine using kube-register). See [Node](node) for more details. +node discovery; currently this is only Google Compute Engine, not including CoreOS on Google Compute Engine using kube-register). See [Node](/{{page.version}}/docs/admin/node) for more details. ## Advanced Topics diff --git a/v1.1/docs/admin/cluster-troubleshooting.md b/v1.1/docs/admin/cluster-troubleshooting.md index 245d3442910..ca9c164449e 100644 --- a/v1.1/docs/admin/cluster-troubleshooting.md +++ b/v1.1/docs/admin/cluster-troubleshooting.md @@ -89,7 +89,7 @@ Mitigations: - Action use IaaS providers reliable storage (e.g GCE PD or AWS EBS volume) for VMs with apiserver+etcd - Mitigates: Apiserver backing storage lost -- Action: Use (experimental) [high-availability](high-availability) configuration +- Action: Use (experimental) [high-availability](/{{page.version}}/docs/admin/high-availability) configuration - Mitigates: Master VM shutdown or master components (scheduler, API server, controller-managing) crashing - Will tolerate one or more simultaneous node or component failures - Mitigates: Apiserver backing storage (i.e., etcd's data directory) lost @@ -108,5 +108,5 @@ Mitigations: - Mitigates: Node shutdown - Mitigates: Kubelet software fault -- Action: [Multiple independent clusters](multi-cluster) (and avoid making risky changes to all clusters at once) +- Action: [Multiple independent clusters](/{{page.version}}/docs/admin/multi-cluster) (and avoid making risky changes to all clusters at once) - Mitigates: Everything listed above. 
\ No newline at end of file diff --git a/v1.1/docs/admin/daemons.md b/v1.1/docs/admin/daemons.md index 610a0fa250a..af69017a261 100644 --- a/v1.1/docs/admin/daemons.md +++ b/v1.1/docs/admin/daemons.md @@ -75,7 +75,7 @@ Normally, the machine that a pod runs on is selected by the Kubernetes scheduler created by the Daemon controller have the machine already selected (`.spec.nodeName` is specified when the pod is created, so it is ignored by the scheduler). Therefore: - - the [`unschedulable`](node/#manual-node-administration) field of a node is not respected + - the [`unschedulable`](/{{page.version}}/docs/admin/node/#manual-node-administration) field of a node is not respected by the daemon set controller. - daemon set controller can make pods even when the scheduler has not been started, which can help cluster bootstrap. @@ -140,7 +140,7 @@ use a Daemon Set rather than creating individual pods. ### Static Pods It is possible to create pods by writing a file to a certain directory watched by Kubelet. These -are called [static pods](static-pods). +are called [static pods](/{{page.version}}/docs/admin/static-pods). Unlike DaemonSet, static pods cannot be managed with kubectl or other Kubernetes API clients. Static pods do not depend on the apiserver, making them useful in cluster bootstrapping cases. Also, static pods may be deprecated in the future. diff --git a/v1.1/docs/admin/dns.md b/v1.1/docs/admin/dns.md index 4b41105af4d..a275a91283a 100644 --- a/v1.1/docs/admin/dns.md +++ b/v1.1/docs/admin/dns.md @@ -1,7 +1,7 @@ --- title: "DNS Integration with Kubernetes" --- -As of Kubernetes 0.8, DNS is offered as a [cluster add-on](http://releases.k8s.io/release-1.1/cluster/addons/README.md). +As of Kubernetes 0.8, DNS is offered as a [cluster add-on](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/README.md). If enabled, a DNS Pod and Service will be scheduled on the cluster, and the kubelets will be configured to tell individual containers to use the DNS Service's IP to resolve DNS names. @@ -36,4 +36,4 @@ time. ## For more information -See [the docs for the DNS cluster addon](http://releases.k8s.io/release-1.1/cluster/addons/dns/README.md). \ No newline at end of file +See [the docs for the DNS cluster addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md). \ No newline at end of file diff --git a/v1.1/docs/admin/etcd.md b/v1.1/docs/admin/etcd.md index f0f1eccc434..b2b0e213b1b 100644 --- a/v1.1/docs/admin/etcd.md +++ b/v1.1/docs/admin/etcd.md @@ -13,7 +13,7 @@ internet at large), because access to etcd is equivalent to root in your cluster. Data Reliability: for reasonable safety, either etcd needs to be run as a -[cluster](high-availability/#clustering-etcd) (multiple machines each running +[cluster](/{{page.version}}/docs/admin/high-availability/#clustering-etcd) (multiple machines each running etcd) or etcd's data directory should be located on durable storage (e.g., GCE's persistent disk). In either case, if high availability is required--as it might be in a production cluster--the data directory ought to be [backed up @@ -23,14 +23,14 @@ to reduce downtime in case of corruption. ## Default configuration The default setup scripts use kubelet's file-based static pods feature to run etcd in a -[pod](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/etcd/etcd.manifest). This manifest should only +[pod](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/salt/etcd/etcd.manifest). This manifest should only be run on master VMs. 
The default location that kubelet scans for manifests is `/etc/kubernetes/manifests/`. ## Kubernetes's usage of etcd By default, Kubernetes objects are stored under the `/registry` key in etcd. -This path can be prefixed by using the [kube-apiserver](kube-apiserver) flag +This path can be prefixed by using the [kube-apiserver](/{{page.version}}/docs/admin/kube-apiserver) flag `--etcd-prefix="/foo"`. `etcd` is the only place that Kubernetes keeps state. diff --git a/v1.1/docs/admin/high-availability.md b/v1.1/docs/admin/high-availability.md index 573ae211693..626070c2594 100644 --- a/v1.1/docs/admin/high-availability.md +++ b/v1.1/docs/admin/high-availability.md @@ -53,11 +53,11 @@ choices. For example, on systemd-based systems (e.g. RHEL, CentOS), you can run If you are extending from a standard Kubernetes installation, the `kubelet` binary should already be present on your system. You can run `which kubelet` to determine if the binary is in fact installed. If it is not installed, you should install the [kubelet binary](https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/linux/amd64/kubelet), the -[kubelet init file](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/kubelet/initd) and [high-availability/default-kubelet](high-availability/default-kubelet) +[kubelet init file](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/salt/kubelet/initd) and [high-availability/default-kubelet](/{{page.version}}/docs/admin/high-availability/default-kubelet) scripts. -If you are using monit, you should also install the monit daemon (`apt-get install monit`) and the [high-availability/monit-kubelet](high-availability/monit-kubelet) and -[high-availability/monit-docker](high-availability/monit-docker) configs. +If you are using monit, you should also install the monit daemon (`apt-get install monit`) and the [high-availability/monit-kubelet](/{{page.version}}/docs/admin/high-availability/monit-kubelet) and +[high-availability/monit-docker](/{{page.version}}/docs/admin/high-availability/monit-docker) configs. On systemd systems you `systemctl enable kubelet` and `systemctl enable docker`. @@ -86,7 +86,7 @@ First, hit the etcd discovery service to create a new token: curl https://discovery.etcd.io/new?size=3 ``` -On each node, copy the [etcd.yaml](high-availability/etcd.yaml) file into `/etc/kubernetes/manifests/etcd.yaml` +On each node, copy the [etcd.yaml](/{{page.version}}/docs/admin/high-availability/etcd.yaml) file into `/etc/kubernetes/manifests/etcd.yaml` The kubelet on each node actively monitors the contents of that directory, and it will create an instance of the `etcd` server from the definition of the pod specified in `etcd.yaml`. @@ -156,7 +156,7 @@ The easiest way to create this directory, may be to copy it from the master node ### Starting the API Server -Once these files exist, copy the [kube-apiserver.yaml](high-availability/kube-apiserver.yaml) into `/etc/kubernetes/manifests/` on each master node. +Once these files exist, copy the [kube-apiserver.yaml](/{{page.version}}/docs/admin/high-availability/kube-apiserver.yaml) into `/etc/kubernetes/manifests/` on each master node. The kubelet monitors this directory, and will automatically create an instance of the `kube-apiserver` container using the pod definition specified in the file. @@ -185,7 +185,7 @@ master election. On each of the three apiserver nodes, we run a small utility a election protocol using etcd "compare and swap". 
If the apiserver node wins the election, it starts the master component it is managing (e.g. the scheduler), if it loses the election, it ensures that any master components running on the node (e.g. the scheduler) are stopped. -In the future, we expect to more tightly integrate this lease-locking into the scheduler and controller-manager binaries directly, as described in the [high availability design proposal](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/proposals/high-availability.md) +In the future, we expect to more tightly integrate this lease-locking into the scheduler and controller-manager binaries directly, as described in the [high availability design proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/high-availability.md) ### Installing configuration files @@ -197,11 +197,11 @@ touch /var/log/kube-controller-manager.log ``` Next, set up the descriptions of the scheduler and controller manager pods on each node. -by copying [kube-scheduler.yaml](high-availability/kube-scheduler.yaml) and [kube-controller-manager.yaml](high-availability/kube-controller-manager.yaml) into the `/srv/kubernetes/` directory. +by copying [kube-scheduler.yaml](/{{page.version}}/docs/admin/high-availability/kube-scheduler.yaml) and [kube-controller-manager.yaml](/{{page.version}}/docs/admin/high-availability/kube-controller-manager.yaml) into the `/srv/kubernetes/` directory. ### Running the podmaster -Now that the configuration files are in place, copy the [podmaster.yaml](high-availability/podmaster.yaml) config file into `/etc/kubernetes/manifests/` +Now that the configuration files are in place, copy the [podmaster.yaml](/{{page.version}}/docs/admin/high-availability/podmaster.yaml) config file into `/etc/kubernetes/manifests/` As before, the kubelet on the node monitors this directory, and will start an instance of the podmaster using the pod specification provided in `podmaster.yaml`. diff --git a/v1.1/docs/admin/limitrange/index.md b/v1.1/docs/admin/limitrange/index.md index db756720dfb..f6c4ea6d18b 100644 --- a/v1.1/docs/admin/limitrange/index.md +++ b/v1.1/docs/admin/limitrange/index.md @@ -26,7 +26,7 @@ This example demonstrates how limits can be applied to a Kubernetes namespace to min/max resource limits per pod. In addition, this example demonstrates how you can apply default resource limits to pods in the absence of an end-user specified value. -See [LimitRange design doc](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/admission_control_limit_range.md) for more information. For a detailed description of the Kubernetes resource model, see [Resources](/{{page.version}}/docs/user-guide/compute-resources) +See [LimitRange design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_limit_range.md) for more information. For a detailed description of the Kubernetes resource model, see [Resources](/{{page.version}}/docs/user-guide/compute-resources) ## Step 0: Prerequisites diff --git a/v1.1/docs/admin/multi-cluster.md b/v1.1/docs/admin/multi-cluster.md index 87b719ee51a..d7f72d1cbf7 100644 --- a/v1.1/docs/admin/multi-cluster.md +++ b/v1.1/docs/admin/multi-cluster.md @@ -7,7 +7,7 @@ This document describes some of the issues to consider when making a decision ab Note that at present, Kubernetes does not offer a mechanism to aggregate multiple clusters into a single virtual cluster.
However, -we [plan to do this in the future](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/proposals/federation.md). +we [plan to do this in the future](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/federation.md). ## Scope of a single cluster diff --git a/v1.1/docs/admin/namespaces.md b/v1.1/docs/admin/namespaces.md index 663c035952d..44660c1cb3e 100644 --- a/v1.1/docs/admin/namespaces.md +++ b/v1.1/docs/admin/namespaces.md @@ -37,7 +37,7 @@ The Namespace provides a unique scope for: ## Usage -Look [here](namespaces/) for an in depth example of namespaces. +Look [here](/{{page.version}}/docs/admin/namespaces/) for an in-depth example of namespaces. ### Viewing namespaces @@ -84,13 +84,13 @@ to define *Hard* resource usage limits that a *Namespace* may consume. A limit range defines min/max constraints on the amount of resources a single entity can consume in a *Namespace*. -See [Admission control: Limit Range](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/admission_control_limit_range.md) +See [Admission control: Limit Range](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_limit_range.md) A namespace can be in one of two phases: * `Active` the namespace is in use * `Terminating` the namespace is being deleted, and can not be used for new objects -See the [design doc](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/namespaces.md#phases) for more details. +See the [design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#phases) for more details. ### Creating a new namespace @@ -105,7 +105,7 @@ metadata: Note that the name of your namespace must be a DNS compatible label. -More information on the `finalizers` field can be found in the namespace [design doc](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/namespaces.md#finalizers). +More information on the `finalizers` field can be found in the namespace [design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#finalizers). Then run: @@ -132,7 +132,7 @@ This delete is asynchronous, so for a time you will see the namespace in the `Te ## Namespaces and DNS -When you create a [Service](/{{page.version}}/docs/user-guide/services), it creates a corresponding [DNS entry](dns). +When you create a [Service](/{{page.version}}/docs/user-guide/services), it creates a corresponding [DNS entry](/{{page.version}}/docs/admin/dns). This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means that if a container just uses `<service-name>` it will resolve to the service which is local to a namespace. This is useful for using the same configuration across @@ -141,5 +141,5 @@ across namespaces, you need to use the fully qualified domain name (FQDN).
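To make the short-name and FQDN forms concrete, here is an illustrative lookup from inside the cluster. The `busybox` pod, the `my-service` Service, and the `development` namespace are hypothetical names used only for this sketch:

```shell
# Resolve a Service by its short name (same namespace as the pod) and
# by its FQDN (works across namespaces). All names are placeholders.
kubectl exec busybox -- nslookup my-service
kubectl exec busybox -- nslookup my-service.development.svc.cluster.local
```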
## Design -Details of the design of namespaces in Kubernetes, including a [detailed example](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/namespaces.md#example-openshift-origin-managing-a-kubernetes-namespace) -can be found in the [namespaces design doc](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/namespaces.md) \ No newline at end of file +Details of the design of namespaces in Kubernetes, including a [detailed example](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md#example-openshift-origin-managing-a-kubernetes-namespace) +can be found in the [namespaces design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md) \ No newline at end of file diff --git a/v1.1/docs/admin/namespaces/index.md b/v1.1/docs/admin/namespaces/index.md index 8a0fbe0a72c..84d10b25756 100644 --- a/v1.1/docs/admin/namespaces/index.md +++ b/v1.1/docs/admin/namespaces/index.md @@ -1,7 +1,7 @@ --- title: "Kubernetes Namespaces" --- -Kubernetes _[namespaces](/{{page.version}}/docs/admin/namespaces)_ help different projects, teams, or customers to share a Kubernetes cluster. +Kubernetes _namespaces_ help different projects, teams, or customers to share a Kubernetes cluster. It does this by providing the following: @@ -49,7 +49,7 @@ One pattern this organization could follow is to partition the Kubernetes cluste Let's create two new namespaces to hold our work. -Use the file [`namespace-dev.json`](namespace-dev.json) which describes a development namespace: +Use the file [`namespace-dev.json`](/{{page.version}}/docs/admin/namespaces/namespace-dev.json) which describes a development namespace: @@ -66,7 +66,7 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo } ``` -[Download example](namespace-dev.json) +[Download example](/{{page.version}}/docs/admin/namespaces/namespace-dev.json) Create the development namespace using kubectl. diff --git a/v1.1/docs/admin/networking.md b/v1.1/docs/admin/networking.md index 8a9e1875d19..1864e0bc253 100644 --- a/v1.1/docs/admin/networking.md +++ b/v1.1/docs/admin/networking.md @@ -163,7 +163,7 @@ people have reported success with Flannel and Kubernetes. ### OpenVSwitch -[OpenVSwitch](ovs-networking) is a somewhat more mature but also +[OpenVSwitch](/{{page.version}}/docs/admin/ovs-networking) is a somewhat more mature but also complicated way to build an overlay network. This is endorsed by several of the "Big Shops" for networking. @@ -181,4 +181,4 @@ IPs. The early design of the networking model and its rationale, and some future plans are described in more detail in the [networking design -document](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/networking.md). \ No newline at end of file +document](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/networking.md). \ No newline at end of file diff --git a/v1.1/docs/admin/node.md b/v1.1/docs/admin/node.md index 07e9e1bd51b..d7e83eb0cc8 100644 --- a/v1.1/docs/admin/node.md +++ b/v1.1/docs/admin/node.md @@ -10,7 +10,7 @@ title: "Node" may be a VM or physical machine, depending on the cluster. Each node has the services necessary to run [Pods](/{{page.version}}/docs/user-guide/pods) and is managed by the master components. The services on a node include docker, kubelet and network proxy.
See -[The Kubernetes Node](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/architecture.md#the-kubernetes-node) section in the +[The Kubernetes Node](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/architecture.md#the-kubernetes-node) section in the architecture design doc for more details. ## Node Status diff --git a/v1.1/docs/admin/resource-quota.md b/v1.1/docs/admin/resource-quota.md index e4181821924..8da9a434504 100755 --- a/v1.1/docs/admin/resource-quota.md +++ b/v1.1/docs/admin/resource-quota.md @@ -147,8 +147,8 @@ restrictions around nodes: pods from several namespaces may run on the same node ## Example -See a [detailed example for how to use resource quota](resourcequota/). +See a [detailed example for how to use resource quota](/{{page.version}}/docs/admin/resourcequota/). ## Read More -See [ResourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/admission_control_resource_quota.md) for more information. +See [ResourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_resource_quota.md) for more information. \ No newline at end of file diff --git a/v1.1/docs/admin/resourcequota/index.md b/v1.1/docs/admin/resourcequota/index.md index bb784cb8049..e02f4251bde 100644 --- a/v1.1/docs/admin/resourcequota/index.md +++ b/v1.1/docs/admin/resourcequota/index.md @@ -3,7 +3,7 @@ title: "Resource Quota" --- This example demonstrates how [resource quota](/{{page.version}}/docs/admin/admission-controllers/#resourcequota) and [limitsranger](/{{page.version}}/docs/admin/admission-controllers/#limitranger) can be applied to a Kubernetes namespace. -See [ResourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/admission_control_resource_quota.md) for more information. +See [ResourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_resource_quota.md) for more information. This example assumes you have a functional Kubernetes setup. diff --git a/v1.1/docs/admin/salt.md b/v1.1/docs/admin/salt.md index c1a7bfb7592..72b9b7a356e 100644 --- a/v1.1/docs/admin/salt.md +++ b/v1.1/docs/admin/salt.md @@ -99,4 +99,4 @@ We should define a grains.conf key that captures more specifically what network ## Further reading -The [cluster/saltbase](http://releases.k8s.io/release-1.1/cluster/saltbase/) tree has more details on the current SaltStack configuration. \ No newline at end of file +The [cluster/saltbase](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/) tree has more details on the current SaltStack configuration. \ No newline at end of file diff --git a/v1.1/docs/api.md b/v1.1/docs/api.md index af2000a6477..7141e074861 100644 --- a/v1.1/docs/api.md +++ b/v1.1/docs/api.md @@ -30,7 +30,7 @@ multiple API versions, each at a different API path, such as `/api/v1` or We chose to version at the API level rather than at the resource or field level to ensure that the API presents a clear, consistent view of system resources and behavior, and to enable controlling access to end-of-lifed and/or experimental APIs. Note that API versioning and Software versioning are only indirectly related. 
The [API and release -versioning proposal](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/versioning.md) describes the relationship between API versioning and +versioning proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/versioning.md) describes the relationship between API versioning and software versioning. @@ -60,7 +60,7 @@ in more detail in the [API Changes documentation](/{{page.version}}/docs/devel/a ## API groups To make it easier to extend the Kubernetes API, we are in the process of implementing [*API -groups*](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/proposals/api-group.md). These are simply different interfaces to read and/or modify the +groups*](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/api-group.md). These are simply different interfaces to read and/or modify the same underlying resources. The API group is specified in a REST path and in the `apiVersion` field of a serialized object. @@ -73,7 +73,7 @@ Currently there are two API groups in use: In the future we expect that there will be more API groups, all at REST path `/apis/$API_GROUP` and using `apiVersion: $API_GROUP/$VERSION`. We expect that there will be a way for (third parties to -create their own API groups](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/extending-api.md), and to avoid naming collisions. +create their own API groups](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/extending-api.md), and to avoid naming collisions. ## Enabling resources in the extensions group diff --git a/v1.1/docs/devel/api-conventions.md b/v1.1/docs/devel/api-conventions.md index 42555ac4703..4a0f1034b9e 100644 --- a/v1.1/docs/devel/api-conventions.md +++ b/v1.1/docs/devel/api-conventions.md @@ -139,13 +139,13 @@ In general, condition values may change back and forth, but some condition trans A typical oscillating condition type is `Ready`, which indicates the object was believed to be fully operational at the time it was last probed. A possible monotonic condition could be `Succeeded`. A `False` status for `Succeeded` would imply failure. An object that was still active would not have a `Succeeded` condition, or its status would be `Unknown`. -Some resources in the v1 API contain fields called **`phase`**, and associated `message`, `reason`, and other status fields. The pattern of using `phase` is deprecated. Newer API types should use conditions instead. Phase was essentially a state-machine enumeration field, that contradicted [system-design principles](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/principles.md#control-logic) and hampered evolution, since [adding new enum values breaks backward compatibility](/{{page.version}}/docs/devel/api_changes). Rather than encouraging clients to infer implicit properties from phases, we intend to explicitly expose the conditions that clients need to monitor. Conditions also have the benefit that it is possible to create some conditions with uniform meaning across all resource types, while still exposing others that are unique to specific resource types. See [#7856](http://issues.k8s.io/7856) for more details and discussion. +Some resources in the v1 API contain fields called **`phase`**, and associated `message`, `reason`, and other status fields. The pattern of using `phase` is deprecated. Newer API types should use conditions instead. 
Phase was essentially a state-machine enumeration field that contradicted [system-design principles](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/principles.md#control-logic) and hampered evolution, since [adding new enum values breaks backward compatibility](/{{page.version}}/docs/devel/api_changes). Rather than encouraging clients to infer implicit properties from phases, we intend to explicitly expose the conditions that clients need to monitor. Conditions also have the benefit that it is possible to create some conditions with uniform meaning across all resource types, while still exposing others that are unique to specific resource types. See [#7856](http://issues.k8s.io/7856) for more details and discussion. In condition types, and everywhere else they appear in the API, **`Reason`** is intended to be a one-word, CamelCase representation of the category of cause of the current status, and **`Message`** is intended to be a human-readable phrase or sentence, which may contain specific details of the individual occurrence. `Reason` is intended to be used in concise output, such as one-line `kubectl get` output, and in summarizing occurrences of causes, whereas `Message` is intended to be presented to users in detailed status explanations, such as `kubectl describe` output. Historical information status (e.g., last transition time, failure counts) is only provided with reasonable effort, and is not guaranteed to not be lost. -Status information that may be large (especially proportional in size to collections of other resources, such as lists of references to other objects -- see below) and/or rapidly changing, such as [resource usage](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/resources.md#usage-data), should be put into separate objects, with possibly a reference from the original object. This helps to ensure that GETs and watch remain reasonably efficient for the majority of clients, which may not need that data. +Status information that may be large (especially proportional in size to collections of other resources, such as lists of references to other objects -- see below) and/or rapidly changing, such as [resource usage](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/resources.md#usage-data), should be put into separate objects, possibly with a reference from the original object. This helps to ensure that GETs and watch remain reasonably efficient for the majority of clients, which may not need that data. Some resources report the `observedGeneration`, which is the `generation` most recently observed by the component responsible for acting upon changes to the desired state of the resource. This can be used, for instance, to ensure that the reported status reflects the most recent desired status. diff --git a/v1.1/docs/devel/api_changes.md b/v1.1/docs/devel/api_changes.md index 171c6b74c01..efbebb3f814 100644 --- a/v1.1/docs/devel/api_changes.md +++ b/v1.1/docs/devel/api_changes.md @@ -251,7 +251,7 @@ Breaking compatibility of a beta or stable API version, such as v1, is unaccepta Compatibility for experimental or alpha APIs is not strictly required, but breaking compatibility should not be done lightly, as it disrupts all users of the feature. Experimental APIs may be removed.
Alpha and beta API versions may be deprecated -and eventually removed wholesale, as described in the [versioning document](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/versioning.md). +and eventually removed wholesale, as described in the [versioning document](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/versioning.md). Document incompatible changes across API versions under the [conversion tips](/{{page.version}}/docs/api/). If your change is going to be backward incompatible or might be a breaking change for API @@ -494,7 +494,7 @@ doing! ## Write end-to-end tests -Check out the [E2E docs](e2e-tests) for detailed information about how to write end-to-end +Check out the [E2E docs](/{{page.version}}/docs/devel/e2e-tests) for detailed information about how to write end-to-end tests for your feature. ## Examples and docs diff --git a/v1.1/docs/devel/cherry-picks.md b/v1.1/docs/devel/cherry-picks.md index 067e41ff7fb..d3141fce326 100644 --- a/v1.1/docs/devel/cherry-picks.md +++ b/v1.1/docs/devel/cherry-picks.md @@ -22,7 +22,7 @@ particular, they may be self-merged by the release branch owner without fanfare, in the case the release branch owner knows the cherry pick was already requested - this should not be the norm, but it may happen. -[Contributor License Agreements](http://releases.k8s.io/release-1.1/CONTRIBUTING.md) is considered implicit +[Contributor License Agreements](http://releases.k8s.io/{{page.githubbranch}}/CONTRIBUTING.md) is considered implicit for all code within cherry-pick pull requests, ***unless there is a large conflict***. diff --git a/v1.1/docs/devel/client-libraries.md b/v1.1/docs/devel/client-libraries.md index 850bdec2767..025102f19b0 100644 --- a/v1.1/docs/devel/client-libraries.md +++ b/v1.1/docs/devel/client-libraries.md @@ -3,7 +3,7 @@ title: "Kubernetes API client libraries" --- ### Supported - * [Go](http://releases.k8s.io/release-1.1/pkg/client/) + * [Go](http://releases.k8s.io/{{page.githubbranch}}/pkg/client/) ### User Contributed diff --git a/v1.1/docs/devel/coding-conventions.md b/v1.1/docs/devel/coding-conventions.md index a90040e6298..648ea02cbd5 100644 --- a/v1.1/docs/devel/coding-conventions.md +++ b/v1.1/docs/devel/coding-conventions.md @@ -2,13 +2,11 @@ title: "devel/coding-conventions" --- -Code conventions - - Bash - https://google-styleguide.googlecode.com/svn/trunk/shell.xml - Ensure that build, release, test, and cluster-management scripts run on OS X - Go - - Ensure your code passes the [presubmit checks](development/#hooks) + - Ensure your code passes the [presubmit checks](/{{page.version}}/docs/devel/development/#hooks) - [Go Code Review Comments](https://github.com/golang/go/wiki/CodeReviewComments) - [Effective Go](https://golang.org/doc/effective_go) - Comment your code. 
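A quick way to approximate the presubmit checks mentioned above before pushing, assuming a standard Go toolchain and the repository root as the working directory (the actual hooks may check more than this):

```shell
# Sketch of a local pre-push check; the presubmit hooks enforce
# formatting and vet cleanliness along these lines.
gofmt -l pkg/ cmd/ plugin/   # list files whose formatting deviates
go vet ./...                 # flag suspicious constructs before review
```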
@@ -27,8 +25,8 @@ Code conventions - API conventions - [API changes](/{{page.version}}/docs/devel/api_changes) - [API conventions](/{{page.version}}/docs/devel/api-conventions) - - [Kubectl conventions](kubectl-conventions) - - [Logging conventions](logging) + - [Kubectl conventions](/{{page.version}}/docs/devel/kubectl-conventions) + - [Logging conventions](/{{page.version}}/docs/devel/logging) Testing conventions @@ -58,6 +56,3 @@ Coding advice - Go - [Go landmines](https://gist.github.com/lavalamp/4bd23295a9f32706a48f) - - - diff --git a/v1.1/docs/devel/collab.md b/v1.1/docs/devel/collab.md index 1a00723f3d1..38446765a5a 100644 --- a/v1.1/docs/devel/collab.md +++ b/v1.1/docs/devel/collab.md @@ -38,7 +38,4 @@ PRs that are incorrectly judged to be merge-able, may be reverted and subject to ## Holds -Any maintainer or core contributor who wants to review a PR but does not have time immediately may put a hold on a PR simply by saying so on the PR discussion and offering an ETA measured in single-digit days at most. Any PR that has a hold shall not be merged until the person who requested the hold acks the review, withdraws their hold, or is overruled by a preponderance of maintainers. - - - +Any maintainer or core contributor who wants to review a PR but does not have time immediately may put a hold on a PR simply by saying so on the PR discussion and offering an ETA measured in single-digit days at most. Any PR that has a hold shall not be merged until the person who requested the hold acks the review, withdraws their hold, or is overruled by a preponderance of maintainers. \ No newline at end of file diff --git a/v1.1/docs/devel/developer-guides/vagrant.md b/v1.1/docs/devel/developer-guides/vagrant.md index 3b9408deef3..c4ec53a4946 100644 --- a/v1.1/docs/devel/developer-guides/vagrant.md +++ b/v1.1/docs/devel/developer-guides/vagrant.md @@ -251,7 +251,7 @@ my-nginx nginx run=my-nginx 3 ``` We did not start any services, hence there are none listed. But we see three replicas displayed properly. -Check the [guestbook](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/guestbook/) application to learn how to create a service. +Check the [guestbook](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) application to learn how to create a service. You can already play with scaling the replicas with: ```shell diff --git a/v1.1/docs/devel/development.md b/v1.1/docs/devel/development.md index 9cfec6a6a39..ad11e15e995 100644 --- a/v1.1/docs/devel/development.md +++ b/v1.1/docs/devel/development.md @@ -3,7 +3,7 @@ title: "Development Guide" --- # Releases and Official Builds -Official releases are built in Docker containers. Details are [here](http://releases.k8s.io/release-1.1/build/README.md). You can do simple builds and development with just a local Docker installation. If want to build go locally outside of docker, please continue below. +Official releases are built in Docker containers. Details are [here](http://releases.k8s.io/{{page.githubbranch}}/build/README.md). You can do simple builds and development with just a local Docker installation. If you want to build Go locally outside of Docker, please continue below. ## Go development environment @@ -66,7 +66,7 @@ git push -f origin myfeature 1. Visit https://github.com/$YOUR_GITHUB_USERNAME/kubernetes 2. Click the "Compare and pull request" button next to your "myfeature" branch. -3. Check out the pull request [process](pull-requests) for more details +3.
Check out the pull request [process](/{{page.version}}/docs/devel/pull-requests) for more details ### When to retain commits and when to squash @@ -80,7 +80,7 @@ fixups (e.g. automated doc formatting), use one or more commits for the changes to tooling and a final commit to apply the fixup en masse. This makes reviews much easier. -See [Faster Reviews](faster_reviews) for more details. +See [Faster Reviews](/{{page.version}}/docs/devel/faster_reviews) for more details. ## godep and dependency management @@ -297,18 +297,18 @@ go run hack/e2e.go -v -ctl='delete pod foobar' ## Conformance testing End-to-end testing, as described above, is for [development -distributions](writing-a-getting-started-guide). A conformance test is used on -a [versioned distro](writing-a-getting-started-guide). +distributions](/{{page.version}}/docs/devel/writing-a-getting-started-guide). A conformance test is used on +a [versioned distro](/{{page.version}}/docs/devel/writing-a-getting-started-guide). The conformance test runs a subset of the e2e-tests against a manually-created cluster. It does not require support for up/push/down and other operations. To run a conformance test, you need to know the IP of the master for your cluster and the authorization arguments to use. The conformance test is intended to run against a cluster at a specific binary release of Kubernetes. -See [conformance-test.sh](http://releases.k8s.io/release-1.1/hack/conformance-test.sh). +See [conformance-test.sh](http://releases.k8s.io/{{page.githubbranch}}/hack/conformance-test.sh). ## Testing out flaky tests -[Instructions here](flaky-tests) +[Instructions here](/{{page.version}}/docs/devel/flaky-tests) ## Regenerating the CLI documentation diff --git a/v1.1/docs/devel/getting-builds.md b/v1.1/docs/devel/getting-builds.md index 2ab4a8661de..61fb9efb2e1 100644 --- a/v1.1/docs/devel/getting-builds.md +++ b/v1.1/docs/devel/getting-builds.md @@ -1,7 +1,7 @@ --- title: "Getting Kubernetes Builds" --- -You can use [hack/get-build.sh](http://releases.k8s.io/release-1.1/hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build). +You can use [hack/get-build.sh](http://releases.k8s.io/{{page.githubbranch}}/hack/get-build.sh) to fetch builds, or use it as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our CI and GCE e2e tests (essentially a nightly build). Run `./hack/get-build.sh -h` for its usage. diff --git a/v1.1/docs/devel/index.md b/v1.1/docs/devel/index.md index 24875adbeff..350f60088ee 100644 --- a/v1.1/docs/devel/index.md +++ b/v1.1/docs/devel/index.md @@ -64,7 +64,7 @@ Guide](/{{page.version}}/docs/admin/). Authorization applies to all HTTP requests on the main apiserver port. This doc explains the available authorization implementations.
-* **Admission Control Plugins** ([admission_control](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/admission_control.md)) +* **Admission Control Plugins** ([admission_control](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control.md)) ## Building releases diff --git a/v1.1/docs/devel/scheduler.md b/v1.1/docs/devel/scheduler.md index 92f6318e83b..fad381259ca 100755 --- a/v1.1/docs/devel/scheduler.md +++ b/v1.1/docs/devel/scheduler.md @@ -21,30 +21,30 @@ divided by the node's capacity). Finally, the node with the highest priority is chosen (or, if there are multiple such nodes, then one of them is chosen at random). The code for this main scheduling loop is in the function `Schedule()` in -[plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/release-1.1/plugin/pkg/scheduler/generic_scheduler.go) +[plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/{{page.githubbranch}}/plugin/pkg/scheduler/generic_scheduler.go) ## Scheduler extensibility The scheduler is extensible: the cluster administrator can choose which of the pre-defined scheduling policies to apply, and can add new ones. The built-in predicates and priorities are -defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/release-1.1/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and -[plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/release-1.1/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively. +defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/{{page.githubbranch}}/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and +[plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/{{page.githubbranch}}/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively. The policies that are applied when scheduling can be chosen in one of two ways. Normally, the policies used are selected by the functions `defaultPredicates()` and `defaultPriorities()` in -[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/release-1.1/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). +[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/{{page.githubbranch}}/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). However, the choice of policies can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON file specifying which scheduling policies to use. See -[examples/scheduler-policy-config.json](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/scheduler-policy-config.json) for an example +[examples/scheduler-policy-config.json](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/scheduler-policy-config.json) for an example config file. (Note that the config file format is versioned; the API is defined in -[plugin/pkg/scheduler/api](http://releases.k8s.io/release-1.1/plugin/pkg/scheduler/api/)). +[plugin/pkg/scheduler/api](http://releases.k8s.io/{{page.githubbranch}}/plugin/pkg/scheduler/api/)). Thus to add a new scheduling policy, you should modify predicates.go or priorities.go, and either register the policy in `defaultPredicates()` or `defaultPriorities()`, or use a policy config file. 
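For illustration, a minimal policy file and scheduler invocation might look like the sketch below. The predicate and priority names come from the defaults discussed here, but the JSON schema is versioned (see `plugin/pkg/scheduler/api`), so treat the field layout as an approximation; the `--master` address is a placeholder:

```shell
# Hedged sketch: write a policy file, then point the scheduler at it.
cat > /etc/kubernetes/scheduler-policy.json <<'EOF'
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsResources"},
    {"name": "NoDiskConflict"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1}
  ]
}
EOF
kube-scheduler --master=127.0.0.1:8080 \
  --policy-config-file=/etc/kubernetes/scheduler-policy.json
```

Note that a policy file is typically taken as the complete list, so include every predicate and priority you still want applied.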
## Exploring the code If you want to get a global picture of how the scheduler works, you can start in -[plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/release-1.1/plugin/cmd/kube-scheduler/app/server.go) +[plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/{{page.githubbranch}}/plugin/cmd/kube-scheduler/app/server.go) diff --git a/v1.1/docs/devel/scheduler_algorithm.md b/v1.1/docs/devel/scheduler_algorithm.md index 7d751b27e72..d4046e8c9e6 100755 --- a/v1.1/docs/devel/scheduler_algorithm.md +++ b/v1.1/docs/devel/scheduler_algorithm.md @@ -1,20 +1,20 @@ --- title: "Scheduler Algorithm in Kubernetes" --- -For each unscheduled Pod, the Kubernetes scheduler tries to find a node across the cluster according to a set of rules. A general introduction to the Kubernetes scheduler can be found at [scheduler.md](scheduler). In this document, the algorithm of how to select a node for the Pod is explained. There are two steps before a destination node of a Pod is chosen. The first step is filtering all the nodes and the second is ranking the remaining nodes to find a best fit for the Pod. +For each unscheduled Pod, the Kubernetes scheduler tries to find a node across the cluster according to a set of rules. A general introduction to the Kubernetes scheduler can be found at [scheduler.md](/{{page.version}}/docs/devel/scheduler). This document explains the algorithm used to select a node for the Pod. There are two steps before a destination node of a Pod is chosen. The first step is filtering all the nodes and the second is ranking the remaining nodes to find the best fit for the Pod. ## Filtering the nodes The purpose of filtering the nodes is to filter out the nodes that do not meet certain requirements of the Pod. For example, if the free resource on a node (measured by the capacity minus the sum of the resource requests of all the Pods that already run on the node) is less than the Pod's required resource, the node should not be considered in the ranking phase so it is filtered out. Currently, there are several "predicates" implementing different filtering policies, including: - `NoDiskConflict`: Evaluate if a pod can fit due to the volumes it requests, and those that are already mounted. -- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of requests of all Pods on the node. To learn more about the resource QoS in Kubernetes, please check [QoS proposal](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/proposals/resource-qos.md). +- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of requests of all Pods on the node. To learn more about the resource QoS in Kubernetes, please check [QoS proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/resource-qos.md). - `PodFitsHostPorts`: Check if any HostPort required by the Pod is already occupied on the node. - `PodFitsHost`: Filter out all nodes except the one specified in the PodSpec's NodeName field. - `PodSelectorMatches`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field ([Here](/{{page.version}}/docs/user-guide/node-selection/) is an example of how to use `nodeSelector` field). - `CheckNodeLabelPresence`: Check if all the specified labels exist on a node or not, regardless of the value.
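As a concrete illustration of `PodSelectorMatches`, the sketch below labels a node and creates a pod whose `nodeSelector` must match it; `node-1` and the `disktype=ssd` label are hypothetical:

```shell
# Only nodes carrying disktype=ssd survive filtering for this pod.
kubectl label nodes node-1 disktype=ssd
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ssd
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: nginx
    image: nginx
EOF
```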
-The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/release-1.1/plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/release-1.1/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). +The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/{{page.githubbranch}}/plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/{{page.githubbranch}}/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). ## Ranking the nodes @@ -32,7 +32,7 @@ Currently, Kubernetes scheduler provides some practical priority functions, incl - `CalculateSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on the same node. - `CalculateAntiAffinityPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on nodes with the same value for a particular label. -The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/release-1.1/plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/release-1.1/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). Similar as predicates, you can combine the above priority functions and assign weight factors (positive number) to them as you want (check [scheduler.md](scheduler) for how to customize). +The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/{{page.githubbranch}}/plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/{{page.githubbranch}}/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). As with predicates, you can combine the above priority functions and assign weight factors (positive numbers) to them as you want (check [scheduler.md](/{{page.version}}/docs/devel/scheduler) for how to customize). diff --git a/v1.1/docs/devel/writing-a-getting-started-guide.md b/v1.1/docs/devel/writing-a-getting-started-guide.md index 6ab66166c83..547d5c457b3 100644 --- a/v1.1/docs/devel/writing-a-getting-started-guide.md +++ b/v1.1/docs/devel/writing-a-getting-started-guide.md @@ -37,7 +37,7 @@ These guidelines say *what* to do. See the Rationale section for *why*. own repo. - Add or update a row in [The Matrix](/{{page.version}}/docs/getting-started-guides/). - State the binary version of Kubernetes that you tested clearly in your Guide doc.
- - Setup a cluster and run the [conformance test](development/#conformance-testing) against it, and report the + - Set up a cluster and run the [conformance test](/{{page.version}}/docs/devel/development/#conformance-testing) against it, and report the results in your PR. - Versioned distros should typically not modify or add code in `cluster/`. That is just scripts for developer distros. diff --git a/v1.1/docs/getting-started-guides/aws.md b/v1.1/docs/getting-started-guides/aws.md index 9d2ee8416e4..5e9aa47754f 100644 --- a/v1.1/docs/getting-started-guides/aws.md +++ b/v1.1/docs/getting-started-guides/aws.md @@ -28,16 +28,16 @@ export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash ``` -NOTE: This script calls [cluster/kube-up.sh](http://releases.k8s.io/release-1.1/cluster/kube-up.sh) -which in turn calls [cluster/aws/util.sh](http://releases.k8s.io/release-1.1/cluster/aws/util.sh) -using [cluster/aws/config-default.sh](http://releases.k8s.io/release-1.1/cluster/aws/config-default.sh). +NOTE: This script calls [cluster/kube-up.sh](http://releases.k8s.io/{{page.githubbranch}}/cluster/kube-up.sh) +which in turn calls [cluster/aws/util.sh](http://releases.k8s.io/{{page.githubbranch}}/cluster/aws/util.sh) +using [cluster/aws/config-default.sh](http://releases.k8s.io/{{page.githubbranch}}/cluster/aws/config-default.sh). This process takes about 5 to 10 minutes. Once the cluster is up, the IP addresses of your master and node(s) will be printed, as well as information about the default services running in the cluster (monitoring, logging, dns). User credentials and security tokens are written in `~/.kube/config`; they will be necessary to use the CLI or the HTTP Basic Auth. By default, the script will provision a new VPC and a 4 node k8s cluster in us-west-2a (Oregon) with `t2.micro` instances running on Ubuntu. -You can override the variables defined in [config-default.sh](http://releases.k8s.io/release-1.1/cluster/aws/config-default.sh) to change this behavior as follows: +You can override the variables defined in [config-default.sh](http://releases.k8s.io/{{page.githubbranch}}/cluster/aws/config-default.sh) to change this behavior as follows: ```shell export KUBE_AWS_ZONE=eu-west-1c @@ -84,9 +84,9 @@ For more information, please read [kubeconfig files](/{{page.version}}/docs/user See [a simple nginx example](/{{page.version}}/docs/user-guide/simple-nginx) to try out your new cluster.
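For a quick smoke test of the new cluster before working through that example, something like the following should work (a sketch; the replica count and image are arbitrary):

```shell
# Start two nginx replicas and verify the pods get scheduled.
kubectl run my-nginx --image=nginx --replicas=2 --port=80
kubectl get pods
```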
-The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/guestbook/) +The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) -For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/) +For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) ## Tearing down the cluster diff --git a/v1.1/docs/getting-started-guides/binary_release.md b/v1.1/docs/getting-started-guides/binary_release.md index e9ceedfbd34..fc6e32630a9 100644 --- a/v1.1/docs/getting-started-guides/binary_release.md +++ b/v1.1/docs/getting-started-guides/binary_release.md @@ -21,7 +21,7 @@ cd kubernetes make release ``` -For more details on the release process see the [`build/` directory](http://releases.k8s.io/release-1.1/build/) +For more details on the release process see the [`build/` directory](http://releases.k8s.io/{{page.githubbranch}}/build/) diff --git a/v1.1/docs/getting-started-guides/coreos.md b/v1.1/docs/getting-started-guides/coreos.md index 6aabfdcf7a9..c139fe5acc0 100644 --- a/v1.1/docs/getting-started-guides/coreos.md +++ b/v1.1/docs/getting-started-guides/coreos.md @@ -1,9 +1,6 @@ --- title: "Getting Started on CoreOS" --- - - - * TOC {:toc} diff --git a/v1.1/docs/getting-started-guides/coreos/azure/index.md b/v1.1/docs/getting-started-guides/coreos/azure/index.md index 73e5a0e3c1a..c00f2f7dd6d 100644 --- a/v1.1/docs/getting-started-guides/coreos/azure/index.md +++ b/v1.1/docs/getting-started-guides/coreos/azure/index.md @@ -212,7 +212,7 @@ You then should be able to access it from anywhere via the Azure virtual IP for You now have a full-blown cluster running in Azure, congrats! -You should probably try deploy other [example apps](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/) or write your own ;) +You should probably try deploying other [example apps](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) or write your own ;) ## Tear down... diff --git a/v1.1/docs/getting-started-guides/coreos/bare_metal_calico.md b/v1.1/docs/getting-started-guides/coreos/bare_metal_calico.md index 3779adcbcd1..363ca4374cd 100644 --- a/v1.1/docs/getting-started-guides/coreos/bare_metal_calico.md +++ b/v1.1/docs/getting-started-guides/coreos/bare_metal_calico.md @@ -117,4 +117,4 @@ Once complete, restart the server. When it comes back up, you should have SSH a ## Testing the Cluster You should now have a functional bare-metal Kubernetes cluster with one master and two compute hosts. -Try running the [guestbook demo](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/guestbook/) to test out your new cluster! \ No newline at end of file +Try running the [guestbook demo](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) to test out your new cluster!
\ No newline at end of file diff --git a/v1.1/docs/getting-started-guides/coreos/bare_metal_offline.md b/v1.1/docs/getting-started-guides/coreos/bare_metal_offline.md index f40ffc430e5..844d9b9cd6e 100644 --- a/v1.1/docs/getting-started-guides/coreos/bare_metal_offline.md +++ b/v1.1/docs/getting-started-guides/coreos/bare_metal_offline.md @@ -648,7 +648,7 @@ Now that the CoreOS with Kubernetes installed is up and running lets spin up some See [a simple nginx example](/{{page.version}}/docs/user-guide/simple-nginx) to try out your new cluster. -For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/). +For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/). ## Helping commands for debugging diff --git a/v1.1/docs/getting-started-guides/dcos.md b/v1.1/docs/getting-started-guides/dcos.md index 7dd1af36982..4301b28af16 100644 --- a/v1.1/docs/getting-started-guides/dcos.md +++ b/v1.1/docs/getting-started-guides/dcos.md @@ -28,7 +28,7 @@ Explore the following resources for more information about Kubernetes, Kubernete - [DCOS Documentation](https://docs.mesosphere.com/) - [Managing DCOS Services](https://docs.mesosphere.com/services/kubernetes/) -- [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/) +- [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) - [Kubernetes on Mesos Documentation](https://releases.k8s.io/release-1.1/contrib/mesos/README.md) - [Kubernetes on Mesos Release Notes](https://github.com/mesosphere/kubernetes-mesos/releases) - [Kubernetes on DCOS Package Source](https://github.com/mesosphere/kubernetes-mesos) @@ -105,7 +105,7 @@ $ dcos kubectl get pods --namespace=kube-system Names and ages may vary. -Now that Kubernetes is installed on DCOS, you may wish to explore the [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/) or the [Kubernetes User Guide](/{{page.version}}/docs/user-guide/). +Now that Kubernetes is installed on DCOS, you may wish to explore the [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) or the [Kubernetes User Guide](/{{page.version}}/docs/user-guide/). ## Uninstall diff --git a/v1.1/docs/getting-started-guides/docker-multinode.md b/v1.1/docs/getting-started-guides/docker-multinode.md index dc9118a42bc..ef285c2eaae 100644 --- a/v1.1/docs/getting-started-guides/docker-multinode.md +++ b/v1.1/docs/getting-started-guides/docker-multinode.md @@ -10,7 +10,7 @@ Here's a diagram of what the final result will look like: ![Kubernetes Single Node on Docker](/images/docs/k8s-docker.png) _Note_: -These instructions are somewhat significantly more advanced than the [single node](docker) instructions. If you are +These instructions are significantly more advanced than the [single node](/{{page.version}}/docs/getting-started-guides/docker) instructions. If you are interested in just starting to explore Kubernetes, we recommend that you start there.
_Note_: @@ -81,4 +81,4 @@ See [here](/{{page.version}}/docs/getting-started-guides/docker-multinode/deploy Once your cluster has been created you can [test it out](/{{page.version}}/docs/getting-started-guides/docker-multinode/testing) -For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/) +For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) diff --git a/v1.1/docs/getting-started-guides/docker-multinode/deployDNS.md b/v1.1/docs/getting-started-guides/docker-multinode/deployDNS.md index 701800a4946..a8c988f04c3 100644 --- a/v1.1/docs/getting-started-guides/docker-multinode/deployDNS.md +++ b/v1.1/docs/getting-started-guides/docker-multinode/deployDNS.md @@ -5,9 +5,9 @@ title: "Deploy DNS" First of all, download the template dns rc and svc file from -[skydns-rc template](skydns-rc.yaml.in) +[skydns-rc template](/{{page.version}}/docs/getting-started-guides/docker-multinode/skydns-rc.yaml.in) -[skydns-svc template](skydns-svc.yaml.in) +[skydns-svc template](/{{page.version}}/docs/getting-started-guides/docker-multinode/skydns-svc.yaml.in) ### Set env diff --git a/v1.1/docs/getting-started-guides/docker-multinode/master.md b/v1.1/docs/getting-started-guides/docker-multinode/master.md index 386d2704968..4363da02db8 100644 --- a/v1.1/docs/getting-started-guides/docker-multinode/master.md +++ b/v1.1/docs/getting-started-guides/docker-multinode/master.md @@ -176,4 +176,4 @@ If all else fails, ask questions on [Slack](/{{page.version}}/docs/troubleshooti ### Next steps -Move on to [adding one or more workers](worker) or [deploy a dns](deployDNS) \ No newline at end of file +Move on to [adding one or more workers](/{{page.version}}/docs/getting-started-guides/docker-multinode/worker) or [deploy a DNS](/{{page.version}}/docs/getting-started-guides/docker-multinode/deployDNS) \ No newline at end of file diff --git a/v1.1/docs/getting-started-guides/docker-multinode/worker.md b/v1.1/docs/getting-started-guides/docker-multinode/worker.md index fb3b37b378c..3b54496f474 100644 --- a/v1.1/docs/getting-started-guides/docker-multinode/worker.md +++ b/v1.1/docs/getting-started-guides/docker-multinode/worker.md @@ -3,7 +3,7 @@ title: "Adding a Kubernetes worker node via Docker." --- These instructions are very similar to the master set-up above, but they are duplicated for clarity. You need to repeat these instructions for each node you want to join the cluster. -We will assume that the IP address of this node is `${NODE_IP}` and you have the IP address of the master in `${MASTER_IP}` that you created in the [master instructions](master). +We will assume that the IP address of this node is `${NODE_IP}` and you have the IP address of the master in `${MASTER_IP}` that you created in the [master instructions](/{{page.version}}/docs/getting-started-guides/docker-multinode/master). For each worker node, there are three steps: @@ -136,4 +136,4 @@ sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1 ### Next steps -Move on to [testing your cluster](testing) or add another node](#). \ No newline at end of file +Move on to [testing your cluster](/{{page.version}}/docs/getting-started-guides/docker-multinode/testing) or [add another node](#).
\ No newline at end of file diff --git a/v1.1/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md b/v1.1/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md index 0538f251526..9920477255e 100644 --- a/v1.1/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md +++ b/v1.1/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md @@ -1,7 +1,7 @@ --- title: "Kubernetes multiple nodes cluster with flannel on Fedora" --- -This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](fedora_manual_config) to setup 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on Kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network. +This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow the Fedora [getting started guide](/{{page.version}}/docs/getting-started-guides/fedora/fedora_manual_config) to set up 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on the Kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to set up a unique class-C container network. * TOC {:toc} diff --git a/v1.1/docs/getting-started-guides/gce.md b/v1.1/docs/getting-started-guides/gce.md index 5635a967a91..6fb13ae5017 100644 --- a/v1.1/docs/getting-started-guides/gce.md +++ b/v1.1/docs/getting-started-guides/gce.md @@ -40,7 +40,7 @@ wget -q -O - https://get.k8s.io | bash Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster. -By default, some containers will already be running on your cluster. Containers like `kibana` and `elasticsearch` provide [logging](logging), while `heapster` provides [monitoring](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/README.md) services. +By default, some containers will already be running on your cluster. Containers like `kibana` and `elasticsearch` provide [logging](/{{page.version}}/docs/getting-started-guides/logging), while `heapster` provides [monitoring](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/README.md) services. The script run by the commands above creates a cluster with the name/prefix "kubernetes". It defines one specific cluster config, so you can't run it more than once. @@ -53,7 +53,7 @@ cluster/kube-up.sh If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.
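For instance, a hedged sketch of overriding a couple of those variables before bringing the cluster up; the variable names follow the conventions of `cluster/gce/config-default.sh` in this release and may differ in others:

```shell
# Hypothetical overrides; check cluster/gce/config-default.sh for the exact
# names supported by your checkout.
export KUBE_GCE_INSTANCE_PREFIX=dev-cluster
export NUM_MINIONS=2
cluster/kube-up.sh
```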
-If you run into trouble, please see the section on [troubleshooting](gce/#troubleshooting), post to the +If you run into trouble, please see the section on [troubleshooting](/{{page.version}}/docs/getting-started-guides/gce/#troubleshooting), post to the [google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on [Slack](/{{page.version}}/docs/troubleshooting/#slack). The next few steps will show you: @@ -152,7 +152,7 @@ Some of the pods may take a few seconds to start up (during this time they'll sh Then, see [a simple nginx example](/{{page.version}}/docs/user-guide/simple-nginx) to try out your new cluster. -For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/). The [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/guestbook/) is a good "getting started" walkthrough. +For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/). The [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) is a good "getting started" walkthrough. ### Tearing down the cluster diff --git a/v1.1/docs/getting-started-guides/index.md b/v1.1/docs/getting-started-guides/index.md index 3460f48fec7..f6a0dbd6dd9 100644 --- a/v1.1/docs/getting-started-guides/index.md +++ b/v1.1/docs/getting-started-guides/index.md @@ -5,7 +5,7 @@ Kubernetes can run on a range of platforms, from your laptop, to VMs on a cloud bare metal servers. The effort required to set up a cluster varies from running a single command to crafting your own customized cluster. We'll guide you in picking a solution that fits for your needs. -If you just want to "kick the tires" on Kubernetes, we recommend the [local Docker-based](docker) solution. +If you just want to "kick the tires" on Kubernetes, we recommend the [local Docker-based](/{{page.version}}/docs/getting-started-guides/docker) solution. The local Docker-based solution is one of several [Local cluster](#local-machine-solutions) solutions that are quick to set up, but are limited to running on one machine. @@ -31,9 +31,9 @@ But their size and availability is limited to that of a single machine. The local-machine solutions are: -- [Local Docker-based](docker) (recommended starting point) -- [Vagrant](vagrant) (works on any platform with Vagrant: Linux, MacOS, or Windows.) -- [No-VM local cluster](locally) (Linux only) +- [Local Docker-based](/{{page.version}}/docs/getting-started-guides/docker) (recommended starting point) +- [Vagrant](/{{page.version}}/docs/getting-started-guides/vagrant) (works on any platform with Vagrant: Linux, MacOS, or Windows.) +- [No-VM local cluster](/{{page.version}}/docs/getting-started-guides/locally) (Linux only) ### Hosted Solutions @@ -58,7 +58,7 @@ base operating systems. If you can find a guide below that matches your needs, use it. It may be a little out of date, but it will be easier than starting from scratch. If you do want to start from scratch because you have special requirements or just because you want to understand what is underneath a Kubernetes -cluster, try the [Getting Started from Scratch](scratch) guide. +cluster, try the [Getting Started from Scratch](/{{page.version}}/docs/getting-started-guides/scratch) guide. 
If you are interested in supporting Kubernetes on a new platform, check out our [advice for writing a new solution](/{{page.version}}/docs/devel/writing-a-getting-started-guide). diff --git a/v1.1/docs/getting-started-guides/juju.md b/v1.1/docs/getting-started-guides/juju.md index 16515e3c53b..bb7e05152b1 100644 --- a/v1.1/docs/getting-started-guides/juju.md +++ b/v1.1/docs/getting-started-guides/juju.md @@ -192,7 +192,7 @@ juju add-unit docker # creates unit docker/2, kubernetes/2, docker-flannel/2 ## Launch the "k8petstore" example app -The [k8petstore example](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/k8petstore/) is available as a +The [k8petstore example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/k8petstore/) is available as a [juju action](https://jujucharms.com/docs/devel/actions). ```shell @@ -221,7 +221,7 @@ juju destroy-environment --force `juju env` The Kubernetes charms and bundles can be found in the `kubernetes` project on github.com: - - [Bundle Repository](http://releases.k8s.io/release-1.1/cluster/juju/bundles) + - [Bundle Repository](http://releases.k8s.io/{{page.githubbranch}}/cluster/juju/bundles) * [Kubernetes master charm](https://releases.k8s.io/release-1.1/cluster/juju/charms/trusty/kubernetes-master) * [Kubernetes node charm](https://releases.k8s.io/release-1.1/cluster/juju/charms/trusty/kubernetes) - [More about Juju](https://jujucharms.com) diff --git a/v1.1/docs/getting-started-guides/locally.md b/v1.1/docs/getting-started-guides/locally.md index d321b4ae7aa..0803420158a 100644 --- a/v1.1/docs/getting-started-guides/locally.md +++ b/v1.1/docs/getting-started-guides/locally.md @@ -8,7 +8,7 @@ title: "Getting started locally" #### Linux -Not running Linux? Consider running Linux in a local virtual machine with [Vagrant](vagrant), or on a cloud provider like [Google Compute Engine](gce) +Not running Linux? Consider running Linux in a local virtual machine with [Vagrant](/{{page.version}}/docs/getting-started-guides/vagrant), or on a cloud provider like [Google Compute Engine](/{{page.version}}/docs/getting-started-guides/gce) #### Docker diff --git a/v1.1/docs/getting-started-guides/logging-elasticsearch.md b/v1.1/docs/getting-started-guides/logging-elasticsearch.md index adc0e69c81a..6cd410ac2bf 100644 --- a/v1.1/docs/getting-started-guides/logging-elasticsearch.md +++ b/v1.1/docs/getting-started-guides/logging-elasticsearch.md @@ -2,7 +2,7 @@ title: "Cluster Level Logging with Elasticsearch and Kibana" --- On the Google Compute Engine (GCE) platform the default cluster level logging support targets -[Google Cloud Logging](https://cloud.google.com/logging/docs/) as described at the [Logging](logging) getting +[Google Cloud Logging](https://cloud.google.com/logging/docs/) as described at the [Logging](/{{page.version}}/docs/getting-started-guides/logging) getting started page. Here we describe how to set up a cluster to ingest logs into Elasticsearch and view them using Kibana as an alternative to Google Cloud Logging. diff --git a/v1.1/docs/getting-started-guides/logging.md b/v1.1/docs/getting-started-guides/logging.md index c1632c4efb9..da313799a54 100644 --- a/v1.1/docs/getting-started-guides/logging.md +++ b/v1.1/docs/getting-started-guides/logging.md @@ -19,12 +19,12 @@ monitoring-heapster-v1-20ej 0/1 Running 9 32 Here is the same information in a picture which shows how the pods might be placed on specific nodes. 
-![Cluster](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/blog-logging/diagrams/cloud-logging.png) +![Cluster](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/blog-logging/diagrams/cloud-logging.png) This diagram shows four nodes created on a Google Compute Engine cluster with the name of each VM node on a purple background. The internal and public IPs of each node are shown on gray boxes and the pods running in each node are shown in green boxes. Each pod box shows the name of the pod and the namespace it runs in, the IP address of the pod and the images which are run as part of the pod's execution. Here we see that every node is running a fluentd-cloud-logging pod which is collecting the log output of the containers running on the same node and sending them to Google Cloud Logging. A pod which provides the [cluster DNS service](/{{page.version}}/docs/admin/dns) runs on one of the nodes and a pod which provides monitoring support runs on another node. -To help explain how cluster level logging works let's start off with a synthetic log generator pod specification [counter-pod.yaml](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/blog-logging/counter-pod.yaml): +To help explain how cluster level logging works let's start off with a synthetic log generator pod specification [counter-pod.yaml](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/blog-logging/counter-pod.yaml): @@ -41,7 +41,7 @@ spec: 'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done'] ``` -[Download example](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/blog-logging/counter-pod.yaml) +[Download example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/blog-logging/counter-pod.yaml) This pod specification has one container which runs a bash script when the container is born. This script simply writes out the value of a counter and the date once per second and runs indefinitely. Let's create the pod in the default @@ -64,7 +64,7 @@ This step may take a few minutes to download the ubuntu:14.04 image during which One of the nodes is now running the counter pod: -![Counter Pod](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/blog-logging/diagrams/27gf-counter.png) +![Counter Pod](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/blog-logging/diagrams/27gf-counter.png) When the pod status changes to `Running` we can use the kubectl logs command to view the output of this counter pod. @@ -213,6 +213,6 @@ $ cat 21\:00\:00_21\:59\:59_S0.json | jq '.structPayload.log' ... ``` -This page has touched briefly on the underlying mechanisms that support gathering cluster level logs on a Kubernetes deployment. The approach here only works for gathering the standard output and standard error output of the processes running in the pod's containers. To gather other logs that are stored in files one can use a sidecar container to gather the required files as described at the page [Collecting log files within containers with Fluentd](http://releases.k8s.io/release-1.1/contrib/logging/fluentd-sidecar-gcp/README.md) and sending them to the Google Cloud Logging service. +This page has touched briefly on the underlying mechanisms that support gathering cluster level logs on a Kubernetes deployment. 
The approach here only works for gathering the standard output and standard error output of the processes running in the pod's containers. To gather other logs that are stored in files, one can use a sidecar container to gather the required files, as described in [Collecting log files within containers with Fluentd](http://releases.k8s.io/{{page.githubbranch}}/contrib/logging/fluentd-sidecar-gcp/README.md), and send them to the Google Cloud Logging service. Some of the material in this section also appears in the blog article [Cluster Level Logging with Kubernetes](http://blog.kubernetes.io/2015/06/cluster-level-logging-with-kubernetes) \ No newline at end of file diff --git a/v1.1/docs/getting-started-guides/mesos-docker.md b/v1.1/docs/getting-started-guides/mesos-docker.md index 168e1227054..c3a2020587f 100644 --- a/v1.1/docs/getting-started-guides/mesos-docker.md +++ b/v1.1/docs/getting-started-guides/mesos-docker.md @@ -184,7 +184,7 @@ host machine (mac). To learn more about Pods, Volumes, Labels, Services, and Replication Controllers, start with the [Kubernetes Walkthrough](/{{page.version}}/docs/user-guide/walkthrough/). - To skip to a more advanced example, see the [Guestbook Example](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/guestbook/) + To skip to a more advanced example, see the [Guestbook Example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) 1. Destroy cluster diff --git a/v1.1/docs/getting-started-guides/rkt/index.md b/v1.1/docs/getting-started-guides/rkt/index.md index c6191166de8..40ba194b904 100644 --- a/v1.1/docs/getting-started-guides/rkt/index.md +++ b/v1.1/docs/getting-started-guides/rkt/index.md @@ -94,7 +94,7 @@ scripts. The master node is always Ubuntu. See [a simple nginx example](/{{page.version}}/docs/user-guide/simple-nginx) to try out your new cluster. -For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/). +For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/). ### Debugging diff --git a/v1.1/docs/getting-started-guides/scratch.md b/v1.1/docs/getting-started-guides/scratch.md index a3f54608211..3a520a1b4b4 100644 --- a/v1.1/docs/getting-started-guides/scratch.md +++ b/v1.1/docs/getting-started-guides/scratch.md @@ -63,7 +63,7 @@ accomplished in two ways: - Configure network to route Pod IPs - Harder to set up from scratch. - - Google Compute Engine ([GCE](gce)) and [AWS](/{{page.version}}/docs/getting-started-guides/aws) guides use this approach. + - Google Compute Engine ([GCE](/{{page.version}}/docs/getting-started-guides/gce)) and [AWS](/{{page.version}}/docs/getting-started-guides/aws) guides use this approach. - Need to make the Pod IPs routable by programming routers, switches, etc. - Can be configured external to Kubernetes, or can implement in the "Routes" interface of a Cloud Provider module. - Generally highest performance. @@ -815,7 +815,7 @@ At this point you should be able to run through one of the basic examples, such ### Running the Conformance Test -You may want to try to run the [Conformance test](http://releases.k8s.io/release-1.1/hack/conformance-test.sh). Any failures may give a hint as to areas that need more attention. +You may want to try to run the [Conformance test](http://releases.k8s.io/{{page.githubbranch}}/hack/conformance-test.sh).
Any failures may give a hint as to areas that need more attention. ### Networking diff --git a/v1.1/docs/getting-started-guides/ubuntu-calico.md b/v1.1/docs/getting-started-guides/ubuntu-calico.md index f86d1f4f086..83fcf4a38be 100644 --- a/v1.1/docs/getting-started-guides/ubuntu-calico.md +++ b/v1.1/docs/getting-started-guides/ubuntu-calico.md @@ -242,7 +242,7 @@ Replace `` in `calico-kubernetes-ubuntu-demo-master/dns/skydns-rc.yam ## Launch other Services With Calico-Kubernetes -At this point, you have a fully functioning cluster running on kubernetes with a master and 2 nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/) to set up other services on your cluster. +At this point, you have a fully functioning cluster running on kubernetes with a master and 2 nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) to set up other services on your cluster. ## Connectivity to outside the cluster diff --git a/v1.1/docs/getting-started-guides/ubuntu.md b/v1.1/docs/getting-started-guides/ubuntu.md index 03bbeecf7e5..eb914032098 100644 --- a/v1.1/docs/getting-started-guides/ubuntu.md +++ b/v1.1/docs/getting-started-guides/ubuntu.md @@ -139,7 +139,7 @@ NAME LABELS STATUS 10.10.103.250 kubernetes.io/hostname=10.10.103.250 Ready ``` -Also you can run Kubernetes [guest-example](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/guestbook/) to build a redis backend cluster on the k8s. +Also, you can run the Kubernetes [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) to build a redis backend cluster on k8s. ### Deploy addons @@ -258,4 +258,4 @@ Some examples are as follows: The script will not delete any resources of your cluster; it just replaces the binaries. You can use the `kubectl` command to check if the newly upgraded k8s is working correctly. -For example, use `$ kubectl get nodes` to see if all of your nodes are ready.Or refer to [test-it-out](ubuntu/#test-it-out) \ No newline at end of file +For example, use `$ kubectl get nodes` to see if all of your nodes are ready. Or refer to [test-it-out](/{{page.version}}/docs/getting-started-guides/ubuntu/#test-it-out) \ No newline at end of file diff --git a/v1.1/docs/getting-started-guides/vagrant.md b/v1.1/docs/getting-started-guides/vagrant.md index b61c0b4c1a5..6b79301bb69 100644 --- a/v1.1/docs/getting-started-guides/vagrant.md +++ b/v1.1/docs/getting-started-guides/vagrant.md @@ -240,7 +240,7 @@ my-nginx 10.0.0.1 80/TCP run=my-nginx ``` We did not start any services, hence there are none listed. But we see three replicas displayed properly. -Check the [guestbook](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/guestbook/) application to learn how to create a service. +Check the [guestbook](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) application to learn how to create a service.
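As a minimal sketch of what creating a service looks like (assuming the `my-nginx` replication controller created earlier in this guide; the guestbook example covers this in full):

```shell
# Expose the my-nginx replication controller as a cluster-internal service.
kubectl expose rc my-nginx --port=80
kubectl get services
```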
You can already play with scaling the replicas with: ```shell diff --git a/v1.1/docs/index.md b/v1.1/docs/index.md index fe59329e8a5..12740b191fc 100644 --- a/v1.1/docs/index.md +++ b/v1.1/docs/index.md @@ -17,15 +17,15 @@ title: "Kubernetes Documentation: releases.k8s.io/release-1.1" * The [API object documentation](http://kubernetes.io/third_party/swagger-ui/) is a detailed description of all fields found in core API objects. -* An overview of the [Design of Kubernetes](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/) +* An overview of the [Design of Kubernetes](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/) -* There are example files and walkthroughs in the [examples](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples) +* There are example files and walkthroughs in the [examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples) folder. -* If something went wrong, see the [troubleshooting](troubleshooting) document for how to debug. +* If something went wrong, see the [troubleshooting](/{{page.version}}/docs/troubleshooting) document for how to debug. You should also check the [known issues](/{{page.version}}/docs/user-guide/known-issues) for the release you're using. -* To report a security issue, see [Reporting a Security Issue](reporting-security-issues). +* To report a security issue, see [Reporting a Security Issue](/{{page.version}}/docs/reporting-security-issues). diff --git a/v1.1/docs/user-guide/accessing-the-cluster.md b/v1.1/docs/user-guide/accessing-the-cluster.md index 4cacf6363a2..ae379eac5d1 100644 --- a/v1.1/docs/user-guide/accessing-the-cluster.md +++ b/v1.1/docs/user-guide/accessing-the-cluster.md @@ -22,8 +22,8 @@ Check the location and credentials that kubectl knows about with this command: $ kubectl config view ``` -Many of the [examples](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/) provide an introduction to using -kubectl and complete documentation is found in the [kubectl manual](kubectl/kubectl). +Many of the [examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) provide an introduction to using +kubectl and complete documentation is found in the [kubectl manual](/{{page.version}}/docs/user-guide/kubectl/kubectl). ### Directly accessing the REST API @@ -52,7 +52,7 @@ Run it like this: $ kubectl proxy --port=8080 & ``` -See [kubectl proxy](kubectl/kubectl_proxy) for more details. +See [kubectl proxy](/{{page.version}}/docs/user-guide/kubectl/kubectl_proxy) for more details. Then you can explore the API with curl, wget, or a browser, like so: @@ -98,8 +98,8 @@ with future high-availability support. There are [client libraries](/{{page.version}}/docs/devel/client-libraries) for accessing the API from several languages. The Kubernetes project-supported -[Go](http://releases.k8s.io/release-1.1/pkg/client/) -client library can use the same [kubeconfig file](kubeconfig-file) +[Go](http://releases.k8s.io/{{page.githubbranch}}/pkg/client/) +client library can use the same [kubeconfig file](/{{page.version}}/docs/user-guide/kubeconfig-file) as the kubectl CLI does to locate and authenticate to the apiserver. See documentation for other libraries for how they authenticate. @@ -114,7 +114,7 @@ the `kubernetes` DNS name, which resolves to a Service IP which in turn will be routed to an apiserver. 
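A sketch of what such in-pod access can look like, using the service account token described in the next paragraph (the `kubernetes` DNS name and token path are as documented here; treat the exact request as illustrative):

```shell
# From inside a pod: reach the apiserver via the `kubernetes` DNS name,
# authenticating with the pod's service account token.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sSk -H "Authorization: Bearer $TOKEN" \
    https://kubernetes/api/v1/namespaces/default/pods
```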
The recommended way to authenticate to the apiserver is with a -[service account](service-accounts) credential. By kube-system, a pod +[service account](/{{page.version}}/docs/user-guide/service-accounts) credential. By default, a pod is associated with a service account, and a credential (token) for that service account is placed into the filesystem tree of each container in that pod, at `/var/run/secrets/kubernetes.io/serviceaccount/token`. @@ -125,7 +125,7 @@ From within a pod the recommended ways to connect to API are: process within a container. This proxies the Kubernetes API to the localhost interface of the pod, so that other processes in any container of the pod can access it. See this [example of using kubectl proxy - in a pod](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/kubectl-container/). + in a pod](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/kubectl-container/). - use the Go client library, and create a client using the `client.NewInCluster()` factory. This handles locating and authenticating to the apiserver. @@ -136,7 +136,7 @@ In each case, the credentials of the pod are used to communicate securely with t The previous section was about connecting to the Kubernetes API server. This section is about connecting to other services running on a Kubernetes cluster. In Kubernetes, the -[nodes](/{{page.version}}/docs/admin/node), [pods](pods) and [services](services) all have +[nodes](/{{page.version}}/docs/admin/node), [pods](/{{page.version}}/docs/user-guide/pods) and [services](/{{page.version}}/docs/user-guide/services) all have their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be routable, so they will not be reachable from a machine outside the cluster, such as your desktop machine. @@ -147,8 +147,8 @@ You have several options for connecting to nodes, pods and services from outside - Access services through public IPs. - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside - the cluster. See the [services](services) and - [kubectl expose](kubectl/kubectl_expose) documentation. + the cluster. See the [services](/{{page.version}}/docs/user-guide/services) and + [kubectl expose](/{{page.version}}/docs/user-guide/kubectl/kubectl_expose) documentation. - Depending on your cluster environment, this may just expose the service to your corporate network, or it may expose it to the internet. Think about whether the service being exposed is secure. Does it do its own authentication?
This is a non-standard method, and will work on some clusters but @@ -251,7 +251,7 @@ There are several different proxies you may encounter when using Kubernetes: - proxy to target may use HTTP or HTTPS as chosen by proxy using available information - can be used to reach a Node, Pod, or Service - does load balancing when used to reach a Service - 1. The [kube proxy](services/#ips-and-vips): + 1. The [kube proxy](/{{page.version}}/docs/user-guide/services/#ips-and-vips): - runs on each node - proxies UDP and TCP - does not understand HTTP diff --git a/v1.1/docs/user-guide/annotations.md b/v1.1/docs/user-guide/annotations.md index 97ac60c00f0..cd847eb5173 100644 --- a/v1.1/docs/user-guide/annotations.md +++ b/v1.1/docs/user-guide/annotations.md @@ -1,7 +1,7 @@ --- title: "Annotations" --- -We have [labels](labels) for identifying metadata. +We have [labels](/{{page.version}}/docs/user-guide/labels) for identifying metadata. It is also useful to be able to attach arbitrary non-identifying metadata, for retrieval by API clients such as tools, libraries, etc. This information may be large, may be structured or unstructured, may include characters not permitted by labels, etc. Such information would not be used for object selection and therefore doesn't belong in labels. diff --git a/v1.1/docs/user-guide/application-troubleshooting.md b/v1.1/docs/user-guide/application-troubleshooting.md index a7da58580ba..fd42b4215ee 100644 --- a/v1.1/docs/user-guide/application-troubleshooting.md +++ b/v1.1/docs/user-guide/application-troubleshooting.md @@ -186,6 +186,6 @@ check: #### More information -If none of the above solves your problem, follow the instructions in [Debugging Service document](debugging-services) to make sure that your `Service` is running, has `Endpoints`, and your `Pods` are actually serving; you have DNS working, iptables rules installed, and kube-proxy does not seem to be misbehaving. +If none of the above solves your problem, follow the instructions in the [Debugging Service document](/{{page.version}}/docs/user-guide/debugging-services) to make sure that your `Service` is running, has `Endpoints`, and your `Pods` are actually serving; you have DNS working, iptables rules installed, and kube-proxy does not seem to be misbehaving. You may also visit the [troubleshooting document](/{{page.version}}/docs/troubleshooting/) for more information. \ No newline at end of file diff --git a/v1.1/docs/user-guide/compute-resources.md b/v1.1/docs/user-guide/compute-resources.md index 3fc0ebabe94..e2f700a8a7a 100644 --- a/v1.1/docs/user-guide/compute-resources.md +++ b/v1.1/docs/user-guide/compute-resources.md @@ -4,20 +4,20 @@ title: "Compute Resources" * TOC {:toc} -When specifying a [pod](pods), you can optionally specify how much CPU and memory (RAM) each +When specifying a [pod](/{{page.version}}/docs/user-guide/pods), you can optionally specify how much CPU and memory (RAM) each container needs. When containers have their resource requests specified, the scheduler is able to make better decisions about which nodes to place pods on; and when containers have their limits specified, contention for resources on a node can be handled in a specified manner. For more details about the difference between requests and limits, please refer to -[Resource QoS](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/proposals/resource-qos.md). +[Resource QoS](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/resource-qos.md).
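As a brief sketch of what specifying requests and limits looks like on a container (the values here are arbitrary; the sections below explain the actual semantics):

```shell
# Create a pod whose single container declares compute resource
# requests (used for scheduling) and limits (enforced on the node).
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 250m
        memory: 128Mi
EOF
```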
*CPU* and *memory* are each a *resource type*. A resource type has a base unit. CPU is specified in units of cores. Memory is specified in units of bytes. CPU and RAM are collectively referred to as *compute resources*, or just *resources*. Compute resources are measurable quantities which can be requested, allocated, and consumed. They are -distinct from [API resources](working-with-resources). API resources, such as pods and -[services](services) are objects that can be written to and retrieved from the Kubernetes API +distinct from [API resources](/{{page.version}}/docs/user-guide/working-with-resources). API resources, such as pods and +[services](/{{page.version}}/docs/user-guide/services), are objects that can be written to and retrieved from the Kubernetes API server. ## Resource Requests and Limits of Pod and Container @@ -111,7 +111,7 @@ To determine if a container cannot be scheduled or is being killed due to resour The resource usage of a pod is reported as part of the Pod status. -If [optional monitoring](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/README.md) is configured for your cluster, +If [optional monitoring](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/README.md) is configured for your cluster, then pod resource usage can be retrieved from the monitoring system. ## Troubleshooting @@ -234,11 +234,11 @@ We can see that this container was terminated because `reason:OOM Killed`, where The current system only allows resource quantities to be specified on a container. It is planned to improve accounting for resources which are shared by all containers in a pod, -such as [EmptyDir volumes](volumes/#emptydir). +such as [EmptyDir volumes](/{{page.version}}/docs/user-guide/volumes/#emptydir). The current system only supports container requests and limits for CPU and Memory. It is planned to add new resource types, including a node disk space resource, and a framework for adding custom [resource types](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/resources.md#resource-types). +resource, and a framework for adding custom [resource types](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/resources.md#resource-types). Kubernetes supports overcommitment of resources by supporting multiple levels of [Quality of Service](http://issue.k8s.io/168). diff --git a/v1.1/docs/user-guide/config-best-practices.md b/v1.1/docs/user-guide/config-best-practices.md index d5984190e87..50dcb770865 100644 --- a/v1.1/docs/user-guide/config-best-practices.md +++ b/v1.1/docs/user-guide/config-best-practices.md @@ -9,16 +9,16 @@ This document is meant to highlight and consolidate in one place configuration b 1. Group related objects together in a single file. This is often better than separate files. 1. Use `kubectl create -f ` where possible. This looks for config objects in all `.yaml`, `.yml`, and `.json` files in `` and passes them to create. 1. Create a service before corresponding replication controllers so that the scheduler can spread the pods comprising the service. You can also create the replication controller without specifying replicas, create the service, then scale up the replication controller, which may work better in an example using progressive disclosure and may have benefits in real scenarios also, such as ensuring one replica works before creating lots of them. -1. 
Don't use `hostPort` unless absolutely necessary (e.g., for a node daemon) as it will prevent certain scheduling configurations due to port conflicts. Use the apiserver proxying or port forwarding for debug/admin access, or a service for external service access. If you need to expose a pod's port on the host machine, consider using a [NodePort](services/#type--loadbalancer) service before resorting to `hostPort`. If you only need access to the port for debugging purposes, you can also use the [kubectl proxy and apiserver proxy](/{{page.version}}/docs/user-guide/connecting-to-applications-proxy) or [kubectl port-forward](/{{page.version}}/docs/user-guide/connecting-to-applications-port-forward). +1. Don't use `hostPort` unless absolutely necessary (e.g., for a node daemon) as it will prevent certain scheduling configurations due to port conflicts. Use the apiserver proxying or port forwarding for debug/admin access, or a service for external service access. If you need to expose a pod's port on the host machine, consider using a [NodePort](/{{page.version}}/docs/user-guide/services/#type--loadbalancer) service before resorting to `hostPort`. If you only need access to the port for debugging purposes, you can also use the [kubectl proxy and apiserver proxy](/{{page.version}}/docs/user-guide/connecting-to-applications-proxy) or [kubectl port-forward](/{{page.version}}/docs/user-guide/connecting-to-applications-port-forward). 1. Don't use `hostNetwork` for the same reasons as `hostPort`. 1. Don't specify default values unnecessarily, to simplify and minimize configs. For example, omit the selector and labels in ReplicationController if you want them to be the same as the labels in its podTemplate, since those fields are populated from the podTemplate labels by default. 1. Instead of attaching one label to a set of pods to represent a service (e.g., `service: myservice`) and another to represent the replication controller managing the pods (e.g., `controller: mycontroller`), attach labels that identify semantic attributes of your application or deployment and select the appropriate subsets in your service and replication controller, such as `{ app: myapp, tier: frontend, deployment: v3 }`. A service can be made to span multiple deployments, such as across rolling updates, by simply omitting release-specific labels from its selector, rather than updating a service's selector to match the replication controller's selector fully. -1. Use kubectl bulk operations (via files and/or labels) for get and delete. See [label selectors](labels/#label-selectors) and [using labels effectively](managing-deployments/#using-labels-effectively). -1. Use kubectl run and expose to quickly create and expose single container replication controllers. See the [quick start guide](quick-start) for an example. -1. Use headless services for easy service discovery when you don't need kube-proxy load balancing. See [headless services](services/#headless-services). +1. Use kubectl bulk operations (via files and/or labels) for get and delete. See [label selectors](/{{page.version}}/docs/user-guide/labels/#label-selectors) and [using labels effectively](/{{page.version}}/docs/user-guide/managing-deployments/#using-labels-effectively). +1. Use kubectl run and expose to quickly create and expose single container replication controllers. See the [quick start guide](/{{page.version}}/docs/user-guide/quick-start) for an example. +1. Use headless services for easy service discovery when you don't need kube-proxy load balancing. 
See [headless services](/{{page.version}}/docs/user-guide/services/#headless-services). 1. Use kubectl delete rather than stop. Delete has a superset of the functionality of stop and stop is deprecated. 1. If there is a viable alternative to naked pods (i.e. pods not bound to a controller), go with the alternative. Controllers are almost always preferable to creating pods (except for some `restartPolicy: Never` scenarios). A minimal Job is coming. See [#1624](http://issue.k8s.io/1624). Naked pods will not be rescheduled in the event of node failure. -1. Put a version number or hash as a suffix to the name and in a label on a replication controller to facilitate rolling update, as we do for [--image](kubectl/kubectl_rolling-update). This is necessary because rolling-update actually creates a new controller as opposed to modifying the existing controller. This does not play well with version agnostic controller names. +1. Put a version number or hash as a suffix to the name and in a label on a replication controller to facilitate rolling update, as we do for [--image](/{{page.version}}/docs/user-guide/kubectl/kubectl_rolling-update). This is necessary because rolling-update actually creates a new controller as opposed to modifying the existing controller. This does not play well with version agnostic controller names. 1. Put an object description in an annotation to allow better introspection. diff --git a/v1.1/docs/user-guide/configuring-containers.md b/v1.1/docs/user-guide/configuring-containers.md index 13a30671610..e1f2c99d8ae 100644 --- a/v1.1/docs/user-guide/configuring-containers.md +++ b/v1.1/docs/user-guide/configuring-containers.md @@ -89,7 +89,7 @@ spec: # specification of the pod's contents args: ["/bin/echo \"${MESSAGE}\""] ``` -However, a shell isn't necessary just to expand environment variables. Kubernetes will do it for you if you use [`$(ENVVAR)` syntax](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/expansion): +However, a shell isn't necessary just to expand environment variables. Kubernetes will do it for you if you use [`$(ENVVAR)` syntax](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/expansion): ```yaml command: ["/bin/echo"] diff --git a/v1.1/docs/user-guide/connecting-applications.md b/v1.1/docs/user-guide/connecting-applications.md index 4deaca260c3..6b4c350eb46 100644 --- a/v1.1/docs/user-guide/connecting-applications.md +++ b/v1.1/docs/user-guide/connecting-applications.md @@ -113,11 +113,11 @@ NAME ENDPOINTS nginxsvc 10.245.0.14:80,10.245.0.15:80 ``` -You should now be able to curl the nginx Service on `10.0.116.146:80` from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire, if you're curious about how this works you can read more about the [service proxy](services/#virtual-ips-and-service-proxies). +You should now be able to curl the nginx Service on `10.0.116.146:80` from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire, if you're curious about how this works you can read more about the [service proxy](/{{page.version}}/docs/user-guide/services/#virtual-ips-and-service-proxies). ## Accessing the Service -Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. The former works out of the box while the latter requires the [kube-dns cluster addon](http://releases.k8s.io/release-1.1/cluster/addons/dns/README.md). 
+Kubernetes supports two primary modes of finding a Service: environment variables and DNS. The former works out of the box, while the latter requires the [kube-dns cluster addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md).
 
 ### Environment Variables
 
@@ -155,7 +155,7 @@ NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
 kube-dns 10.179.240.10 53/UDP,53/TCP k8s-app=kube-dns 8d
 ```
 
-If it isn't running, you can [enable it](http://releases.k8s.io/release-1.1/cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long lived IP (nginxsvc), and a dns server that has assigned a name to that IP (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let's create another pod to test this:
+If it isn't running, you can [enable it](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long-lived IP (nginxsvc), and a DNS server that has assigned a name to that IP (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let's create another pod to test this:
 
 ```yaml
 $ cat curlpod.yaml
@@ -196,9 +196,9 @@ Till now we have only accessed the nginx server from within the cluster. Before
 
 * Self signed certificates for https (unless you already have an identity certificate)
 * An nginx server configured to use the certificates
-* A [secret](secrets) that makes the certificates accessible to pods
+* A [secret](/{{page.version}}/docs/user-guide/secrets) that makes the certificates accessible to pods
 
-You can acquire all these from the [nginx https example](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/https-nginx/), in short:
+You can acquire all these from the [nginx https example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/https-nginx/), in short:
 
 ```shell
 $ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json
@@ -262,7 +262,7 @@ spec:
 
 Noteworthy points about the nginx-app manifest:
 
 - It contains both rc and service specification in the same file
-- The [nginx server](https://github.com/kubernetes/kubernetes/tree/{{ page.githubbranch }}/examples/https-nginx/default.conf) serves http traffic on port 80 and https traffic on 443, and nginx Service exposes both ports.
+- The [nginx server](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/https-nginx/default.conf) serves http traffic on port 80 and https traffic on 443, and the nginx Service exposes both ports.
 - Each container has access to the keys through a volume mounted at /etc/nginx/ssl. This is set up *before* the nginx server is started.
 
 ```shell
@@ -383,4 +383,4 @@ cluster/private cloud network.
 
 ## What's next?
-[Learn about more Kubernetes features that will help you run containers reliably in production.](production-pods)
+[Learn about more Kubernetes features that will help you run containers reliably in production.](/{{page.version}}/docs/user-guide/production-pods)
diff --git a/v1.1/docs/user-guide/connecting-to-applications-port-forward.md b/v1.1/docs/user-guide/connecting-to-applications-port-forward.md
index e188fda9c2a..f7c8923ec6c 100644
--- a/v1.1/docs/user-guide/connecting-to-applications-port-forward.md
+++ b/v1.1/docs/user-guide/connecting-to-applications-port-forward.md
@@ -1,7 +1,7 @@
 ---
 title: "Connecting to applications: kubectl port-forward"
 ---
-kubectl port-forward forwards connections to a local port to a port on a pod. Its man page is available [here](kubectl/kubectl_port-forward). Compared to [kubectl proxy](/{{page.version}}/docs/user-guide/accessing-the-cluster/#using-kubectl-proxy), `kubectl port-forward` is more generic as it can forward TCP traffic while `kubectl proxy` can only forward HTTP traffic. This guide demonstrates how to use `kubectl port-forward` to connect to a Redis database, which may be useful for database debugging.
+`kubectl port-forward` forwards connections from a local port to a port on a pod. Its man page is available [here](/{{page.version}}/docs/user-guide/kubectl/kubectl_port-forward). Compared to [kubectl proxy](/{{page.version}}/docs/user-guide/accessing-the-cluster/#using-kubectl-proxy), `kubectl port-forward` is more generic, as it can forward TCP traffic while `kubectl proxy` can only forward HTTP traffic. This guide demonstrates how to use `kubectl port-forward` to connect to a Redis database, which may be useful for database debugging.
 
 ## Creating a Redis master
diff --git a/v1.1/docs/user-guide/connecting-to-applications-proxy.md b/v1.1/docs/user-guide/connecting-to-applications-proxy.md
index 2d6571e9bfa..296badbdaf3 100644
--- a/v1.1/docs/user-guide/connecting-to-applications-proxy.md
+++ b/v1.1/docs/user-guide/connecting-to-applications-proxy.md
@@ -1,7 +1,7 @@
 ---
 title: "Connecting to applications: kubectl proxy and apiserver proxy"
 ---
-You have seen the [basics](/{{page.version}}/docs/user-guide/accessing-the-cluster) about `kubectl proxy` and `apiserver proxy`. This guide shows how to use them together to access a service([kube-ui](ui)) running on the Kubernetes cluster from your workstation.
+You have seen the [basics](/{{page.version}}/docs/user-guide/accessing-the-cluster) of `kubectl proxy` and `apiserver proxy`. This guide shows how to use them together to access a service ([kube-ui](/{{page.version}}/docs/user-guide/ui)) running on the Kubernetes cluster from your workstation.
 
 ## Getting the apiserver proxy URL of kube-ui
 
@@ -13,7 +13,7 @@ $ kubectl cluster-info | grep "KubeUI"
 KubeUI is running at https://173.255.119.104/api/v1/proxy/namespaces/kube-system/services/kube-ui
 ```
 
-if this command does not find the URL, try the steps [here](ui/#accessing-the-ui).
+If this command does not find the URL, try the steps [here](/{{page.version}}/docs/user-guide/ui/#accessing-the-ui).
## Connecting to the kube-ui service from your local workstation
diff --git a/v1.1/docs/user-guide/container-environment.md b/v1.1/docs/user-guide/container-environment.md
index faeda32955d..5bebd7b1a32 100644
--- a/v1.1/docs/user-guide/container-environment.md
+++ b/v1.1/docs/user-guide/container-environment.md
@@ -6,7 +6,7 @@ This document describes the environment for Kubelet managed containers on a Kube
 This cluster information makes it possible to build applications that are *cluster aware*. Additionally, the Kubernetes container environment defines a series of hooks that are surfaced to optional hook handlers defined as part of individual containers. Container hooks are somewhat analogous to operating system signals in a traditional process model. However these hooks are designed to make it easier to build reliable, scalable cloud applications in the Kubernetes cluster. Containers that participate in this cluster lifecycle become *cluster native*.
 
-Another important part of the container environment is the file system that is available to the container. In Kubernetes, the filesystem is a combination of an [image](images) and one or more [volumes](volumes).
+Another important part of the container environment is the filesystem that is available to the container. In Kubernetes, the filesystem is a combination of an [image](/{{page.version}}/docs/user-guide/images) and one or more [volumes](/{{page.version}}/docs/user-guide/volumes).
 
 The following sections describe both the cluster information provided to containers, as well as the hooks and life-cycle that allows containers to interact with the management system.
 
@@ -21,7 +21,7 @@ There are two types of information that are available within the container envir
 Currently, the Pod name for the pod in which the container is running is set as the hostname of the container, and is accessible through all calls to access the hostname within the container (e.g. the hostname command, or the [gethostname][1] function call in libc), but this is planned to change in the future and should not be used.
 
-The Pod name and namespace are also available as environment variables via the [downward API](downward-api). Additionally, user-defined environment variables from the pod definition, are also available to the container, as are any environment variables specified statically in the Docker image.
+The Pod name and namespace are also available as environment variables via the [downward API](/{{page.version}}/docs/user-guide/downward-api). Additionally, user-defined environment variables from the pod definition are also available to the container, as are any environment variables specified statically in the Docker image.
 
 In the future, we anticipate expanding this information with richer information about the container. Examples include available memory, number of restarts, and in general any state that you could get from the call to GET /pods on the API server.
 
@@ -36,7 +36,7 @@ FOO_SERVICE_HOST=
 FOO_SERVICE_PORT=
 ```
 
-Services have dedicated IP address, and are also surfaced to the container via DNS (If [DNS addon](http://releases.k8s.io/release-1.1/cluster/addons/dns/) is enabled).  Of course DNS is still not an enumerable protocol, so we will continue to provide environment variables so that containers can do discovery.
+Services have dedicated IP addresses and are also surfaced to the container via DNS (if the [DNS addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/) is enabled). 
Of course DNS is still not an enumerable protocol, so we will continue to provide environment variables so that containers can do discovery. ## Container Hooks @@ -52,7 +52,7 @@ This hook is sent immediately after a container is created.  It notifies the co *PreStop* -This hook is called immediately before a container is terminated. No parameters are passed to the handler. This event handler is blocking, and must complete before the call to delete the container is sent to the Docker daemon. The SIGTERM notification sent by Docker is also still sent. A more complete description of termination behavior can be found in [Termination of Pods](pods/#termination-of-pods). +This hook is called immediately before a container is terminated. No parameters are passed to the handler. This event handler is blocking, and must complete before the call to delete the container is sent to the Docker daemon. The SIGTERM notification sent by Docker is also still sent. A more complete description of termination behavior can be found in [Termination of Pods](/{{page.version}}/docs/user-guide/pods/#termination-of-pods). ### Hook Handler Execution diff --git a/v1.1/docs/user-guide/containers.md b/v1.1/docs/user-guide/containers.md index 03e6ab38560..28ef14fdc36 100644 --- a/v1.1/docs/user-guide/containers.md +++ b/v1.1/docs/user-guide/containers.md @@ -18,11 +18,11 @@ we can use: Docker images have metadata associated with them that is used to store information about the image. The image author may use this to define defaults for the command and arguments to run a container -when the user does not supply values. Docker calls the fields for commands and arguments -`Entrypoint` and `Cmd` respectively. The full details for this feature are too complicated to -describe here, mostly due to the fact that the docker API allows users to specify both of these +when the user does not supply values. Docker calls the fields for commands and arguments +`Entrypoint` and `Cmd` respectively. The full details for this feature are too complicated to +describe here, mostly due to the fact that the Docker API allows users to specify both of these fields as either a string array or a string and there are subtle differences in how those cases are -handled. We encourage the curious to check out [docker's documentation]() for this feature. +handled. We encourage the curious to check out Docker's documentation for this feature. Kubernetes allows you to override both the image's default command (docker `Entrypoint`) and args (docker `Cmd`) with the `Command` and `Args` fields of `Container`. The rules are: @@ -90,7 +90,4 @@ The relationship between Docker's capabilities and [Linux capabilities](http://m | LEASE | CAP_LEASE | | SETFCAP | CAP_SETFCAP | | WAKE_ALARM | CAP_WAKE_ALARM | -| BLOCK_SUSPEND | CAP_BLOCK_SUSPEND | - - - +| BLOCK_SUSPEND | CAP_BLOCK_SUSPEND | \ No newline at end of file diff --git a/v1.1/docs/user-guide/deploying-applications.md b/v1.1/docs/user-guide/deploying-applications.md index cc8a9012f05..da31b79d2eb 100644 --- a/v1.1/docs/user-guide/deploying-applications.md +++ b/v1.1/docs/user-guide/deploying-applications.md @@ -1,18 +1,18 @@ --- title: "Kubernetes User Guide: Managing Applications: Deploying continuously running applications" --- -You previously read about how to quickly deploy a simple replicated application using [`kubectl run`](quick-start) and how to configure and launch single-run containers using pods ([Configuring containers](/{{page.version}}/docs/user-guide/configuring-containers)). 
Here you'll use the configuration-based approach to deploy a continuously running, replicated application. +You previously read about how to quickly deploy a simple replicated application using [`kubectl run`](/{{page.version}}/docs/user-guide/quick-start) and how to configure and launch single-run containers using pods ([Configuring containers](/{{page.version}}/docs/user-guide/configuring-containers)). Here you'll use the configuration-based approach to deploy a continuously running, replicated application. * TOC {:toc} ## Launching a set of replicas using a configuration file -Kubernetes creates and manages sets of replicated containers (actually, replicated [Pods](pods)) using [*Replication Controllers*](replication-controller). +Kubernetes creates and manages sets of replicated containers (actually, replicated [Pods](/{{page.version}}/docs/user-guide/pods)) using [*Replication Controllers*](/{{page.version}}/docs/user-guide/replication-controller). A replication controller simply ensures that a specified number of pod "replicas" are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. It's analogous to Google Compute Engine's [Instance Group Manager](https://cloud.google.com/compute/docs/instance-groups/manager/) or AWS's [Auto-scaling Group](http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroup) (with no scaling policies). -The replication controller created to run nginx by `kubectl run` in the [Quick start](quick-start) could be specified using YAML as follows: +The replication controller created to run nginx by `kubectl run` in the [Quick start](/{{page.version}}/docs/user-guide/quick-start) could be specified using YAML as follows: ```yaml apiVersion: v1 @@ -70,7 +70,7 @@ my-nginx-buaiq 1/1 Running 0 51s ## Deleting replication controllers -When you want to kill your application, delete your replication controller, as in the [Quick start](quick-start): +When you want to kill your application, delete your replication controller, as in the [Quick start](/{{page.version}}/docs/user-guide/quick-start): ```shell $ kubectl delete rc my-nginx @@ -83,7 +83,7 @@ If you try to delete the pods before deleting the replication controller, it wil ## Labels -Kubernetes uses user-defined key-value attributes called [*labels*](labels) to categorize and identify sets of resources, such as pods and replication controllers. The example above specified a single label in the pod template, with key `app` and value `nginx`. All pods created carry that label, which can be viewed using `-L`: +Kubernetes uses user-defined key-value attributes called [*labels*](/{{page.version}}/docs/user-guide/labels) to categorize and identify sets of resources, such as pods and replication controllers. The example above specified a single label in the pod template, with key `app` and value `nginx`. All pods created carry that label, which can be viewed using `-L`: ```shell $ kubectl get pods -L app @@ -100,7 +100,7 @@ CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS APP my-nginx nginx nginx app=nginx 2 nginx ``` -More importantly, the pod template's labels are used to create a [`selector`](labels/#label-selectors) that will match pods carrying those labels. 
You can see this field by requesting it using the [Go template output format of `kubectl get`](kubectl/kubectl_get):
+More importantly, the pod template's labels are used to create a [`selector`](/{{page.version}}/docs/user-guide/labels/#label-selectors) that will match pods carrying those labels. You can see this field by requesting it using the [Go template output format of `kubectl get`](/{{page.version}}/docs/user-guide/kubectl/kubectl_get):
 
 ```shell
 $ kubectl get rc my-nginx -o template --template="{{.spec.selector}}"
diff --git a/v1.1/docs/user-guide/deployments.md b/v1.1/docs/user-guide/deployments.md
index 4dc38563d91..afecce18952 100644
--- a/v1.1/docs/user-guide/deployments.md
+++ b/v1.1/docs/user-guide/deployments.md
@@ -57,7 +57,7 @@ spec:
       - containerPort: 80
 ```
 
-[Download example](nginx-deployment.yaml)
+[Download example](/{{page.version}}/docs/user-guide/nginx-deployment.yaml)
 
 Run the example by downloading the example file and then running this command:
 
@@ -135,7 +135,7 @@ spec:
       - containerPort: 80
 ```
 
-[Download example](new-nginx-deployment.yaml)
+[Download example](/{{page.version}}/docs/user-guide/new-nginx-deployment.yaml)
 
 
 
@@ -250,7 +250,7 @@ before changing course.
 
 As with all other Kubernetes configs, a Deployment needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files,
-see [here](deploying-applications), [here](/{{page.version}}/docs/user-guide/configuring-containers), and [here](working-with-resources).
+see [here](/{{page.version}}/docs/user-guide/deploying-applications), [here](/{{page.version}}/docs/user-guide/configuring-containers), and [here](/{{page.version}}/docs/user-guide/working-with-resources).
 
 A Deployment also needs a [`.spec` section](/{{page.version}}/docs/devel/api-conventions/#spec-and-status).
 
@@ -258,8 +258,8 @@ A Deployment also needs a [`.spec` section](/{{page.version}}/docs/devel/api-con
 
 The `.spec.template` is the only required field of the `.spec`.
 
-The `.spec.template` is a [pod template](replication-controller/#pod-template). It has exactly
-the same schema as a [pod](pods), except it is nested and does not have an
+The `.spec.template` is a [pod template](/{{page.version}}/docs/user-guide/replication-controller/#pod-template). It has exactly
+the same schema as a [pod](/{{page.version}}/docs/user-guide/pods), except it is nested and does not have an
 `apiVersion` or `kind`.
 
 ### Replicas
 
@@ -347,5 +347,5 @@ Note: This is not implemented yet.
 
 ### kubectl rolling update
 
-[Kubectl rolling update](kubectl/kubectl_rolling-update) also updates pods and replication controllers in a similar fashion.
+[Kubectl rolling update](/{{page.version}}/docs/user-guide/kubectl/kubectl_rolling-update) also updates pods and replication controllers in a similar fashion.
 But Deployments are declarative and server-side.
\ No newline at end of file
diff --git a/v1.1/docs/user-guide/docker-cli-to-kubectl.md b/v1.1/docs/user-guide/docker-cli-to-kubectl.md
index 6a8a1ed6fec..83dbb0981b5 100644
--- a/v1.1/docs/user-guide/docker-cli-to-kubectl.md
+++ b/v1.1/docs/user-guide/docker-cli-to-kubectl.md
@@ -8,7 +8,7 @@ In this doc, we introduce the Kubernetes command line for interacting with the a
 
 #### docker run
 
-How do I run an nginx container and expose it to the world? Checkout [kubectl run](kubectl/kubectl_run).
+How do I run an nginx container and expose it to the world? Check out [kubectl run](/{{page.version}}/docs/user-guide/kubectl/kubectl_run).
With docker:
 
@@ -30,7 +30,7 @@ replicationcontroller "nginx-app" created
 $ kubectl expose rc nginx-app --port=80 --name=nginx-http
 ```
 
-With kubectl, we create a [replication controller](replication-controller) which will make sure that N pods are running nginx (where N is the number of replicas stated in the spec, which defaults to 1). We also create a [service](services) with a selector that matches the replication controller's selector. See the [Quick start](quick-start) for more information.
+With kubectl, we create a [replication controller](/{{page.version}}/docs/user-guide/replication-controller) which will make sure that N pods are running nginx (where N is the number of replicas stated in the spec, which defaults to 1). We also create a [service](/{{page.version}}/docs/user-guide/services) with a selector that matches the replication controller's selector. See the [Quick start](/{{page.version}}/docs/user-guide/quick-start) for more information.
 
 By default images are run in the background, similar to `docker run -d ...`, if you want to run things in the foreground, use:
 
@@ -45,7 +45,7 @@ To destroy the replication controller (and it's pods) you need to run `kubectl
 
 #### docker ps
 
-How do I list what is currently running? Checkout [kubectl get](kubectl/kubectl_get).
+How do I list what is currently running? Check out [kubectl get](/{{page.version}}/docs/user-guide/kubectl/kubectl_get).
 
 With docker:
 
@@ -65,7 +65,7 @@ nginx-app-5jyvm 1/1 Running 0 1h
 
 #### docker attach
 
-How do I attach to a process that is already running in a container? Checkout [kubectl attach](kubectl/kubectl_attach)
+How do I attach to a process that is already running in a container? Check out [kubectl attach](/{{page.version}}/docs/user-guide/kubectl/kubectl_attach).
 
 With docker:
 
@@ -89,7 +89,7 @@ $ kubectl attach -it nginx-app-5jyvm
 
 #### docker exec
 
-How do I execute a command in a container? Checkout [kubectl exec](kubectl/kubectl_exec).
+How do I execute a command in a container? Check out [kubectl exec](/{{page.version}}/docs/user-guide/kubectl/kubectl_exec).
 
 With docker:
 
@@ -128,11 +128,11 @@ $ kubectl exec -ti nginx-app-5jyvm -- /bin/sh
 # exit
 ```
 
-For more information see [Getting into containers](getting-into-containers).
+For more information, see [Getting into containers](/{{page.version}}/docs/user-guide/getting-into-containers).
 
 #### docker logs
 
-How do I follow stdout/stderr of a running process? Checkout [kubectl logs](kubectl/kubectl_logs).
+How do I follow stdout/stderr of a running process? Check out [kubectl logs](/{{page.version}}/docs/user-guide/kubectl/kubectl_logs).
 
 With docker:
 
@@ -159,11 +159,11 @@ $ kubectl logs --previous nginx-app-zibvs
 10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
 ```
 
-See [Logging](logging) for more information.
+See [Logging](/{{page.version}}/docs/user-guide/logging) for more information.
 
 #### docker stop and docker rm
 
-How do I stop and delete a running process? Checkout [kubectl delete](kubectl/kubectl_delete).
+How do I stop and delete a running process? Check out [kubectl delete](/{{page.version}}/docs/user-guide/kubectl/kubectl_delete).
 
 With docker:
 
@@ -197,11 +197,11 @@ Notice that we don't delete the pod directly. With kubectl we want to delete the
 
 #### docker login
 
-There is no direct analog of `docker login` in kubectl. If you are interested in using Kubernetes with a private registry, see [Using a Private Registry](images/#using-a-private-registry).
+There is no direct analog of `docker login` in kubectl. If you are interested in using Kubernetes with a private registry, see [Using a Private Registry](/{{page.version}}/docs/user-guide/images/#using-a-private-registry).
 
 #### docker version
 
-How do I get the version of my client and server? Checkout [kubectl version](kubectl/kubectl_version).
+How do I get the version of my client and server? Check out [kubectl version](/{{page.version}}/docs/user-guide/kubectl/kubectl_version).
 
 With docker:
 
@@ -229,7 +229,7 @@ Server Version: version.Info{Major:"0", Minor:"21+", GitVersion:"v0.21.1-411-g32
 
 #### docker info
 
-How do I get miscellaneous info about my environment and configuration? Checkout [kubectl cluster-info](kubectl/kubectl_cluster-info).
+How do I get miscellaneous info about my environment and configuration? Check out [kubectl cluster-info](/{{page.version}}/docs/user-guide/kubectl/kubectl_cluster-info).
 
 With docker:
diff --git a/v1.1/docs/user-guide/downward-api.md b/v1.1/docs/user-guide/downward-api.md
index 74e8f882dbf..d334af8210f 100644
--- a/v1.1/docs/user-guide/downward-api.md
+++ b/v1.1/docs/user-guide/downward-api.md
@@ -76,7 +76,7 @@ spec:
   restartPolicy: Never
 ```
 
-[Download example](downward-api/dapi-pod.yaml)
+[Download example](/{{page.version}}/docs/user-guide/downward-api/dapi-pod.yaml)
 
 
 
@@ -86,7 +86,7 @@ Using a similar syntax it's possible to expose pod information to containers usi
 Downward API are dumped to a mounted volume. This is achieved using a `downwardAPI` volume type and the different items represent the files to be created. `fieldPath` references the field to be exposed.
 
-Downward API volume permits to store more complex data like [`metadata.labels`](labels) and [`metadata.annotations`](/{{page.version}}/docs/user-guide/annotations). Currently key/value pair set fields are saved using `key="value"` format:
+The downward API volume permits storing more complex data like [`metadata.labels`](/{{page.version}}/docs/user-guide/labels) and [`metadata.annotations`](/{{page.version}}/docs/user-guide/annotations). Currently, key/value pair set fields are saved using `key="value"` format:
 
 ```conf
 key1="value1"
@@ -145,10 +145,10 @@ spec:
           fieldPath: metadata.annotations
 ```
 
-[Download example](downward-api/volume/dapi-volume.yaml)
+[Download example](/{{page.version}}/docs/user-guide/downward-api/volume/dapi-volume.yaml)
 
 Some more thorough examples:
 
- * [environment variables](environment-guide/)
- * [downward API](downward-api/)
\ No newline at end of file
+ * [environment variables](/{{page.version}}/docs/user-guide/environment-guide/)
+ * [downward API](/{{page.version}}/docs/user-guide/downward-api/)
\ No newline at end of file
diff --git a/v1.1/docs/user-guide/downward-api/index.md b/v1.1/docs/user-guide/downward-api/index.md
index 6d4e0946fe5..4457769c566 100644
--- a/v1.1/docs/user-guide/downward-api/index.md
+++ b/v1.1/docs/user-guide/downward-api/index.md
@@ -15,7 +15,7 @@ started](/{{page.version}}/docs/getting-started-guides/) for installation instru
 
 Containers consume the downward API using environment variables. The downward API allows containers to be injected with the name and namespace of the pod the container is in.
 
-Use the [`examples/downward-api/dapi-pod.yaml`](dapi-pod.yaml) file to create a Pod with a container that consumes the downward API.
+Use the [`dapi-pod.yaml`](/{{page.version}}/docs/user-guide/downward-api/dapi-pod.yaml) file to create a Pod with a container that consumes the downward API.
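+
+As a rough sketch (illustrative only; the actual example file may differ), such a pod injects the
+pod's name and namespace through `fieldRef` environment variable sources:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: dapi-sketch-pod                    # illustrative name, not the example file's
+spec:
+  containers:
+    - name: test-container
+      image: busybox
+      command: ["/bin/sh", "-c", "env"]    # print the injected variables, then exit
+      env:
+        - name: MY_POD_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.name       # filled in by the downward API
+        - name: MY_POD_NAMESPACE
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.namespace
+  restartPolicy: Never
+```
+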
```shell diff --git a/v1.1/docs/user-guide/environment-guide/index.md b/v1.1/docs/user-guide/environment-guide/index.md index 5d20ac0be55..62c4fe1b714 100644 --- a/v1.1/docs/user-guide/environment-guide/index.md +++ b/v1.1/docs/user-guide/environment-guide/index.md @@ -69,7 +69,7 @@ Backend Namespace: default ``` First the frontend pod's information is printed. The pod name and -[namespace](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/namespaces.md) are retrieved from the +[namespace](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md) are retrieved from the [Downward API](/{{page.version}}/docs/user-guide/downward-api). Next, `USER_VAR` is the name of an environment variable set in the [pod definition](/{{page.version}}/docs/user-guide/environment-guide/show-rc.yaml). Then, the dynamic Kubernetes environment diff --git a/v1.1/docs/user-guide/getting-into-containers.md b/v1.1/docs/user-guide/getting-into-containers.md index 04acc7c3b7a..bd4c0cb7dbd 100644 --- a/v1.1/docs/user-guide/getting-into-containers.md +++ b/v1.1/docs/user-guide/getting-into-containers.md @@ -5,7 +5,7 @@ Developers can use `kubectl exec` to run commands in a container. This guide dem ## Using kubectl exec to check the environment variables of a container -Kubernetes exposes [services](services/#environment-variables) through environment variables. It is convenient to check these environment variables using `kubectl exec`. +Kubernetes exposes [services](/{{page.version}}/docs/user-guide/services/#environment-variables) through environment variables. It is convenient to check these environment variables using `kubectl exec`. We first create a pod and a service, diff --git a/v1.1/docs/user-guide/horizontal-pod-autoscaler.md b/v1.1/docs/user-guide/horizontal-pod-autoscaler.md index a232e4221bf..9cb3d5b6cad 100644 --- a/v1.1/docs/user-guide/horizontal-pod-autoscaler.md +++ b/v1.1/docs/user-guide/horizontal-pod-autoscaler.md @@ -28,21 +28,21 @@ Then, it compares the arithmetic mean of the pods' CPU utilization with the targ CPU utilization is the recent CPU usage of a pod divided by the sum of CPU requested by the pod's containers. Please note that if some of the pod's containers do not have CPU request set, CPU utilization for the pod will not be defined and the autoscaler will not take any action. -Further details of the autoscaling algorithm are given [here](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/horizontal-pod-autoscaler.md#autoscaling-algorithm). +Further details of the autoscaling algorithm are given [here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#autoscaling-algorithm). Autoscaler uses heapster to collect CPU utilization. Therefore, it is required to deploy heapster monitoring in your cluster for autoscaling to work. Autoscaler accesses corresponding replication controller or deployment by scale sub-resource. Scale is an interface which allows to dynamically set the number of replicas and to learn the current state of them. -More details on scale sub-resource can be found [here](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/horizontal-pod-autoscaler.md#scale-subresource). +More details on scale sub-resource can be found [here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#scale-subresource). 
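+
+As a rough illustration of the arithmetic (the numbers below are made up), consider pods that each
+request 200 milli-cores:
+
+```yaml
+# Sketch only: how utilization is derived from a container's CPU request.
+apiVersion: v1
+kind: Pod
+metadata:
+  name: web                  # hypothetical pod
+spec:
+  containers:
+    - name: web
+      image: nginx
+      resources:
+        requests:
+          cpu: 200m          # observed usage of 100m => 50% utilization
+```
+
+If the target utilization is 50% and the pods currently average 100%, the autoscaler roughly
+doubles the replica count (within the configured min/max bounds) so that the average falls back
+toward the target.
+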
## API Object
 
 Horizontal pod autoscaler is a top-level resource in the Kubernetes REST API
 (currently in [beta](/{{page.version}}/docs/api/#api-versioning)).
 More details about the API object can be found at
-[HorizontalPodAutoscaler Object](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
+[HorizontalPodAutoscaler Object](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
 
 ## Support for horizontal pod autoscaler in kubectl
 
@@ -55,7 +55,7 @@ In addition, there is a special `kubectl autoscale` command that allows for easy
 For instance, executing `kubectl autoscale rc foo --min=2 --max=5 --cpu-percent=80`
 will create an autoscaler for replication controller *foo*, with target CPU utilization set to `80%`
 and the number of replicas between 2 and 5.
-The detailed documentation of `kubectl autoscale` can be found [here](kubectl/kubectl_autoscale).
+The detailed documentation of `kubectl autoscale` can be found [here](/{{page.version}}/docs/user-guide/kubectl/kubectl_autoscale).
 
 ## Autoscaling during rolling update
 
@@ -73,11 +73,6 @@ the horizontal pod autoscaler will not be bound to the new replication controlle
 
 ## Further reading
 
-* Design documentation: [Horizontal Pod Autoscaling](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/horizontal-pod-autoscaler.md).
-* Manual of autoscale command in kubectl: [kubectl autoscale](kubectl/kubectl_autoscale).
-* Usage example of [Horizontal Pod Autoscaler](horizontal-pod-autoscaling/).
-
-
-
-
-
+* Design documentation: [Horizontal Pod Autoscaling](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md).
+* Manual of autoscale command in kubectl: [kubectl autoscale](/{{page.version}}/docs/user-guide/kubectl/kubectl_autoscale).
+* Usage example of [Horizontal Pod Autoscaler](/{{page.version}}/docs/user-guide/horizontal-pod-autoscaling/).
\ No newline at end of file
diff --git a/v1.1/docs/user-guide/horizontal-pod-autoscaling/index.md b/v1.1/docs/user-guide/horizontal-pod-autoscaling/index.md
index d0d5ae5c143..168c087a777 100644
--- a/v1.1/docs/user-guide/horizontal-pod-autoscaling/index.md
+++ b/v1.1/docs/user-guide/horizontal-pod-autoscaling/index.md
@@ -20,7 +20,7 @@ heapster monitoring will be turned-on by default).
 
 To demonstrate horizontal pod autoscaler we will use a custom docker image based on php-apache server. The image can be found [here](https://releases.k8s.io/release-1.1/docs/user-guide/horizontal-pod-autoscaling/image).
-It defines [index.php](image/index.php) page which performs some CPU intensive computations.
+It defines an [index.php](/{{page.version}}/docs/user-guide/horizontal-pod-autoscaling/image/index.php) page which performs some CPU-intensive computations.
 
 First, we will start a replication controller running the image and expose it as an external service:
 
@@ -69,7 +69,7 @@ OK!
 
 ## Step Two: Create horizontal pod autoscaler
 
 Now that the server is running, we will create a horizontal pod autoscaler for it.
-To create it, we will use the [hpa-php-apache.yaml](hpa-php-apache.yaml) file, which looks like this:
+To create it, we will use the [hpa-php-apache.yaml](/{{page.version}}/docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml) file, which looks like this:
 
 ```yaml
 apiVersion: extensions/v1beta1
@@ -93,7 +93,7 @@ controlled by the php-apache replication controller we created in the first step
 Roughly speaking, the horizontal autoscaler will increase and decrease the number of replicas (via the replication controller) so as to maintain an average CPU utilization across all Pods of 50% (since each pod requests 200 milli-cores by [kubectl run](#kubectl-run), this means average CPU utilization of 100 milli-cores).
-See [here](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/horizontal-pod-autoscaler.md#autoscaling-algorithm) for more details on the algorithm.
+See [here](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md#autoscaling-algorithm) for more details on the algorithm.
 
 We will create the autoscaler by executing the following command:
 
@@ -102,8 +102,8 @@ $ kubectl create -f docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.ya
 horizontalpodautoscaler "php-apache" created
 ```
 
-Alternatively, we can create the autoscaler using [kubectl autoscale](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/user-guide/kubectl/kubectl_autoscale.md).
-The following command will create the equivalent autoscaler as defined in the [hpa-php-apache.yaml](hpa-php-apache.yaml) file:
+Alternatively, we can create the autoscaler using [kubectl autoscale](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/user-guide/kubectl/kubectl_autoscale.md).
+The following command will create an autoscaler equivalent to the one defined in the [hpa-php-apache.yaml](/{{page.version}}/docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml) file:
 
 ```shell
 $ kubectl autoscale rc php-apache --cpu-percent=50 --min=1 --max=10
diff --git a/v1.1/docs/user-guide/identifiers.md b/v1.1/docs/user-guide/identifiers.md
index 5b081cad672..b0ab62887df 100644
--- a/v1.1/docs/user-guide/identifiers.md
+++ b/v1.1/docs/user-guide/identifiers.md
@@ -3,11 +3,11 @@ title: "Identifiers"
 ---
 All objects in the Kubernetes REST API are unambiguously identified by a Name and a UID.
 
-For non-unique user-provided attributes, Kubernetes provides [labels](labels) and [annotations](/{{page.version}}/docs/user-guide/annotations).
+For non-unique user-provided attributes, Kubernetes provides [labels](/{{page.version}}/docs/user-guide/labels) and [annotations](/{{page.version}}/docs/user-guide/annotations).
 
 ## Names
 
-Names are generally client-provided. Only one object of a given kind can have a given name at a time (i.e., they are spatially unique). But if you delete an object, you can make a new object with the same name. Names are the used to refer to an object in a resource URL, such as `/api/v1/pods/some-name`. By convention, the names of Kubernetes resources should be up to maximum length of 253 characters and consist of lower case alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions. See the [identifiers design doc](https://github.com/kubernetes/kubernetes/blob/{{ page.githubbranch }}/docs/design/identifiers.md) for the precise syntax rules for names.
+Names are generally client-provided. Only one object of a given kind can have a given name at a time (i.e., they are spatially unique). But if you delete an object, you can make a new object with the same name. Names are used to refer to an object in a resource URL, such as `/api/v1/pods/some-name`. By convention, the names of Kubernetes resources should be at most 253 characters long and consist of lowercase alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions. See the [identifiers design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/identifiers.md) for the precise syntax rules for names.
 
 ## UIDs
 
diff --git a/v1.1/docs/user-guide/images.md b/v1.1/docs/user-guide/images.md
index 04ec3fe0f8b..42807edd690 100644
--- a/v1.1/docs/user-guide/images.md
+++ b/v1.1/docs/user-guide/images.md
@@ -146,7 +146,7 @@ where node creation is automated.
 
 Kubernetes supports specifying registry keys on a pod.
 
 First, create a `.dockercfg`, such as running `docker login `.
-Then put the resulting `.dockercfg` file into a [secret resource](secrets). For example:
+Then put the resulting `.dockercfg` file into a [secret resource](/{{page.version}}/docs/user-guide/secrets). For example:
 
 ```shell
 $ docker login
@@ -201,7 +201,7 @@ spec:
 
 This needs to be done for each pod that is using a private registry.
 
 However, setting of this field can be automated by setting the imagePullSecrets
-in a [serviceAccount](service-accounts) resource.
+in a [serviceAccount](/{{page.version}}/docs/user-guide/service-accounts) resource.
 
 Currently, all pods will potentially have read access to any images which were pulled using imagePullSecrets. That is, imagePullSecrets does *NOT* protect your
diff --git a/v1.1/docs/user-guide/ingress.md b/v1.1/docs/user-guide/ingress.md
index 4e3e81da1ed..f78efb40ee2 100644
--- a/v1.1/docs/user-guide/ingress.md
+++ b/v1.1/docs/user-guide/ingress.md
@@ -67,13 +67,13 @@ rules:
 
 *POSTing this to the API server will have no effect if you have not configured an [Ingress controller](#ingress-controllers).*
 
-__Lines 1-4__: As with all other Kubernetes config, an Ingress needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see [here](simple-yaml), [here](/{{page.version}}/docs/user-guide/configuring-containers), and [here](working-with-resources).
+__Lines 1-4__: As with all other Kubernetes config, an Ingress needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see [here](/{{page.version}}/docs/user-guide/simple-yaml), [here](/{{page.version}}/docs/user-guide/configuring-containers), and [here](/{{page.version}}/docs/user-guide/working-with-resources).
 
 __Lines 5-7__: Ingress [spec](/{{page.version}}/docs/devel/api-conventions/#spec-and-status) has all the information needed to configure a loadbalancer or proxy server. Most importantly, it contains a list of rules matched against all incoming requests. Currently the Ingress resource only supports http rules.
 
 __Lines 8-9__: Each http rule contains the following information: A host (eg: foo.bar.com, defaults to * in this example), a list of paths (eg: /testpath) each of which has an associated backend (test:80). Both the host and path must match the content of an incoming request before the loadbalancer directs traffic to the backend.
 
-__Lines 10-12__: A backend is a service:port combination as described in the [services doc](services). Ingress traffic is typically sent directly to the endpoints matching a backend.
+__Lines 10-12__: A backend is a service:port combination as described in the [services doc](/{{page.version}}/docs/user-guide/services). Ingress traffic is typically sent directly to the endpoints matching a backend.
 
 __Global Parameters__: For the sake of simplicity the example Ingress has no global parameters, see the [api-reference](https://releases.k8s.io/release-1.1/pkg/apis/extensions/v1beta1/types.go) for a full definition of the resource. One can specify a global default backend in the absence of which requests that don't match a path in the spec are sent to the default backend of the Ingress controller. Though the Ingress resource doesn't support HTTPS yet, security configs would also be global.
 
@@ -100,7 +100,7 @@ spec:
       servicePort: 80
 ```
 
-[Download example](ingress.yaml)
+[Download example](/{{page.version}}/docs/user-guide/ingress.yaml)
 
 If you create it using `kubectl create -f` you should see:
diff --git a/v1.1/docs/user-guide/introspection-and-debugging.md b/v1.1/docs/user-guide/introspection-and-debugging.md
index df62654f151..16aec7d2e44 100644
--- a/v1.1/docs/user-guide/introspection-and-debugging.md
+++ b/v1.1/docs/user-guide/introspection-and-debugging.md
@@ -308,9 +308,9 @@ status:
 
 Learn about additional debugging tools, including:
 
-* [Logging](logging)
-* [Monitoring](monitoring)
-* [Getting into containers via `exec`](getting-into-containers)
+* [Logging](/{{page.version}}/docs/user-guide/logging)
+* [Monitoring](/{{page.version}}/docs/user-guide/monitoring)
+* [Getting into containers via `exec`](/{{page.version}}/docs/user-guide/getting-into-containers)
 * [Connecting to containers via proxies](/{{page.version}}/docs/user-guide/connecting-to-applications-proxy)
 * [Connecting to containers via port forwarding](/{{page.version}}/docs/user-guide/connecting-to-applications-port-forward)
 
diff --git a/v1.1/docs/user-guide/jobs.md b/v1.1/docs/user-guide/jobs.md
index 70944f34e30..8b42daddd74 100644
--- a/v1.1/docs/user-guide/jobs.md
+++ b/v1.1/docs/user-guide/jobs.md
@@ -42,7 +42,7 @@ spec:
       restartPolicy: Never
 ```
 
-[Download example](job.yaml)
+[Download example](/{{page.version}}/docs/user-guide/job.yaml)
 
 Run the example job by downloading the example file and then running this command:
 
@@ -93,7 +93,7 @@ $ kubectl logs pi-aiw0a
 ## Writing a Job Spec
 
 As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields. For
-general information about working with config files, see [here](simple-yaml),
+general information about working with config files, see [here](/{{page.version}}/docs/user-guide/simple-yaml),
-[here](/{{page.version}}/docs/user-guide/configuring-containers), and [here](working-with-resources).
+[here](/{{page.version}}/docs/user-guide/configuring-containers), and [here](/{{page.version}}/docs/user-guide/working-with-resources).
 
 A Job also needs a [`.spec` section](/{{page.version}}/docs/devel/api-conventions/#spec-and-status).
 
@@ -102,14 +102,14 @@ A Job also needs a [`.spec` section](/{{page.version}}/docs/devel/api-convention
 
 The `.spec.template` is the only required field of the `.spec`.
 
-The `.spec.template` is a [pod template](replication-controller/#pod-template). It has exactly
-the same schema as a [pod](pods), except it is nested and does not have an `apiVersion` or
+The `.spec.template` is a [pod template](/{{page.version}}/docs/user-guide/replication-controller/#pod-template). It has exactly
+the same schema as a [pod](/{{page.version}}/docs/user-guide/pods), except it is nested and does not have an `apiVersion` or
 `kind`.
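+
+A minimal sketch of the nested template (names and values here are illustrative only):
+
+```yaml
+spec:
+  template:
+    metadata:
+      labels:
+        app: example           # must be matched by the pod selector (see below)
+    spec:
+      containers:
+        - name: worker         # hypothetical container
+          image: busybox
+          command: ["sh", "-c", "echo finished"]
+      restartPolicy: Never     # only Never or OnFailure is allowed for a Job
+```
+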
In addition to required fields for a Pod, a pod template in a job must specify appropriate
labels (see [pod selector](#pod-selector)) and an appropriate restart policy.
 
-Only a [`RestartPolicy`](pod-states) equal to `Never` or `OnFailure` are allowed.
+Only a [`RestartPolicy`](/{{page.version}}/docs/user-guide/pod-states) equal to `Never` or `OnFailure` is allowed.
 
 ### Pod Selector
 
@@ -117,7 +117,7 @@ The `.spec.selector` field is a label query over a set of pods.
 
 The `spec.selector` is an object consisting of two fields:
 
-* `matchLabels` - works the same as the `.spec.selector` of a [ReplicationController](replication-controller)
+* `matchLabels` - works the same as the `.spec.selector` of a [ReplicationController](/{{page.version}}/docs/user-guide/replication-controller)
 * `matchExpressions` - allows building more sophisticated selectors by specifying a key, a list of values, and an operator that relates the key and values.
 
@@ -161,7 +161,7 @@ a non-zero exit code, or the Container was killed for exceeding a memory limit,
 happens, and the `.spec.template.containers[].restartPolicy = "OnFailure"`, then the Pod stays on the node, but the Container is re-run. Therefore, your program needs to handle the case when it is restarted locally, or else specify `.spec.template.containers[].restartPolicy = "Never"`.
-See [pods-states](pod-states) for more information on `restartPolicy`.
+See [pod states](/{{page.version}}/docs/user-guide/pod-states) for more information on `restartPolicy`.
 
 An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node (node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the
 
@@ -188,11 +188,11 @@ requires only a single pod.
 
 ### Replication Controller
 
-Jobs are complementary to [Replication Controllers](replication-controller).
+Jobs are complementary to [Replication Controllers](/{{page.version}}/docs/user-guide/replication-controller).
 A Replication Controller manages pods which are not expected to terminate (e.g. web servers), and a Job manages pods that are expected to terminate (e.g. batch jobs).
 
-As discussed in [life of a pod](pod-states), `Job` is *only* appropriate for pods with
+As discussed in [life of a pod](/{{page.version}}/docs/user-guide/pod-states), `Job` is *only* appropriate for pods with
 `RestartPolicy` equal to `OnFailure` or `Never`.
 (Note: If `RestartPolicy` is not set, the default value is `Always`.)
diff --git a/v1.1/docs/user-guide/kubeconfig-file.md b/v1.1/docs/user-guide/kubeconfig-file.md
index 5872ec81ce1..434cfe3212b 100644
--- a/v1.1/docs/user-guide/kubeconfig-file.md
+++ b/v1.1/docs/user-guide/kubeconfig-file.md
@@ -122,7 +122,7 @@ The rules for loading and merging the kubeconfig files are straightforward, but
 
 ## Manipulation of kubeconfig via `kubectl config `
 
 In order to more easily manipulate kubeconfig files, there are a series of subcommands to `kubectl config` to help.
-See [kubectl/kubectl_config.md](kubectl/kubectl_config) for help.
+See [kubectl/kubectl_config.md](/{{page.version}}/docs/user-guide/kubectl/kubectl_config) for help.
 
 ### Example
 
diff --git a/v1.1/docs/user-guide/kubectl-overview.md b/v1.1/docs/user-guide/kubectl-overview.md
index b55d6f72b3a..b866e23a483 100644
--- a/v1.1/docs/user-guide/kubectl-overview.md
+++ b/v1.1/docs/user-guide/kubectl-overview.md
@@ -1,7 +1,7 @@
 ---
 title: "kubectl overview"
 ---
-Use this overview of the `kubectl` command line interface to help you start running commands against Kubernetes clusters. 
This overview quickly covers `kubectl` syntax, describes the command operations, and provides common examples. For details about each command, including all the supported flags and subcommands, see the [kubectl](kubectl/kubectl) reference documentation. +Use this overview of the `kubectl` command line interface to help you start running commands against Kubernetes clusters. This overview quickly covers `kubectl` syntax, describes the command operations, and provides common examples. For details about each command, including all the supported flags and subcommands, see the [kubectl](/{{page.version}}/docs/user-guide/kubectl/kubectl) reference documentation. TODO: Auto-generate this file to ensure it's always in sync with any `kubectl` changes, see [#14177](http://pr.k8s.io/14177). @@ -74,7 +74,7 @@ Operation | Syntax | Description `stop` | `kubectl stop` | Deprecated: Instead, see `kubectl delete`. `version` | `kubectl version [--client] [flags]` | Display the Kubernetes version running on the client and server. -Remember: For more about command operations, see the [kubectl](kubectl/kubectl) reference documentation. +Remember: For more about command operations, see the [kubectl](/{{page.version}}/docs/user-guide/kubectl/kubectl) reference documentation. ## Resource types @@ -101,7 +101,7 @@ Resource type | Abbreviated alias ## Output options -Use the following sections for information about how you can format or sort the output of certain commands. For details about which commands support the various output options, see the [kubectl](kubectl/kubectl) reference documentation. +Use the following sections for information about how you can format or sort the output of certain commands. For details about which commands support the various output options, see the [kubectl](/{{page.version}}/docs/user-guide/kubectl/kubectl) reference documentation. ### Formatting output @@ -120,8 +120,8 @@ Output format | Description `-o=custom-columns=` | Print a table using a comma separated list of [custom columns](#custom-columns). `-o=custom-columns-file=` | Print a table using the [custom columns](#custom-columns) template in the `` file. `-o=json` | Output a JSON formatted API object. -`-o=jsonpath=