Example links use kubernetes/examples

Fix #5203

Previously, many of the Kubernetes links used
`https://github.com/kubernetes/kubernetes/examples`. This directory was
deprecated in commit cb712e41d4 in favor of the new `examples` repo hosted at
`https://github.com/kubernetes/examples`. This commit updates all links
accordingly.
Branch: pull/5240/head
Author: mattjmcnaughton
Date: 2017-08-29 09:10:08 -04:00
Parent: 8c9e84c201
Commit: 959cd767f5
26 changed files with 49 additions and 49 deletions


@@ -137,7 +137,7 @@ If you're interested in learning more about `kubectl`, go ahead and read [kubect
The examples we've used so far apply at most a single label to any resource. There are many scenarios where multiple labels should be used to distinguish sets from one another.
-For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels:
+For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels:
```yaml
labels:
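For orientation while reading this truncated hunk, the YAML continues with per-tier labels roughly like the following; a sketch, with values assumed from the guestbook convention rather than copied from the commit:

```yaml
labels:
  app: guestbook   # shared by every tier of the application
  tier: frontend   # distinguishes this tier from the redis tiers
```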

View File

@@ -19,11 +19,11 @@ This is a living document. If you think of something that is not on this list bu
- Write your configuration files using YAML rather than JSON. Though these formats can be used interchangeably in almost all scenarios, YAML tends to be more user-friendly.
-- Group related objects into a single file whenever it makes sense. One file is often easier to manage than several. See the [guestbook-all-in-one.yaml](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/all-in-one/guestbook-all-in-one.yaml) file as an example of this syntax.
+- Group related objects into a single file whenever it makes sense. One file is often easier to manage than several. See the [guestbook-all-in-one.yaml](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/all-in-one/guestbook-all-in-one.yaml) file as an example of this syntax.
Note also that many `kubectl` commands can be called on a directory, so you can also call `kubectl create` on a directory of config files. See below for more details.
-- Don't specify default values unnecessarily, in order to simplify and minimize configs, and to reduce error. For example, omit the selector and labels in a `ReplicationController` if you want them to be the same as the labels in its `podTemplate`, since those fields are populated from the `podTemplate` labels by default. See the [guestbook app's](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) .yaml files for some [examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/frontend-deployment.yaml) of this.
+- Don't specify default values unnecessarily, in order to simplify and minimize configs, and to reduce error. For example, omit the selector and labels in a `ReplicationController` if you want them to be the same as the labels in its `podTemplate`, since those fields are populated from the `podTemplate` labels by default. See the [guestbook app's](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/) .yaml files for some [examples](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/frontend-deployment.yaml) of this.
- Put an object description in an annotation to allow better introspection.
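The "group related objects into a single file" advice in this hunk relies on YAML's multi-document syntax; a minimal sketch (names, images, and API versions are illustrative, not taken from the guestbook files):

```yaml
# One file, two related objects, separated by "---"
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: guestbook
    tier: frontend
  ports:
  - port: 80
---
apiVersion: apps/v1beta1   # assumed API version for the 2017-era docs this commit edits
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4   # illustrative image
        ports:
        - containerPort: 80
```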
@@ -58,7 +58,7 @@ This is a living document. If you think of something that is not on this list bu
## Using Labels
-- Define and use [labels](/docs/user-guide/labels/) that identify __semantic attributes__ of your application or deployment. For example, instead of attaching a label to a set of pods to explicitly represent some service (For example, `service: myservice`), or explicitly representing the replication controller managing the pods (for example, `controller: mycontroller`), attach labels that identify semantic attributes, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. This will let you select the object groups appropriate to the context— for example, a service for all "tier: frontend" pods, or all "test" phase components of app "myapp". See the [guestbook](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) app for an example of this approach.
+- Define and use [labels](/docs/user-guide/labels/) that identify __semantic attributes__ of your application or deployment. For example, instead of attaching a label to a set of pods to explicitly represent some service (For example, `service: myservice`), or explicitly representing the replication controller managing the pods (for example, `controller: mycontroller`), attach labels that identify semantic attributes, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. This will let you select the object groups appropriate to the context— for example, a service for all "tier: frontend" pods, or all "test" phase components of app "myapp". See the [guestbook](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/) app for an example of this approach.
A service can be made to span multiple deployments, such as is done across [rolling updates](/docs/tasks/run-application/rolling-update-replication-controller/), by simply omitting release-specific labels from its selector, rather than updating a service's selector to match the replication controller's selector fully.
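To make the selector advice concrete, a hedged sketch of a Service that omits the release-specific `deployment` label so it spans old and new replicas (all values invented for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-frontend
spec:
  selector:
    app: myapp
    tier: frontend   # no "deployment: v3" key, so pods from every
                     # deployment of this tier are selected
  ports:
  - port: 80
```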


@@ -169,7 +169,7 @@ Till now we have only accessed the nginx server from within the cluster. Before
* An nginx server configured to use the certificates
* A [secret](/docs/user-guide/secrets) that makes the certificates accessible to pods
-You can acquire all these from the [nginx https example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/https-nginx/), in short:
+You can acquire all these from the [nginx https example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/https-nginx/), in short:
```shell
$ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json
@@ -188,7 +188,7 @@ Now modify your nginx replicas to start an https server using the certificate in
Noteworthy points about the nginx-secure-app manifest:
- It contains both Deployment and Service specification in the same file.
-- The [nginx server](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/https-nginx/default.conf) serves http traffic on port 80 and https traffic on 443, and nginx Service exposes both ports.
+- The [nginx server](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/https-nginx/default.conf) serves http traffic on port 80 and https traffic on 443, and nginx Service exposes both ports.
- Each container has access to the keys through a volume mounted at /etc/nginx/ssl. This is setup *before* the nginx server is started.
```shell
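The secret-backed volume described in these bullet points would appear in the pod template roughly as follows; a sketch assuming a secret named `nginxsecret` built from the `/tmp/secret.json` generated above:

```yaml
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: nginxsecret   # assumed name of the secret created earlier
  containers:
  - name: nginxhttps
    image: nginx                # placeholder; the example ships its own image
    ports:
    - containerPort: 443
    - containerPort: 80
    volumeMounts:
    - mountPath: /etc/nginx/ssl # keys are in place before nginx starts
      name: secret-volume
```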


@@ -345,7 +345,7 @@ kube-dns 10.180.3.17:53,10.180.3.17:53 1h
If you do not see the endpoints, see endpoints section in the [debugging services documentation](/docs/tasks/debug-application-cluster/debug-service/).
-For additional Kubernetes DNS examples, see the [cluster-dns examples](https://git.k8s.io/kubernetes/examples/cluster-dns) in the Kubernetes GitHub repository.
+For additional Kubernetes DNS examples, see the [cluster-dns examples](https://github.com/kubernetes/examples/tree/master/staging/cluster-dns) in the Kubernetes GitHub repository.
## Kubernetes Federation (Multiple Zone support)


@@ -536,7 +536,7 @@ parameters:
```
$ kubectl create secret generic heketi-secret --type="kubernetes.io/glusterfs" --from-literal=key='opensesame' --namespace=default
```
-Example of a secret can be found in [glusterfs-provisioning-secret.yaml](https://git.k8s.io/kubernetes/examples/persistent-volume-provisioning/glusterfs/glusterfs-secret.yaml).
+Example of a secret can be found in [glusterfs-provisioning-secret.yaml](https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning/glusterfs/glusterfs-secret.yaml).
* `clusterid`: `630372ccdc720a92c681fb928f27b53f` is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of clusterids, for ex:
"8452344e2becec931ece4e33c4674e4e,42982310de6c63381718ccfa6d8cf397". This is an optional parameter.
* `gidMin`, `gidMax` : The minimum and maximum value of GID range for the storage class. A unique value (GID) in this range ( gidMin-gidMax ) will be used for dynamically provisioned volumes. These are optional values. If not specified, the volume will be provisioned with a value between 2000-2147483647 which are defaults for gidMin and gidMax respectively.
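Assembling the parameters this hunk documents, a StorageClass using the Heketi secret might look like the sketch below (the REST endpoint is a placeholder; the cluster ID is the one quoted in the text):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"                 # placeholder Heketi endpoint
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"                      # created by the command above
  clusterid: "630372ccdc720a92c681fb928f27b53f"    # ID quoted in the hunk
  gidMin: "40000"
  gidMax: "50000"
```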
@@ -631,7 +631,7 @@ parameters:
vSphere Infrastructure(VI) administrator can specify storage requirements for applications in terms of storage capabilities while creating a storage class inside Kubernetes. Please note that while creating a StorageClass, administrator should specify storage capability names used in the table above as these names might differ from the ones used by VSAN. For example - Number of disk stripes per object is referred to as stripeWidth in VSAN documentation however vSphere Cloud Provider uses a friendly name diskStripes.
-You can see [vSphere example](https://git.k8s.io/kubernetes/examples/volumes/vsphere) for more details.
+You can see [vSphere example](https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere) for more details.
#### Ceph RBD
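For the vSphere capability naming discussed in the hunk above, a hedged StorageClass sketch using the friendly `diskStripes` parameter (class name and values are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-vsan
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  diskStripes: "2"   # VSAN "Number of disk stripes per object", friendly name
```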


@@ -300,7 +300,7 @@ writers simultaneously.
**Important:** You must have your own NFS server running with the share exported before you can use it.
{: .caution}
-See the [NFS example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/nfs) for more details.
+See the [NFS example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/nfs) for more details.
### iscsi
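The volume hunks in this file all follow the same shape in a pod spec; one representative sketch for the NFS case above (server address and export path are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: nfs-vol
      mountPath: /usr/share/nginx/html
  volumes:
  - name: nfs-vol
    nfs:
      server: 10.0.0.5      # placeholder NFS server address
      path: /exports/data   # placeholder exported share
```

The iscsi, fc, glusterfs, rbd, and cephfs hunks below differ only in the volume key and its driver-specific fields.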
@@ -319,7 +319,7 @@ and then serve it in parallel from as many pods as you need. Unfortunately,
iSCSI volumes can only be mounted by a single consumer in read-write mode - no
simultaneous writers allowed.
-See the [iSCSI example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/iscsi) for more details.
+See the [iSCSI example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/iscsi) for more details.
### fc (fibre channel)
@@ -331,7 +331,7 @@ targetWWNs expect that those WWNs are from multi-path connections.
**Important:** You must configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them.
{: .caution}
-See the [FC example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/fibre_channel) for more details.
+See the [FC example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/fibre_channel) for more details.
### flocker
@@ -347,7 +347,7 @@ can be "handed off" between pods as required.
**Important:** You must have your own Flocker installation running before you can use it.
{: .caution}
-See the [Flocker example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/flocker) for more details.
+See the [Flocker example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/flocker) for more details.
### glusterfs
@@ -362,7 +362,7 @@ simultaneously.
**Important:** You must have your own GlusterFS installation running before you can use it.
{: .caution}
-See the [GlusterFS example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/glusterfs) for more details.
+See the [GlusterFS example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/glusterfs) for more details.
### rbd
@@ -382,7 +382,7 @@ and then serve it in parallel from as many pods as you need. Unfortunately,
RBD volumes can only be mounted by a single consumer in read-write mode - no
simultaneous writers allowed.
-See the [RBD example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/rbd) for more details.
+See the [RBD example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/rbd) for more details.
### cephfs
@@ -396,7 +396,7 @@ writers simultaneously.
**Important:** You must have your own Ceph server running with the share exported before you can use it.
{: .caution}
-See the [CephFS example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/cephfs/) for more details.
+See the [CephFS example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/cephfs/) for more details.
### gitRepo
@@ -555,20 +555,20 @@ A `FlexVolume` enables users to mount vendor volumes into a pod. It expects vend
drivers are installed in the volume plugin path on each kubelet node. This is
an alpha feature and may change in future.
-More details are in [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/flexvolume/README.md).
+More details are in [here](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/flexvolume/README.md).
### AzureFileVolume
A `AzureFileVolume` is used to mount a Microsoft Azure File Volume (SMB 2.1 and 3.0)
into a Pod.
-More details can be found [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/azure_file/README.md).
+More details can be found [here](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/azure_file/README.md).
### AzureDiskVolume
A `AzureDiskVolume` is used to mount a Microsoft Azure [Data Disk](https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-about-disks-vhds/) into a Pod.
-More details can be found [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/azure_disk/README.md).
+More details can be found [here](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/azure_disk/README.md).
### vsphereVolume
@@ -626,7 +626,7 @@ spec:
volumePath: "[DatastoreName] volumes/myDisk"
fsType: ext4
```
-More examples can be found [here](https://git.k8s.io/kubernetes/examples/volumes/vsphere).
+More examples can be found [here](https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere).
### Quobyte
@@ -636,7 +636,7 @@ A `Quobyte` volume allows an existing [Quobyte](http://www.quobyte.com) volume t
**Important:** You must have your own Quobyte setup running with the volumes created before you can use it.
{: .caution}
-See the [Quobyte example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/quobyte) for more details.
+See the [Quobyte example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/quobyte) for more details.
### PortworxVolume
A `PortworxVolume` is an elastic block storage layer that runs hyperconverged with Kubernetes. Portworx fingerprints storage in a
@@ -669,7 +669,7 @@ spec:
**Important:** Make sure you have an existing PortworxVolume with name `pxvol` before using it in the pod.
{: .caution}
-More details and examples can be found [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/portworx/README.md).
+More details and examples can be found [here](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/portworx/README.md).
### ScaleIO
ScaleIO is a software-based storage platform that can use existing hardware to create clusters of scalable
@@ -705,7 +705,7 @@ spec:
fsType: xfs
```
-For further detail, please the see the [ScaleIO examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/scaleio).
+For further detail, please the see the [ScaleIO examples](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/scaleio).
### StorageOS
A `storageos` volume allows an existing [StorageOS](https://www.storageos.com) volume to be mounted into your pod.
@@ -747,7 +747,7 @@ spec:
fsType: ext4
```
-For more information including Dynamic Provisioning and Persistent Volume Claims, please see the [StorageOS examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/storageos).
+For more information including Dynamic Provisioning and Persistent Volume Claims, please see the [StorageOS examples](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/storageos).
### local


@@ -366,7 +366,7 @@ of custom controller for those pods. This allows the most flexibility, but may
complicated to get started with and offers less integration with Kubernetes.
One example of this pattern would be a Job which starts a Pod which runs a script that in turn
-starts a Spark master controller (see [spark example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/spark/README.md)), runs a spark
+starts a Spark master controller (see [spark example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/spark/README.md)), runs a spark
driver, and then cleans up.
An advantage of this approach is that the overall process gets the completion guarantee of a Job
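A rough sketch of the Job-wrapping-a-driver pattern described above (image and commands are invented placeholders, not the spark example's actual manifest):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: spark-driver
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: driver
        image: example.com/spark-driver:latest   # placeholder image
        # placeholder script: start the master, run the work, clean up
        command: ["/bin/sh", "-c", "start-master.sh && run-job.sh && cleanup.sh"]
```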


@@ -39,7 +39,7 @@ This doc assumes familiarity with the following Kubernetes concepts:
* [Cluster DNS](/docs/concepts/services-networking/dns-pod-service/)
* [Headless Services](/docs/user-guide/services/#headless-services)
* [Persistent Volumes](/docs/concepts/storage/volumes/)
-* [Persistent Volume Provisioning](http://releases.k8s.io/{{page.githubbranch}}/examples/persistent-volume-provisioning/README.md)
+* [Persistent Volume Provisioning](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/persistent-volume-provisioning/README.md)
You need a working Kubernetes cluster at version >= 1.3, with a healthy DNS [cluster addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/README.md) at version >= 15. You cannot use PetSet on a hosted Kubernetes provider that has disabled `alpha` resources.
@@ -95,7 +95,7 @@ Before you start deploying applications as PetSets, there are a few limitations
* PetSet is an *alpha* resource, not available in any Kubernetes release prior to 1.3.
* As with all alpha/beta resources, it can be disabled through the `--runtime-config` option passed to the apiserver, and in fact most likely will be disabled on hosted offerings of Kubernetes.
* The only updatable field on a PetSet is `replicas`.
-* The storage for a given pet must either be provisioned by a [persistent volume provisioner](http://releases.k8s.io/{{page.githubbranch}}/examples/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin. Note that persistent volume provisioning is also currently in alpha.
+* The storage for a given pet must either be provisioned by a [persistent volume provisioner](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin. Note that persistent volume provisioning is also currently in alpha.
* Deleting and/or scaling a PetSet down will *not* delete the volumes associated with the PetSet. This is done to ensure safety first, your data is more valuable than an auto purge of all related PetSet resources. **Deleting the Persistent Volume Claims will result in a deletion of the associated volumes**.
* All PetSets currently require a "governing service", or a Service responsible for the network identity of the pets. The user is responsible for this Service.
* Updating an existing PetSet is currently a manual process, meaning you either need to deploy a new PetSet with the new image version, or orphan Pets one by one, update their image, and join them back to the cluster.


@@ -42,7 +42,7 @@ provides a set of stateless replicas. Controllers such as
* StatefulSet is a beta resource, not available in any Kubernetes release prior to 1.5.
* As with all alpha/beta resources, you can disable StatefulSet through the `--runtime-config` option passed to the apiserver.
-* The storage for a given Pod must either be provisioned by a [PersistentVolume Provisioner](http://releases.k8s.io/{{page.githubbranch}}/examples/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin.
+* The storage for a given Pod must either be provisioned by a [PersistentVolume Provisioner](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin.
* Deleting and/or scaling a StatefulSet down will *not* delete the volumes associated with the StatefulSet. This is done to ensure data safety, which is generally more valuable than an automatic purge of all related StatefulSet resources.
* StatefulSets currently require a [Headless Service](/docs/concepts/services-networking/service/#headless-services) to be responsible for the network identity of the Pods. You are responsible for creating this Service.
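For the storage requirement in this hunk, a minimal `volumeClaimTemplates` sketch that asks a provisioner for storage; the annotation form matches the 1.5-era docs, and the class name is a placeholder:

```yaml
volumeClaimTemplates:
- metadata:
    name: data
    annotations:
      # annotation form used by the 1.5-era docs; "standard" is a placeholder class
      volume.beta.kubernetes.io/storage-class: standard
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 1Gi
```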


@@ -142,9 +142,9 @@ For more information, please read [kubeconfig files](/docs/concepts/cluster-admi
See [a simple nginx example](/docs/user-guide/simple-nginx) to try out your new cluster.
-The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/)
+The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/)
-For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/)
+For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/)
## Scaling the cluster


@@ -653,7 +653,7 @@ Now that the CoreOS with Kubernetes installed is up and running lets spin up som
See [a simple nginx example](/docs/user-guide/simple-nginx) to try out your new cluster.
-For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/).
+For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/).
## Helping commands for debugging


@@ -33,7 +33,7 @@ Explore the following resources for more information about Kubernetes, Kubernete
- [DCOS Documentation](https://docs.mesosphere.com/)
- [Managing DCOS Services](https://docs.mesosphere.com/services/kubernetes/)
-- [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/)
+- [Kubernetes Examples](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/)
- [Kubernetes on Mesos Documentation](https://github.com/kubernetes-incubator/kube-mesos-framework/blob/master/README.md)
- [Kubernetes on Mesos Release Notes](https://github.com/mesosphere/kubernetes-mesos/releases)
- [Kubernetes on DCOS Package Source](https://github.com/mesosphere/kubernetes-mesos)
@@ -110,7 +110,7 @@ $ dcos kubectl get pods --namespace=kube-system
Names and ages may vary.
-Now that Kubernetes is installed on DCOS, you may wish to explore the [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/README.md) or the [Kubernetes User Guide](/docs/user-guide/).
+Now that Kubernetes is installed on DCOS, you may wish to explore the [Kubernetes Examples](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/README.md) or the [Kubernetes User Guide](/docs/user-guide/).
## Uninstall


@@ -135,7 +135,7 @@ Some of the pods may take a few seconds to start up (during this time they'll sh
Then, see [a simple nginx example](/docs/user-guide/simple-nginx) to try out your new cluster.
-For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/). The [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) is a good "getting started" walkthrough.
+For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/). The [guestbook example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/) is a good "getting started" walkthrough.
### Tearing down the cluster


@@ -216,7 +216,7 @@ sudo route -n add -net 172.17.0.0 $(docker-machine ip kube-dev)
To learn more about Pods, Volumes, Labels, Services, and Replication Controllers, start with the
[Kubernetes Tutorials](/docs/tutorials/).
-To skip to a more advanced example, see the [Guestbook Example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/)
+To skip to a more advanced example, see the [Guestbook Example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/)
1. Destroy cluster


@@ -333,7 +333,7 @@ Future work will add instructions to this guide to enable support for Kubernetes
[6]: http://mesos.apache.org/
[7]: https://github.com/kubernetes-incubator/kube-mesos-framework/blob/master/docs/issues.md
[8]: https://github.com/mesosphere/kubernetes-mesos/issues
-[9]: https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples
+[9]: https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/
[10]: http://open.mesosphere.com/getting-started/cloud/google/mesosphere/#vpn-setup
[11]: https://git.k8s.io/kubernetes/cluster/addons/dns/README.md#kube-dns
[12]: https://git.k8s.io/kubernetes/cluster/addons/dns/kubedns-controller.yaml.in


@@ -167,7 +167,7 @@ Once the nginx pod is running, use the port-forward command to set up a proxy fr
You should now see nginx on [http://localhost:8888]().
-For more complex examples please see the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/).
+For more complex examples please see the [examples directory](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/).
## Administering your cluster with Openstack


@@ -151,7 +151,7 @@ The `kube-up` script is not yet supported on AWS. Instead, we recommend followin
### Deploy apps to the cluster
-After creating the cluster, you can start deploying applications. For an introductory example, [deploy a simple nginx web server](/docs/user-guide/simple-nginx). Note that this example did not have to be modified for use with a "rktnetes" cluster. More examples can be found in the [Kubernetes examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/).
+After creating the cluster, you can start deploying applications. For an introductory example, [deploy a simple nginx web server](/docs/user-guide/simple-nginx). Note that this example did not have to be modified for use with a "rktnetes" cluster. More examples can be found in the [Kubernetes examples directory](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/).
## Modular isolation with interchangeable stage1 images


@@ -170,7 +170,7 @@ NAME STATUS AGE VERSION
10.10.103.250 Ready 3d v1.6.0+fff5156
```
-Also you can run Kubernetes [guest-example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) to build a redis backend cluster
+Also you can run Kubernetes [guest-example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/) to build a redis backend cluster
### Deploy addons


@@ -33,7 +33,7 @@ vSphere Cloud Provider allows using vSphere managed storage within Kubernetes. I
Documentation for how to use vSphere managed storage can be found in the [persistent volumes user guide](/docs/concepts/storage/persistent-volumes/#vsphere) and the [volumes user guide](/docs/concepts/storage/volumes/#vspherevolume).
-Examples can be found [here](https://git.k8s.io/kubernetes/examples/volumes/vsphere).
+Examples can be found [here](https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere).
#### Enable vSphere Cloud Provider


@@ -23,7 +23,7 @@ Check the location and credentials that kubectl knows about with this command:
$ kubectl config view
```
-Many of the [examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) provide an introduction to using
+Many of the [examples](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/) provide an introduction to using
kubectl and complete documentation is found in the [kubectl manual](/docs/user-guide/kubectl/index).
### Directly accessing the REST API
@@ -172,7 +172,7 @@ From within a pod the recommended ways to connect to API are:
process within a container. This proxies the
Kubernetes API to the localhost interface of the pod, so that other processes
in any container of the pod can access it. See this [example of using kubectl proxy
-in a pod](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/kubectl-container/).
+in a pod](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/kubectl-container/).
- use the Go client library, and create a client using the `rest.InClusterConfig()` and `kubernetes.NewForConfig()` functions.
They handle locating and authenticating to the apiserver. [example](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go)
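A hedged sketch of the kubectl-proxy-as-sidecar option described above; the proxy image is a placeholder, since any image containing `kubectl` works:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-consumer
spec:
  containers:
  - name: kubectl-proxy
    image: example.com/kubectl:latest   # placeholder: any image that contains kubectl
    command: ["kubectl", "proxy", "--port=8001"]
  - name: app
    image: nginx
    # can reach the API at http://localhost:8001/ via the pod's shared network namespace
```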


@@ -31,7 +31,7 @@ Check the location and credentials that kubectl knows about with this command:
$ kubectl config view
```
-Many of the [examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) provide an introduction to using
+Many of the [examples](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/) provide an introduction to using
kubectl. Complete documentation is found in the [kubectl manual](/docs/user-guide/kubectl/index).
### Directly accessing the REST API
@@ -194,7 +194,7 @@ From within a pod the recommended ways to connect to API are:
process within a container. This proxies the
Kubernetes API to the localhost interface of the pod, so that other processes
in any container of the pod can access it. See this [example of using kubectl proxy
-in a pod](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/kubectl-container/).
+in a pod](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/kubectl-container/).
- use the Go client library, and create a client using the `rest.InClusterConfig()` and `kubernetes.NewForConfig()` functions.
They handle locating and authenticating to the apiserver. [example](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go)


@@ -31,7 +31,7 @@ Here is an overview of the steps in this example:
## Starting Redis
For this example, for simplicity, we will start a single instance of Redis.
-See the [Redis Example](https://git.k8s.io/kubernetes/examples/guestbook) for an example
+See the [Redis Example](https://github.com/kubernetes/examples/tree/master/guestbook) for an example
of deploying Redis scalably and redundantly.
Start a temporary Pod running Redis and a service so we can find it.
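The temporary Redis Pod and Service mentioned here could be as small as the following sketch (names are assumptions consistent with the work-queue example's prose):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  containers:
  - name: master
    image: redis
    ports:
    - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
  - port: 6379
```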


@@ -23,7 +23,7 @@ following Kubernetes concepts.
* [Cluster DNS](/docs/concepts/services-networking/dns-pod-service/)
* [Headless Services](/docs/concepts/services-networking/service/#headless-services)
* [PersistentVolumes](/docs/concepts/storage/volumes/)
-* [PersistentVolume Provisioning](http://releases.k8s.io/{{page.githubbranch}}/examples/persistent-volume-provisioning/)
+* [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/persistent-volume-provisioning/)
* [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/)
* [kubectl CLI](/docs/user-guide/kubectl)


@@ -26,7 +26,7 @@ Kubernetes concepts.
* [Cluster DNS](/docs/concepts/services-networking/dns-pod-service/)
* [Headless Services](/docs/concepts/services-networking/service/#headless-services)
* [PersistentVolumes](/docs/concepts/storage/volumes/)
-* [PersistentVolume Provisioning](http://releases.k8s.io/{{page.githubbranch}}/examples/persistent-volume-provisioning/)
+* [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/persistent-volume-provisioning/)
* [ConfigMaps](/docs/tasks/configure-pod-container/configmap/)
* [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/)
* [PodDisruptionBudgets](/docs/admin/disruptions/#specifying-a-poddisruptionbudget)


@@ -162,4 +162,4 @@ Finally, we have also introduced an environment variable to the `git-monitor` co
## What's Next?
Continue on to [Kubernetes 201](/docs/user-guide/walkthrough/k8s201) or
-for a complete application see the [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/)
+for a complete application see the [guestbook example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/)


@@ -225,4 +225,4 @@ For more information about health checking, see [Container Probes](/docs/user-gu
## What's Next?
-For a complete application see the [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/).
+For a complete application see the [guestbook example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/).