Merge branch 'master' into master

pull/1157/head
fbsolo 2016-09-04 17:29:58 -07:00 committed by GitHub
commit 06defce9ef
17 changed files with 90 additions and 53 deletions

.gitignore vendored

@@ -4,3 +4,4 @@
_site/**
.sass-cache/**
CNAME
.travis.yml


@@ -1,7 +1,16 @@
language: go
go:
- 1.6
- 1.6.2
# Don't want default ./... here:
install: go get -t -v ./test
script: go test -v ./test
install:
- export PATH=$GOPATH/bin:$PATH
- mkdir -p $HOME/gopath/src/k8s.io
- mv $TRAVIS_BUILD_DIR $HOME/gopath/src/k8s.io/kubernetes.github.io
- go get -t -v k8s.io/kubernetes.github.io/test
- git clone --depth=50 --branch=master https://github.com/kubernetes/md-check $HOME/gopath/src/k8s.io/md-check
- go get -t -v k8s.io/md-check
script:
- go test -v k8s.io/kubernetes.github.io/test
- $GOPATH/bin/md-check --root-dir=$HOME/gopath/src/k8s.io/kubernetes.github.io


@@ -34,7 +34,6 @@ title: Community
<div class="company-logos">
<img src="/images/community_logos/zulily_logo.png">
<img src="/images/community_logos/we_pay_logo.png">
<img src="/images/community_logos/viacom_logo.png">
<img src="/images/community_logos/goldman_sachs_logo.png">
<img src="/images/community_logos/ebay_logo.png">
<img src="/images/community_logos/box_logo.png">
@@ -72,6 +71,9 @@ title: Community
<a href="http://docs.datadoghq.com/integrations/kubernetes/"><img src="/images/community_logos/datadog_logo.png"></a>
<a href="https://apprenda.com/kubernetes-support/"><img src="/images/community_logos/apprenda_logo.png"></a>
<a href="http://www.ibm.com/cloud-computing/"><img src="/images/community_logos/ibm_logo.png"></a>
<a href="http://info.crunchydata.com/blog/advanced-crunchy-containers-for-postgresql"><img src="/images/community_logos/crunchy_data_logo.png"></a>
<a href="https://content.mirantis.com/Containerizing-OpenStack-on-Kubernetes-Video-Landing-Page.html"><img src="/images/community_logos/mirantis_logo.png"></a>
<a href="http://blog.aquasec.com/security-best-practices-for-kubernetes-deployment"><img src="/images/community_logos/aqua_logo.png"></a>
</div>
</div>
</main>


@@ -117,21 +117,6 @@ When the plug-in sets a compute resource request, it annotates the pod with info
See the [InitialResources proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/initial-resources.md) for more details.
### NamespaceExists (deprecated)
This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes `Namespace`
and reject the request if the `Namespace` was not previously created. We strongly recommend running
this plug-in to ensure integrity of your data.
The functionality of this admission controller has been merged into `NamespaceLifecycle`
### NamespaceAutoProvision (deprecated)
This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes `Namespace`
and create a new `Namespace` if one did not already exist previously.
We strongly recommend `NamespaceLifecycle` over `NamespaceAutoProvision`.
### NamespaceLifecycle
This plug-in enforces that a `Namespace` that is undergoing termination cannot have new objects created in it,

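For context, the plug-ins described above are enabled through the API server's `--admission-control` flag (as the cluster configs changed later in this commit show). A minimal sketch, with an illustrative plug-in list rather than a recommended one:

```shell
# Illustrative only: NamespaceLifecycle subsumes the deprecated
# NamespaceExists and NamespaceAutoProvision plug-ins.
kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
```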

@@ -10,9 +10,9 @@ assignees:
At {{page.version}}, Kubernetes supports clusters with up to 1000 nodes. More specifically, we support configurations that meet *all* of the following criteria:
* No more than 1000 nodes
* No more than 30000 total pods
* No more than 60000 total containers
* No more than 2000 nodes
* No more than 60000 total pods
* No more than 120000 total containers
* No more than 100 pods per node
* TOC


@@ -185,6 +185,10 @@ cluster state, such as the controller manager and scheduler. To achieve this re
instances of these actors, in case a machine dies. To achieve this, we are going to use a lease-lock in the API to perform
master election. We will use the `--leader-elect` flag for each scheduler and controller-manager; using a lease in the API ensures that only one instance of the scheduler and controller-manager is running at once.
The scheduler and controller-manager can be configured to talk to the API server on the same node (i.e. 127.0.0.1), or they can be configured to communicate using the load balanced IP address of the API servers. Regardless of how they are configured, the scheduler and controller-manager will complete the leader election process mentioned above when using the `--leader-elect` flag.
In case of a failure accessing the API server, the elected leader will not be able to renew the lease, causing a new leader to be elected. This is especially relevant when configuring the scheduler and controller-manager to access the API server via 127.0.0.1, and the API server on the same node is unavailable.
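As a rough sketch of the flags involved (values are illustrative and not a complete command line), each replica of the scheduler and controller-manager is started with leader election enabled:

```shell
# Illustrative: point each replica at its local or load-balanced API server
# and let the lease-based leader election pick a single active instance.
kube-scheduler --master=127.0.0.1:8080 --leader-elect=true
kube-controller-manager --master=127.0.0.1:8080 --leader-elect=true
```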
### Installing configuration files
First, create empty log files on each node, so that Docker will mount the files rather than creating new directories:


@@ -91,7 +91,7 @@ coreos:
ExecStart=/opt/bin/kube-apiserver \
--service-account-key-file=/opt/bin/kube-serviceaccount.key \
--service-account-lookup=false \
--admission-control=NamespaceLifecycle,NamespaceAutoProvision,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
--runtime-config=api/v1 \
--allow-privileged=true \
--insecure-bind-address=0.0.0.0 \


@@ -97,7 +97,7 @@ KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://kube-master:4001"
# Remove ServiceAccount from this line to run without API Tokens
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ResourceQuota"
```
* Create /var/run/kubernetes on master:


@@ -95,17 +95,29 @@ potential issues with client/server version skew.
You may find it useful to enable `kubectl` bash completion:
```
$ source ./contrib/completions/bash/kubectl
```
* If you're using kubectl with Kubernetes version 1.2 or earlier, you can source the kubectl completion script as follows:<br>
```
$ source ./contrib/completions/bash/kubectl
```
**Note**: This will last for the duration of your bash session. If you want to make this permanent you need to add this line in your bash profile.
* If you're using kubectl with Kubernetes version 1.3, use the `kubectl completion` command as follows:<br>
```
$ source <(kubectl completion bash)
```
Alternatively, on most linux distributions you can also move the completions file to your bash_completions.d like this:
**Note**: The above commands will last for the duration of your bash session. If you want to make this permanent, you need to add the corresponding command to your bash profile.
```
$ cp ./contrib/completions/bash/kubectl /etc/bash_completion.d/
```
Alternatively, on most linux distributions you can also add a completions file to your bash_completions.d as follows:
* For kubectl with Kubernetes v1.2 or earlier:<br>
```
$ cp ./contrib/completions/bash/kubectl /etc/bash_completion.d/
```
* For kubectl with Kubernetes v1.3:<br>
```
$ kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl
```
but then you have to update it when you update kubectl.
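To make completion survive new shells, one option (assuming bash and the 1.3-style `kubectl completion` command) is to append the source line to your shell profile:

```shell
# Adjust ~/.bashrc to whichever profile file your shell actually loads.
echo 'source <(kubectl completion bash)' >> ~/.bashrc
```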


@@ -127,7 +127,7 @@ List the nodes in your cluster by running:
kubectl get nodes
```
Minikube contains a built-in Docker daemon that for running containers.
Minikube contains a built-in Docker daemon for running containers.
If you use another Docker daemon for building your containers, you will have to publish them to a registry before minikube can pull them.
You can use minikube's built in Docker daemon to avoid this extra step of pushing your images.
Use the built-in Docker daemon with:
@@ -136,7 +136,7 @@ Use the built-in Docker daemon with:
eval $(minikube docker-env)
```
This command sets up the Docker environment variables so a Docker client can communicate with the minikube Docker daemon.
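For reference, the variables exported by `minikube docker-env` look roughly like the following; the IP address and certificate path here are placeholders, not real output:

```shell
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="$HOME/.minikube/certs"
```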
Minikube currently supports only docker version 1.11.1 on the server, which is what is supported by Kubernetes 1.3. With a newer docker version you'll get this [issue](https://github.com/kubernetes/minikube/issues/338).
Minikube currently supports only docker version 1.11.1 on the server, which is what is supported by Kubernetes 1.3. With a newer docker version, you'll get this [issue](https://github.com/kubernetes/minikube/issues/338).
```shell
docker ps


@@ -25,10 +25,14 @@ mkdir -p $GOPATH
export PATH=$PATH:$GOPATH/bin
```
4. Install the govc tool to interact with ESXi/vCenter:
4. Install the govc tool to interact with ESXi/vCenter. Head to [govc Releases](https://github.com/vmware/govmomi/releases) to download the latest.
```shell
go get github.com/vmware/govmomi/govc
# Sample commands for v0.8.0 for 64 bit Linux.
curl -OL https://github.com/vmware/govmomi/releases/download/v0.8.0/govc_linux_amd64.gz
gzip -d govc_linux_amd64.gz
chmod +x govc_linux_amd64
mv govc_linux_amd64 /usr/local/bin/govc
```
5. Get or build a [binary release](/docs/getting-started-guides/binary_release)
@@ -43,7 +47,7 @@ md5sum -c kube.vmdk.gz.md5
gzip -d kube.vmdk.gz
```
Import this VMDK into your vSphere datastore:
Configure the environment for govc
```shell
export GOVC_URL='hostname' # hostname of the vc
@@ -52,9 +56,30 @@ export GOVC_PASSWORD='password' # password for the above username
export GOVC_NETWORK='Network Name' # Name of the network the vms should join. Many times it could be "VM Network"
export GOVC_INSECURE=1 # If the host above uses a self-signed cert
export GOVC_DATASTORE='target datastore'
# To get resource pool via govc: govc ls -l 'host/*' | grep ResourcePool | awk '{print $1}' | xargs -n1 -t govc pool.info
export GOVC_RESOURCE_POOL='resource pool or cluster with access to datastore'
export GOVC_GUEST_LOGIN='kube:kube' # Used for logging into kube.vmdk during deployment.
export GOVC_PORT=443 # The port to be used by vSphere cloud provider plugin
# To get the datacenter via govc: govc datacenter.info
export GOVC_DATACENTER='ha-datacenter' # The datacenter to be used by vSphere cloud provider plugin
```
Sample environment
```shell
export GOVC_URL='10.161.236.217'
export GOVC_USERNAME='administrator'
export GOVC_PASSWORD='MyPassword1'
export GOVC_NETWORK='VM Network'
export GOVC_INSECURE=1
export GOVC_DATASTORE='datastore1'
export GOVC_RESOURCE_POOL='/Datacenter/host/10.20.104.24/Resources'
export GOVC_GUEST_LOGIN='kube:kube'
export GOVC_PORT='443'
export GOVC_DATACENTER='Datacenter'
```
Import this VMDK into your vSphere datastore:
```shell
govc import.vmdk kube.vmdk ./kube/
```
@@ -63,6 +88,7 @@ Verify that the VMDK was correctly uploaded and expanded to ~3GiB:
```shell
govc datastore.ls ./kube/
```
If you need to debug any part of the deployment, the guest login for
the image that you imported is `kube:kube`. It is normally specified
in the GOVC_GUEST_LOGIN parameter above.
@@ -110,7 +136,7 @@ going on (find yourself authorized with your SSH key, or use the password
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Vmware vSphere | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/vsphere) | | Community ([@imkin](https://github.com/imkin))
Vmware vSphere | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/vsphere) | | Community ([@imkin](https://github.com/imkin)), ([@abrarshivani](https://github.com/abrarshivani)), ([@kerneltime](https://github.com/kerneltime)), ([@luomiao](https://github.com/luomiao))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.


@@ -423,7 +423,7 @@ gs://artifacts.<$PROJECT_ID>.appspot.com/
And then to remove all the images under this path, run:
```shell
gsutil rm -r gs://artifacts.<$PROJECT_ID>.appspot.com/
gsutil rm -r gs://artifacts.$PROJECT_ID.appspot.com/
```
You can also delete the entire Google Cloud project but note that you must first disable billing on the project. Additionally, deleting a project will only happen after the current billing cycle ends.
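If you do want to remove the whole project, a sketch using the gcloud CLI (assuming billing has already been disabled as noted above):

```shell
# Irreversible once the deletion grace period passes.
gcloud projects delete $PROJECT_ID
```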


@@ -238,7 +238,7 @@ equal to `""` is always interpreted to be requesting a PV with no class, so it
can only be bound to PVs with no class (no annotation or one set equal to
`""`). A PVC with no annotation is not quite the same and is treated differently
by the cluster depending on whether the
[`DefaultStorageClass` admission plugin](docs/admin/admission-controllers/#defaultstorageclass)
[`DefaultStorageClass` admission plugin](/docs/admin/admission-controllers/#defaultstorageclass)
is turned on.
* If the admission plugin is turned on, the administrator may specify a
@@ -256,7 +256,8 @@ same way as PVCs that have their annotation set to `""`.
When a PVC specifies a `selector` in addition to requesting a `StorageClass`,
the requirements are ANDed together: only a PV of the requested class and with
the requested labels may be bound to the PVC.
the requested labels may be bound to the PVC. Note that currently, a PVC with a
non-empty `selector` can't have a PV dynamically provisioned for it.
In the future after beta, the `volume.beta.kubernetes.io/storage-class`
annotation will become an attribute.
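To make the annotation concrete, a claim requesting a particular class might look like the sketch below; the class name `fast` is invented for illustration:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
  annotations:
    # Use "" here to explicitly request a PV with no class.
    volume.beta.kubernetes.io/storage-class: fast
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```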
@@ -295,13 +296,12 @@ dynamically provisioned.
The name of a `StorageClass` object is significant, and is how users can
request a particular class. Administrators set the name and other parameters
of a class, all of which are opaque to users, when first creating
`StorageClass` objects, and the objects cannot be updated once they are
created.
of a class when first creating `StorageClass` objects, and the objects cannot
be updated once they are created.
Administrators can specify a default `StorageClass` just for PVCs that don't
request any particular class to bind to: see the
[`PersistentVolumeClaim` section](docs/user-guide/persistent-volumes/#class-1)
[`PersistentVolumeClaim` section](#persistentvolumeclaims)
for details.
```yaml
@@ -373,16 +373,14 @@ provisioner: kubernetes.io/glusterfs
parameters:
endpoint: "glusterfs-cluster"
resturl: "http://127.0.0.1:8081"
restauthenabled: "true"
restuser: "admin"
restuserkey: "password"
```
* `endpoint`: `glusterfs-cluster` is the endpoint/service name which includes GlusterFS trusted pool IP addresses and this parameter is mandatory.
* `resturl` : Gluster REST service url which provision gluster volumes on demand. The format should be `IPaddress:Port` and this is a mandatory parameter for GlusterFS dynamic provisioner.
* `restauthenabled` : Gluster REST service authentication boolean is required if the authentication is enabled on the REST server. If this value is 'true', 'restuser' and 'restuserkey' have to be filled.
* `restuser` : Gluster REST service user who has access to create volumes in the Gluster Trusted Pool.
* `restuserkey` : Gluster REST service user's password which will be used for authentication to the REST server.
* `resturl` : Gluster REST service URL which provisions gluster volumes on demand. The format should be a valid URL and this is a mandatory parameter for the GlusterFS dynamic provisioner.
* `restuser` : Gluster REST service user who has access to create volumes in the Gluster Trusted Pool. This parameter is optional; an empty string is used when omitted.
* `restuserkey` : Gluster REST service user's password, used for authentication to the REST server. This parameter is optional; an empty string is used when omitted.
#### OpenStack Cinder


@@ -114,7 +114,7 @@ for example the [Kubelet](/docs/admin/kubelet/) or Docker.
The replication controller can itself have labels (`.metadata.labels`). Typically, you
would set these the same as the `.spec.template.metadata.labels`; if `.metadata.labels` is not specified
then it is defaulted to `.spec.template.metadata.labels`. However, they are allowed to be
different, and the `.metadata.labels` do not affec the behavior of the replication controller.
different, and the `.metadata.labels` do not affect the behavior of the replication controller.
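As a small sketch of that defaulting (names invented for illustration), a controller that omits `.metadata.labels` simply inherits the labels of its pod template:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
  # No labels here, so they default to the pod template's labels below.
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```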
### Pod Selector
@@ -254,8 +254,8 @@ Use a [`Job`](/docs/user-guide/jobs/) instead of a replication controller for po
### DaemonSet
Use a [`DaemonSet`](/docs/admin/daemons/) instead of a replication controller for pods that provide a
machine-level function, such as machine monitoring or machine logging. These pods have a lifetime is tied
to machine lifetime: the pod needs to be running on the machine before other pods start, and are
machine-level function, such as machine monitoring or machine logging. These pods have a lifetime that is tied
to the machine's lifetime: the pod needs to be running on the machine before other pods start, and is
safe to terminate when the machine is otherwise ready to be rebooted or shut down.
## For more information

Binary file not shown (new image, 8.5 KiB)

Binary file not shown (new image, 29 KiB)

Binary file not shown (new image, 11 KiB)