Merge branch 'master' into master
commit 49491ae333
@@ -4,3 +4,4 @@
 _site/**
 .sass-cache/**
 CNAME
+.travis.yml
.travis.yml (15 changes)
@@ -1,7 +1,16 @@
 language: go
 go:
-- 1.6
+- 1.6.2
 
 # Don't want default ./... here:
-install: go get -t -v ./test
-script: go test -v ./test
+install:
+- export PATH=$GOPATH/bin:$PATH
+- mkdir -p $HOME/gopath/src/k8s.io
+- mv $TRAVIS_BUILD_DIR $HOME/gopath/src/k8s.io/kubernetes.github.io
+- go get -t -v k8s.io/kubernetes.github.io/test
+- git clone --depth=50 --branch=master https://github.com/kubernetes/md-check $HOME/gopath/src/k8s.io/md-check
+- go get -t -v k8s.io/md-check
+
+script:
+- go test -v k8s.io/kubernetes.github.io/test
+- $GOPATH/bin/md-check --root-dir=$HOME/gopath/src/k8s.io/kubernetes.github.io
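The new CI flow can be replayed locally to debug failures before pushing. The following is a sketch assembled from the + lines above, not part of the diff itself; it assumes a working Go toolchain, $GOPATH set to $HOME/gopath as on the CI workers, and $TRAVIS_BUILD_DIR standing in for your checkout of the docs repo:

```shell
# Local replay of the new .travis.yml install/script steps (sketch).
export PATH=$GOPATH/bin:$PATH
mkdir -p $HOME/gopath/src/k8s.io
mv $TRAVIS_BUILD_DIR $HOME/gopath/src/k8s.io/kubernetes.github.io
go get -t -v k8s.io/kubernetes.github.io/test
git clone --depth=50 --branch=master https://github.com/kubernetes/md-check $HOME/gopath/src/k8s.io/md-check
go get -t -v k8s.io/md-check
go test -v k8s.io/kubernetes.github.io/test
$GOPATH/bin/md-check --root-dir=$HOME/gopath/src/k8s.io/kubernetes.github.io
```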
@@ -34,7 +34,6 @@ title: Community
 <div class="company-logos">
 <img src="/images/community_logos/zulily_logo.png">
 <img src="/images/community_logos/we_pay_logo.png">
-<img src="/images/community_logos/viacom_logo.png">
 <img src="/images/community_logos/goldman_sachs_logo.png">
 <img src="/images/community_logos/ebay_logo.png">
 <img src="/images/community_logos/box_logo.png">
@@ -72,6 +71,9 @@ title: Community
 <a href="http://docs.datadoghq.com/integrations/kubernetes/"><img src="/images/community_logos/datadog_logo.png"></a>
 <a href="https://apprenda.com/kubernetes-support/"><img src="/images/community_logos/apprenda_logo.png"></a>
 <a href="http://www.ibm.com/cloud-computing/"><img src="/images/community_logos/ibm_logo.png"></a>
+<a href="http://info.crunchydata.com/blog/advanced-crunchy-containers-for-postgresql"><img src="/images/community_logos/crunchy_data_logo.png"></a>
+<a href="https://content.mirantis.com/Containerizing-OpenStack-on-Kubernetes-Video-Landing-Page.html"><img src="/images/community_logos/mirantis_logo.png"></a>
+<a href="http://blog.aquasec.com/security-best-practices-for-kubernetes-deployment"><img src="/images/community_logos/aqua_logo.png"></a>
 </div>
 </div>
 </main>
@@ -117,21 +117,6 @@ When the plug-in sets a compute resource request, it annotates the pod with info
 
 See the [InitialResouces proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/initial-resources.md) for more details.
 
-### NamespaceExists (deprecated)
-
-This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes `Namespace`
-and reject the request if the `Namespace` was not previously created. We strongly recommend running
-this plug-in to ensure integrity of your data.
-
-The functionality of this admission controller has been merged into `NamespaceLifecycle`
-
-### NamespaceAutoProvision (deprecated)
-
-This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes `Namespace`
-and create a new `Namespace` if one did not already exist previously.
-
-We strongly recommend `NamespaceLifecycle` over `NamespaceAutoProvision`.
-
 ### NamespaceLifecycle
 
 This plug-in enforces that a `Namespace` that is undergoing termination cannot have new objects created in it,
@@ -45,8 +45,13 @@ with the request:
 All values are opaque to the authentication system and only hold significance
 when interpreted by an [authorizer](/docs/admin/authorization/).
 
-Multiple authentication methods may be enabled at once. In these cases, the first
-authenticator to successfully authenticate the request short-circuits evaluation.
+You can enable multiple authentication methods at once. You should usually use at least two methods:
+
+- service account tokens for service accounts
+- at least one other method for user authentication.
+
+When multiple are enabled, the first authenticator module
+to successfully authenticate the request short-circuits evaluation.
 The API server does not guarantee the order authenticators run in.
 
 ### X509 Client Certs
@@ -189,7 +194,9 @@ verify ID token's signature and determine the end users identity.
 To enable the plugin, pass the following required flags:
 
 * `--oidc-issuer-url` URL of the provider which allows the API server to discover
-public signing keys. Only URLs which use the `https://` scheme are accepted.
+public signing keys. Only URLs which use the `https://` scheme are accepted. This is typically
+the provider's URL without a path, for example "https://accounts.google.com" or "https://login.salesforce.com".
 
 * `--oidc-client-id` A client id that all tokens must be issued for.
 
 Importantly, the API server is not an OAuth2 client, rather it can only be
@@ -212,6 +219,17 @@ other claims, such as `email`, depending on their provider.
 * `--oidc-groups-claim` JWT claim to use as the user's group. If the claim is present
 it must be an array of strings.
 
+Kubernetes does not provide an OpenID Connect Identity Provider.
+You can use an existing public OpenID Connect Identity Provider (such as Google, or [others](http://connect2id.com/products/nimbus-oauth-openid-connect-sdk/openid-connect-providers)).
+Or, you can run your own Identity Provider, such as CoreOS [dex](https://github.com/coreos/dex), [Keycloak](https://github.com/keycloak/keycloak) or CloudFoundry [UAA](https://github.com/cloudfoundry/uaa).
+
+The provider needs to support [OpenID connect discovery](https://openid.net/specs/openid-connect-discovery-1_0.html); not all do.
+
+Setup instructions for specific systems:
+
+- [UAA]: http://apigee.com/about/blog/engineering/kubernetes-authentication-enterprise
+- [Dex]: https://speakerdeck.com/ericchiang/kubernetes-access-control-with-dex
+
 ### Webhook Token Authentication
 
 Webhook authentication is a hook for verifying bearer tokens.
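Taken together, the OIDC flags described in the hunks above would be passed to the API server along these lines. This is a hedged sketch, not part of the patch; the issuer URL and client id are illustrative placeholders, and only flags named in the text are used:

```shell
# Sketch: enabling the OIDC authenticator on the API server (values are placeholders).
kube-apiserver \
  --oidc-issuer-url=https://accounts.google.com \
  --oidc-client-id=my-client-id \
  --oidc-groups-claim=groups
```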
@@ -204,7 +204,7 @@ As of 1.3 RBAC mode is in alpha and considered experimental.
 
 To use RBAC, you must both enable the authorization module with `--authorization-mode=RBAC`,
 and [enable the API version](
-docs/admin/cluster-management.md/#Turn-on-or-off-an-api-version-for-your-cluster),
+cluster-management.md/#Turn-on-or-off-an-API-version-for-your-cluster),
 with a `--runtime-config=` that includes `rbac.authorization.k8s.io/v1alpha1`.
 
 ### Roles, RolesBindings, ClusterRoles, and ClusterRoleBindings
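A minimal sketch combining the two settings the paragraph above requires on one API server invocation (all other flags omitted; both values come straight from the text):

```shell
# Sketch: enable the alpha RBAC authorizer and its API group.
kube-apiserver \
  --authorization-mode=RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1alpha1
```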
@@ -10,9 +10,9 @@ assignees:
 
 At {{page.version}}, Kubernetes supports clusters with up to 1000 nodes. More specifically, we support configurations that meet *all* of the following criteria:
 
-* No more than 1000 nodes
-* No more than 30000 total pods
-* No more than 60000 total containers
+* No more than 2000 nodes
+* No more than 60000 total pods
+* No more than 120000 total containers
 * No more than 100 pods per node
 
 * TOC
@@ -185,6 +185,10 @@ cluster state, such as the controller manager and scheduler. To achieve this re
 instances of these actors, in case a machine dies. To achieve this, we are going to use a lease-lock in the API to perform
 master election. We will use the `--leader-elect` flag for each scheduler and controller-manager, using a lease in the API will ensure that only 1 instance of the scheduler and controller-manager are running at once.
 
+The scheduler and controller-manager can be configured to talk to the API server that is on the same node (i.e. 127.0.0.1), or it can be configured to communicate using the load balanced IP address of the API servers. Regardless of how they are configured, the scheduler and controller-manager will complete the leader election process mentioned above when using the `--leader-elect` flag.
+
+In case of a failure accessing the API server, the elected leader will not be able to renew the lease, causing a new leader to be elected. This is especially relevant when configuring the scheduler and controller-manager to access the API server via 127.0.0.1, and the API server on the same node is unavailable.
+
 ### Installing configuration files
 
 First, create empty log files on each node, so that Docker will mount the files not make new directories:
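For the leader-election paragraphs added in the hunk above, a hedged sketch of the relevant invocations; the local 127.0.0.1 address is one of the two configurations the text describes, and the `--master` flag here is an assumption about how each component is pointed at the API server, not something the patch specifies:

```shell
# Sketch: run both components with leader election against the local API server.
kube-scheduler --leader-elect=true --master=127.0.0.1:8080
kube-controller-manager --leader-elect=true --master=127.0.0.1:8080
```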
@@ -91,7 +91,7 @@ coreos:
 ExecStart=/opt/bin/kube-apiserver \
 --service-account-key-file=/opt/bin/kube-serviceaccount.key \
 --service-account-lookup=false \
---admission-control=NamespaceLifecycle,NamespaceAutoProvision,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
+--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
 --runtime-config=api/v1 \
 --allow-privileged=true \
 --insecure-bind-address=0.0.0.0 \
@@ -97,7 +97,7 @@ KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
 KUBE_ETCD_SERVERS="--etcd-servers=http://kube-master:4001"
 
 # Remove ServiceAccount from this line to run without API Tokens
-KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
+KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ResourceQuota"
 ```
 
 * Create /var/run/kubernetes on master:
@@ -95,17 +95,29 @@ potential issues with client/server version skew.
 
 You may find it useful to enable `kubectl` bash completion:
 
+* If you're using kubectl with Kubernetes version 1.2 or earlier, you can source the kubectl completion script as follows:<br>
 ```
 $ source ./contrib/completions/bash/kubectl
 ```
 
-**Note**: This will last for the duration of your bash session. If you want to make this permanent you need to add this line in your bash profile.
+* If you're using kubectl with Kubernetes version 1.3, use the `kubectl completion` command as follows:<br>
+```
+$ source <(kubectl completion bash)
+```
+
+**Note**: The above commands will last for the duration of your bash session. If you want to make this permanent you need to add corresponding command in your bash profile.
 
-Alternatively, on most linux distributions you can also move the completions file to your bash_completions.d like this:
+Alternatively, on most linux distributions you can also add a completions file to your bash_completions.d as follows:
 
+* For kubectl with Kubernetes v1.2 or earlier:<br>
 ```
 $ cp ./contrib/completions/bash/kubectl /etc/bash_completion.d/
 ```
+
+* For kubectl with Kubernetes v1.3:<br>
+```
+$ kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl
+```
 
 but then you have to update it when you update kubectl.
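To make either variant permanent as the new **Note** suggests, one option is appending the source command to your shell profile. This is a sketch; whether `~/.bashrc` or `~/.bash_profile` is the right file depends on your distribution:

```shell
# Sketch: persist kubectl completion for new shells (v1.3 variant shown).
echo 'source <(kubectl completion bash)' >> ~/.bashrc
```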
@@ -127,7 +127,7 @@ List the nodes in your cluster by running:
 kubectl get nodes
 ```
 
-Minikube contains a built-in Docker daemon that for running containers.
+Minikube contains a built-in Docker daemon for running containers.
 If you use another Docker daemon for building your containers, you will have to publish them to a registry before minikube can pull them.
 You can use minikube's built in Docker daemon to avoid this extra step of pushing your images.
 Use the built-in Docker daemon with:
@@ -136,7 +136,7 @@ Use the built-in Docker daemon with:
 eval $(minikube docker-env)
 ```
 This command sets up the Docker environment variables so a Docker client can communicate with the minikube Docker daemon.
-Minikube currently supports only docker version 1.11.1 on the server, which is what is supported by Kubernetes 1.3. With a newer docker version you'll get this [issue](https://github.com/kubernetes/minikube/issues/338).
+Minikube currently supports only docker version 1.11.1 on the server, which is what is supported by Kubernetes 1.3. With a newer docker version, you'll get this [issue](https://github.com/kubernetes/minikube/issues/338).
 
 ```shell
 docker ps
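A short sketch of the workflow the paragraph above enables; the image name and the presence of a Dockerfile in the current directory are hypothetical, not from the patch:

```shell
# Sketch: build an image straight into minikube's Docker daemon,
# so no registry push is needed before running it in the cluster.
eval $(minikube docker-env)
docker build -t hello-node:v1 .
docker ps
```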
@@ -25,10 +25,14 @@ mkdir -p $GOPATH
 export PATH=$PATH:$GOPATH/bin
 ```
 
-4. Install the govc tool to interact with ESXi/vCenter:
+4. Install the govc tool to interact with ESXi/vCenter. Head to [govc Releases](https://github.com/vmware/govmomi/releases) to download the latest.
 
 ```shell
-go get github.com/vmware/govmomi/govc
+# Sample commands for v0.8.0 for 64 bit Linux.
+curl -OL https://github.com/vmware/govmomi/releases/download/v0.8.0/govc_linux_amd64.gz
+gzip -d govc_linux_amd64.gz
+chmod +x govc_linux_amd64
+mv govc_linux_amd64 /usr/local/bin/govc
 ```
 
 5. Get or build a [binary release](/docs/getting-started-guides/binary_release)
@@ -43,7 +47,7 @@ md5sum -c kube.vmdk.gz.md5
 gzip -d kube.vmdk.gz
 ```
 
-Import this VMDK into your vSphere datastore:
+Configure the environment for govc
 
 ```shell
 export GOVC_URL='hostname' # hostname of the vc
@@ -52,9 +56,30 @@ export GOVC_PASSWORD='password' # password for the above username
 export GOVC_NETWORK='Network Name' # Name of the network the vms should join. Many times it could be "VM Network"
 export GOVC_INSECURE=1 # If the host above uses a self-signed cert
 export GOVC_DATASTORE='target datastore'
+# To get resource pool via govc: govc ls -l 'host/*' | grep ResourcePool | awk '{print $1}' | xargs -n1 -t govc pool.info
 export GOVC_RESOURCE_POOL='resource pool or cluster with access to datastore'
 export GOVC_GUEST_LOGIN='kube:kube' # Used for logging into kube.vmdk during deployment.
+export GOVC_PORT=443 # The port to be used by vSphere cloud provider plugin
+# To get datacenter via govc: govc datacenter.info
+export GOVC_DATACENTER='ha-datacenter' # The datacenter to be used by vSphere cloud provider plugin
 ```
 
+Sample environment
+
+```shell
+export GOVC_URL='10.161.236.217'
+export GOVC_USERNAME='administrator'
+export GOVC_PASSWORD='MyPassword1'
+export GOVC_NETWORK='VM Network'
+export GOVC_INSECURE=1
+export GOVC_DATASTORE='datastore1'
+export GOVC_RESOURCE_POOL='/Datacenter/host/10.20.104.24/Resources'
+export GOVC_GUEST_LOGIN='kube:kube'
+export GOVC_PORT='443'
+export GOVC_DATACENTER='Datacenter'
+```
+
+Import this VMDK into your vSphere datastore:
+
+```shell
 govc import.vmdk kube.vmdk ./kube/
 ```
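Once the GOVC_* variables above are exported, a quick sanity check before importing anything is to ask govc for basic endpoint information; this step is a suggestion, not part of the patch:

```shell
# Sketch: verify govc can reach and authenticate to the vCenter/ESXi endpoint.
govc about
```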
@@ -63,6 +88,7 @@ Verify that the VMDK was correctly uploaded and expanded to ~3GiB:
 ```shell
 govc datastore.ls ./kube/
 ```
 
 If you need to debug any part of the deployment, the guest login for
 the image that you imported is `kube:kube`. It is normally specified
 in the GOVC_GUEST_LOGIN parameter above.
@@ -110,7 +136,7 @@ going on (find yourself authorized with your SSH key, or use the password
 
 IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
 -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
-Vmware vSphere | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/vsphere) | | Community ([@imkin](https://github.com/imkin))
+Vmware vSphere | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/vsphere) | | Community ([@imkin](https://github.com/imkin)), ([@abrarshivani](https://github.com/abrarshivani)), ([@kerneltime](https://github.com/kerneltime)), ([@luomiao](https://github.com/luomiao))
 
 For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
@@ -423,7 +423,7 @@ gs://artifacts.<$PROJECT_ID>.appspot.com/
 And then to remove the all the images under this path, run:
 
 ```shell
-gsutil rm -r gs://artifacts.<$PROJECT_ID>.appspot.com/
+gsutil rm -r gs://artifacts.$PROJECT_ID.appspot.com/
 ```
 
 You can also delete the entire Google Cloud project but note that you must first disable billing on the project. Additionally, deleting a project will only happen after the current billing cycle ends.
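Since `gsutil rm -r` is destructive, a cautious sketch is to list the bucket path first; this preview step is a suggestion and assumes `$PROJECT_ID` is already set in your shell:

```shell
# Sketch: preview the images before deleting them recursively.
gsutil ls gs://artifacts.$PROJECT_ID.appspot.com/
gsutil rm -r gs://artifacts.$PROJECT_ID.appspot.com/
```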
@@ -238,7 +238,7 @@ equal to `""` is always interpreted to be requesting a PV with no class, so it
 can only be bound to PVs with no class (no annotation or one set equal to
 `""`). A PVC with no annotation is not quite the same and is treated differently
 by the cluster depending on whether the
-[`DefaultStorageClass` admission plugin](docs/admin/admission-controllers/#defaultstorageclass)
+[`DefaultStorageClass` admission plugin](/docs/admin/admission-controllers/#defaultstorageclass)
 is turned on.
 
 * If the admission plugin is turned on, the administrator may specify a
@@ -256,7 +256,8 @@ same way as PVCs that have their annotation set to `""`.
 
 When a PVC specifies a `selector` in addition to requesting a `StorageClass`,
 the requirements are ANDed together: only a PV of the requested class and with
-the requested labels may be bound to the PVC.
+the requested labels may be bound to the PVC. Note that currently, a PVC with a
+non-empty `selector` can't have a PV dynamically provisioned for it.
 
 In the future after beta, the `volume.beta.kubernetes.io/storage-class`
 annotation will become an attribute.
@@ -295,13 +296,12 @@ dynamically provisioned.
 
 The name of a `StorageClass` object is significant, and is how users can
 request a particular class. Administrators set the name and other parameters
-of a class, all of which are opaque to users, when first creating
-`StorageClass` objects, and the objects cannot be updated once they are
-created.
+of a class when first creating `StorageClass` objects, and the objects cannot
+be updated once they are created.
 
 Administrators can specify a default `StorageClass` just for PVCs that don't
 request any particular class to bind to: see the
-[`PersistentVolumeClaim` section](docs/user-guide/persistent-volumes/#class-1)
+[`PersistentVolumeClaim` section](#persistentvolumeclaims)
 for details.
 
 ```yaml
@@ -373,16 +373,14 @@ provisioner: kubernetes.io/glusterfs
 parameters:
 endpoint: "glusterfs-cluster"
 resturl: "http://127.0.0.1:8081"
-restauthenabled: "true"
 restuser: "admin"
 restuserkey: "password"
 ```
 
 * `endpoint`: `glusterfs-cluster` is the endpoint/service name which includes GlusterFS trusted pool IP addresses and this parameter is mandatory.
-* `resturl` : Gluster REST service url which provision gluster volumes on demand. The format should be `IPaddress:Port` and this is a mandatory parameter for GlusterFS dynamic provisioner.
-* `restauthenabled` : Gluster REST service authentication boolean is required if the authentication is enabled on the REST server. If this value is 'true', 'restuser' and 'restuserkey' have to be filled.
-* `restuser` : Gluster REST service user who has access to create volumes in the Gluster Trusted Pool.
-* `restuserkey` : Gluster REST service user's password which will be used for authentication to the REST server.
+* `resturl` : Gluster REST service url which provision gluster volumes on demand. The format should be a valid URL and this is a mandatory parameter for GlusterFS dynamic provisioner.
+* `restuser` : Gluster REST service user who has access to create volumes in the Gluster Trusted Pool. This parameter is optional, empty string will be used when omitted.
+* `restuserkey` : Gluster REST service user's password which will be used for authentication to the REST server. This parameter is optional, empty string will be used when omitted.
 
 #### OpenStack Cinder
@@ -114,7 +114,7 @@ for example the [Kubelet](/docs/admin/kubelet/) or Docker.
 The replication controller can itself have labels (`.metadata.labels`). Typically, you
 would set these the same as the `.spec.template.metadata.labels`; if `.metadata.labels` is not specified
 then it is defaulted to `.spec.template.metadata.labels`. However, they are allowed to be
-different, and the `.metadata.labels` do not affec the behavior of the replication controller.
+different, and the `.metadata.labels` do not affect the behavior of the replication controller.
 
 ### Pod Selector
@@ -254,8 +254,8 @@ Use a [`Job`](/docs/user-guide/jobs/) instead of a replication controller for po
 ### DaemonSet
 
 Use a [`DaemonSet`](/docs/admin/daemons/) instead of a replication controller for pods that provide a
-machine-level function, such as machine monitoring or machine logging. These pods have a lifetime is tied
-to machine lifetime: the pod needs to be running on the machine before other pods start, and are
+machine-level function, such as machine monitoring or machine logging. These pods have a lifetime that is tied
+to a machine lifetime: the pod needs to be running on the machine before other pods start, and are
 safe to terminate when the machine is otherwise ready to be rebooted/shutdown.
 
 ## For more information
@@ -642,8 +642,6 @@ you must use `ls -la` to see them when listing directory contents.
 
 ### Use-case: Secret visible to one container in a pod
 
-<a name="use-case-two-containers"></a>
-
 Consider a program that needs to handle HTTP requests, do some complex business
 logic, and then sign some messages with an HMAC. Because it has complex
 application logic, there might be an unnoticed remote file reading exploit in
|
||||||
There may be several containers in a pod. However, each container in a pod has
|
There may be several containers in a pod. However, each container in a pod has
|
||||||
to request the secret volume in its `volumeMounts` for it to be visible within
|
to request the secret volume in its `volumeMounts` for it to be visible within
|
||||||
the container. This can be used to construct useful [security partitions at the
|
the container. This can be used to construct useful [security partitions at the
|
||||||
Pod level](#use-case-two-containers).
|
Pod level](#use-case-secret-visible-to-one-container-in-a-pod).
|
||||||
|
|
||||||
### Risks
|
### Risks
|
||||||
|
|
||||||
|
|
Binary file not shown.
After Width: | Height: | Size: 8.5 KiB |
Binary file not shown.
After Width: | Height: | Size: 29 KiB |
Binary file not shown.
After Width: | Height: | Size: 11 KiB |