Move Guide topic: Sharing Clusters

reviewable/pr2879/r1
Andrew Chen 2017-03-16 15:15:28 -07:00
parent be37f8f986
commit 3191d64cbb
4 changed files with 129 additions and 121 deletions

View File

@@ -62,6 +62,7 @@ toc:
- docs/tasks/administer-cluster/safely-drain-node.md
- docs/tasks/administer-cluster/change-pv-reclaim-policy.md
- docs/tasks/administer-cluster/limit-storage-consumption.md
- docs/tasks/administer-cluster/share-configuration.md
- title: Administering Federation
section:

View File

@@ -0,0 +1,125 @@
---
assignees:
- mikedanese
- thockin
title: Sharing Cluster Access with kubeconfig
---
Client access to a running Kubernetes cluster can be shared by copying
the `kubectl` client config bundle ([kubeconfig](/docs/user-guide/kubeconfig-file)).
This config bundle lives in `$HOME/.kube/config` and is generated
by `cluster/kube-up.sh`. Sample steps for sharing a `kubeconfig` are shown below.
**1. Create a cluster**
```shell
$ cluster/kube-up.sh
```
**2. Copy `kubeconfig` to new host**
```shell
$ scp $HOME/.kube/config user@remotehost:/path/to/.kube/config
```
**3. On new host, make copied `config` available to `kubectl`**
* Option A: move to the default location
```shell
$ mv /path/to/.kube/config $HOME/.kube/config
```
* Option B: move to the working directory (from which `kubectl` is run)
```shell
$ mv /path/to/.kube/config $PWD
```
* Option C: manually pass `kubeconfig` location to `kubectl`
```shell
# via environment variable
$ export KUBECONFIG=/path/to/.kube/config
# via command-line flag
$ kubectl ... --kubeconfig=/path/to/.kube/config
```
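Whichever option you choose, a quick sanity check on the new host confirms that `kubectl` picks up the shared configuration (assuming the cluster's API server is reachable from that host):
```shell
# Show the loaded config and confirm the API server responds
$ kubectl config view
$ kubectl cluster-info
```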
## Manually Generating `kubeconfig`
`kubeconfig` is generated by `kube-up.sh`, but you can also generate your own
using (any desired subset of) the following commands.
```shell
# Create a kubeconfig cluster entry.
# If TLS is not needed, replace --certificate-authority and --embed-certs
# with --insecure-skip-tls-verify=true.
$ kubectl config set-cluster $CLUSTER_NICK \
    --server=https://1.1.1.1 \
    --certificate-authority=/path/to/apiserver/ca_file \
    --embed-certs=true \
    --kubeconfig=/path/to/standalone/.kube/config

# Create a user entry. Authenticate with either a bearer token (generated on
# the kube master) or a username/password pair, not both.
$ kubectl config set-credentials $USER_NICK \
    --token=$token \
    --client-certificate=/path/to/crt_file \
    --client-key=/path/to/key_file \
    --embed-certs=true \
    --kubeconfig=/path/to/standalone/.kube/config
# Basic-auth alternative to --token:
#   --username=$username --password=$password

# Create a context entry tying the cluster and user together.
$ kubectl config set-context $CONTEXT_NAME \
    --cluster=$CLUSTER_NICK \
    --user=$USER_NICK \
    --kubeconfig=/path/to/standalone/.kube/config
```
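To try out the standalone file, you can point `kubectl` at it, select the newly created context, and issue a read-only request. A short sketch, reusing the placeholder names and paths from the commands above:
```shell
# Select the context defined in the standalone kubeconfig
$ kubectl config use-context $CONTEXT_NAME --kubeconfig=/path/to/standalone/.kube/config
# Inspect the resulting file and confirm the credentials work
$ kubectl config view --kubeconfig=/path/to/standalone/.kube/config
$ kubectl get nodes --kubeconfig=/path/to/standalone/.kube/config
```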
Notes:
* The `--embed-certs` flag is needed to generate a standalone
`kubeconfig` that will work as-is on another host.
* `--kubeconfig` is both the preferred file to load config from and the file to
save config to. In the above commands, the `--kubeconfig` flag could be
omitted if you first run
```shell
$ export KUBECONFIG=/path/to/standalone/.kube/config
```
* The `ca_file`, `key_file`, and `cert_file` referenced above are generated on the
kube master at cluster turnup and can be found on the master under
`/srv/kubernetes`. Bearer tokens and basic auth credentials are also generated on the
kube master; a sketch of fetching the CA certificate follows these notes.
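As a sketch of retrieving those credentials, the CA certificate can be copied down from the master so it can be passed to `--certificate-authority` above; the file name `ca.crt` is an assumption and may vary by provider and version:
```shell
# Illustrative only: the exact file name under /srv/kubernetes may differ
$ scp user@kube-master:/srv/kubernetes/ca.crt /path/to/apiserver/ca_file
```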
For more details on `kubeconfig`, see [kubeconfig-file](/docs/user-guide/kubeconfig-file),
or run `kubectl config -h`.
## Merging `kubeconfig` Example
`kubectl` loads and merges config from the following locations (in order):
1. `--kubeconfig=/path/to/.kube/config` command line flag
2. `KUBECONFIG=/path/to/.kube/config` env variable
3. `$HOME/.kube/config`
If you create clusters A and B on host1, and clusters C and D on host2, you can
make all four clusters available on both hosts by running:
```shell
# on host2, copy host1's default kubeconfig, and merge it from env
$ scp host1:/path/to/home1/.kube/config /path/to/other/.kube/config
$ export KUBECONFIG=$HOME/.kube/config:/path/to/other/.kube/config
# on host1, copy host2's default kubeconfig and merge it from env
$ scp host2:/path/to/home2/.kube/config /path/to/other/.kube/config
$ export KUBECONFIG=$HOME/.kube/config:/path/to/other/.kube/config
```
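After exporting `KUBECONFIG` on either host, the merged view can be inspected and a context selected; the context name below is a placeholder that depends on how the clusters were created:
```shell
# List every context from the merged files and switch between clusters
$ kubectl config get-contexts
$ kubectl config use-context <context-of-cluster-C>
```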
Detailed examples and explanation of `kubeconfig` loading/merging rules can be found in [kubeconfig-file](/docs/user-guide/kubeconfig-file).

View File

@@ -104,7 +104,7 @@ sufficiently up-to-date using `kubectl version`.
## Configuring kubectl
In order for kubectl to find and access the Kubernetes cluster, it needs a [kubeconfig file](/docs/user-guide/kubeconfig-file), which is created automatically when creating a cluster using kube-up.sh (see the [getting started guides](/docs/getting-started-guides/) for more about creating clusters). If you need access to a cluster you didn't create, see the [Sharing Cluster Access document](/docs/user-guide/sharing-clusters).
In order for kubectl to find and access the Kubernetes cluster, it needs a [kubeconfig file](/docs/user-guide/kubeconfig-file), which is created automatically when creating a cluster using kube-up.sh (see the [getting started guides](/docs/getting-started-guides/) for more about creating clusters). If you need access to a cluster you didn't create, see the [Sharing Cluster Access document](/docs/tasks/administer-cluster/share-configuration/).
By default, kubectl configuration lives at `~/.kube/config`.
#### Making sure you're ready

View File

@@ -1,125 +1,7 @@
---
assignees:
- mikedanese
- thockin
title: Sharing Cluster Access with kubeconfig
---
{% include user-guide-content-moved.md %}
[Sharing Cluster Access with kubeconfig](/docs/tasks/administer-cluster/share-configuration/)