
---
layout: docwithnav
title: Sharing Cluster Access
---

## Sharing Cluster Access

Client access to a running Kubernetes cluster can be shared by copying the `kubectl` client config bundle (kubeconfig). This config bundle lives in `$HOME/.kube/config`, and is generated by `cluster/kube-up.sh`. Sample steps for sharing kubeconfig are shown below.

1. Create a cluster

{% highlight console %}
{% raw %}
$ cluster/kube-up.sh
{% endraw %}
{% endhighlight %}

2. Copy kubeconfig to the new host

{% highlight console %}
{% raw %}
$ scp $HOME/.kube/config user@remotehost:/path/to/.kube/config
{% endraw %}
{% endhighlight %}

3. On the new host, make the copied config available to `kubectl`

  * Option A: copy to the default location

{% highlight console %}
{% raw %}
$ mv /path/to/.kube/config $HOME/.kube/config
{% endraw %}
{% endhighlight %}

  * Option B: copy to the working directory (from which kubectl is run)

{% highlight console %}
{% raw %}
$ mkdir -p $PWD/.kube && mv /path/to/.kube/config $PWD/.kube/config
{% endraw %}
{% endhighlight %}

  * Option C: manually pass the kubeconfig location to kubectl

{% highlight console %}
{% raw %}
# via environment variable
$ export KUBECONFIG=/path/to/.kube/config

# via command-line flag
$ kubectl ... --kubeconfig=/path/to/.kube/config
{% endraw %}
{% endhighlight %}
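Whichever option you choose, a quick way to confirm that the copied config is being picked up is to ask `kubectl` to print the config it sees, then issue a read-only request against the cluster. A minimal sketch (output elided):

{% highlight console %}
{% raw %}
# show the config kubectl has loaded (sensitive fields are redacted by default)
$ kubectl config view

# any read-only request confirms both connectivity and credentials
$ kubectl get nodes
{% endraw %}
{% endhighlight %}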

## Manually Generating kubeconfig

kubeconfig is generated by `kube-up.sh`, but you can also generate your own using (any desired subset of) the following commands.

{% highlight console %}
{% raw %}
# create kubeconfig entry
$ kubectl config set-cluster $CLUSTER_NICK \
    --server=https://1.1.1.1 \
    --certificate-authority=/path/to/apiserver/ca_file \
    --embed-certs=true \
    --kubeconfig=/path/to/standalone/.kube/config
# Or, if TLS is not needed, replace --certificate-authority and --embed-certs
# with --insecure-skip-tls-verify=true

# create user entry
$ kubectl config set-credentials $USER_NICK \
    --token=$token \
    --client-certificate=/path/to/crt_file \
    --client-key=/path/to/key_file \
    --embed-certs=true \
    --kubeconfig=/path/to/standalone/.kube/config
# The bearer token (--token) is generated on the kube master.
# Use either username/password or a token, not both:
#   --username=$username --password=$password

# create context entry
$ kubectl config set-context $CONTEXT_NAME \
    --cluster=$CLUSTER_NICK \
    --user=$USER_NICK \
    --kubeconfig=/path/to/standalone/.kube/config
{% endraw %}
{% endhighlight %}
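To actually use the new entries, you can optionally mark the context as the active one. A short sketch, reusing the hypothetical names and paths from the commands above:

{% highlight console %}
{% raw %}
# make the new context the default for subsequent kubectl commands
$ kubectl config use-context $CONTEXT_NAME --kubeconfig=/path/to/standalone/.kube/config

# confirm the current context and the entries it references
$ kubectl config view --kubeconfig=/path/to/standalone/.kube/config
{% endraw %}
{% endhighlight %}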

Notes:

  * The `--embed-certs` flag is needed to generate a standalone kubeconfig, one that will work as-is on another host.
  * `--kubeconfig` is both the preferred file to load config from and the file to save config to. In the above commands the `--kubeconfig` file could be omitted if you first run

{% highlight console %}
{% raw %}
$ export KUBECONFIG=/path/to/standalone/.kube/config
{% endraw %}
{% endhighlight %}

  * The `ca_file`, `key_file`, and `cert_file` referenced above are generated on the kube master at cluster turn-up. They can be found on the master under `/srv/kubernetes`. Bearer tokens and basic auth credentials are also generated on the kube master; one way to retrieve these files is sketched below.
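If you need to pull those files down to the host where you are building the standalone kubeconfig, something along the following lines works, assuming SSH access to the master. The exact file names under `/srv/kubernetes` vary by provider, so list the directory first; the names below are illustrative only:

{% highlight console %}
{% raw %}
# see which credential files the master actually has (names vary by provider)
$ ssh user@kube-master 'ls /srv/kubernetes'

# copy the CA and client credential files referenced in the commands above
# (ca.crt, kubecfg.crt, and kubecfg.key are hypothetical names)
$ scp user@kube-master:/srv/kubernetes/ca.crt /path/to/apiserver/ca_file
$ scp user@kube-master:/srv/kubernetes/kubecfg.crt /path/to/crt_file
$ scp user@kube-master:/srv/kubernetes/kubecfg.key /path/to/key_file
{% endraw %}
{% endhighlight %}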

For more details on kubeconfig see kubeconfig-file.md, and/or run `kubectl config -h`.

## Merging kubeconfig Example

`kubectl` loads and merges config from the following locations (in order):

  1. `--kubeconfig=/path/to/.kube/config` command line flag
  2. `KUBECONFIG=/path/to/.kube/config` env variable
  3. `$PWD/.kube/config`
  4. `$HOME/.kube/config`
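Locations earlier in the list take precedence for any key that appears in more than one file, and `kubectl config view` prints the effective merged result. A small sketch of that precedence, using hypothetical paths:

{% highlight console %}
{% raw %}
# with only $HOME/.kube/config present, kubectl uses it
$ kubectl config view

# an env var takes precedence over $PWD and $HOME
$ KUBECONFIG=/path/to/other/.kube/config kubectl config view

# and an explicit flag comes before the env var in the merge order
$ KUBECONFIG=/path/to/other/.kube/config kubectl config view --kubeconfig=/path/to/third/.kube/config
{% endraw %}
{% endhighlight %}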

If you create clusters A and B on host1, and clusters C and D on host2, you can make all four clusters available on both hosts by running:

{% highlight console %}
{% raw %}
# on host2, copy host1's default kubeconfig, and merge it from env
$ scp host1:/path/to/home1/.kube/config /path/to/other/.kube/config
$ export KUBECONFIG=/path/to/other/.kube/config

# on host1, copy host2's default kubeconfig and merge it from env
$ scp host2:/path/to/home2/.kube/config /path/to/other/.kube/config
$ export KUBECONFIG=/path/to/other/.kube/config
{% endraw %}
{% endhighlight %}
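At this point each host should see all four clusters through the merge of the env-var file with its own `$HOME/.kube/config`. A quick check (the context name below is hypothetical):

{% highlight console %}
{% raw %}
# list every cluster, user, and context visible after the merge;
# expect entries for A, B, C, and D
$ kubectl config view

# run a command against one specific cluster by naming its context
$ kubectl get nodes --context=cluster-c-context
{% endraw %}
{% endhighlight %}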

Detailed examples and explanation of kubeconfig loading/merging rules can be found in kubeconfig-file.md.
