Merge branch 'master' into acr

Brendan Burns · 2016-12-16 · commit b44002382b · 2 changed files with 143 additions and 87 deletions


By default, `kubeadm init` automatically generates the token used to initialise
each new node. If you would like to manually specify this token, you can use the
`--token` flag. The token must be of the format `<6 character string>.<16 character string>`.
- `--use-kubernetes-version` (default 'v1.5.1') the kubernetes version to initialise

  `kubeadm` was originally built for Kubernetes version **v1.4.0**, older versions are not
  supported. With this flag you can try any future version, e.g. **v1.6.0-beta.1**
  whenever it comes out (check the [releases page](https://github.com/kubernetes/kubernetes/releases)
  for a full list of available versions).
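For example, you could pin the control plane version and supply your own token in one invocation; a sketch (the token value is just the sample that appears in the init output later in this document):

```console
# kubeadm init --use-kubernetes-version v1.5.1 --token f0c861.753c505740ecde4c
```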
By default, when `kubeadm init` runs, a token is generated and revealed in the output.
That's the token you should use here.
## Using kubeadm with a configuration file
WARNING: kubeadm is in alpha and the configuration API syntax will likely change before GA.
It's possible to configure kubeadm with a configuration file instead of command line flags, and some more advanced features may only be
available as configuration file options.
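For example, assuming the configuration below is saved as `/etc/kubernetes/kubeadm.yaml`, it would be passed to `kubeadm init` via the `--config` flag (a sketch; the file path is arbitrary):

```console
# kubeadm init --config /etc/kubernetes/kubeadm.yaml
```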
### Sample Master Configuration
```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddresses:
  - <address1|string>
  - <address2|string>
  bindPort: <int>
  externalDNSNames:
  - <dnsname1|string>
  - <dnsname2|string>
cloudProvider: <string>
discovery:
  bindPort: <int>
etcd:
  endpoints:
  - <endpoint1|string>
  - <endpoint2|string>
  caFile: <path|string>
  certFile: <path|string>
  keyFile: <path|string>
kubernetesVersion: <string>
networking:
  dnsDomain: <string>
  serviceSubnet: <cidr>
  podSubnet: <cidr>
secrets:
  givenToken: <token|string>
```
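As a concrete illustration, a minimal master file might look like this; the values below are illustrative assumptions, not recommended defaults:

```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.5.1
networking:
  serviceSubnet: 10.96.0.0/12
secrets:
  givenToken: f0c861.753c505740ecde4c
```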
### Sample Node Configuration
```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: NodeConfiguration
apiPort: <int>
discoveryPort: <int>
masterAddresses:
- <master1>
secrets:
  givenToken: <token|string>
```
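Similarly, a filled-in node file might look like this (the addresses and token are placeholders; the ports echo the 6443 API and 9898 discovery defaults mentioned elsewhere in these docs):

```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: NodeConfiguration
apiPort: 6443
discoveryPort: 9898
masterAddresses:
- 192.168.1.100
secrets:
  givenToken: f0c861.753c505740ecde4c
```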
## Automating kubeadm
Rather than copying the token you obtained from `kubeadm init` to each node, as
in the basic `kubeadm` tutorials, you can parallelize the token distribution for
easier automation. To implement this automation, you must know the IP address
that the master will have after it is started.
1. Generate a token. This token must have the form `<6 character string>.<16 character string>`.
   Kubeadm can pre-generate a token for you:

   ```console
   $ kubeadm token generate
   ```
1. Start both the master node and the worker nodes concurrently with this token. As they come up they should find each other and form the cluster.
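Putting the two steps together, a sketch of the parallel bootstrap might look like this (the IP addresses and the SSH loop are assumptions about your environment):

```bash
#!/bin/bash
# Pre-generate the token so the master and the nodes can share it.
TOKEN=$(kubeadm token generate)
MASTER_IP=192.168.1.100   # assumed: known before the master starts

# On the master (started concurrently with the nodes):
ssh root@${MASTER_IP} "kubeadm init --token=${TOKEN}" &

# On each worker node; they will wait for the master to come up.
for NODE_IP in 192.168.1.101 192.168.1.102; do
  ssh root@${NODE_IP} "kubeadm join --token=${TOKEN} ${MASTER_IP}" &
done
wait
```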
Once the cluster is up, you can grab the admin credentials from the master node (`/etc/kubernetes/admin.conf`) and use them to talk to the cluster.
## Environment variables
There are some environment variables that modify the way that `kubeadm` works. Most users will have no need to set these.
These environment variables are a short-term solution; eventually they will be integrated into the kubeadm configuration file.
| Variable | Default | Description |
| --- | --- | --- |
| `KUBE_HYPERKUBE_IMAGE` | `` | If set, use a single hyperkube image with this name. If not set, individual images per server component will be used. |
| `KUBE_DISCOVERY_IMAGE` | `gcr.io/google_containers/kube-discovery-<arch>:1.0` | The bootstrap discovery helper image to use. |
| `KUBE_ETCD_IMAGE` | `gcr.io/google_containers/etcd-<arch>:2.2.5` | The etcd container image to use. |
| `KUBE_COMPONENT_LOGLEVEL` | `--v=4` | Logging configuration for all Kubernetes components |
| `KUBE_REPO_PREFIX` | `gcr.io/google_containers` | The image prefix for all images that are used. |
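For example, to pull all component images from a private registry mirror instead of `gcr.io/google_containers`, you could override the prefix at init time (a sketch; the registry host is a placeholder):

```console
# KUBE_REPO_PREFIX=registry.example.com/kube kubeadm init
```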
## Releases and release notes
If you already have kubeadm installed and want to upgrade, run `apt-get update && apt-get upgrade` or `yum update` to get the latest version of kubeadm.
Refer to the [CHANGELOG.md](https://github.com/kubernetes/kubeadm/blob/master/CHANGELOG.md) for more information.

---

## Overview
This quickstart shows you how to easily install a secure Kubernetes cluster on machines running Ubuntu 16.04, CentOS 7 or HypriotOS v1.0.1+.
The installation uses a tool called `kubeadm` which is part of Kubernetes.
This process works with local VMs, physical servers and/or cloud servers.
It is simple enough that you can easily integrate its use into your own automation (Terraform, Chef, Puppet, etc).
If you are not constrained, other tools build on kubeadm to give you complete clusters.
## Prerequisites
1. One or more machines running Ubuntu 16.04+, CentOS 7 or HypriotOS v1.0.1+
1. 1GB or more of RAM per machine (any less will leave little room for your apps)
1. Full network connectivity between all machines in the cluster (public or private network is fine)
You will install the following packages on all the machines:
* `kubeadm`: the command to bootstrap the cluster.
NOTE: If you already have kubeadm installed, you should run `apt-get update && apt-get upgrade` or `yum update` to get the latest version of kubeadm.

See the [kubeadm CHANGELOG](https://github.com/kubernetes/kubeadm/blob/master/CHANGELOG.md) if you want to read about the different kubeadm releases.
For each host in turn:
* SSH into the machine and become `root` if you are not already (for example, run `sudo su -`).
* If the machine is running Ubuntu or HypriotOS, run:
```console
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# apt-get update
# apt-get install -y docker.io
# apt-get install -y kubelet kubeadm kubectl kubernetes-cni
```
If the machine is running CentOS, run:
```console
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# setenforce 0
# yum install -y docker kubelet kubeadm kubectl kubernetes-cni
# systemctl enable docker && systemctl start docker
# systemctl enable kubelet && systemctl start kubelet
```
To initialise the master, run `kubeadm init` on the master machine. This may take several minutes.
The output should look like:
```
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "064158.548b9ddb1d3fad3e"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 61.317580 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 6.556101 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 6.020980 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns
Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=<token> <master-ip>
```
Make a record of the `kubeadm join` command that `kubeadm init` outputs.
You will need this in a moment.
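One convenient way to keep the join command around is to capture the init output as it runs; a sketch using standard shell tools (the log path is an arbitrary choice):

```console
# kubeadm init | tee kubeadm-init.log
# grep 'kubeadm join' kubeadm-init.log
```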
If you want to be able to schedule pods on the master, for example if you have a single-machine Kubernetes cluster for development, run:

    # kubectl taint nodes --all dedicated-

This will remove the "dedicated" taint from any nodes that have it, including the master node, meaning that the scheduler will then be able to schedule pods everywhere.
### (3/4) Installing a pod network
You must install a pod network add-on so that your pods can communicate with each other.

**It is necessary to do this before you try to deploy any applications to your cluster, and before `kube-dns` will start up. Note also that `kubeadm` only supports CNI based networks and therefore kubenet based networks will not work.**
Several projects provide Kubernetes pod networks using CNI, some of which
also support [Network Policy](/docs/user-guide/networkpolicies/). See the [add-ons page](/docs/admin/addons/) for a complete list of available network add-ons.
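Once you have applied one of the network add-ons, you can watch `kube-dns` come up by listing pods in all namespaces:

```console
# kubectl get pods --all-namespaces
```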
If you want to add any new machines as nodes to your cluster, SSH to each machine, become root, and run the `kubeadm join` command that was recorded from the `kubeadm init` output. For example:
```
# kubeadm join --token <token> <master-ip>
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[tokens] Validating provided token
[discovery] Created cluster info discovery client, requesting info from "http://192.168.x.y:9898/cluster-info/v1/?token-id=f11877"
[discovery] Cluster info object received, verifying signature using given token
[discovery] Cluster info signature and contents are valid, will use API endpoints [https://192.168.x.y:6443]
[bootstrap] Trying to connect to endpoint https://192.168.x.y:6443
[bootstrap] Detected server version: v1.5.1
[bootstrap] Successfully established connection with endpoint "https://192.168.x.y:6443"
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server:
Issuer: CN=kubernetes | Subject: CN=system:node:yournode | CA: false
Not before: 2016-12-15 19:44:00 +0000 UTC Not After: 2017-12-15 19:44:00 +0000 UTC
[csr] Generating kubelet configuration
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.
```
A few seconds later, you should notice that running `kubectl get nodes` on the master shows a cluster with as many machines as you created.
### (Optional) Controlling your cluster from machines other than the master
In order to get kubectl on (for example) your laptop to talk to your cluster, you need to copy the `KubeConfig` file from your master to your laptop like this:
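A sketch of that copy step, reusing the same `admin.conf` shown in the proxy example below; `get nodes` is just a quick smoke test:

```console
# scp root@<master ip>:/etc/kubernetes/admin.conf .
# kubectl --kubeconfig ./admin.conf get nodes
```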
### (Optional) Connecting to the API Server
If you want to connect to the API Server from outside the cluster, for example to view the dashboard (note: the dashboard isn't deployed by default), you can use `kubectl proxy`:
```console
# scp root@<master ip>:/etc/kubernetes/admin.conf .
# kubectl --kubeconfig ./admin.conf proxy
```
See the [list of add-ons](/docs/admin/addons/) to explore other add-ons.
* Slack Channel: [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
* Mailing List: [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
* [GitHub Issues in the kubeadm repository](https://github.com/kubernetes/kubeadm/issues)
## kubeadm is multi-platform
kubeadm deb packages and binaries are built for amd64, arm and arm64, following the multi-platform proposal.
deb-packages are released for ARM and ARM 64-bit, but not RPMs (yet, reach out if there's interest).
Currently, only the flannel pod network works on multiple architectures. You can install it this way:
```console
# export ARCH=amd64
# curl -sSL "https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml?raw=true" | sed "s/amd64/${ARCH}/g" | kubectl create -f -
```
Replace `ARCH=amd64` with `ARCH=arm` or `ARCH=arm64` depending on the platform you're running on.
Note that the Raspberry Pi 3 is in ARM 32-bit mode, so for RPi 3 you should set `ARCH` to `arm`, not `arm64`.
## Cloudprovider integrations (experimental)

Enabling specific cloud providers is a common request; this currently requires manual configuration and is therefore not yet fully supported. If you wish to do so,
edit the `kubeadm` dropin for the `kubelet` service (`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`) on all nodes, including the master.
If your cloud provider requires any extra packages installed on the host, for example for volume mounting/unmounting, install those packages.

Specify the `--cloud-provider` flag to the kubelet and set it to the cloud of your choice. If your cloud provider requires a configuration
file, create the file `/etc/kubernetes/cloud-config` on every node and set the values your cloud requires. Also append
`--cloud-config=/etc/kubernetes/cloud-config` to the kubelet arguments.

Lastly, run `kubeadm init --cloud-provider=xxx` to bootstrap your cluster with cloud provider features.

This workflow is not yet fully supported, but we hope to make it extremely easy to spin up clusters with cloud providers in the future
(see [this proposal](https://github.com/kubernetes/community/pull/128) for more information). The [Kubelet Dynamic Settings](https://github.com/kubernetes/kubernetes/pull/29459) feature may also help to fully automate this process in the future.
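As a concrete illustration, assuming AWS as the provider (the `sed` edit is one crude way to extend the kubelet arguments in the dropin; adjust it to your actual systemd layout):

```console
# sed -i 's|/usr/bin/kubelet|/usr/bin/kubelet --cloud-provider=aws --cloud-config=/etc/kubernetes/cloud-config|' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# systemctl daemon-reload && systemctl restart kubelet
# kubeadm init --cloud-provider=aws
```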
## Limitations
Please note: `kubeadm` is a work in progress and these limitations will be addressed in due course.
1. The cluster created here doesn't have cloud-provider integrations by default, so it doesn't work automatically with, for example, [Load Balancers](/docs/user-guide/load-balancer/) (LBs) or [Persistent Volumes](/docs/user-guide/persistent-volumes/walkthrough/) (PVs).
To set up kubeadm with CloudProvider integrations (it's experimental, but try), refer to the [kubeadm reference](/docs/admin/kubeadm/) document.
Workaround: use the [NodePort feature of services](/docs/user-guide/services/#type-nodeport) for exposing applications to the internet.
1. The cluster created here has a single master, with a single `etcd` database running on it.
This means that if the master fails, your cluster loses its configuration data and will need to be recreated from scratch.
Adding HA support (multiple `etcd` servers, multiple API servers, etc) to `kubeadm` is still a work-in-progress.
Workaround: regularly [back up etcd](https://coreos.com/etcd/docs/latest/admin_guide.html); a backup sketch follows this list.
The `etcd` data directory configured by `kubeadm` is at `/var/lib/etcd` on the master.
1. `kubectl logs` is broken with `kubeadm` clusters due to [#22770](https://github.com/kubernetes/kubernetes/issues/22770).
Workaround: use `docker logs` on the nodes where the containers are running.
1. The `HostPort` and `HostIP` functionality does not work with kubeadm because CNI networking is used; see issue [#31307](https://github.com/kubernetes/kubernetes/issues/31307).
Workaround: use the [NodePort feature of services](/docs/user-guide/services/#type-nodeport) instead, or use HostNetwork.
1. A running `firewalld` service may conflict with kubeadm, so if you want to run `kubeadm`, you should disable `firewalld` until issue [#35535](https://github.com/kubernetes/kubernetes/issues/35535) is resolved.

   Workaround: Disable `firewalld` or configure it to allow the Kubernetes pod and service cidrs.

1. Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure `net.bridge.bridge-nf-call-iptables` is set to 1 in your sysctl config, e.g.

   ```console
   # cat /etc/sysctl.d/k8s.conf
   net.bridge.bridge-nf-call-ip6tables = 1
   net.bridge.bridge-nf-call-iptables = 1
   ```

1. If you see errors like `etcd cluster unavailable or misconfigured`, it's because of high load on the machine, which makes the `etcd` container a bit unresponsive (it might miss some requests) and therefore the kubelet will restart it. This will get better with `etcd3`.

   Workaround: Set `failureThreshold` in `/etc/kubernetes/manifests/etcd.json` to a larger value.
1. There is no built-in way of fetching the token easily once the cluster is up and running, but here is a `kubectl` command you can copy and paste that will print out the token for you:
```console
# kubectl -n kube-system get secret clusterinfo -o yaml | grep token-map | awk '{print $2}' | base64 -d | sed "s|{||g;s|}||g;s|:|.|g;s/\"//g;" | xargs echo
```
1. If you are using VirtualBox (directly or via Vagrant), you will need to ensure that `hostname -i` returns a routable IP address (i.e. one on the second network interface, not the first one).
By default it doesn't do this, and the kubelet ends up using the first non-loopback network interface, which is usually NATed.
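For the etcd backup mentioned in the single-master limitation above, the etcd v2 tooling matches the `etcd-<arch>:2.2.5` image this setup uses; a sketch, run on the master (the backup directory is an arbitrary choice):

```console
# etcdctl backup --data-dir /var/lib/etcd --backup-dir /var/lib/etcd-backup
```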