Delete untranslated contents in the l10n(ko) directory (#11267)
parent 8a7254c0b8
commit fb624c10b4
@ -1,88 +0,0 @@
---
title: CoreOS on AWS or GCE
content_template: templates/concept
---

{{% capture overview %}}

There are multiple guides on running Kubernetes with [CoreOS](https://coreos.com/kubernetes/docs/latest/).

{{% /capture %}}

{{% capture body %}}

## Official CoreOS Guides

These guides are maintained by CoreOS and deploy Kubernetes the "CoreOS Way" with full TLS, the DNS add-on, and more. These guides pass Kubernetes conformance testing and we encourage you to [test this yourself](https://coreos.com/kubernetes/docs/latest/conformance-tests.html).

* [**AWS Multi-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html)

  Guide and CLI tool for setting up a multi-node cluster on AWS.
  CloudFormation is used to set up a master and multiple workers in auto-scaling groups.

* [**Bare Metal Multi-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-baremetal.html#automated-provisioning)

  Guide and HTTP/API service for PXE booting and provisioning a multi-node cluster on bare metal.
  [Ignition](https://coreos.com/ignition/docs/latest/) is used to provision a master and multiple workers on the first boot from disk.

* [**Vagrant Multi-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html)

  Guide to setting up a multi-node cluster on Vagrant.
  The deployer can independently configure the number of etcd nodes, master nodes, and worker nodes to bring up a fully HA control plane.

* [**Vagrant Single-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html)

  The quickest way to set up a Kubernetes development environment locally.
  As easy as `git clone`, `vagrant up` and configuring `kubectl`.

* [**Full Step by Step Guide**](https://coreos.com/kubernetes/docs/latest/getting-started.html)

  A generic guide to setting up an HA cluster on any cloud or bare metal, with full TLS.
  Repeat the master or worker steps to configure more machines of that role.

## Community Guides

These guides are maintained by community members, cover specific platforms and use cases, and experiment with different ways of configuring Kubernetes on CoreOS.

* [**Easy Multi-node Cluster on Google Compute Engine**](https://github.com/rimusz/coreos-multi-node-k8s-gce/blob/master/README.md)

  Scripted installation of a single master, multi-worker cluster on GCE.
  Kubernetes components are managed by [fleet](https://github.com/coreos/fleet).

* [**Multi-node cluster using cloud-config and Weave on Vagrant**](https://github.com/errordeveloper/weave-demos/blob/master/poseidon/README.md)

  Configure a Vagrant-based cluster of 3 machines with networking provided by Weave.

* [**Multi-node cluster using cloud-config and Vagrant**](https://github.com/pires/kubernetes-vagrant-coreos-cluster/blob/master/README.md)

  Configure a single master, multi-worker cluster locally, running on your choice of hypervisor: VirtualBox, Parallels, or VMware.

* [**Single-node cluster using a small macOS App**](https://github.com/rimusz/kube-solo-osx/blob/master/README.md)

  Guide to running a solo cluster (master + worker) controlled by a macOS menubar application.
  Uses xhyve + CoreOS under the hood.

* [**Multi-node cluster with Vagrant and fleet units using a small macOS App**](https://github.com/rimusz/coreos-osx-gui-kubernetes-cluster/blob/master/README.md)

  Guide to running a single master, multi-worker cluster controlled by a macOS menubar application.
  Uses Vagrant under the hood.

* [**Multi-node cluster using cloud-config, CoreOS and VMware ESXi**](https://github.com/xavierbaude/VMware-coreos-multi-nodes-Kubernetes)

  Configure a single master, single worker cluster on VMware ESXi.

* [**Single/Multi-node cluster using cloud-config, CoreOS and Foreman**](https://github.com/johscheuer/theforeman-coreos-kubernetes)

  Configure a standalone Kubernetes or a Kubernetes cluster with [Foreman](https://theforeman.org).

## Support Level

IaaS Provider | Config. Mgmt | OS     | Networking | Docs                                        | Conforms | Support Level
------------- | ------------ | ------ | ---------- | ------------------------------------------- | -------- | -------------
GCE           | CoreOS       | CoreOS | flannel    | [docs](/docs/getting-started-guides/coreos) |          | Community ([@pires](https://github.com/pires))
Vagrant       | CoreOS       | CoreOS | flannel    | [docs](/docs/getting-started-guides/coreos) |          | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles))

For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.

{{% /capture %}}
@ -1,174 +0,0 @@
---
title: Installing Kubernetes on AWS with kops
content_template: templates/concept
---

{{% capture overview %}}

This quickstart shows you how to easily install a Kubernetes cluster on AWS.
It uses a tool called [`kops`](https://github.com/kubernetes/kops).

kops is an opinionated provisioning system:

* Fully automated installation
* Uses DNS to identify clusters
* Self-healing: everything runs in Auto-Scaling Groups
* Multiple OS support (Debian, Ubuntu 16.04 supported, CentOS & RHEL, Amazon Linux and CoreOS) - see the [images.md](https://github.com/kubernetes/kops/blob/master/docs/images.md)
* High-Availability support - see the [high_availability.md](https://github.com/kubernetes/kops/blob/master/docs/high_availability.md)
* Can directly provision, or generate terraform manifests - see the [terraform.md](https://github.com/kubernetes/kops/blob/master/docs/terraform.md)

If your opinions differ from these you may prefer to build your own cluster using [kubeadm](/docs/admin/kubeadm/) as
a building block. kops builds on the kubeadm work.

{{% /capture %}}

{{% capture body %}}

## Creating a cluster

### (1/5) Install kops

#### Requirements

You must have [kubectl](/docs/tasks/tools/install-kubectl/) installed in order for kops to work.

#### Installation

Download kops from the [releases page](https://github.com/kubernetes/kops/releases) (it is also easy to build from source):

On macOS:

```shell
curl -OL https://github.com/kubernetes/kops/releases/download/1.10.0/kops-darwin-amd64
chmod +x kops-darwin-amd64
mv kops-darwin-amd64 /usr/local/bin/kops
# you can also install using Homebrew
brew update && brew install kops
```

On Linux:

```shell
wget https://github.com/kubernetes/kops/releases/download/1.10.0/kops-linux-amd64
chmod +x kops-linux-amd64
mv kops-linux-amd64 /usr/local/bin/kops
```
### (2/5) Create a route53 domain for your cluster

kops uses DNS for discovery, both inside the cluster and so that you can reach the kubernetes API server
from clients.

kops has a strong opinion on the cluster name: it should be a valid DNS name. By doing so you will
no longer get your clusters confused, you can share clusters with your colleagues unambiguously,
and you can reach them without relying on remembering an IP address.

You can, and probably should, use subdomains to divide your clusters. As our example we will use
`useast1.dev.example.com`. The API server endpoint will then be `api.useast1.dev.example.com`.

A Route53 hosted zone can serve subdomains. Your hosted zone could be `useast1.dev.example.com`,
but also `dev.example.com` or even `example.com`. kops works with any of these, so typically
you choose for organization reasons (e.g. you are allowed to create records under `dev.example.com`,
but not under `example.com`).

Let's assume you're using `dev.example.com` as your hosted zone. You create that hosted zone using
the [normal process](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingNewSubdomain.html), or
with a command such as `aws route53 create-hosted-zone --name dev.example.com --caller-reference 1`.

You must then set up your NS records in the parent domain, so that records in the domain will resolve. Here,
you would create NS records in `example.com` for `dev`. If it is a root domain name you would configure the NS
records at your domain registrar (e.g. `example.com` would need to be configured where you bought `example.com`).

This step is easy to mess up (it is the #1 cause of problems!). You can double-check that
your cluster is configured correctly if you have the dig tool by running:

`dig NS dev.example.com`

You should see the 4 NS records that Route53 assigned your hosted zone.
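
As a minimal sketch of this step, using only the two commands already mentioned above and the example `dev.example.com` hosted zone:

```shell
# Create the hosted zone (the caller reference just needs to be unique)
aws route53 create-hosted-zone --name dev.example.com --caller-reference 1

# After adding the NS records for "dev" in the parent example.com zone,
# confirm the delegation is visible
dig NS dev.example.com
```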
### (3/5) Create an S3 bucket to store your clusters' state

kops lets you manage your clusters even after installation. To do this, it must keep track of the clusters
that you have created, along with their configuration, the keys they are using etc. This information is stored
in an S3 bucket. S3 permissions are used to control access to the bucket.

Multiple clusters can use the same S3 bucket, and you can share an S3 bucket between your colleagues that
administer the same clusters - this is much easier than passing around kubecfg files. But anyone with access
to the S3 bucket will have administrative access to all your clusters, so you don't want to share it beyond
the operations team.

So typically you have one S3 bucket for each ops team (and often the name will correspond
to the name of the hosted zone above!)

In our example, we chose `dev.example.com` as our hosted zone, so let's pick `clusters.dev.example.com` as
the S3 bucket name.

* Export `AWS_PROFILE` (if you need to select a profile for the AWS CLI to work)

* Create the S3 bucket using `aws s3 mb s3://clusters.dev.example.com`

* You can `export KOPS_STATE_STORE=s3://clusters.dev.example.com` and then kops will use this location by default.
  We suggest putting this in your bash profile or similar. These commands are collected in the sketch after this list.
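
Collected together, a sketch of this step using the example bucket name; the `dev` profile name is only a placeholder for whatever AWS CLI profile you actually use:

```shell
# Optional: pick the AWS CLI profile kops should use (placeholder name)
export AWS_PROFILE=dev

# Create the state-store bucket
aws s3 mb s3://clusters.dev.example.com

# Let kops find the state store without passing it on every command
export KOPS_STATE_STORE=s3://clusters.dev.example.com
```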
### (4/5) Build your cluster configuration

Run "kops create cluster" to create your cluster configuration:

`kops create cluster --zones=us-east-1c useast1.dev.example.com`

kops will create the configuration for your cluster. Note that it _only_ creates the configuration, it does
not actually create the cloud resources - you'll do that in the next step with a `kops update cluster`. This
gives you an opportunity to review the configuration or change it.

It prints commands you can use to explore further:

* List your clusters with: `kops get cluster`
* Edit this cluster with: `kops edit cluster useast1.dev.example.com`
* Edit your node instance group: `kops edit ig --name=useast1.dev.example.com nodes`
* Edit your master instance group: `kops edit ig --name=useast1.dev.example.com master-us-east-1c`

If this is your first time using kops, do spend a few minutes to try those out! An instance group is a
set of instances, which will be registered as kubernetes nodes. On AWS this is implemented via auto-scaling-groups.
You can have several instance groups, for example if you wanted nodes that are a mix of spot and on-demand instances, or
GPU and non-GPU instances.
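
As a sketch, the create-and-review flow for this step, using only the commands listed above and the example cluster name:

```shell
# Generate the cluster configuration (no cloud resources are created yet)
kops create cluster --zones=us-east-1c useast1.dev.example.com

# Review what was generated before building anything
kops get cluster
kops edit cluster useast1.dev.example.com
kops edit ig --name=useast1.dev.example.com nodes
```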
### (5/5) Create the cluster in AWS

Run "kops update cluster" to create your cluster in AWS:

`kops update cluster useast1.dev.example.com --yes`

That takes a few seconds to run, but then your cluster will likely take a few minutes to actually be ready.
`kops update cluster` will be the tool you'll use whenever you change the configuration of your cluster; it
applies the changes you have made to the configuration to your cluster - reconfiguring AWS or kubernetes as needed.

For example, after you `kops edit ig nodes`, run `kops update cluster --yes` to apply your configuration, and
sometimes you will also have to run `kops rolling-update cluster` to roll out the configuration immediately.

Without `--yes`, `kops update cluster` will show you a preview of what it is going to do. This is handy
for production clusters!
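
For instance, a sketch of the edit-and-apply cycle described above with the example cluster name; passing the cluster name and `--yes` to `kops rolling-update cluster` is an assumption here, so check `kops rolling-update cluster --help` for your version:

```shell
# Change the node instance group (e.g. node count or machine type)
kops edit ig --name=useast1.dev.example.com nodes

# Preview the change, then apply it
kops update cluster useast1.dev.example.com
kops update cluster useast1.dev.example.com --yes

# Roll the change out to running instances right away, if needed
kops rolling-update cluster useast1.dev.example.com --yes
```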
### Explore other add-ons

See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to explore other add-ons, including tools for logging, monitoring, network policy, visualization & control of your Kubernetes cluster.

## Cleanup

* To delete your cluster: `kops delete cluster useast1.dev.example.com --yes`

## Feedback

* Slack Channel: [#kops-users](https://kubernetes.slack.com/messages/kops-users/)
* [GitHub Issues](https://github.com/kubernetes/kops/issues)

{{% /capture %}}

{{% capture whatsnext %}}

* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/user-guide/kubectl-overview/).
* Learn about `kops` [advanced usage](https://github.com/kubernetes/kops)
* See the `kops` [docs](https://github.com/kubernetes/kops) section for tutorials, best practices and advanced configuration options.

{{% /capture %}}
@ -1,120 +0,0 @@
---
title: Installing Kubernetes On-premises/Cloud Providers with Kubespray
content_template: templates/concept
---

{{% capture overview %}}

This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-incubator/kubespray).

Kubespray is a composition of [Ansible](http://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides:

* a highly available cluster
* composable attributes
* support for most popular Linux distributions
  * Container Linux by CoreOS
  * Debian Jessie, Stretch, Wheezy
  * Ubuntu 16.04, 18.04
  * CentOS/RHEL 7
  * Fedora/CentOS Atomic
  * openSUSE Leap 42.3/Tumbleweed
* continuous integration tests

To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/comparisons.md) to [kubeadm](/docs/admin/kubeadm/) and [kops](../kops).

{{% /capture %}}

{{% capture body %}}

## Creating a cluster

### (1/5) Meet the underlay requirements

Provision servers with the following [requirements](https://github.com/kubernetes-incubator/kubespray#requirements):

* **Ansible v2.4 (or newer) and python-netaddr is installed on the machine that will run Ansible commands**
* **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
* The target servers must have **access to the Internet** in order to pull docker images
* The target servers are configured to allow **IPv4 forwarding**
* **Your ssh key must be copied** to all the servers that are part of your inventory
* The **firewalls are not managed**; you'll need to implement your own rules as you usually would. In order to avoid any issues during deployment you should disable your firewall
* If Kubespray is run from a non-root user account, a correct privilege escalation method should be configured on the target servers. Then the `ansible_become` flag or the command parameters `--become` or `-b` should be specified

Kubespray provides the following utilities to help provision your environment:

* [Terraform](https://www.terraform.io/) scripts for the following cloud providers:
  * [AWS](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform/aws)
  * [OpenStack](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform/openstack)
### (2/5) Compose an inventory file

After you provision your servers, create an [inventory file for Ansible](http://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)".
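
As an illustration only, a minimal hand-written inventory might look like the sketch below; the host names, addresses, and group names are placeholders modeled on Kubespray's sample layout, so check the linked guide for the exact groups your Kubespray version expects:

```
# inventory/mycluster/hosts.ini (illustrative sketch)
node1 ansible_host=10.0.0.1 ip=10.0.0.1
node2 ansible_host=10.0.0.2 ip=10.0.0.2
node3 ansible_host=10.0.0.3 ip=10.0.0.3

[kube-master]
node1

[etcd]
node1

[kube-node]
node2
node3

[k8s-cluster:children]
kube-master
kube-node
```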
### (3/5) Plan your cluster deployment

Kubespray provides the ability to customize many aspects of the deployment:

* Choice of deployment mode: kubeadm or non-kubeadm
* CNI (networking) plugins
* DNS configuration
* Choice of control plane: native/binary or containerized with docker or rkt
* Component versions
* Calico route reflectors
* Component runtime options
  * docker
  * rkt
  * cri-o
* Certificate generation methods

Kubespray customizations can be made to a [variable file](http://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes.
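
For example, a small group-variables sketch covering a few of the knobs listed above; the variable names (`kube_version`, `kube_network_plugin`, `dns_mode`) are the ones commonly used in Kubespray's `group_vars`, but verify them against the defaults shipped with your Kubespray release:

```yaml
# group_vars/k8s-cluster/k8s-cluster.yml (illustrative sketch)
kube_version: v1.12.3        # component version
kube_network_plugin: calico  # CNI plugin choice
dns_mode: coredns            # DNS configuration
```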
### (4/5) Deploy a Cluster

Next, deploy your cluster:

Cluster deployment using [ansible-playbook](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment).

```shell
ansible-playbook -i your/inventory/hosts.ini cluster.yml -b -v \
  --private-key=~/.ssh/private_key
```

Large deployments (100+ nodes) may require [specific adjustments](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/large-deployments.md) for best results.

### (5/5) Verify the deployment

Kubespray provides a way to verify inter-pod connectivity and DNS resolution with [Netchecker](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/netcheck.md). Netchecker ensures the netchecker-agents pods can resolve DNS requests and ping each other within the default namespace. Those pods mimic the behavior of the rest of the workloads and serve as cluster health indicators.
## Cluster operations

Kubespray provides additional playbooks to manage your cluster: _scale_ and _upgrade_.

### Scale your cluster

You can add worker nodes to your cluster by running the scale playbook. For more information, see "[Adding nodes](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#adding-nodes)".
You can remove worker nodes from your cluster by running the remove-node playbook. For more information, see "[Remove nodes](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#remove-nodes)".

### Upgrade your cluster

You can upgrade your cluster by running the upgrade-cluster playbook. For more information, see "[Upgrades](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/upgrades.md)".
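
A sketch of how these playbooks are typically invoked, mirroring the `cluster.yml` command above; the playbook filenames (`scale.yml`, `remove-node.yml`, `upgrade-cluster.yml`) are the ones referenced by the linked Kubespray docs, and the linked pages describe any extra variables each playbook needs:

```shell
# Add the new worker nodes listed in your inventory
ansible-playbook -i your/inventory/hosts.ini scale.yml -b -v

# Remove a worker node (see the linked docs for required extra variables)
ansible-playbook -i your/inventory/hosts.ini remove-node.yml -b -v

# Upgrade the cluster to the versions set in your variable file
ansible-playbook -i your/inventory/hosts.ini upgrade-cluster.yml -b -v
```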
## Cleanup

You can reset your nodes and wipe out all components installed with Kubespray via the [reset playbook](https://github.com/kubernetes-incubator/kubespray/blob/master/reset.yml).

{{< caution >}}
When running the reset playbook, be sure not to accidentally target your production cluster!
{{< /caution >}}
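
As a sketch, the reset playbook is run the same way as the other playbooks, against your inventory:

```shell
# Wipe everything Kubespray installed from the nodes in this inventory
ansible-playbook -i your/inventory/hosts.ini reset.yml -b -v
```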
## Feedback

* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/)
* [GitHub Issues](https://github.com/kubernetes-incubator/kubespray/issues)

{{% /capture %}}

{{% capture whatsnext %}}

Check out planned work on Kubespray's [roadmap](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/roadmap.md).

{{% /capture %}}
@ -1,142 +0,0 @@
#cloud-config

---
write-files:
  - path: /etc/conf.d/nfs
    permissions: '0644'
    content: |
      OPTS_RPC_MOUNTD=""
  - path: /opt/bin/wupiao
    permissions: '0755'
    content: |
      #!/bin/bash
      # [w]ait [u]ntil [p]ort [i]s [a]ctually [o]pen
      [ -n "$1" ] && \
        until curl -o /dev/null -sIf http://${1}; do \
          sleep 1 && echo .;
        done;
      exit $?

hostname: master
coreos:
  etcd2:
    name: master
    listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
    advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001
    initial-cluster-token: k8s_etcd
    listen-peer-urls: http://$private_ipv4:2380,http://$private_ipv4:7001
    initial-advertise-peer-urls: http://$private_ipv4:2380
    initial-cluster: master=http://$private_ipv4:2380
    initial-cluster-state: new
  fleet:
    metadata: "role=master"
  units:
    - name: etcd2.service
      command: start
    - name: generate-serviceaccount-key.service
      command: start
      content: |
        [Unit]
        Description=Generate service-account key file

        [Service]
        ExecStartPre=-/usr/bin/mkdir -p /opt/bin
        ExecStart=/bin/openssl genrsa -out /opt/bin/kube-serviceaccount.key 2048 2>/dev/null
        RemainAfterExit=yes
        Type=oneshot
    - name: setup-network-environment.service
      command: start
      content: |
        [Unit]
        Description=Setup Network Environment
        Documentation=https://github.com/kelseyhightower/setup-network-environment
        Requires=network-online.target
        After=network-online.target

        [Service]
        ExecStartPre=-/usr/bin/mkdir -p /opt/bin
        ExecStartPre=/usr/bin/curl -L -o /opt/bin/setup-network-environment -z /opt/bin/setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment
        ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
        ExecStart=/opt/bin/setup-network-environment
        RemainAfterExit=yes
        Type=oneshot
    - name: fleet.service
      command: start
    - name: flanneld.service
      command: start
      drop-ins:
        - name: 50-network-config.conf
          content: |
            [Unit]
            Requires=etcd2.service
            [Service]
            ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
    - name: docker.service
      command: start
    - name: kube-apiserver.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes API Server
        Documentation=https://github.com/kubernetes/kubernetes
        Requires=setup-network-environment.service etcd2.service generate-serviceaccount-key.service
        After=setup-network-environment.service etcd2.service generate-serviceaccount-key.service

        [Service]
        EnvironmentFile=/etc/network-environment
        ExecStartPre=-/usr/bin/mkdir -p /opt/bin
        ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-apiserver -z /opt/bin/kube-apiserver https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-apiserver
        ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver
        ExecStartPre=/opt/bin/wupiao 127.0.0.1:2379/v2/machines
        ExecStart=/opt/bin/kube-apiserver \
          --service-account-key-file=/opt/bin/kube-serviceaccount.key \
          --service-account-lookup=false \
          --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
          --runtime-config=api/v1 \
          --allow-privileged=true \
          --insecure-bind-address=0.0.0.0 \
          --insecure-port=8080 \
          --kubelet-https=true \
          --secure-port=6443 \
          --service-cluster-ip-range=10.100.0.0/16 \
          --etcd-servers=http://127.0.0.1:2379 \
          --public-address-override=${DEFAULT_IPV4} \
          --logtostderr=true
        Restart=always
        RestartSec=10
    - name: kube-controller-manager.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes Controller Manager
        Documentation=https://github.com/kubernetes/kubernetes
        Requires=kube-apiserver.service
        After=kube-apiserver.service

        [Service]
        ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-controller-manager -z /opt/bin/kube-controller-manager https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-controller-manager
        ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager
        ExecStart=/opt/bin/kube-controller-manager \
          --service-account-private-key-file=/opt/bin/kube-serviceaccount.key \
          --master=127.0.0.1:8080 \
          --logtostderr=true
        Restart=always
        RestartSec=10
    - name: kube-scheduler.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes Scheduler
        Documentation=https://github.com/kubernetes/kubernetes
        Requires=kube-apiserver.service
        After=kube-apiserver.service

        [Service]
        ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-scheduler -z /opt/bin/kube-scheduler https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-scheduler
        ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler
        ExecStart=/opt/bin/kube-scheduler --master=127.0.0.1:8080
        Restart=always
        RestartSec=10
  update:
    group: alpha
    reboot-strategy: off
@ -1,92 +0,0 @@
#cloud-config
write-files:
  - path: /opt/bin/wupiao
    permissions: '0755'
    content: |
      #!/bin/bash
      # [w]ait [u]ntil [p]ort [i]s [a]ctually [o]pen
      [ -n "$1" ] && [ -n "$2" ] && while ! curl --output /dev/null \
        --silent --head --fail \
        http://${1}:${2}; do sleep 1 && echo -n .; done;
      exit $?
coreos:
  etcd2:
    listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
    advertise-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
    initial-cluster: master=http://<master-private-ip>:2380
    proxy: on
  fleet:
    metadata: "role=node"
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
    - name: flanneld.service
      command: start
    - name: docker.service
      command: start
    - name: setup-network-environment.service
      command: start
      content: |
        [Unit]
        Description=Setup Network Environment
        Documentation=https://github.com/kelseyhightower/setup-network-environment
        Requires=network-online.target
        After=network-online.target

        [Service]
        ExecStartPre=-/usr/bin/mkdir -p /opt/bin
        ExecStartPre=/usr/bin/curl -L -o /opt/bin/setup-network-environment -z /opt/bin/setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment
        ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
        ExecStart=/opt/bin/setup-network-environment
        RemainAfterExit=yes
        Type=oneshot
    - name: kube-proxy.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes Proxy
        Documentation=https://github.com/kubernetes/kubernetes
        Requires=setup-network-environment.service
        After=setup-network-environment.service

        [Service]
        ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-proxy -z /opt/bin/kube-proxy https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-proxy
        ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy
        # wait for kubernetes master to be up and ready
        ExecStartPre=/opt/bin/wupiao <master-private-ip> 8080
        ExecStart=/opt/bin/kube-proxy \
          --master=<master-private-ip>:8080 \
          --logtostderr=true
        Restart=always
        RestartSec=10
    - name: kube-kubelet.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes Kubelet
        Documentation=https://github.com/kubernetes/kubernetes
        Requires=setup-network-environment.service
        After=setup-network-environment.service

        [Service]
        EnvironmentFile=/etc/network-environment
        ExecStartPre=/usr/bin/curl -L -o /opt/bin/kubelet -z /opt/bin/kubelet https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kubelet
        ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet
        # wait for kubernetes master to be up and ready
        ExecStartPre=/opt/bin/wupiao <master-private-ip> 8080
        ExecStart=/opt/bin/kubelet \
          --address=0.0.0.0 \
          --port=10250 \
          --hostname-override=${DEFAULT_IPV4} \
          --api-servers=<master-private-ip>:8080 \
          --allow-privileged=true \
          --logtostderr=true \
          --healthz-bind-address=0.0.0.0 \
          --healthz-port=10248
        Restart=always
        RestartSec=10
  update:
    group: alpha
    reboot-strategy: off
@ -1,79 +0,0 @@
---
title: Customizing control plane configuration with kubeadm
content_template: templates/concept
weight: 40
---

{{% capture overview %}}

The kubeadm configuration exposes the following fields that can override the default flags passed to control plane components such as the APIServer, ControllerManager and Scheduler:

- `APIServerExtraArgs`
- `ControllerManagerExtraArgs`
- `SchedulerExtraArgs`

These fields consist of `key: value` pairs. To override a flag for a control plane component:

1. Add the appropriate field to your configuration.
2. Add the flags to override to the field.

For more details on each field in the configuration you can navigate to our
[API reference pages](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#ClusterConfiguration).

{{% /capture %}}

{{% capture body %}}

## APIServer flags

For details, see the [reference documentation for kube-apiserver](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/).

Example usage:
```yaml
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: v1.12.0
metadata:
  name: 1.12-sample
apiServerExtraArgs:
  advertise-address: 192.168.0.103
  anonymous-auth: false
  enable-admission-plugins: AlwaysPullImages,DefaultStorageClass
  audit-log-path: /home/johndoe/audit.log
```
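
ClusterConfiguration snippets like the examples on this page are typically passed to kubeadm through its `--config` flag; as a sketch, assuming the example above has been saved to a local file named `kubeadm-config.yaml` (a name chosen here only for illustration):

```shell
# Initialize the control plane with the customized configuration
kubeadm init --config kubeadm-config.yaml
```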
## ControllerManager flags

For details, see the [reference documentation for kube-controller-manager](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/).

Example usage:
```yaml
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: v1.12.0
metadata:
  name: 1.12-sample
controllerManagerExtraArgs:
  cluster-signing-key-file: /home/johndoe/keys/ca.key
  bind-address: 0.0.0.0
  deployment-controller-sync-period: 50
```

## Scheduler flags

For details, see the [reference documentation for kube-scheduler](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/).

Example usage:
```yaml
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: v1.12.0
metadata:
  name: 1.12-sample
schedulerExtraArgs:
  address: 0.0.0.0
  config: /home/johndoe/schedconfig.yaml
  kubeconfig: /home/johndoe/kubeconfig.yaml
```

{{% /capture %}}
@ -1,625 +0,0 @@
---
title: Creating a single master cluster with kubeadm
content_template: templates/task
weight: 30
---

{{% capture overview %}}

<img src="https://raw.githubusercontent.com/cncf/artwork/master/kubernetes/certified-kubernetes/versionless/color/certified-kubernetes-color.png" align="right" width="150px">**kubeadm** helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices. With kubeadm, your cluster should pass [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification). Kubeadm also supports other cluster
lifecycle functions, such as upgrades, downgrades, and managing [bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/).

Because you can install kubeadm on various types of machine (e.g. laptop, server,
Raspberry Pi, etc.), it's well suited for integration with provisioning systems
such as Terraform or Ansible.

kubeadm's simplicity means it can serve a wide range of use cases:

- New users can start with kubeadm to try Kubernetes out for the first time.
- Users familiar with Kubernetes can spin up clusters with kubeadm and test their applications.
- Larger projects can include kubeadm as a building block in a more complex system that can also include other installer tools.

kubeadm is designed to be a simple way for new users to start trying
Kubernetes out, possibly for the first time, a way for existing users to
test their applications on and stitch together a cluster easily, and also to be
a building block in other ecosystem and/or installer tools with a larger
scope.

You can install _kubeadm_ very easily on operating systems that support
installing deb or rpm packages. The responsible SIG for kubeadm,
[SIG Cluster Lifecycle](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle), provides these packages pre-built for you,
but you may also build them for other OSes.

### kubeadm Maturity

| Area                      | Maturity Level |
|---------------------------|----------------|
| Command line UX           | beta           |
| Implementation            | beta           |
| Config file API           | alpha          |
| Self-hosting              | alpha          |
| kubeadm alpha subcommands | alpha          |
| CoreDNS                   | GA             |
| DynamicKubeletConfig      | alpha          |

kubeadm's overall feature state is **Beta** and will soon be graduated to
**General Availability (GA)** during 2018. Some sub-features, like self-hosting
or the configuration file API are still under active development. The
implementation of creating the cluster may change slightly as the tool evolves,
but the overall implementation should be pretty stable. Any commands under
`kubeadm alpha` are, by definition, supported on an alpha level.

### Support timeframes

Kubernetes releases are generally supported for nine months, and during that
period a patch release may be issued from the release branch if a severe bug or
security issue is found. Here are the latest Kubernetes releases and the support
timeframe, which also applies to `kubeadm`.

| Kubernetes version | Release month  | End-of-life-month |
|--------------------|----------------|-------------------|
| v1.6.x             | March 2017     | December 2017     |
| v1.7.x             | June 2017      | March 2018        |
| v1.8.x             | September 2017 | June 2018         |
| v1.9.x             | December 2017  | September 2018    |
| v1.10.x            | March 2018     | December 2018     |
| v1.11.x            | June 2018      | March 2019        |
| v1.12.x            | September 2018 | June 2019         |

{{% /capture %}}
{{% capture prerequisites %}}

- One or more machines running a deb/rpm-compatible OS, for example Ubuntu or CentOS
- 2 GB or more of RAM per machine. Any less leaves little room for your
  apps.
- 2 CPUs or more on the master
- Full network connectivity among all machines in the cluster. A public or
  private network is fine.

{{% /capture %}}

{{% capture steps %}}

## Objectives

* Install a single master Kubernetes cluster or [high availability cluster](https://kubernetes.io/docs/setup/independent/high-availability/)
* Install a Pod network on the cluster so that your Pods can
  talk to each other

## Instructions

### Installing kubeadm on your hosts

See ["Installing kubeadm"](/docs/setup/independent/install-kubeadm/).

{{< note >}}
If you have already installed kubeadm, run `apt-get update &&
apt-get upgrade` or `yum update` to get the latest version of kubeadm.

When you upgrade, the kubelet restarts every few seconds as it waits in a crashloop for
kubeadm to tell it what to do. This crashloop is expected and normal.
After you initialize your master, the kubelet runs normally.
{{< /note >}}
### Initializing your master

The master is the machine where the control plane components run, including
etcd (the cluster database) and the API server (which the kubectl CLI
communicates with).

1. Choose a pod network add-on, and verify whether it requires any arguments to
   be passed to kubeadm initialization. Depending on which
   third-party provider you choose, you might need to set the `--pod-network-cidr` to
   a provider-specific value. See [Installing a pod network add-on](#pod-network).
1. (Optional) Unless otherwise specified, kubeadm uses the network interface associated
   with the default gateway to advertise the master's IP. To use a different
   network interface, specify the `--apiserver-advertise-address=<ip-address>` argument
   to `kubeadm init`. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you
   must specify an IPv6 address, for example `--apiserver-advertise-address=fd00::101`.
1. (Optional) Run `kubeadm config images pull` prior to `kubeadm init` to verify
   connectivity to gcr.io registries.

Now run:

```bash
kubeadm init <args>
```
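
As an illustration of the points above, a sketch of a concrete invocation for a cluster whose pod network requires a specific CIDR and which should advertise a non-default address; the CIDR and address are placeholders borrowed from examples elsewhere on this page, not values to copy blindly:

```bash
# Pre-pull the control plane images, then initialize the master
kubeadm config images pull
kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --apiserver-advertise-address=192.168.0.103
```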
### More information

For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm/).

For a complete list of configuration options, see the [configuration file documentation](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file).

To customize control plane components, including optional IPv6 assignment to liveness probe for control plane components and etcd server, provide extra arguments to each component as documented in [custom arguments](/docs/admin/kubeadm#custom-args).

To run `kubeadm init` again, you must first [tear down the cluster](#tear-down).

If you join a node with a different architecture to your cluster, create a separate
Deployment or DaemonSet for `kube-proxy` and `kube-dns` on the node. This is because the Docker images for these
components do not currently support multi-architecture.

`kubeadm init` first runs a series of prechecks to ensure that the machine
is ready to run Kubernetes. These prechecks expose warnings and exit on errors. `kubeadm init`
then downloads and installs the cluster control plane components. This may take several minutes.
The output should look like:

```none
[init] Using Kubernetes version: vX.Y.Z
[preflight] Running pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubeadm-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 39.511972 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: <token>
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the addon options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
```

To make kubectl work for your non-root user, run these commands, which are
also part of the `kubeadm init` output:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Alternatively, if you are the `root` user, you can run:

```bash
export KUBECONFIG=/etc/kubernetes/admin.conf
```

Make a record of the `kubeadm join` command that `kubeadm init` outputs. You
need this command to [join nodes to your cluster](#join-nodes).

The token is used for mutual authentication between the master and the joining
nodes. The token included here is secret. Keep it safe, because anyone with this
token can add authenticated nodes to your cluster. These tokens can be listed,
created, and deleted with the `kubeadm token` command. See the
[kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm-token/).
### Installing a pod network add-on {#pod-network}

{{< caution >}}
This section contains important information about installation and deployment order. Read it carefully before proceeding.
{{< /caution >}}

You must install a pod network add-on so that your pods can communicate with
each other.

**The network must be deployed before any applications. Also, CoreDNS will not start up before a network is installed.
kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).**

Several projects provide Kubernetes pod networks using CNI, some of which also
support [Network Policy](/docs/concepts/services-networking/networkpolicies/). See the [add-ons page](/docs/concepts/cluster-administration/addons/) for a complete list of available network add-ons.
- IPv6 support was added in [CNI v0.6.0](https://github.com/containernetworking/cni/releases/tag/v0.6.0).
- [CNI bridge](https://github.com/containernetworking/plugins/blob/master/plugins/main/bridge/README.md) and [local-ipam](https://github.com/containernetworking/plugins/blob/master/plugins/ipam/host-local/README.md) are the only supported IPv6 network plugins in Kubernetes version 1.9.

Note that kubeadm sets up a more secure cluster by default and enforces use of [RBAC](/docs/reference/access-authn-authz/rbac/).
Make sure that your network manifest supports RBAC.

You can install a pod network add-on with the following command:

```bash
kubectl apply -f <add-on.yaml>
```

You can install only one pod network per cluster.

{{< tabs name="tabs-pod-install" >}}
{{% tab name="Choose one..." %}}
Please select one of the tabs to see installation instructions for the respective third-party Pod Network Provider.
{{% /tab %}}

{{% tab name="Calico" %}}
For more information about using Calico, see [Quickstart for Calico on Kubernetes](https://docs.projectcalico.org/latest/getting-started/kubernetes/), [Installing Calico for policy and networking](https://docs.projectcalico.org/latest/getting-started/kubernetes/installation/calico), and other related resources.

For Calico to work correctly, you need to pass `--pod-network-cidr=192.168.0.0/16` to `kubeadm init` or update the `calico.yml` file to match your Pod network. Note that Calico works on `amd64` only.

```shell
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
```

{{% /tab %}}
{{% tab name="Canal" %}}
Canal uses Calico for policy and Flannel for networking. Refer to the Calico documentation for the [official getting started guide](https://docs.projectcalico.org/latest/getting-started/kubernetes/installation/flannel).

For Canal to work correctly, `--pod-network-cidr=10.244.0.0/16` has to be passed to `kubeadm init`. Note that Canal works on `amd64` only.

```shell
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml
```

{{% /tab %}}

{{% tab name="Cilium" %}}
For more information about using Cilium with Kubernetes, see [Quickstart for Cilium on Kubernetes](http://docs.cilium.io/en/v1.2/kubernetes/quickinstall/) and [Kubernetes Install guide for Cilium](http://docs.cilium.io/en/v1.2/kubernetes/install/).

Passing the `--pod-network-cidr` option to `kubeadm init` is not required, but highly recommended.

These commands will deploy Cilium with its own etcd managed by the etcd operator.

```shell
# Download required manifests from Cilium repository
wget https://github.com/cilium/cilium/archive/v1.2.0.zip
unzip v1.2.0.zip
cd cilium-1.2.0/examples/kubernetes/addons/etcd-operator

# Generate and deploy etcd certificates
export CLUSTER_DOMAIN=$(kubectl get ConfigMap --namespace kube-system coredns -o yaml | awk '/kubernetes/ {print $2}')
tls/certs/gen-cert.sh $CLUSTER_DOMAIN
tls/deploy-certs.sh

# Label kube-dns with fixed identity label
kubectl label -n kube-system pod $(kubectl -n kube-system get pods -l k8s-app=kube-dns -o jsonpath='{range .items[]}{.metadata.name}{" "}{end}') io.cilium.fixed-identity=kube-dns

kubectl create -f ./

# Wait several minutes for Cilium, coredns and etcd pods to converge to a working state
```

{{% /tab %}}
{{% tab name="Flannel" %}}

For `flannel` to work correctly, you must pass `--pod-network-cidr=10.244.0.0/16` to `kubeadm init`.

Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1`
to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information
please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).

```shell
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
```

Note that `flannel` works on `amd64`, `arm`, `arm64` and `ppc64le`, but until `flannel v0.11.0` is released
you need to use the following manifest that supports all the architectures:

```shell
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml
```

For more information about `flannel`, see [the CoreOS flannel repository on GitHub](https://github.com/coreos/flannel).
{{% /tab %}}

{{% tab name="Kube-router" %}}
Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1`
to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information
please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).

Kube-router relies on kube-controller-manager to allocate pod CIDR for the nodes. Therefore, use `kubeadm init` with the `--pod-network-cidr` flag.

Kube-router provides pod networking, network policy, and high-performing IP Virtual Server(IPVS)/Linux Virtual Server(LVS) based service proxy.

For information on setting up a Kubernetes cluster with Kube-router using kubeadm, please see the official [setup guide](https://github.com/cloudnativelabs/kube-router/blob/master/docs/kubeadm.md).
{{% /tab %}}

{{% tab name="Romana" %}}
Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1`
to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information
please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).

The official Romana set-up guide is [here](https://github.com/romana/romana/tree/master/containerize#using-kubeadm).

Romana works on `amd64` only.

```shell
kubectl apply -f https://raw.githubusercontent.com/romana/romana/master/containerize/specs/romana-kubeadm.yml
```
{{% /tab %}}

{{% tab name="Weave Net" %}}
Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1`
to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information
please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).

The official Weave Net set-up guide is [here](https://www.weave.works/docs/net/latest/kube-addon/).

Weave Net works on `amd64`, `arm`, `arm64` and `ppc64le` without any extra action required.
Weave Net sets hairpin mode by default. This allows Pods to access themselves via their Service IP address
if they don't know their PodIP.

```shell
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```
{{% /tab %}}

{{% tab name="JuniperContrail/TungstenFabric" %}}
Provides an overlay SDN solution, delivering multicloud networking, hybrid cloud networking,
simultaneous overlay-underlay support, network policy enforcement, network isolation,
service chaining and flexible load balancing.

There are multiple, flexible ways to install JuniperContrail/TungstenFabric CNI.

Kindly refer to this quickstart: [TungstenFabric](https://tungstenfabric.github.io/website/)
{{% /tab %}}
{{< /tabs >}}

Once a pod network has been installed, you can confirm that it is working by
checking that the CoreDNS pod is Running in the output of `kubectl get pods --all-namespaces`.
Once the CoreDNS pod is up and running, you can continue by joining your nodes.

If your network is not working or CoreDNS is not in the Running state, check
out our [troubleshooting docs](/docs/setup/independent/troubleshooting-kubeadm/).
### Master Isolation

By default, your cluster will not schedule pods on the master for security
reasons. If you want to be able to schedule pods on the master, e.g. for a
single-machine Kubernetes cluster for development, run:

```bash
kubectl taint nodes --all node-role.kubernetes.io/master-
```

With output looking something like:

```
node "test-01" untainted
taint "node-role.kubernetes.io/master:" not found
taint "node-role.kubernetes.io/master:" not found
```

This will remove the `node-role.kubernetes.io/master` taint from any nodes that
have it, including the master node, meaning that the scheduler will then be able
to schedule pods everywhere.
### Joining your nodes {#join-nodes}
|
||||
|
||||
The nodes are where your workloads (containers and pods, etc) run. To add new nodes to your cluster do the following for each machine:
|
||||
|
||||
* SSH to the machine
|
||||
* Become root (e.g. `sudo su -`)
|
||||
* Run the command that was output by `kubeadm init`. For example:
|
||||
|
||||
``` bash
|
||||
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
|
||||
```
|
||||
|
||||
If you do not have the token, you can get it by running the following command on the master node:
|
||||
|
||||
``` bash
|
||||
kubeadm token list
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
``` console
|
||||
TOKEN                    TTL  EXPIRES                USAGES           DESCRIPTION            EXTRA GROUPS
8ewj1p.9r9hcjoqgajrj4gi  23h  2018-06-12T02:51:28Z   authentication,  The default bootstrap  system:
                                                     signing          token generated by     bootstrappers:
                                                                      'kubeadm init'.        kubeadm:
                                                                                             default-node-token
|
||||
```
|
||||
|
||||
By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired,
|
||||
you can create a new token by running the following command on the master node:
|
||||
|
||||
``` bash
|
||||
kubeadm token create
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
``` console
|
||||
5didvk.d09sbcov8ph2amjw
|
||||
```
|
||||
|
||||
If you don't have the value of `--discovery-token-ca-cert-hash`, you can get it by running the following command chain on the master node:
|
||||
|
||||
``` bash
|
||||
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
|
||||
openssl dgst -sha256 -hex | sed 's/^.* //'
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
``` console
|
||||
8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
To specify an IPv6 tuple for `<master-ip>:<master-port>`, the IPv6 address must be enclosed in square brackets, for example: `[fd00::101]:2073`.
|
||||
{{< /note >}}
|
||||
|
||||
The output should look something like:
|
||||
|
||||
```
|
||||
[preflight] Running pre-flight checks
|
||||
|
||||
... (log output of join workflow) ...
|
||||
|
||||
Node join complete:
|
||||
* Certificate signing request sent to master and response
|
||||
received.
|
||||
* Kubelet informed of new secure connection details.
|
||||
|
||||
Run 'kubectl get nodes' on the master to see this machine join.
|
||||
```
|
||||
|
||||
A few seconds later, you should notice this node in the output from `kubectl get
|
||||
nodes` when run on the master.
|
||||
|
||||
### (Optional) Controlling your cluster from machines other than the master
|
||||
|
||||
To get kubectl on some other computer (for example, a laptop) to talk to your
cluster, you need to copy the administrator kubeconfig file from your master
to your workstation like this:
|
||||
|
||||
``` bash
|
||||
scp root@<master ip>:/etc/kubernetes/admin.conf .
|
||||
kubectl --kubeconfig ./admin.conf get nodes
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
The example above assumes SSH access is enabled for root. If that is not the
|
||||
case, you can copy the `admin.conf` file to be accessible by some other user
|
||||
and `scp` using that other user instead.
|
||||
|
||||
The `admin.conf` file gives the user _superuser_ privileges over the cluster.
This file should be used sparingly. For normal users, it's recommended to
generate a unique credential and grant it only the privileges it needs. You can do
this with the `kubeadm alpha phase kubeconfig user --client-name <CN>`
command. That command will print out a KubeConfig file to STDOUT, which you
should save to a file and distribute to your user. After that, grant
privileges by using `kubectl create (cluster)rolebinding`; see the sketch after this note.
|
||||
{{< /note >}}
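As an illustration only, a minimal flow for handing out a limited credential might look like this. The client name `alice`, the `view` role, and the `default` namespace are placeholders for whatever access your user actually needs:

```bash
# On the master: generate a kubeconfig with a client certificate for user "alice".
kubeadm alpha phase kubeconfig user --client-name alice > alice.conf

# Grant that user read-only access to the default namespace (example policy only).
kubectl create rolebinding alice-view --clusterrole=view --user=alice --namespace=default
```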
|
||||
|
||||
### (Optional) Proxying API Server to localhost
|
||||
|
||||
If you want to connect to the API Server from outside the cluster, you can use
`kubectl proxy`:
|
||||
|
||||
```bash
|
||||
scp root@<master ip>:/etc/kubernetes/admin.conf .
|
||||
kubectl --kubeconfig ./admin.conf proxy
|
||||
```
|
||||
|
||||
You can now access the API Server locally at `http://localhost:8001/api/v1`.
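As a quick smoke test (assuming the proxy from the previous step is still running in another terminal):

```bash
# List namespaces through the local proxy.
curl http://localhost:8001/api/v1/namespaces
```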
|
||||
|
||||
## Tear down {#tear-down}
|
||||
|
||||
To undo what kubeadm did, you should first [drain the
|
||||
node](/docs/reference/generated/kubectl/kubectl-commands#drain) and make
|
||||
sure that the node is empty before shutting it down.
|
||||
|
||||
Talking to the master with the appropriate credentials, run:
|
||||
|
||||
```bash
|
||||
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
|
||||
kubectl delete node <node name>
|
||||
```
|
||||
|
||||
Then, on the node being removed, reset all kubeadm installed state:
|
||||
|
||||
```bash
|
||||
kubeadm reset
|
||||
```
|
||||
|
||||
If you wish to start over, simply run `kubeadm init` or `kubeadm join` with the
appropriate arguments.
|
||||
|
||||
More options and information can be found in the
[`kubeadm reset`](/docs/reference/setup-tools/kubeadm/kubeadm-reset/) command reference.
|
||||
|
||||
## Maintaining a cluster {#lifecycle}
|
||||
|
||||
Instructions for maintaining kubeadm clusters (e.g. upgrades, downgrades, etc.) can be found [here](/docs/tasks/administer-cluster/kubeadm).
|
||||
|
||||
## Explore other add-ons {#other-addons}
|
||||
|
||||
See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to explore other add-ons,
|
||||
including tools for logging, monitoring, network policy, visualization &
|
||||
control of your Kubernetes cluster.
|
||||
|
||||
## What's next {#whats-next}
|
||||
|
||||
* Verify that your cluster is running properly with [Sonobuoy](https://github.com/heptio/sonobuoy)
|
||||
* Learn about kubeadm's advanced usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/kubeadm)
|
||||
* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/user-guide/kubectl-overview/).
|
||||
* Configure log rotation. You can use **logrotate** for that. When using Docker, you can specify log rotation options for Docker daemon, for example `--log-driver=json-file --log-opt=max-size=10m --log-opt=max-file=5`. See [Configure and troubleshoot the Docker daemon](https://docs.docker.com/engine/admin/) for more details.
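As an illustration of the Docker flags above, the same settings can also be written to `/etc/docker/daemon.json` (a sketch; the values simply mirror the flags mentioned, and the restart step assumes a systemd-managed Docker):

```bash
cat << EOF > /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}
EOF
systemctl restart docker
```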
|
||||
|
||||
## Feedback {#feedback}
|
||||
|
||||
* For bugs, visit [kubeadm Github issue tracker](https://github.com/kubernetes/kubeadm/issues)
|
||||
* For support, visit kubeadm Slack Channel:
|
||||
[#kubeadm](https://kubernetes.slack.com/messages/kubeadm/)
|
||||
* General SIG Cluster Lifecycle Development Slack Channel:
|
||||
[#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
|
||||
* SIG Cluster Lifecycle [SIG information](#TODO)
|
||||
* SIG Cluster Lifecycle Mailing List:
|
||||
[kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
|
||||
|
||||
## Version skew policy {#version-skew-policy}
|
||||
|
||||
The kubeadm CLI tool of version vX.Y may deploy clusters with a control plane of version vX.Y or vX.(Y-1).
|
||||
kubeadm CLI vX.Y can also upgrade an existing kubeadm-created cluster of version vX.(Y-1).
|
||||
|
||||
Because we can't see into the future, kubeadm CLI vX.Y may or may not be able to deploy vX.(Y+1) clusters.
|
||||
|
||||
Example: kubeadm v1.8 can deploy both v1.7 and v1.8 clusters and upgrade v1.7 kubeadm-created clusters to
|
||||
v1.8.
|
||||
|
||||
Please also check our [installation guide](/docs/setup/independent/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl)
|
||||
for more information on the version skew between kubelets and the control plane.
|
||||
|
||||
## kubeadm works on multiple platforms {#multi-platform}
|
||||
|
||||
kubeadm deb/rpm packages and binaries are built for amd64, arm (32-bit), arm64, ppc64le, and s390x
|
||||
following the [multi-platform
|
||||
proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multi-platform.md).
|
||||
|
||||
Only some of the network providers offer solutions for all platforms. Please consult the list of
|
||||
network providers above or the documentation from each provider to figure out whether the provider
|
||||
supports your chosen platform.
|
||||
|
||||
## Limitations {#limitations}
|
||||
|
||||
Please note: kubeadm is a work in progress and these limitations will be
|
||||
addressed in due course.
|
||||
|
||||
1. The cluster created here has a single master, with a single etcd database
|
||||
running on it. This means that if the master fails, your cluster may lose
|
||||
data and may need to be recreated from scratch. Adding HA support
|
||||
(multiple etcd servers, multiple API servers, etc) to kubeadm is
|
||||
still a work-in-progress.
|
||||
|
||||
Workaround: regularly
|
||||
[back up etcd](https://coreos.com/etcd/docs/latest/admin_guide.html). The
|
||||
etcd data directory configured by kubeadm is at `/var/lib/etcd` on the master.
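One possible way to take such a backup on a kubeadm-provisioned master is sketched below. The image tag, client certificate paths, and backup directory are assumptions; adapt them to your cluster.

```bash
# Snapshot the local etcd member using the v3 etcdctl API (sketch only).
mkdir -p /var/lib/etcd-backup
docker run --rm \
  --net host \
  -e ETCDCTL_API=3 \
  -v /etc/kubernetes:/etc/kubernetes \
  -v /var/lib/etcd-backup:/backup \
  quay.io/coreos/etcd:v3.2.18 etcdctl \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key /etc/kubernetes/pki/etcd/healthcheck-client.key \
  --endpoints https://127.0.0.1:2379 \
  snapshot save /backup/etcd-snapshot.db
```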
|
||||
|
||||
## Troubleshooting {#troubleshooting}
|
||||
|
||||
If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/independent/troubleshooting-kubeadm/).
|
||||
|
||||
|
||||
|
||||
|
|
@ -1,546 +0,0 @@
|
|||
---
|
||||
title: Creating Highly Available Clusters with kubeadm
|
||||
content_template: templates/task
|
||||
weight: 50
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
This page explains two different approaches to setting up a highly available Kubernetes
|
||||
cluster using kubeadm:
|
||||
|
||||
- With stacked masters. This approach requires less infrastructure. etcd members
|
||||
and control plane nodes are co-located.
|
||||
- With an external etcd cluster. This approach requires more infrastructure. The
|
||||
control plane nodes and etcd members are separated.
|
||||
|
||||
Your clusters must run Kubernetes version 1.12 or later. You should also be aware that
|
||||
setting up HA clusters with kubeadm is still experimental. You might encounter issues
|
||||
with upgrading your clusters, for example. We encourage you to try either approach,
|
||||
and provide feedback.
|
||||
|
||||
{{< caution >}}
|
||||
This page does not address running your cluster on a cloud provider.
|
||||
In a cloud environment, neither approach documented here works with Service objects
|
||||
of type LoadBalancer, or with dynamic PersistentVolumes.
|
||||
{{< /caution >}}
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture prerequisites %}}
|
||||
|
||||
For both methods you need this infrastructure:
|
||||
|
||||
- Three machines that meet [kubeadm's minimum
|
||||
requirements](/docs/setup/independent/install-kubeadm/#before-you-begin) for
|
||||
the masters
|
||||
- Three machines that meet [kubeadm's minimum
|
||||
requirements](/docs/setup/independent/install-kubeadm/#before-you-begin) for
|
||||
the workers
|
||||
- Full network connectivity between all machines in the cluster (public or
|
||||
private network is fine)
|
||||
- SSH access from one device to all nodes in the system
|
||||
- sudo privileges on all machines
|
||||
|
||||
For the external etcd cluster only, you also need:
|
||||
|
||||
- Three additional machines for etcd members
|
||||
|
||||
{{< note >}}
|
||||
The following examples run Calico as the Pod networking provider. If
|
||||
you run another networking provider, make sure to replace any default values as
|
||||
needed.
|
||||
{{< /note >}}
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture steps %}}
|
||||
|
||||
## First steps for both methods
|
||||
|
||||
{{< note >}}
|
||||
All commands in this guide on any control plane or etcd node should be
|
||||
run as root.
|
||||
{{< /note >}}
|
||||
|
||||
- Find your pod CIDR. For details, see [the CNI network
|
||||
documentation](/docs/setup/independent/create-cluster-kubeadm/#pod-network).
|
||||
The example uses Calico, so the pod CIDR is `192.168.0.0/16`.
|
||||
|
||||
### Configure SSH
|
||||
|
||||
1. Enable ssh-agent on your main device that has access to all other nodes in
|
||||
the system:
|
||||
|
||||
```
|
||||
eval $(ssh-agent)
|
||||
```
|
||||
|
||||
1. Add your SSH identity to the session:
|
||||
|
||||
```
|
||||
ssh-add ~/.ssh/path_to_private_key
|
||||
```
|
||||
|
||||
1. SSH between nodes to check that the connection is working correctly.
|
||||
|
||||
- When you SSH to any node, make sure to add the `-A` flag:
|
||||
|
||||
```
|
||||
ssh -A 10.0.0.7
|
||||
```
|
||||
|
||||
- When using sudo on any node, make sure to preserve the environment so SSH
|
||||
forwarding works:
|
||||
|
||||
```
|
||||
sudo -E -s
|
||||
```
|
||||
|
||||
### Create load balancer for kube-apiserver
|
||||
|
||||
{{< note >}}
|
||||
There are many configurations for load balancers. The following
|
||||
example is only one option. Your cluster requirements may need a
|
||||
different configuration.
|
||||
{{< /note >}}
|
||||
|
||||
1. Create a kube-apiserver load balancer with a name that resolves to DNS.
|
||||
|
||||
- In a cloud environment you should place your control plane nodes behind a TCP
|
||||
forwarding load balancer. This load balancer distributes traffic to all
|
||||
healthy control plane nodes in its target list. The health check for
|
||||
an apiserver is a TCP check on the port the kube-apiserver listens on
|
||||
(default value `:6443`).
|
||||
|
||||
- It is not recommended to use an IP address directly in a cloud environment.
|
||||
|
||||
- The load balancer must be able to communicate with all control plane nodes
|
||||
on the apiserver port. It must also allow incoming traffic on its
|
||||
listening port.
|
||||
|
||||
1. Add the first control plane nodes to the load balancer and test the
|
||||
connection:
|
||||
|
||||
```sh
|
||||
nc -v LOAD_BALANCER_IP PORT
|
||||
```
|
||||
|
||||
- A connection refused error is expected because the apiserver is not yet
|
||||
running. A timeout, however, means the load balancer cannot communicate
|
||||
with the control plane node. If a timeout occurs, reconfigure the load
|
||||
balancer to communicate with the control plane node.
|
||||
|
||||
1. Add the remaining control plane nodes to the load balancer target group.
|
||||
|
||||
## Stacked control plane nodes
|
||||
|
||||
### Bootstrap the first stacked control plane node
|
||||
|
||||
{{< note >}}
|
||||
Optionally replace `stable` with a different version of Kubernetes, for example `v1.12.0`.
|
||||
{{< /note >}}
|
||||
|
||||
1. Create a `kubeadm-config.yaml` template file:
|
||||
|
||||
apiVersion: kubeadm.k8s.io/v1alpha3
|
||||
kind: ClusterConfiguration
|
||||
kubernetesVersion: stable
|
||||
apiServerCertSANs:
|
||||
- "LOAD_BALANCER_DNS"
|
||||
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
|
||||
etcd:
|
||||
local:
|
||||
extraArgs:
|
||||
listen-client-urls: "https://127.0.0.1:2379,https://CP0_IP:2379"
|
||||
advertise-client-urls: "https://CP0_IP:2379"
|
||||
listen-peer-urls: "https://CP0_IP:2380"
|
||||
initial-advertise-peer-urls: "https://CP0_IP:2380"
|
||||
initial-cluster: "CP0_HOSTNAME=https://CP0_IP:2380"
|
||||
serverCertSANs:
|
||||
- CP0_HOSTNAME
|
||||
- CP0_IP
|
||||
peerCertSANs:
|
||||
- CP0_HOSTNAME
|
||||
- CP0_IP
|
||||
networking:
|
||||
# This CIDR is a Calico default. Substitute or remove for your CNI provider.
|
||||
podSubnet: "192.168.0.0/16"
|
||||
|
||||
1. Replace the following variables in the template with the appropriate
|
||||
values for your cluster:
|
||||
|
||||
* `LOAD_BALANCER_DNS`
|
||||
* `LOAD_BALANCER_PORT`
|
||||
* `CP0_HOSTNAME`
|
||||
* `CP0_IP`
|
||||
|
||||
1. Run `kubeadm init --config kubeadm-config.yaml`
|
||||
|
||||
### Copy required files to other control plane nodes
|
||||
|
||||
The following certificates and other required files were created when you ran `kubeadm init`.
|
||||
Copy these files to your other control plane nodes:
|
||||
|
||||
- `/etc/kubernetes/pki/ca.crt`
|
||||
- `/etc/kubernetes/pki/ca.key`
|
||||
- `/etc/kubernetes/pki/sa.key`
|
||||
- `/etc/kubernetes/pki/sa.pub`
|
||||
- `/etc/kubernetes/pki/front-proxy-ca.crt`
|
||||
- `/etc/kubernetes/pki/front-proxy-ca.key`
|
||||
- `/etc/kubernetes/pki/etcd/ca.crt`
|
||||
- `/etc/kubernetes/pki/etcd/ca.key`
|
||||
|
||||
Copy the admin kubeconfig to the other control plane nodes:
|
||||
|
||||
- `/etc/kubernetes/admin.conf`
|
||||
|
||||
In the following example, replace
|
||||
`CONTROL_PLANE_IPS` with the IP addresses of the other control plane nodes.
|
||||
|
||||
```sh
|
||||
USER=ubuntu # customizable
|
||||
CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
|
||||
for host in ${CONTROL_PLANE_IPS}; do
|
||||
scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
|
||||
scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
|
||||
scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
|
||||
scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
|
||||
scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
|
||||
scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
|
||||
scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
|
||||
scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
|
||||
scp /etc/kubernetes/admin.conf "${USER}"@$host:
|
||||
done
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
Remember that your config may differ from this example.
|
||||
{{< /note >}}
|
||||
|
||||
### Add the second stacked control plane node
|
||||
|
||||
1. Create a second, different `kubeadm-config.yaml` template file:
|
||||
|
||||
apiVersion: kubeadm.k8s.io/v1alpha3
|
||||
kind: ClusterConfiguration
|
||||
kubernetesVersion: stable
|
||||
apiServerCertSANs:
|
||||
- "LOAD_BALANCER_DNS"
|
||||
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
|
||||
etcd:
|
||||
local:
|
||||
extraArgs:
|
||||
listen-client-urls: "https://127.0.0.1:2379,https://CP1_IP:2379"
|
||||
advertise-client-urls: "https://CP1_IP:2379"
|
||||
listen-peer-urls: "https://CP1_IP:2380"
|
||||
initial-advertise-peer-urls: "https://CP1_IP:2380"
|
||||
initial-cluster: "CP0_HOSTNAME=https://CP0_IP:2380,CP1_HOSTNAME=https://CP1_IP:2380"
|
||||
initial-cluster-state: existing
|
||||
serverCertSANs:
|
||||
- CP1_HOSTNAME
|
||||
- CP1_IP
|
||||
peerCertSANs:
|
||||
- CP1_HOSTNAME
|
||||
- CP1_IP
|
||||
networking:
|
||||
# This CIDR is a calico default. Substitute or remove for your CNI provider.
|
||||
podSubnet: "192.168.0.0/16"
|
||||
|
||||
1. Replace the following variables in the template with the appropriate values for your cluster:
|
||||
|
||||
- `LOAD_BALANCER_DNS`
|
||||
- `LOAD_BALANCER_PORT`
|
||||
- `CP0_HOSTNAME`
|
||||
- `CP0_IP`
|
||||
- `CP1_HOSTNAME`
|
||||
- `CP1_IP`
|
||||
|
||||
1. Move the copied files to the correct locations:
|
||||
|
||||
```sh
|
||||
USER=ubuntu # customizable
|
||||
mkdir -p /etc/kubernetes/pki/etcd
|
||||
mv /home/${USER}/ca.crt /etc/kubernetes/pki/
|
||||
mv /home/${USER}/ca.key /etc/kubernetes/pki/
|
||||
mv /home/${USER}/sa.pub /etc/kubernetes/pki/
|
||||
mv /home/${USER}/sa.key /etc/kubernetes/pki/
|
||||
mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
|
||||
mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
|
||||
mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
|
||||
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
|
||||
mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf
|
||||
```
|
||||
|
||||
1. Run the kubeadm phase commands to bootstrap the kubelet:
|
||||
|
||||
```sh
|
||||
kubeadm alpha phase certs all --config kubeadm-config.yaml
|
||||
kubeadm alpha phase kubelet config write-to-disk --config kubeadm-config.yaml
|
||||
kubeadm alpha phase kubelet write-env-file --config kubeadm-config.yaml
|
||||
kubeadm alpha phase kubeconfig kubelet --config kubeadm-config.yaml
|
||||
systemctl start kubelet
|
||||
```
|
||||
|
||||
1. Run the commands to add the node to the etcd cluster:
|
||||
|
||||
```sh
|
||||
export CP0_IP=10.0.0.7
|
||||
export CP0_HOSTNAME=cp0
|
||||
export CP1_IP=10.0.0.8
|
||||
export CP1_HOSTNAME=cp1
|
||||
|
||||
export KUBECONFIG=/etc/kubernetes/admin.conf
|
||||
kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP1_HOSTNAME} https://${CP1_IP}:2380
|
||||
kubeadm alpha phase etcd local --config kubeadm-config.yaml
|
||||
```
|
||||
|
||||
- This command causes the etcd cluster to become unavailable for a
|
||||
brief period, after the node is added to the running cluster, and before the
|
||||
new node is joined to the etcd cluster.
|
||||
|
||||
1. Deploy the control plane components and mark the node as a master:
|
||||
|
||||
```sh
|
||||
kubeadm alpha phase kubeconfig all --config kubeadm-config.yaml
|
||||
kubeadm alpha phase controlplane all --config kubeadm-config.yaml
|
||||
kubeadm alpha phase mark-master --config kubeadm-config.yaml
|
||||
```
|
||||
|
||||
### Add the third stacked control plane node
|
||||
|
||||
1. Create a third, different `kubeadm-config.yaml` template file:
|
||||
|
||||
apiVersion: kubeadm.k8s.io/v1alpha3
|
||||
kind: ClusterConfiguration
|
||||
kubernetesVersion: stable
|
||||
apiServerCertSANs:
|
||||
- "LOAD_BALANCER_DNS"
|
||||
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
|
||||
etcd:
|
||||
local:
|
||||
extraArgs:
|
||||
listen-client-urls: "https://127.0.0.1:2379,https://CP2_IP:2379"
|
||||
advertise-client-urls: "https://CP2_IP:2379"
|
||||
listen-peer-urls: "https://CP2_IP:2380"
|
||||
initial-advertise-peer-urls: "https://CP2_IP:2380"
|
||||
initial-cluster: "CP0_HOSTNAME=https://CP0_IP:2380,CP1_HOSTNAME=https://CP1_IP:2380,CP2_HOSTNAME=https://CP2_IP:2380"
|
||||
initial-cluster-state: existing
|
||||
serverCertSANs:
|
||||
- CP2_HOSTNAME
|
||||
- CP2_IP
|
||||
peerCertSANs:
|
||||
- CP2_HOSTNAME
|
||||
- CP2_IP
|
||||
networking:
|
||||
# This CIDR is a calico default. Substitute or remove for your CNI provider.
|
||||
podSubnet: "192.168.0.0/16"
|
||||
|
||||
1. Replace the following variables in the template with the appropriate values for your cluster:
|
||||
|
||||
- `LOAD_BALANCER_DNS`
|
||||
- `LOAD_BALANCER_PORT`
|
||||
- `CP0_HOSTNAME`
|
||||
- `CP0_IP`
|
||||
- `CP1_HOSTNAME`
|
||||
- `CP1_IP`
|
||||
- `CP2_HOSTNAME`
|
||||
- `CP2_IP`
|
||||
|
||||
1. Move the copied files to the correct locations:
|
||||
|
||||
```sh
|
||||
USER=ubuntu # customizable
|
||||
mkdir -p /etc/kubernetes/pki/etcd
|
||||
mv /home/${USER}/ca.crt /etc/kubernetes/pki/
|
||||
mv /home/${USER}/ca.key /etc/kubernetes/pki/
|
||||
mv /home/${USER}/sa.pub /etc/kubernetes/pki/
|
||||
mv /home/${USER}/sa.key /etc/kubernetes/pki/
|
||||
mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
|
||||
mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
|
||||
mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
|
||||
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
|
||||
mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf
|
||||
```
|
||||
|
||||
1. Run the kubeadm phase commands to bootstrap the kubelet:
|
||||
|
||||
```sh
|
||||
kubeadm alpha phase certs all --config kubeadm-config.yaml
|
||||
kubeadm alpha phase kubelet config write-to-disk --config kubeadm-config.yaml
|
||||
kubeadm alpha phase kubelet write-env-file --config kubeadm-config.yaml
|
||||
kubeadm alpha phase kubeconfig kubelet --config kubeadm-config.yaml
|
||||
systemctl start kubelet
|
||||
```
|
||||
|
||||
1. Run the commands to add the node to the etcd cluster:
|
||||
|
||||
```sh
|
||||
export CP0_IP=10.0.0.7
|
||||
export CP0_HOSTNAME=cp0
|
||||
export CP2_IP=10.0.0.9
|
||||
export CP2_HOSTNAME=cp2
|
||||
|
||||
export KUBECONFIG=/etc/kubernetes/admin.conf
|
||||
kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP2_HOSTNAME} https://${CP2_IP}:2380
|
||||
kubeadm alpha phase etcd local --config kubeadm-config.yaml
|
||||
```
|
||||
|
||||
1. Deploy the control plane components and mark the node as a master:
|
||||
|
||||
```sh
|
||||
kubeadm alpha phase kubeconfig all --config kubeadm-config.yaml
|
||||
kubeadm alpha phase controlplane all --config kubeadm-config.yaml
|
||||
kubeadm alpha phase mark-master --config kubeadm-config.yaml
|
||||
```
|
||||
|
||||
## External etcd
|
||||
|
||||
### Set up the cluster
|
||||
|
||||
- Follow [these instructions](/docs/setup/independent/setup-ha-etcd-with-kubeadm/)
|
||||
to set up the etcd cluster.
|
||||
|
||||
#### Copy required files from an etcd node to all control plane nodes
|
||||
|
||||
In the following example, replace `USER` and `CONTROL_PLANE_HOSTS` values with values
|
||||
for your environment.
|
||||
|
||||
```sh
|
||||
# Make a list of required etcd certificate files
|
||||
cat << EOF > etcd-pki-files.txt
|
||||
/etc/kubernetes/pki/etcd/ca.crt
|
||||
/etc/kubernetes/pki/apiserver-etcd-client.crt
|
||||
/etc/kubernetes/pki/apiserver-etcd-client.key
|
||||
EOF
|
||||
|
||||
# create the archive
|
||||
tar -czf etcd-pki.tar.gz -T etcd-pki-files.txt
|
||||
|
||||
# copy the archive to the control plane nodes
|
||||
USER=ubuntu
|
||||
CONTROL_PLANE_HOSTS="10.0.0.7 10.0.0.8 10.0.0.9"
|
||||
for host in $CONTROL_PLANE_HOSTS; do
|
||||
scp etcd-pki.tar.gz "${USER}"@$host:
|
||||
done
|
||||
```
|
||||
|
||||
### Set up the first control plane node
|
||||
|
||||
1. Extract the etcd certificates
|
||||
|
||||
mkdir -p /etc/kubernetes/pki
|
||||
tar -xzf etcd-pki.tar.gz -C /etc/kubernetes/pki --strip-components=3
|
||||
|
||||
1. Create a `kubeadm-config.yaml`:
|
||||
|
||||
{{< note >}}
|
||||
Optionally replace `stable` with a different version of Kubernetes, for example `v1.11.3`.
|
||||
{{< /note >}}
|
||||
|
||||
apiVersion: kubeadm.k8s.io/v1alpha3
|
||||
kind: ClusterConfiguration
|
||||
kubernetesVersion: stable
|
||||
apiServerCertSANs:
|
||||
- "LOAD_BALANCER_DNS"
|
||||
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
|
||||
etcd:
|
||||
external:
|
||||
endpoints:
|
||||
- https://ETCD_0_IP:2379
|
||||
- https://ETCD_1_IP:2379
|
||||
- https://ETCD_2_IP:2379
|
||||
caFile: /etc/kubernetes/pki/etcd/ca.crt
|
||||
certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
|
||||
keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
|
||||
networking:
|
||||
# This CIDR is a calico default. Substitute or remove for your CNI provider.
|
||||
podSubnet: "192.168.0.0/16"
|
||||
|
||||
1. Replace the following variables in the template with the appropriate values for your cluster:
|
||||
|
||||
- `LOAD_BALANCER_DNS`
|
||||
- `LOAD_BALANCER_PORT`
|
||||
- `ETCD_0_IP`
|
||||
- `ETCD_1_IP`
|
||||
- `ETCD_2_IP`
|
||||
|
||||
1. Run `kubeadm init --config kubeadm-config.yaml`
|
||||
1. Copy the output join command.
|
||||
|
||||
### Copy required files to the correct locations
|
||||
|
||||
The following pki files were created during the `kubeadm init` step and must be shared with
|
||||
all other control plane nodes.
|
||||
|
||||
- `/etc/kubernetes/pki/ca.crt`
|
||||
- `/etc/kubernetes/pki/ca.key`
|
||||
- `/etc/kubernetes/pki/sa.key`
|
||||
- `/etc/kubernetes/pki/sa.pub`
|
||||
- `/etc/kubernetes/pki/front-proxy-ca.crt`
|
||||
- `/etc/kubernetes/pki/front-proxy-ca.key`
|
||||
|
||||
In the following example, replace the list of
|
||||
`CONTROL_PLANE_IPS` values with the IP addresses of the other control plane nodes.
|
||||
|
||||
```sh
|
||||
# make a list of required kubernetes certificate files
|
||||
cat << EOF > certificate_files.txt
|
||||
/etc/kubernetes/pki/ca.crt
|
||||
/etc/kubernetes/pki/ca.key
|
||||
/etc/kubernetes/pki/sa.key
|
||||
/etc/kubernetes/pki/sa.pub
|
||||
/etc/kubernetes/pki/front-proxy-ca.crt
|
||||
/etc/kubernetes/pki/front-proxy-ca.key
|
||||
EOF
|
||||
|
||||
# create the archive
|
||||
tar -czf control-plane-certificates.tar.gz -T certificate_files.txt
|
||||
|
||||
USER=ubuntu # customizable
|
||||
CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
|
||||
for host in ${CONTROL_PLANE_IPS}; do
|
||||
scp control-plane-certificates.tar.gz "${USER}"@$host:
|
||||
done
|
||||
```
|
||||
|
||||
### Set up the other control plane nodes
|
||||
|
||||
1. Extract the required certificates
|
||||
|
||||
mkdir -p /etc/kubernetes/pki
|
||||
tar -xzf etcd-pki.tar.gz -C /etc/kubernetes/pki --strip-components 3
|
||||
tar -xzf control-plane-certificates.tar.gz -C /etc/kubernetes/pki --strip-components 3
|
||||
|
||||
1. Verify the location of the copied files.
|
||||
Your `/etc/kubernetes` directory should look like this:
|
||||
|
||||
- `/etc/kubernetes/pki/apiserver-etcd-client.crt`
|
||||
- `/etc/kubernetes/pki/apiserver-etcd-client.key`
|
||||
- `/etc/kubernetes/pki/ca.crt`
|
||||
- `/etc/kubernetes/pki/ca.key`
|
||||
- `/etc/kubernetes/pki/front-proxy-ca.crt`
|
||||
- `/etc/kubernetes/pki/front-proxy-ca.key`
|
||||
- `/etc/kubernetes/pki/sa.key`
|
||||
- `/etc/kubernetes/pki/sa.pub`
|
||||
- `/etc/kubernetes/pki/etcd/ca.crt`
|
||||
|
||||
1. Run the copied `kubeadm join` command from above, adding the `--experimental-control-plane` flag.
    The final command will look something like this:
|
||||
|
||||
kubeadm join ha.k8s.example.com:6443 --token 5ynki1.3erp9i3yo7gqg1nv --discovery-token-ca-cert-hash sha256:a00055bd8c710a9906a3d91b87ea02976334e1247936ac061d867a0f014ecd81 --experimental-control-plane
|
||||
|
||||
## Common tasks after bootstrapping control plane
|
||||
|
||||
### Install a pod network
|
||||
|
||||
[Follow these instructions](/docs/setup/independent/create-cluster-kubeadm/#pod-network) to install
|
||||
the pod network. Make sure this corresponds to whichever pod CIDR you provided
|
||||
in the master configuration file.
|
||||
|
||||
### Install workers
|
||||
|
||||
Each worker node can now be joined to the cluster with the command returned from any of the
|
||||
`kubeadm init` commands.
|
||||
|
||||
{{% /capture %}}
|
|
@ -1,251 +0,0 @@
|
|||
---
|
||||
title: Installing kubeadm
|
||||
content_template: templates/task
|
||||
weight: 20
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
<img src="https://raw.githubusercontent.com/cncf/artwork/master/kubernetes/certified-kubernetes/versionless/color/certified-kubernetes-color.png" align="right" width="150px">This page shows how to install the `kubeadm` toolbox.
|
||||
For information on how to create a cluster with kubeadm once you have performed this installation process,
|
||||
see the [Using kubeadm to Create a Cluster](/docs/setup/independent/create-cluster-kubeadm/) page.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture prerequisites %}}
|
||||
|
||||
* One or more machines running one of:
|
||||
- Ubuntu 16.04+
|
||||
- Debian 9
|
||||
- CentOS 7
|
||||
- RHEL 7
|
||||
- Fedora 25/26 (best-effort)
|
||||
- HypriotOS v1.0.1+
|
||||
- Container Linux (tested with 1800.6.0)
|
||||
* 2 GB or more of RAM per machine (any less will leave little room for your apps)
|
||||
* 2 CPUs or more
|
||||
* Full network connectivity between all machines in the cluster (public or private network is fine)
|
||||
* Unique hostname, MAC address, and product_uuid for every node. See [here](#verify-the-mac-address-and-product-uuid-are-unique-for-every-node) for more details.
|
||||
* Certain ports are open on your machines. See [here](#check-required-ports) for more details.
|
||||
* Swap disabled. You **MUST** disable swap in order for the kubelet to work properly.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture steps %}}
|
||||
|
||||
## Verify the MAC address and product_uuid are unique for every node
|
||||
|
||||
* You can get the MAC address of the network interfaces using the command `ip link` or `ifconfig -a`
|
||||
* The product_uuid can be checked by using the command `sudo cat /sys/class/dmi/id/product_uuid`
|
||||
|
||||
It is very likely that hardware devices will have unique addresses, although some virtual machines may have
|
||||
identical values. Kubernetes uses these values to uniquely identify the nodes in the cluster.
|
||||
If these values are not unique to each node, the installation process
|
||||
may [fail](https://github.com/kubernetes/kubeadm/issues/31).
|
||||
|
||||
## Check network adapters
|
||||
|
||||
If you have more than one network adapter, and your Kubernetes components are not reachable on the default
|
||||
route, we recommend you add IP route(s) so Kubernetes cluster addresses go via the appropriate adapter.
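For example (the service CIDR, gateway, and interface below are placeholders; substitute values from your environment):

```bash
# Route the Kubernetes service subnet through the desired adapter.
ip route add 10.96.0.0/12 via 192.168.1.1 dev eth1
```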
|
||||
|
||||
## Check required ports
|
||||
|
||||
### Master node(s)
|
||||
|
||||
| Protocol | Direction | Port Range | Purpose | Used By |
|
||||
|----------|-----------|------------|-------------------------|---------------------------|
|
||||
| TCP | Inbound | 6443* | Kubernetes API server | All |
|
||||
| TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
|
||||
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
|
||||
| TCP | Inbound | 10251 | kube-scheduler | Self |
|
||||
| TCP | Inbound | 10252 | kube-controller-manager | Self |
|
||||
|
||||
### Worker node(s)
|
||||
|
||||
| Protocol | Direction | Port Range | Purpose | Used By |
|
||||
|----------|-----------|-------------|-----------------------|-------------------------|
|
||||
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
|
||||
| TCP | Inbound | 30000-32767 | NodePort Services** | All |
|
||||
|
||||
** Default port range for [NodePort Services](/docs/concepts/services-networking/service/).
|
||||
|
||||
Any port numbers marked with * are overridable, so you will need to ensure any
|
||||
custom ports you provide are also open.
|
||||
|
||||
Although etcd ports are included in master nodes, you can also host your own
|
||||
etcd cluster externally or on custom ports.
|
||||
|
||||
The pod network plugin you use (see below) may also require certain ports to be
|
||||
open. Since this differs with each pod network plugin, please see the
|
||||
documentation for the plugins about what port(s) those need.
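If you want to verify that a required port is reachable from another machine, one simple option is a TCP check with netcat (placeholder address shown). A "connection refused" just means nothing is listening there yet, while a timeout usually points at a firewall:

```bash
# Check the Kubernetes API server port on the master from a worker machine.
nc -v <master-ip> 6443
```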
|
||||
|
||||
## Installing runtime
|
||||
|
||||
Since v1.6.0, Kubernetes has enabled the use of CRI, Container Runtime Interface, by default.
|
||||
The container runtime used by default is Docker, which is enabled through the built-in
|
||||
`dockershim` CRI implementation inside of the `kubelet`.
|
||||
|
||||
Other CRI-based runtimes include:
|
||||
|
||||
- [cri-containerd](https://github.com/containerd/cri-containerd)
|
||||
- [cri-o](https://github.com/kubernetes-incubator/cri-o)
|
||||
- [frakti](https://github.com/kubernetes/frakti)
|
||||
- [rkt](https://github.com/kubernetes-incubator/rktlet)
|
||||
|
||||
Refer to the [CRI installation instructions](/docs/setup/cri.md) for more information.
|
||||
|
||||
## Installing kubeadm, kubelet and kubectl
|
||||
|
||||
You will install these packages on all of your machines:
|
||||
|
||||
* `kubeadm`: the command to bootstrap the cluster.
|
||||
|
||||
* `kubelet`: the component that runs on all of the machines in your cluster
|
||||
and does things like starting pods and containers.
|
||||
|
||||
* `kubectl`: the command line util to talk to your cluster.
|
||||
|
||||
kubeadm **will not** install or manage `kubelet` or `kubectl` for you, so you will
need to ensure they match the version of the Kubernetes control plane you want
|
||||
kubeadm to install for you. If you do not, there is a risk of a version skew occurring that
|
||||
can lead to unexpected, buggy behaviour. However, _one_ minor version skew between the
|
||||
kubelet and the control plane is supported, but the kubelet version may never exceed the API
|
||||
server version. For example, kubelets running 1.7.0 should be fully compatible with a 1.8.0 API server,
|
||||
but not vice versa.
|
||||
|
||||
{{< warning >}}
|
||||
These instructions exclude all Kubernetes packages from any system upgrades.
|
||||
This is because kubeadm and Kubernetes require
|
||||
[special attention to upgrade](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11/).
|
||||
{{</ warning >}}
|
||||
|
||||
For more information on version skews, please read our
|
||||
[version skew policy](/docs/setup/independent/create-cluster-kubeadm/#version-skew-policy).
|
||||
|
||||
{{< tabs name="k8s_install" >}}
|
||||
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
|
||||
```bash
|
||||
apt-get update && apt-get install -y apt-transport-https curl
|
||||
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
|
||||
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
|
||||
deb http://apt.kubernetes.io/ kubernetes-xenial main
|
||||
EOF
|
||||
apt-get update
|
||||
apt-get install -y kubelet kubeadm kubectl
|
||||
apt-mark hold kubelet kubeadm kubectl
|
||||
```
|
||||
{{% /tab %}}
|
||||
{{% tab name="CentOS, RHEL or Fedora" %}}
|
||||
```bash
|
||||
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
|
||||
[kubernetes]
|
||||
name=Kubernetes
|
||||
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
|
||||
enabled=1
|
||||
gpgcheck=1
|
||||
repo_gpgcheck=1
|
||||
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
|
||||
exclude=kube*
|
||||
EOF
|
||||
setenforce 0
|
||||
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
|
||||
systemctl enable kubelet && systemctl start kubelet
|
||||
```
|
||||
|
||||
**Note:**
|
||||
|
||||
- Disabling SELinux by running `setenforce 0` is required to allow containers to access the host filesystem, which is required by pod networks for example.
|
||||
You have to do this until SELinux support is improved in the kubelet.
|
||||
- Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure
|
||||
`net.bridge.bridge-nf-call-iptables` is set to 1 in your `sysctl` config, e.g.
|
||||
|
||||
```bash
|
||||
cat <<EOF > /etc/sysctl.d/k8s.conf
|
||||
net.bridge.bridge-nf-call-ip6tables = 1
|
||||
net.bridge.bridge-nf-call-iptables = 1
|
||||
EOF
|
||||
sysctl --system
|
||||
```
|
||||
{{% /tab %}}
|
||||
{{% tab name="Container Linux" %}}
|
||||
Install CNI plugins (required for most pod networks):
|
||||
|
||||
```bash
|
||||
CNI_VERSION="v0.6.0"
|
||||
mkdir -p /opt/cni/bin
|
||||
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-amd64-${CNI_VERSION}.tgz" | tar -C /opt/cni/bin -xz
|
||||
```
|
||||
|
||||
Install crictl (required for kubeadm / Kubelet Container Runtime Interface (CRI))
|
||||
|
||||
```bash
|
||||
CRICTL_VERSION="v1.11.1"
|
||||
mkdir -p /opt/bin
|
||||
curl -L "https://github.com/kubernetes-incubator/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz" | tar -C /opt/bin -xz
|
||||
```
|
||||
|
||||
Install `kubeadm`, `kubelet`, `kubectl` and add a `kubelet` systemd service:
|
||||
|
||||
```bash
|
||||
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
|
||||
|
||||
mkdir -p /opt/bin
|
||||
cd /opt/bin
|
||||
curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
|
||||
chmod +x {kubeadm,kubelet,kubectl}
|
||||
|
||||
curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/kubelet.service" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service
|
||||
mkdir -p /etc/systemd/system/kubelet.service.d
|
||||
curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/10-kubeadm.conf" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
|
||||
```
|
||||
|
||||
Enable and start `kubelet`:
|
||||
|
||||
```bash
|
||||
systemctl enable kubelet && systemctl start kubelet
|
||||
```
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
|
||||
The kubelet is now restarting every few seconds, as it waits in a crashloop for
|
||||
kubeadm to tell it what to do.
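If you want to see this for yourself, you can, for example, inspect the kubelet service and its logs:

```bash
systemctl status kubelet
journalctl -xeu kubelet
```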
|
||||
|
||||
## Configure cgroup driver used by kubelet on Master Node
|
||||
|
||||
When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet
|
||||
and set it in the `/var/lib/kubelet/kubeadm-flags.env` file during runtime.
|
||||
|
||||
If you are using a different CRI, you have to modify the file
|
||||
`/etc/default/kubelet` with your `cgroup-driver` value, like so:
|
||||
|
||||
```bash
|
||||
KUBELET_KUBEADM_EXTRA_ARGS=--cgroup-driver=<value>
|
||||
```
|
||||
|
||||
This file will be used by `kubeadm init` and `kubeadm join` to source extra
|
||||
user defined arguments for the kubelet.
|
||||
|
||||
Please note that you **only** have to do this if the cgroup driver of your CRI
is not `cgroupfs`, because `cgroupfs` is already the default value in the kubelet.
|
||||
|
||||
Restarting the kubelet is required:
|
||||
|
||||
```bash
|
||||
systemctl daemon-reload
|
||||
systemctl restart kubelet
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/independent/troubleshooting-kubeadm/).
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
|
||||
* [Using kubeadm to Create a Cluster](/docs/setup/independent/create-cluster-kubeadm/)
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
|
||||
|
|
@ -1,263 +0,0 @@
|
|||
---
|
||||
title: Set up a High Availability etcd cluster with kubeadm
|
||||
content_template: templates/task
|
||||
weight: 60
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
Kubeadm defaults to running a single member etcd cluster in a static pod managed
|
||||
by the kubelet on the control plane node. This is not a high availability setup
|
||||
as the etcd cluster contains only one member and cannot sustain any members
|
||||
becoming unavailable. This task walks through the process of creating a high
|
||||
availability etcd cluster of three members that can be used as an external etcd
|
||||
when using kubeadm to set up a kubernetes cluster.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture prerequisites %}}
|
||||
|
||||
* Three hosts that can talk to each other over ports 2379 and 2380. This
|
||||
document assumes these default ports. However, they are configurable through
|
||||
the kubeadm config file.
|
||||
* Each host must [have docker, kubelet, and kubeadm installed][toolbox].
|
||||
* Some infrastructure to copy files between hosts. For example `ssh` and `scp`
|
||||
can satisfy this requirement.
|
||||
|
||||
[toolbox]: /docs/setup/independent/install-kubeadm/
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture steps %}}
|
||||
|
||||
## Setting up the cluster
|
||||
|
||||
The general approach is to generate all certs on one node and only distribute
|
||||
the *necessary* files to the other nodes.
|
||||
|
||||
{{< note >}}
|
||||
kubeadm contains all the necessary cryptographic machinery to generate
|
||||
the certificates described below; no other cryptographic tooling is required for
|
||||
this example.
|
||||
{{< /note >}}
|
||||
|
||||
|
||||
1. Configure the kubelet to be a service manager for etcd.
|
||||
|
||||
    Running etcd is simpler than running Kubernetes, so you must override the
    kubeadm-provided kubelet unit file by creating a new one with a higher
    precedence.
|
||||
|
||||
```sh
|
||||
cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
|
||||
[Service]
|
||||
ExecStart=
|
||||
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true
|
||||
Restart=always
|
||||
EOF
|
||||
|
||||
systemctl daemon-reload
|
||||
systemctl restart kubelet
|
||||
```
|
||||
|
||||
1. Create configuration files for kubeadm.
|
||||
|
||||
Generate one kubeadm configuration file for each host that will have an etcd
|
||||
member running on it using the following script.
|
||||
|
||||
```sh
|
||||
# Update HOST0, HOST1, and HOST2 with the IPs or resolvable names of your hosts
|
||||
export HOST0=10.0.0.6
|
||||
export HOST1=10.0.0.7
|
||||
export HOST2=10.0.0.8
|
||||
|
||||
# Create temp directories to store files that will end up on other hosts.
|
||||
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/
|
||||
|
||||
ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2})
|
||||
NAMES=("infra0" "infra1" "infra2")
|
||||
|
||||
for i in "${!ETCDHOSTS[@]}"; do
|
||||
HOST=${ETCDHOSTS[$i]}
|
||||
NAME=${NAMES[$i]}
|
||||
cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
|
||||
apiVersion: "kubeadm.k8s.io/v1alpha3"
|
||||
kind: ClusterConfiguration
|
||||
etcd:
|
||||
local:
|
||||
serverCertSANs:
|
||||
- "${HOST}"
|
||||
peerCertSANs:
|
||||
- "${HOST}"
|
||||
extraArgs:
|
||||
initial-cluster: infra0=https://${ETCDHOSTS[0]}:2380,infra1=https://${ETCDHOSTS[1]}:2380,infra2=https://${ETCDHOSTS[2]}:2380
|
||||
initial-cluster-state: new
|
||||
name: ${NAME}
|
||||
listen-peer-urls: https://${HOST}:2380
|
||||
listen-client-urls: https://${HOST}:2379
|
||||
advertise-client-urls: https://${HOST}:2379
|
||||
initial-advertise-peer-urls: https://${HOST}:2380
|
||||
EOF
|
||||
done
|
||||
```
|
||||
|
||||
1. Generate the certificate authority
|
||||
|
||||
    If you already have a CA, then the only action required is copying the CA's `crt` and
    `key` files to `/etc/kubernetes/pki/etcd/ca.crt` and
    `/etc/kubernetes/pki/etcd/ca.key`. After those files have been copied,
    proceed to the next step, "Create certificates for each member".
|
||||
|
||||
If you do not already have a CA then run this command on `$HOST0` (where you
|
||||
generated the configuration files for kubeadm).
|
||||
|
||||
```
|
||||
kubeadm alpha phase certs etcd-ca
|
||||
```
|
||||
|
||||
This creates two files
|
||||
|
||||
- `/etc/kubernetes/pki/etcd/ca.crt`
|
||||
- `/etc/kubernetes/pki/etcd/ca.key`
|
||||
|
||||
1. Create certificates for each member
|
||||
|
||||
```sh
|
||||
kubeadm alpha phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
|
||||
kubeadm alpha phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
|
||||
kubeadm alpha phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
|
||||
kubeadm alpha phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
|
||||
cp -R /etc/kubernetes/pki /tmp/${HOST2}/
|
||||
# cleanup non-reusable certificates
|
||||
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
|
||||
|
||||
kubeadm alpha phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
|
||||
kubeadm alpha phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
|
||||
kubeadm alpha phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
|
||||
kubeadm alpha phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
|
||||
cp -R /etc/kubernetes/pki /tmp/${HOST1}/
|
||||
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
|
||||
|
||||
kubeadm alpha phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
|
||||
kubeadm alpha phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
|
||||
kubeadm alpha phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
|
||||
kubeadm alpha phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
|
||||
# No need to move the certs because they are for HOST0
|
||||
|
||||
# clean up certs that should not be copied off this host
|
||||
find /tmp/${HOST2} -name ca.key -type f -delete
|
||||
find /tmp/${HOST1} -name ca.key -type f -delete
|
||||
```
|
||||
|
||||
1. Copy certificates and kubeadm configs
|
||||
|
||||
The certificates have been generated and now they must be moved to their
|
||||
respective hosts.
|
||||
|
||||
```sh
|
||||
USER=ubuntu
|
||||
HOST=${HOST1}
|
||||
scp -r /tmp/${HOST}/* ${USER}@${HOST}:
|
||||
ssh ${USER}@${HOST}
|
||||
USER@HOST $ sudo -Es
|
||||
root@HOST $ chown -R root:root pki
|
||||
root@HOST $ mv pki /etc/kubernetes/
|
||||
```
|
||||
|
||||
1. Ensure all expected files exist
|
||||
|
||||
The complete list of required files on `$HOST0` is:
|
||||
|
||||
```
|
||||
/tmp/${HOST0}
|
||||
└── kubeadmcfg.yaml
|
||||
---
|
||||
/etc/kubernetes/pki
|
||||
├── apiserver-etcd-client.crt
|
||||
├── apiserver-etcd-client.key
|
||||
└── etcd
|
||||
├── ca.crt
|
||||
├── ca.key
|
||||
├── healthcheck-client.crt
|
||||
├── healthcheck-client.key
|
||||
├── peer.crt
|
||||
├── peer.key
|
||||
├── server.crt
|
||||
└── server.key
|
||||
```
|
||||
|
||||
On `$HOST1`:
|
||||
|
||||
```
|
||||
$HOME
|
||||
└── kubeadmcfg.yaml
|
||||
---
|
||||
/etc/kubernetes/pki
|
||||
├── apiserver-etcd-client.crt
|
||||
├── apiserver-etcd-client.key
|
||||
└── etcd
|
||||
├── ca.crt
|
||||
├── healthcheck-client.crt
|
||||
├── healthcheck-client.key
|
||||
├── peer.crt
|
||||
├── peer.key
|
||||
├── server.crt
|
||||
└── server.key
|
||||
```
|
||||
|
||||
On `$HOST2`
|
||||
|
||||
```
|
||||
$HOME
|
||||
└── kubeadmcfg.yaml
|
||||
---
|
||||
/etc/kubernetes/pki
|
||||
├── apiserver-etcd-client.crt
|
||||
├── apiserver-etcd-client.key
|
||||
└── etcd
|
||||
├── ca.crt
|
||||
├── healthcheck-client.crt
|
||||
├── healthcheck-client.key
|
||||
├── peer.crt
|
||||
├── peer.key
|
||||
├── server.crt
|
||||
└── server.key
|
||||
```
|
||||
|
||||
1. Create the static pod manifests
|
||||
|
||||
Now that the certificates and configs are in place it's time to create the
|
||||
manifests. On each host run the `kubeadm` command to generate a static manifest
|
||||
for etcd.
|
||||
|
||||
```sh
|
||||
root@HOST0 $ kubeadm alpha phase etcd local --config=/tmp/${HOST0}/kubeadmcfg.yaml
|
||||
root@HOST1 $ kubeadm alpha phase etcd local --config=/home/ubuntu/kubeadmcfg.yaml
|
||||
root@HOST2 $ kubeadm alpha phase etcd local --config=/home/ubuntu/kubeadmcfg.yaml
|
||||
```
|
||||
|
||||
1. Optional: Check the cluster health
|
||||
|
||||
```sh
|
||||
docker run --rm -it \
|
||||
--net host \
|
||||
-v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.18 etcdctl \
|
||||
--cert-file /etc/kubernetes/pki/etcd/peer.crt \
|
||||
--key-file /etc/kubernetes/pki/etcd/peer.key \
|
||||
--ca-file /etc/kubernetes/pki/etcd/ca.crt \
|
||||
--endpoints https://${HOST0}:2379 cluster-health
|
||||
...
|
||||
cluster is healthy
|
||||
```
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
|
||||
Once you have a working 3-member etcd cluster, you can continue setting up a
|
||||
highly available control plane using the [external etcd method with
|
||||
kubeadm](/docs/setup/independent/high-availability/).
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
|
@ -1,262 +0,0 @@
|
|||
---
|
||||
title: Troubleshooting kubeadm
|
||||
content_template: templates/concept
|
||||
weight: 70
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
As with any program, you might run into an error installing or running kubeadm.
|
||||
This page lists some common failure scenarios and provides steps that can help you understand and fix the problem.
|
||||
|
||||
If your problem is not listed below, please take the following steps:
|
||||
|
||||
- If you think your problem is a bug with kubeadm:
|
||||
- Go to [github.com/kubernetes/kubeadm](https://github.com/kubernetes/kubeadm/issues) and search for existing issues.
|
||||
- If no issue exists, please [open one](https://github.com/kubernetes/kubeadm/issues/new) and follow the issue template.
|
||||
|
||||
- If you are unsure about how kubeadm works, you can ask on [Slack](http://slack.k8s.io/) in #kubeadm, or open a question on [StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes). Please include
|
||||
relevant tags like `#kubernetes` and `#kubeadm` so folks can help you.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## `ebtables` or some similar executable not found during installation
|
||||
|
||||
If you see the following warnings while running `kubeadm init`
|
||||
|
||||
```sh
|
||||
[preflight] WARNING: ebtables not found in system path
|
||||
[preflight] WARNING: ethtool not found in system path
|
||||
```
|
||||
|
||||
Then you may be missing `ebtables`, `ethtool` or a similar executable on your node. You can install them with the following commands:
|
||||
|
||||
- For Ubuntu/Debian users, run `apt install ebtables ethtool`.
|
||||
- For CentOS/Fedora users, run `yum install ebtables ethtool`.
|
||||
|
||||
## kubeadm blocks waiting for control plane during installation
|
||||
|
||||
If you notice that `kubeadm init` hangs after printing out the following line:
|
||||
|
||||
```sh
|
||||
[apiclient] Created API client, waiting for the control plane to become ready
|
||||
```
|
||||
|
||||
This may be caused by a number of problems. The most common are:
|
||||
|
||||
- network connection problems. Check that your machine has full network connectivity before continuing.
|
||||
- the default cgroup driver configuration for the kubelet differs from that used by Docker.
|
||||
Check the system log file (e.g. `/var/log/message`) or examine the output from `journalctl -u kubelet`. If you see something like the following:
|
||||
|
||||
```shell
|
||||
error: failed to run Kubelet: failed to create kubelet:
|
||||
misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
|
||||
```
|
||||
|
||||
There are two common ways to fix the cgroup driver problem:
|
||||
|
||||
1. Install docker again following instructions
|
||||
[here](/docs/setup/independent/install-kubeadm/#installing-docker).
|
||||
1. Change the kubelet config to match the Docker cgroup driver manually, you can refer to
|
||||
[Configure cgroup driver used by kubelet on Master Node](/docs/setup/independent/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node)
|
||||
for detailed instructions.
|
||||
|
||||
- control plane Docker containers are crashlooping or hanging. You can check this by running `docker ps` and investigating each container by running `docker logs`.
|
||||
|
||||
## kubeadm blocks when removing managed containers
|
||||
|
||||
The following could happen if Docker halts and does not remove any Kubernetes-managed containers:
|
||||
|
||||
```bash
|
||||
sudo kubeadm reset
|
||||
[preflight] Running pre-flight checks
|
||||
[reset] Stopping the kubelet service
|
||||
[reset] Unmounting mounted directories in "/var/lib/kubelet"
|
||||
[reset] Removing kubernetes-managed containers
|
||||
(block)
|
||||
```
|
||||
|
||||
A possible solution is to restart the Docker service and then re-run `kubeadm reset`:
|
||||
|
||||
```bash
|
||||
sudo systemctl restart docker.service
|
||||
sudo kubeadm reset
|
||||
```
|
||||
|
||||
Inspecting the logs for docker may also be useful:
|
||||
|
||||
```sh
|
||||
journalctl -ul docker
|
||||
```
|
||||
|
||||
## Pods in `RunContainerError`, `CrashLoopBackOff` or `Error` state
|
||||
|
||||
Right after `kubeadm init` there should not be any pods in these states.
|
||||
|
||||
- If there are pods in one of these states _right after_ `kubeadm init`, please open an
|
||||
issue in the kubeadm repo. `coredns` (or `kube-dns`) should be in the `Pending` state
|
||||
until you have deployed the network solution.
|
||||
- If you see Pods in the `RunContainerError`, `CrashLoopBackOff` or `Error` state
  after deploying the network solution and nothing happens to `coredns` (or `kube-dns`),
  it's very likely that the Pod Network solution that you installed is somehow broken. You
  might have to grant it more RBAC privileges or use a newer version. Please file
  an issue in the Pod Network providers' issue tracker and get the issue triaged there.
|
||||
|
||||
## `coredns` (or `kube-dns`) is stuck in the `Pending` state
|
||||
|
||||
This is **expected** and part of the design. kubeadm is network provider-agnostic, so the admin
|
||||
should [install the pod network solution](/docs/concepts/cluster-administration/addons/)
|
||||
of choice. You have to install a Pod Network
before CoreDNS can be deployed fully. Hence the `Pending` state before the network is set up.
|
||||
|
||||
## `HostPort` services do not work
|
||||
|
||||
The `HostPort` and `HostIP` functionality is available depending on your Pod Network
|
||||
provider. Please contact the author of the Pod Network solution to find out whether
|
||||
`HostPort` and `HostIP` functionality are available.
|
||||
|
||||
Calico, Canal, and Flannel CNI providers are verified to support HostPort.
|
||||
|
||||
For more information, see the [CNI portmap documentation](https://github.com/containernetworking/plugins/blob/master/plugins/meta/portmap/README.md).
|
||||
|
||||
If your network provider does not support the portmap CNI plugin, you may need to use the [NodePort feature of
|
||||
services](/docs/concepts/services-networking/service/#nodeport) or use `HostNetwork=true`.
|
||||
|
||||
## Pods are not accessible via their Service IP
|
||||
|
||||
- Many network add-ons do not yet enable [hairpin mode](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#a-pod-cannot-reach-itself-via-service-ip)
|
||||
which allows pods to access themselves via their Service IP. This is an issue related to
|
||||
[CNI](https://github.com/containernetworking/cni/issues/476). Please contact the network
|
||||
add-on provider to get the latest status of their support for hairpin mode.
|
||||
|
||||
- If you are using VirtualBox (directly or via Vagrant), you will need to
|
||||
ensure that `hostname -i` returns a routable IP address. By default the first
|
||||
interface is connected to a non-routable host-only network. A workaround
|
||||
is to modify `/etc/hosts`, see this [Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11)
|
||||
for an example.
|
||||
|
||||
## TLS certificate errors
|
||||
|
||||
The following error indicates a possible certificate mismatch.
|
||||
|
||||
```none
|
||||
# kubectl get pods
|
||||
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
|
||||
```
|
||||
|
||||
- Verify that the `$HOME/.kube/config` file contains a valid certificate, and
|
||||
regenerate a certificate if necessary. The certificates in a kubeconfig file
|
||||
are base64 encoded. The `base64 -d` command can be used to decode the certificate
|
||||
and `openssl x509 -text -noout` can be used to view the certificate information (see the sketch after this list).
|
||||
- Another workaround is to overwrite the existing `kubeconfig` for the "admin" user:
|
||||
|
||||
```sh
|
||||
mv $HOME/.kube $HOME/.kube.bak
|
||||
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
|
||||
sudo chown $(id -u):$(id -g) $HOME/.kube/config
|
||||
```
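To inspect the client certificate embedded in the kubeconfig, as mentioned in the first bullet, here is a sketch; it assumes the certificate is stored inline as `client-certificate-data` rather than referenced as a file:

```sh
# Extract, decode, and print the client certificate from the current kubeconfig.
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 --decode \
  | openssl x509 -text -noout
```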
|
||||
|
||||
## Default NIC when using flannel as the pod network in Vagrant
|
||||
|
||||
The following error might indicate that something was wrong in the pod network:
|
||||
|
||||
```sh
|
||||
Error from server (NotFound): the server could not find the requested resource
|
||||
```
|
||||
|
||||
- If you're using flannel as the pod network inside Vagrant, then you will have to specify the default interface name for flannel.
|
||||
|
||||
Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed.
|
||||
|
||||
This may lead to problems with flannel, which defaults to the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this, pass the `--iface eth1` flag to flannel so that the second interface is chosen.
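A hypothetical sketch of appending that flag after the add-on has been deployed; the DaemonSet name, namespace, and container index all depend on the flannel manifest you used, so adjust them before running this:

```sh
# Append --iface=eth1 to the args of the (assumed) flannel DaemonSet container.
kubectl -n kube-system patch daemonset kube-flannel-ds --type json \
  -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--iface=eth1"}]'
```

Alternatively, add the flag to the flannel container arguments in the manifest itself before applying it.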
|
||||
|
||||
## Non-public IP used for containers
|
||||
|
||||
In some situations `kubectl logs` and `kubectl run` commands may return with the following errors in an otherwise functional cluster:
|
||||
|
||||
```sh
|
||||
Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc65b868-glc5m/mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host
|
||||
```
|
||||
|
||||
- This may be due to Kubernetes using an IP that cannot communicate with other IPs on what appears to be the same subnet, possibly by policy of the machine provider.
|
||||
- Digital Ocean assigns a public IP to `eth0` as well as a private one to be used internally as the anchor for its floating IP feature, yet `kubelet` will pick the latter as the node's `InternalIP` instead of the public one.
|
||||
|
||||
Use `ip addr show` to check for this scenario instead of `ifconfig` because `ifconfig` will not display the offending alias IP address. Alternatively, an API endpoint specific to Digital Ocean allows you to query the anchor IP from the droplet:
|
||||
|
||||
```sh
|
||||
curl http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
|
||||
```
|
||||
|
||||
The workaround is to tell `kubelet` which IP to use with the `--node-ip` flag. When using Digital Ocean, it can be the public one (assigned to `eth0`) or the private one (assigned to `eth1`) should you want to use the optional private network. The [`KubeletExtraArgs` section of the kubeadm `NodeRegistrationOptions` structure](https://github.com/kubernetes/kubernetes/blob/release-1.12/cmd/kubeadm/app/apis/kubeadm/v1alpha3/types.go#L163-L166) can be used for this.
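One common way to pass that flag to an already provisioned node is sketched below; the file path is an assumption that applies to deb-based installs (rpm-based systems typically use `/etc/sysconfig/kubelet`), and the IP is a placeholder:

```sh
# Assumed drop-in environment file read by the kubelet unit on deb-based kubeadm installs.
echo 'KUBELET_EXTRA_ARGS=--node-ip=<node-ip>' | sudo tee /etc/default/kubelet
```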
|
||||
|
||||
Then restart `kubelet`:
|
||||
|
||||
```sh
|
||||
systemctl daemon-reload
|
||||
systemctl restart kubelet
|
||||
```
|
||||
|
||||
## Services with externalTrafficPolicy=Local are not reachable
|
||||
|
||||
On nodes where the hostname for the kubelet is overridden using the `--hostname-override` option, kube-proxy will default to treating 127.0.0.1 as the node IP, which results in rejecting connections for Services configured for `externalTrafficPolicy=Local`. This situation can be verified by checking the output of `kubectl -n kube-system logs <kube-proxy pod name>`:
|
||||
|
||||
```sh
|
||||
W0507 22:33:10.372369 1 server.go:586] Failed to retrieve node info: nodes "ip-10-0-23-78" not found
|
||||
W0507 22:33:10.372474 1 proxier.go:463] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
|
||||
```
|
||||
|
||||
A workaround for this is to modify the kube-proxy DaemonSet in the following way:
|
||||
|
||||
```sh
|
||||
kubectl -n kube-system patch --type json daemonset kube-proxy -p "$(cat <<'EOF'
|
||||
[
|
||||
{
|
||||
"op": "add",
|
||||
"path": "/spec/template/spec/containers/0/env",
|
||||
"value": [
|
||||
{
|
||||
"name": "NODE_NAME",
|
||||
"valueFrom": {
|
||||
"fieldRef": {
|
||||
"apiVersion": "v1",
|
||||
"fieldPath": "spec.nodeName"
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"op": "add",
|
||||
"path": "/spec/template/spec/containers/0/command/-",
|
||||
"value": "--hostname-override=${NODE_NAME}"
|
||||
}
|
||||
]
|
||||
EOF
|
||||
)"
|
||||
|
||||
```
|
||||
|
||||
## `coredns` pods have `CrashLoopBackOff` or `Error` state
|
||||
|
||||
If you have nodes that are running SELinux with an older version of Docker, you might experience a scenario
|
||||
where the `coredns` pods are not starting. To solve that you can try one of the following options:
|
||||
|
||||
- Upgrade to a [newer version of Docker](/docs/setup/independent/install-kubeadm/#installing-docker).
|
||||
- [Disable SELinux](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-enabling_and_disabling_selinux-disabling_selinux).
|
||||
- Modify the `coredns` deployment to set `allowPrivilegeEscalation` to `true`:
|
||||
|
||||
```bash
|
||||
kubectl -n kube-system get deployment coredns -o yaml | \
|
||||
sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | \
|
||||
kubectl apply -f -
|
||||
```
|
||||
|
||||
{{< warning >}}
|
||||
Disabling SELinux or setting `allowPrivilegeEscalation` to `true` can compromise
|
||||
the security of your cluster.
|
||||
{{< /warning >}}
|
||||
|
||||
{{% /capture %}}
|
|
@ -1,120 +0,0 @@
|
|||
---
|
||||
title: Cloudstack
|
||||
content_template: templates/concept
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
[CloudStack](https://cloudstack.apache.org/) is software for building public and private clouds based on hardware virtualization principles (traditional IaaS). To deploy Kubernetes on CloudStack there are several possibilities depending on the Cloud being used and what images are made available. CloudStack also has a Vagrant plugin available, so Vagrant can be used to deploy Kubernetes either with the existing shell provisioner or with new Salt-based recipes.
|
||||
|
||||
[CoreOS](http://coreos.com) templates for CloudStack are built [nightly](http://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions.
|
||||
|
||||
This guide uses a single [Ansible playbook](https://github.com/apachecloudstack/k8s), which is completely automated and can deploy Kubernetes on a CloudStack-based cloud using CoreOS images. The playbook creates an SSH key pair, creates a security group and its associated rules, and finally starts CoreOS instances configured via cloud-init.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## Prerequisites
|
||||
|
||||
```shell
|
||||
sudo apt-get install -y python-pip libssl-dev
|
||||
sudo pip install cs
|
||||
sudo pip install sshpubkeys
|
||||
sudo apt-get install software-properties-common
|
||||
sudo apt-add-repository ppa:ansible/ansible
|
||||
sudo apt-get update
|
||||
sudo apt-get install ansible
|
||||
```
|
||||
|
||||
On the CloudStack server you also have to install libselinux-python:
|
||||
|
||||
```shell
|
||||
yum install libselinux-python
|
||||
```
|
||||
|
||||
[_cs_](https://github.com/exoscale/cs) is a python module for the CloudStack API.
|
||||
|
||||
Set your CloudStack endpoint, API keys and HTTP method used.
|
||||
|
||||
You can define them as environment variables: `CLOUDSTACK_ENDPOINT`, `CLOUDSTACK_KEY`, `CLOUDSTACK_SECRET` and `CLOUDSTACK_METHOD`.
|
||||
|
||||
Or create a `~/.cloudstack.ini` file:
|
||||
|
||||
```none
|
||||
[cloudstack]
|
||||
endpoint = <your cloudstack api endpoint>
|
||||
key = <your api access key>
|
||||
secret = <your api secret key>
|
||||
method = post
|
||||
```
|
||||
|
||||
We need to use the HTTP POST method to pass the _large_ userdata to the CoreOS instances.
|
||||
|
||||
### Clone the playbook
|
||||
|
||||
```shell
|
||||
git clone https://github.com/apachecloudstack/k8s
|
||||
cd kubernetes-cloudstack
|
||||
```
|
||||
|
||||
### Create a Kubernetes cluster
|
||||
|
||||
You simply need to run the playbook.
|
||||
|
||||
```shell
|
||||
ansible-playbook k8s.yml
|
||||
```
|
||||
|
||||
Some variables can be edited in the `k8s.yml` file.
|
||||
|
||||
```none
|
||||
vars:
|
||||
ssh_key: k8s
|
||||
k8s_num_nodes: 2
|
||||
k8s_security_group_name: k8s
|
||||
k8s_node_prefix: k8s2
|
||||
k8s_template: <templatename>
|
||||
k8s_instance_type: <serviceofferingname>
|
||||
```
|
||||
|
||||
This will start a Kubernetes master node and a number of compute nodes (by default 2).
|
||||
The `k8s_instance_type` and `k8s_template` values are specific to your cloud; edit them to match a template and an instance type (that is, a service offering) available in your CloudStack cloud.
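To discover valid values, the `cs` client installed earlier can query your cloud; this is only a sketch and the filter values depend on your CloudStack setup:

```shell
# List the service offerings and executable templates visible to your account.
cs listServiceOfferings
cs listTemplates templatefilter=executable
```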
|
||||
|
||||
Check the tasks and templates in `roles/k8s` if you want to modify anything.
|
||||
|
||||
Once the playbook has finished, it will print out the IP of the Kubernetes master:
|
||||
|
||||
```none
|
||||
TASK: [k8s | debug msg='k8s master IP is {{ k8s_master.default_ip }}'] ********
|
||||
```
|
||||
|
||||
SSH to it using the key that was created and using the _core_ user.
|
||||
|
||||
```shell
|
||||
ssh -i ~/.ssh/id_rsa_k8s core@<master IP>
|
||||
```
|
||||
|
||||
And you can list the machines in your cluster:
|
||||
|
||||
```shell
|
||||
fleetctl list-machines
|
||||
```
|
||||
|
||||
```none
|
||||
MACHINE IP METADATA
|
||||
a017c422... <node #1 IP> role=node
|
||||
ad13bf84... <master IP> role=master
|
||||
e9af8293... <node #2 IP> role=node
|
||||
```
|
||||
|
||||
## Support Level
|
||||
|
||||
|
||||
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
|
||||
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
|
||||
CloudStack | Ansible | CoreOS | flannel | [docs](/docs/setup/on-premises-vm/cloudstack/) | | Community ([@Guiques](https://github.com/ltupin/))
|
||||
|
||||
For support level information on all solutions, see the [Table of solutions](/docs/setup/pick-right-solution/#table-of-solutions) chart.
|
||||
|
||||
{{% /capture %}}
|
|
@ -1,23 +0,0 @@
|
|||
---
|
||||
title: Kubernetes on DC/OS
|
||||
content_template: templates/concept
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
Mesosphere provides an easy option to provision Kubernetes onto [DC/OS](https://mesosphere.com/product/), offering:
|
||||
|
||||
* Pure upstream Kubernetes
|
||||
* Single-click cluster provisioning
|
||||
* Highly available and secure by default
|
||||
* Kubernetes running alongside fast-data platforms (e.g. Akka, Cassandra, Kafka, Spark)
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## Official Mesosphere Guide
|
||||
|
||||
The canonical source of getting started on DC/OS is located in the [quickstart repo](https://github.com/mesosphere/dcos-kubernetes-quickstart).
|
||||
|
||||
{{% /capture %}}
|
|
@ -1,70 +0,0 @@
|
|||
---
|
||||
title: oVirt
|
||||
content_template: templates/concept
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
oVirt is a virtual datacenter manager that delivers powerful management of multiple virtual machines on multiple hosts. Using KVM and libvirt, oVirt can be installed on Fedora, CentOS, or Red Hat Enterprise Linux hosts to set up and manage your virtual data center.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## oVirt Cloud Provider Deployment
|
||||
|
||||
The oVirt cloud provider allows you to easily discover and automatically add new VM instances as nodes to your Kubernetes cluster.
|
||||
At the moment there are no community-supported or pre-loaded VM images including Kubernetes but it is possible to [import] or [install] Project Atomic (or Fedora) in a VM to [generate a template]. Any other distribution that includes Kubernetes may work as well.
|
||||
|
||||
It is mandatory to [install the ovirt-guest-agent] in the guests for the VM IP address and hostname to be reported to ovirt-engine and ultimately to Kubernetes.
|
||||
|
||||
Once the Kubernetes template is available it is possible to start instantiating VMs that can be discovered by the cloud provider.
|
||||
|
||||
[import]: https://ovedou.blogspot.it/2014/03/importing-glance-images-as-ovirt.html
|
||||
[install]: https://www.ovirt.org/documentation/quickstart/quickstart-guide/#create-virtual-machines
|
||||
[generate a template]: https://www.ovirt.org/documentation/quickstart/quickstart-guide/#using-templates
|
||||
[install the ovirt-guest-agent]: https://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-fedora/
|
||||
|
||||
## Using the oVirt Cloud Provider
|
||||
|
||||
The oVirt Cloud Provider requires access to the oVirt REST API to gather the proper information; the required credentials should be specified in the `ovirt-cloud.conf` file:
|
||||
|
||||
```none
|
||||
[connection]
|
||||
uri = https://localhost:8443/ovirt-engine/api
|
||||
username = admin@internal
|
||||
password = admin
|
||||
```
|
||||
|
||||
In the same file it is possible to specify (using the `filters` section) what search query to use to identify the VMs to be reported to Kubernetes:
|
||||
|
||||
```none
|
||||
[filters]
|
||||
# Search query used to find nodes
|
||||
vms = tag=kubernetes
|
||||
```
|
||||
|
||||
In the above example all the VMs tagged with the `kubernetes` label will be reported as nodes to Kubernetes.
|
||||
|
||||
The `ovirt-cloud.conf` file must then be passed to kube-controller-manager:
|
||||
|
||||
```shell
|
||||
kube-controller-manager ... --cloud-provider=ovirt --cloud-config=/path/to/ovirt-cloud.conf ...
|
||||
```
|
||||
|
||||
## oVirt Cloud Provider Screencast
|
||||
|
||||
This short screencast demonstrates how the oVirt Cloud Provider can be used to dynamically add VMs to your Kubernetes cluster.
|
||||
|
||||
[![Screencast](https://img.youtube.com/vi/JyyST4ZKne8/0.jpg)](https://www.youtube.com/watch?v=JyyST4ZKne8)
|
||||
|
||||
## Support Level
|
||||
|
||||
|
||||
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
|
||||
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
|
||||
oVirt | | | | [docs](/docs/setup/on-premises-vm/ovirt/) | | Community ([@simon3z](https://github.com/simon3z))
|
||||
|
||||
For support level information on all solutions, see the [Table of solutions](/docs/setup/pick-right-solution/#table-of-solutions) chart.
|
||||
|
||||
{{% /capture %}}
|
|
@ -1,20 +0,0 @@
|
|||
---
|
||||
reviewers:
|
||||
- colemickens
|
||||
- brendandburns
|
||||
title: Running Kubernetes on Alibaba Cloud
|
||||
---
|
||||
|
||||
## Alibaba Cloud Container Service
|
||||
|
||||
The [Alibaba Cloud Container Service](https://www.aliyun.com/product/containerservice) lets you run and manage Docker applications on a cluster of Alibaba Cloud ECS instances. It supports the popular open source container orchestrators: Docker Swarm and Kubernetes.
|
||||
|
||||
To simplify cluster deployment and management, use [Kubernetes Support for Alibaba Cloud Container Service](https://www.aliyun.com/solution/kubernetes/). You can get started quickly by following the [Kubernetes walk-through](https://help.aliyun.com/document_detail/53751.html), and there are some [tutorials for Kubernetes Support on Alibaba Cloud](https://yq.aliyun.com/teams/11/type_blog-cid_200-page_1) in Chinese.
|
||||
|
||||
To use custom binaries or open source Kubernetes, follow the instructions below.
|
||||
|
||||
## Custom Deployments
|
||||
|
||||
The source code for [Kubernetes with Alibaba Cloud provider implementation](https://github.com/AliyunContainerService/kubernetes) is open source and available on GitHub.
|
||||
|
||||
For more information, see "[Quick deployment of Kubernetes - VPC environment on Alibaba Cloud](https://www.alibabacloud.com/forum/read-830)" in English and [Chinese](https://yq.aliyun.com/articles/66474).
|
|
@ -1,89 +0,0 @@
|
|||
---
|
||||
title: Running Kubernetes on AWS EC2
|
||||
content_template: templates/task
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
This page describes how to install a Kubernetes cluster on AWS.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture prerequisites %}}
|
||||
|
||||
To create a Kubernetes cluster on AWS, you will need an Access Key ID and a Secret Access Key from AWS.
|
||||
|
||||
### Supported Production Grade Tools
|
||||
|
||||
* [conjure-up](/docs/getting-started-guides/ubuntu/) is an open-source installer for Kubernetes that creates Kubernetes clusters with native AWS integrations on Ubuntu.
|
||||
|
||||
* [Kubernetes Operations](https://github.com/kubernetes/kops) - Production Grade K8s Installation, Upgrades, and Management. Supports running Debian, Ubuntu, CentOS, and RHEL in AWS.
|
||||
|
||||
* [CoreOS Tectonic](https://coreos.com/tectonic/) includes the open-source [Tectonic Installer](https://github.com/coreos/tectonic-installer) that creates Kubernetes clusters with Container Linux nodes on AWS.
|
||||
|
||||
* CoreOS originated and the Kubernetes Incubator maintains [a CLI tool, kube-aws](https://github.com/kubernetes-incubator/kube-aws), that creates and manages Kubernetes clusters with [Container Linux](https://coreos.com/why/) nodes, using AWS tools: EC2, CloudFormation and Autoscaling.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture steps %}}
|
||||
|
||||
## Getting started with your cluster
|
||||
|
||||
### Command line administration tool: kubectl
|
||||
|
||||
The cluster startup script will leave you with a `kubernetes` directory on your workstation.
|
||||
Alternately, you can download the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases).
|
||||
|
||||
Next, add the appropriate binary folder to your `PATH` to access kubectl:
|
||||
|
||||
```shell
|
||||
# macOS
|
||||
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
|
||||
|
||||
# Linux
|
||||
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
|
||||
```
|
||||
|
||||
An up-to-date documentation page for this tool is available here: [kubectl manual](/docs/user-guide/kubectl/)
|
||||
|
||||
By default, `kubectl` will use the `kubeconfig` file generated during the cluster startup for authenticating against the API.
|
||||
For more information, please read [kubeconfig files](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
|
||||
|
||||
### Examples
|
||||
|
||||
See [a simple nginx example](/docs/tasks/run-application/run-stateless-application-deployment/) to try out your new cluster.
|
||||
|
||||
The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/)
|
||||
|
||||
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/)
|
||||
|
||||
## Scaling the cluster
|
||||
|
||||
Adding and removing nodes through `kubectl` is not supported. You can still scale the number of nodes manually through adjustments of the 'Desired' and 'Max' properties within the [Auto Scaling Group](http://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html), which was created during the installation.
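As a hedged sketch using the AWS CLI (the group name is a placeholder; look up the real one created by the installer first):

```shell
# Find the Auto Scaling Group created for the worker nodes, then raise its capacity.
aws autoscaling describe-auto-scaling-groups --query 'AutoScalingGroups[].AutoScalingGroupName'
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name <kubernetes-minion-group> \
  --desired-capacity 5 --max-size 5
```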
|
||||
|
||||
## Tearing down the cluster
|
||||
|
||||
Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the
|
||||
`kubernetes` directory:
|
||||
|
||||
```shell
|
||||
cluster/kube-down.sh
|
||||
```
|
||||
|
||||
## Support Level
|
||||
|
||||
|
||||
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
|
||||
-------------------- | ------------ | ------------- | ---------- | --------------------------------------------- | ---------| ----------------------------
|
||||
AWS | kops | Debian | k8s (VPC) | [docs](https://github.com/kubernetes/kops) | | Community ([@justinsb](https://github.com/justinsb))
|
||||
AWS | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/aws) | | Community
|
||||
AWS | Juju | Ubuntu | flannel, calico, canal | [docs](/docs/getting-started-guides/ubuntu) | 100% | Commercial, Community
|
||||
|
||||
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
|
||||
|
||||
## Further reading
|
||||
|
||||
Please see the [Kubernetes docs](/docs/) for more details on administering
|
||||
and using a Kubernetes cluster.
|
||||
|
||||
{{% /capture %}}
|
|
@ -1,39 +0,0 @@
|
|||
---
|
||||
reviewers:
|
||||
- colemickens
|
||||
- brendandburns
|
||||
title: Running Kubernetes on Azure
|
||||
---
|
||||
|
||||
## Azure Container Service
|
||||
|
||||
The [Azure Container Service](https://azure.microsoft.com/en-us/services/container-service/) offers simple
|
||||
deployments of one of three open source orchestrators: DC/OS, Swarm, and Kubernetes.
|
||||
|
||||
For an example of deploying a Kubernetes cluster onto Azure via the Azure Container Service:
|
||||
|
||||
**[Microsoft Azure Container Service - Kubernetes Walkthrough](https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes)**
|
||||
|
||||
## Custom Deployments: ACS-Engine
|
||||
|
||||
The core of the Azure Container Service is **open source** and available on GitHub for the community
|
||||
to use and contribute to: **[ACS-Engine](https://github.com/Azure/acs-engine)**.
|
||||
|
||||
ACS-Engine is a good choice if you need to make customizations to the deployment beyond what the Azure Container
|
||||
Service officially supports. These customizations include deploying into existing virtual networks, utilizing multiple
|
||||
agent pools, and more. Some community contributions to ACS-Engine may even become features of the Azure Container Service.
|
||||
|
||||
The input to ACS-Engine is similar to the ARM template syntax used to deploy a cluster directly with the Azure Container Service.
|
||||
The resulting output is an Azure Resource Manager Template that can be checked into source control and then used
|
||||
to deploy Kubernetes clusters into Azure.
|
||||
|
||||
You can get started quickly by following the **[ACS-Engine Kubernetes Walkthrough](https://github.com/Azure/acs-engine/blob/master/docs/kubernetes.md)**.
|
||||
|
||||
## CoreOS Tectonic for Azure
|
||||
|
||||
The CoreOS Tectonic Installer for Azure is **open source** and available on GitHub for the community to use and contribute to: **[Tectonic Installer](https://github.com/coreos/tectonic-installer)**.
|
||||
|
||||
Tectonic Installer is a good choice when you need to make cluster customizations as it is built on [Hashicorp's Terraform](https://www.terraform.io/docs/providers/azurerm/) Azure Resource Manager (ARM) provider. This enables users to customize or integrate using familiar Terraform tooling.
|
||||
|
||||
You can get started using the [Tectonic Installer for Azure Guide](https://coreos.com/tectonic/docs/latest/install/azure/azure-terraform.html).
|
||||
|
|
@ -1,342 +0,0 @@
|
|||
---
|
||||
title: Running Kubernetes on CenturyLink Cloud
|
||||
---
|
||||
|
||||
{: toc}
|
||||
|
||||
These scripts handle the creation, deletion and expansion of Kubernetes clusters on CenturyLink Cloud.
|
||||
|
||||
You can accomplish all these tasks with a single command. We have made the Ansible playbooks used to perform these tasks available [here](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/ansible/README.md).
|
||||
|
||||
## Find Help
|
||||
|
||||
If you run into any problems or want help with anything, we are here to help. Reach out to us in any of the following ways:
|
||||
|
||||
- Submit a github issue
|
||||
- Send an email to Kubernetes AT ctl DOT io
|
||||
- Visit [http://info.ctl.io/kubernetes](http://info.ctl.io/kubernetes)
|
||||
|
||||
## Clusters of VMs or Physical Servers, your choice.
|
||||
|
||||
- We support Kubernetes clusters on both Virtual Machines and Physical Servers. If you want to use physical servers for the worker nodes (minions), simply use the --minion_type=bareMetal flag.
|
||||
- For more information on physical servers, visit: [https://www.ctl.io/bare-metal/](https://www.ctl.io/bare-metal/)
|
||||
- Physical servers are only available in the VA1 and GB3 data centers.
|
||||
- VMs are available in all 13 of our public cloud locations.
|
||||
|
||||
## Requirements
|
||||
|
||||
The requirements to run this script are:
|
||||
|
||||
- A linux administrative host (tested on ubuntu and macOS)
|
||||
- python 2 (tested on 2.7.11)
|
||||
- pip (installed with python as of 2.7.9)
|
||||
- git
|
||||
- A CenturyLink Cloud account with rights to create new hosts
|
||||
- An active VPN connection to the CenturyLink Cloud from your linux host
|
||||
|
||||
## Script Installation
|
||||
|
||||
After you have all the requirements met, please follow these instructions to install this script.
|
||||
|
||||
1) Clone this repository and cd into it.
|
||||
|
||||
```shell
|
||||
git clone https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc
|
||||
```
|
||||
|
||||
2) Install all requirements, including
|
||||
|
||||
* Ansible
|
||||
* CenturyLink Cloud SDK
|
||||
* Ansible Modules
|
||||
|
||||
```shell
|
||||
sudo pip install -r ansible/requirements.txt
|
||||
```
|
||||
|
||||
3) Create the credentials file from the template and use it to set your ENV variables
|
||||
|
||||
```shell
|
||||
cp ansible/credentials.sh.template ansible/credentials.sh
|
||||
vi ansible/credentials.sh
|
||||
source ansible/credentials.sh
|
||||
|
||||
```
|
||||
|
||||
4) Grant your machine access to the CenturyLink Cloud network by using a VM inside the network or [ configuring a VPN connection to the CenturyLink Cloud network.](https://www.ctl.io/knowledge-base/network/how-to-configure-client-vpn/)
|
||||
|
||||
|
||||
#### Script Installation Example: Ubuntu 14 Walkthrough
|
||||
|
||||
If you use Ubuntu 14, for your convenience we have provided a step-by-step
guide to install the requirements and install the script.
|
||||
|
||||
```shell
|
||||
# system
|
||||
apt-get update
|
||||
apt-get install -y git python python-crypto
|
||||
curl -O https://bootstrap.pypa.io/get-pip.py
|
||||
python get-pip.py
|
||||
|
||||
# installing this repository
|
||||
mkdir -p ~home/k8s-on-clc
|
||||
cd ~home/k8s-on-clc
|
||||
git clone https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc.git
|
||||
cd adm-kubernetes-on-clc/
|
||||
pip install -r requirements.txt
|
||||
|
||||
# getting started
|
||||
cd ansible
|
||||
cp credentials.sh.template credentials.sh; vi credentials.sh
|
||||
source credentials.sh
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Cluster Creation
|
||||
|
||||
To create a new Kubernetes cluster, simply run the ```kube-up.sh``` script. A complete
|
||||
list of script options and some examples are listed below.
|
||||
|
||||
```shell
|
||||
CLC_CLUSTER_NAME=[name of kubernetes cluster]
|
||||
cd ./adm-kubernetes-on-clc
|
||||
bash kube-up.sh -c="$CLC_CLUSTER_NAME"
|
||||
```
|
||||
|
||||
It takes about 15 minutes to create the cluster. Once the script completes, it
|
||||
will output some commands that will help you setup kubectl on your machine to
|
||||
point to the new cluster.
|
||||
|
||||
When the cluster creation is complete, the configuration files for it are stored
|
||||
locally on your administrative host, in the following directory
|
||||
|
||||
```shell
|
||||
> CLC_CLUSTER_HOME=$HOME/.clc_kube/$CLC_CLUSTER_NAME/
|
||||
```
|
||||
|
||||
|
||||
#### Cluster Creation: Script Options
|
||||
|
||||
```shell
|
||||
Usage: kube-up.sh [OPTIONS]
|
||||
Create servers in the CenturyLinkCloud environment and initialize a Kubernetes cluster
|
||||
Environment variables CLC_V2_API_USERNAME and CLC_V2_API_PASSWD must be set in
|
||||
order to access the CenturyLinkCloud API
|
||||
|
||||
All options (both short and long form) require arguments, and must include "="
|
||||
between option name and option value.
|
||||
|
||||
-h (--help) display this help and exit
|
||||
-c= (--clc_cluster_name=) set the name of the cluster, as used in CLC group names
|
||||
-t= (--minion_type=) standard -> VM (default), bareMetal -> physical]
|
||||
-d= (--datacenter=) VA1 (default)
|
||||
-m= (--minion_count=) number of kubernetes minion nodes
|
||||
-mem= (--vm_memory=) number of GB ram for each minion
|
||||
-cpu= (--vm_cpu=) number of virtual cps for each minion node
|
||||
-phyid= (--server_conf_id=) physical server configuration id, one of
|
||||
physical_server_20_core_conf_id
|
||||
physical_server_12_core_conf_id
|
||||
physical_server_4_core_conf_id (default)
|
||||
-etcd_separate_cluster=yes create a separate cluster of three etcd nodes,
|
||||
otherwise run etcd on the master node
|
||||
```
|
||||
|
||||
## Cluster Expansion
|
||||
|
||||
To expand an existing Kubernetes cluster, run the ```add-kube-node.sh```
|
||||
script. A complete list of script options and some examples are listed [below](#cluster-expansion-script-options).
|
||||
This script must be run from the same host that created the cluster (or a host
|
||||
that has the cluster artifact files stored in ```~/.clc_kube/$cluster_name```).
|
||||
|
||||
```shell
|
||||
cd ./adm-kubernetes-on-clc
|
||||
bash add-kube-node.sh -c="name_of_kubernetes_cluster" -m=2
|
||||
```
|
||||
|
||||
#### Cluster Expansion: Script Options
|
||||
|
||||
```shell
|
||||
Usage: add-kube-node.sh [OPTIONS]
|
||||
Create servers in the CenturyLinkCloud environment and add to an
|
||||
existing CLC kubernetes cluster
|
||||
|
||||
Environment variables CLC_V2_API_USERNAME and CLC_V2_API_PASSWD must be set in
|
||||
order to access the CenturyLinkCloud API
|
||||
|
||||
-h (--help) display this help and exit
|
||||
-c= (--clc_cluster_name=) set the name of the cluster, as used in CLC group names
|
||||
-m= (--minion_count=) number of kubernetes minion nodes to add
|
||||
```
|
||||
|
||||
## Cluster Deletion
|
||||
|
||||
There are two ways to delete an existing cluster:
|
||||
|
||||
1) Use our python script:
|
||||
|
||||
```shell
|
||||
python delete_cluster.py --cluster=clc_cluster_name --datacenter=DC1
|
||||
```
|
||||
|
||||
2) Use the CenturyLink Cloud UI. To delete a cluster, log into the CenturyLink
|
||||
Cloud control portal and delete the parent server group that contains the
|
||||
Kubernetes Cluster. We hope to add a scripted option to do this soon.
|
||||
|
||||
## Examples
|
||||
|
||||
Create a cluster with name of k8s_1, 1 master node and 3 worker minions (on physical machines), in VA1
|
||||
|
||||
```shell
|
||||
bash kube-up.sh --clc_cluster_name=k8s_1 --minion_type=bareMetal --minion_count=3 --datacenter=VA1
|
||||
```
|
||||
|
||||
Create a cluster with name of k8s_2, an ha etcd cluster on 3 VMs and 6 worker minions (on VMs), in VA1
|
||||
|
||||
```shell
|
||||
bash kube-up.sh --clc_cluster_name=k8s_2 --minion_type=standard --minion_count=6 --datacenter=VA1 --etcd_separate_cluster=yes
|
||||
```
|
||||
|
||||
Create a cluster with name of k8s_3, 1 master node, and 10 worker minions (on VMs) with higher mem/cpu, in UC1:
|
||||
|
||||
```shell
|
||||
bash kube-up.sh --clc_cluster_name=k8s_3 --minion_type=standard --minion_count=10 --datacenter=VA1 -mem=6 -cpu=4
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Cluster Features and Architecture
|
||||
|
||||
We configure the Kubernetes cluster with the following features:
|
||||
|
||||
* KubeDNS: DNS resolution and service discovery
|
||||
* Heapster/InfluxDB: For metric collection. Needed for Grafana and auto-scaling.
|
||||
* Grafana: Kubernetes/Docker metric dashboard
|
||||
* KubeUI: Simple web interface to view Kubernetes state
|
||||
* Kube Dashboard: New web interface to interact with your cluster
|
||||
|
||||
We use the following to create the Kubernetes cluster:
|
||||
|
||||
* Kubernetes 1.1.7
|
||||
* Ubuntu 14.04
|
||||
* Flannel 0.5.4
|
||||
* Docker 1.9.1-0~trusty
|
||||
* Etcd 2.2.2
|
||||
|
||||
## Optional add-ons
|
||||
|
||||
* Logging: We offer an integrated centralized logging ELK platform so that all
|
||||
Kubernetes and docker logs get sent to the ELK stack. To install the ELK stack
|
||||
and configure Kubernetes to send logs to it, follow [the log
|
||||
aggregation documentation](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/log_aggregration.md). Note: We don't install this by default as
|
||||
the footprint isn't trivial.
|
||||
|
||||
## Cluster management
|
||||
|
||||
The most widely used tool for managing a Kubernetes cluster is the command-line
|
||||
utility ```kubectl```. If you do not already have a copy of this binary on your
|
||||
administrative machine, you may run the script ```install_kubectl.sh``` which will
|
||||
download it and install it in ```/usr/bin/local```.
|
||||
|
||||
The script requires that the environment variable ```CLC_CLUSTER_NAME``` be defined.
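A minimal sketch, run from the directory containing the script, with ```k8s_1``` standing in for the name you used at creation time:

```shell
export CLC_CLUSTER_NAME=k8s_1
bash install_kubectl.sh
```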
|
||||
|
||||
```install_kubectl.sh``` also writes a configuration file which will embed the necessary
|
||||
authentication certificates for the particular cluster. The configuration file is
|
||||
written to the ```${CLC_CLUSTER_HOME}/kube``` directory.
|
||||
|
||||
```shell
|
||||
export KUBECONFIG=${CLC_CLUSTER_HOME}/kube/config
|
||||
kubectl version
|
||||
kubectl cluster-info
|
||||
```
|
||||
|
||||
### Accessing the cluster programmatically
|
||||
|
||||
It's possible to use the locally stored client certificates to access the apiserver. For example, you may want to use any of the [Kubernetes API client libraries](/docs/reference/using-api/client-libraries/) to program against your Kubernetes cluster in the programming language of your choice.
|
||||
|
||||
To demonstrate how to use these locally stored certificates, we provide the following example of using ```curl``` to communicate to the master apiserver via https:
|
||||
|
||||
```shell
|
||||
curl \
|
||||
--cacert ${CLC_CLUSTER_HOME}/pki/ca.crt \
|
||||
--key ${CLC_CLUSTER_HOME}/pki/kubecfg.key \
|
||||
--cert ${CLC_CLUSTER_HOME}/pki/kubecfg.crt https://${MASTER_IP}:6443
|
||||
```
|
||||
|
||||
But please note, this *does not* work out of the box with the ```curl``` binary
|
||||
distributed with macOS.
|
||||
|
||||
### Accessing the cluster with a browser
|
||||
|
||||
We install [the kubernetes dashboard](/docs/tasks/web-ui-dashboard/). When you
|
||||
create a cluster, the script should output URLs for these interfaces like this:
|
||||
|
||||
kubernetes-dashboard is running at ```https://${MASTER_IP}:6443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy```.
|
||||
|
||||
Note on Authentication to the UIs: The cluster is set up to use basic
|
||||
authentication for the user _admin_. Hitting the URL at
|
||||
```https://${MASTER_IP}:6443``` will require accepting the self-signed certificate
|
||||
from the apiserver, and then presenting the admin password written to file at:
|
||||
|
||||
```> _${CLC_CLUSTER_HOME}/kube/admin_password.txt_```
|
||||
|
||||
|
||||
### Configuration files
|
||||
|
||||
Various configuration files are written into the home directory *CLC_CLUSTER_HOME* under
|
||||
```.clc_kube/${CLC_CLUSTER_NAME}``` in several subdirectories. You can use these files
|
||||
to access the cluster from machines other than where you created the cluster from.
|
||||
|
||||
* ```config/```: Ansible variable files containing parameters describing the master and minion hosts
|
||||
* ```hosts/```: hosts files listing access information for the ansible playbooks
|
||||
* ```kube/```: ```kubectl``` configuration files, and the basic-authentication password for admin access to the Kubernetes API
|
||||
* ```pki/```: public key infrastructure files enabling TLS communication in the cluster
|
||||
* ```ssh/```: SSH keys for root access to the hosts
|
||||
|
||||
|
||||
## ```kubectl``` usage examples
|
||||
|
||||
There are a great many features of _kubectl_. Here are a few examples:
|
||||
|
||||
List existing nodes, pods, services and more, in all namespaces, or in just one:
|
||||
|
||||
```shell
|
||||
kubectl get nodes
|
||||
kubectl get --all-namespaces pods
|
||||
kubectl get --all-namespaces services
|
||||
kubectl get --namespace=kube-system replicationcontrollers
|
||||
```
|
||||
|
||||
The Kubernetes API server exposes services on web URLs, which are protected by requiring
|
||||
client certificates. If you run a kubectl proxy locally, ```kubectl``` will provide
|
||||
the necessary certificates and serve locally over http.
|
||||
|
||||
```shell
|
||||
kubectl proxy -p 8001
|
||||
```
|
||||
|
||||
Then, you can access urls like ```http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/``` without the need for client certificates in your browser.
|
||||
|
||||
|
||||
## What Kubernetes features do not work on CenturyLink Cloud
|
||||
|
||||
These are the known items that don't work on CenturyLink cloud but do work on other cloud providers:
|
||||
|
||||
- At this time, there is no support for services of the type [LoadBalancer](/docs/tasks/access-application-cluster/create-external-load-balancer/). We are actively working on this and hope to publish the changes sometime around April 2016.
|
||||
|
||||
- At this time, there is no support for persistent storage volumes provided by
|
||||
CenturyLink Cloud. However, customers can bring their own persistent storage
|
||||
offering. We ourselves use Gluster.
|
||||
|
||||
|
||||
## Ansible Files
|
||||
|
||||
If you want more information about our Ansible files, please [read this file](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/ansible/README.md)
|
||||
|
||||
## Further reading
|
||||
|
||||
Please see the [Kubernetes docs](/docs/) for more details on administering
|
||||
and using a Kubernetes cluster.
|
||||
|
||||
|
||||
|
|
@ -1,224 +0,0 @@
|
|||
---
|
||||
title: Running Kubernetes on Google Compute Engine
|
||||
content_template: templates/task
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
The example below creates a Kubernetes cluster with 4 worker node Virtual Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient).
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture prerequisites %}}
|
||||
|
||||
If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) for hosted cluster installation and management.
|
||||
|
||||
For an easy way to experiment with the Kubernetes development environment, click the button below
|
||||
to open a Google Cloud Shell with an auto-cloned copy of the Kubernetes source repo.
|
||||
|
||||
[![Open in Cloud Shell](https://gstatic.com/cloudssh/images/open-btn.png)](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/kubernetes/kubernetes&page=editor&open_in_editor=README.md)
|
||||
|
||||
If you want to use custom binaries or pure open source Kubernetes, please continue with the instructions below.
|
||||
|
||||
### Prerequisites
|
||||
|
||||
1. You need a Google Cloud Platform account with billing enabled. Visit the [Google Developers Console](https://console.cloud.google.com) for more details.
|
||||
1. Install `gcloud` as necessary. `gcloud` can be installed as a part of the [Google Cloud SDK](https://cloud.google.com/sdk/).
|
||||
1. Enable the [Compute Engine Instance Group Manager API](https://console.developers.google.com/apis/api/replicapool.googleapis.com/overview) in the [Google Cloud developers console](https://console.developers.google.com/apis/library).
|
||||
1. Make sure that gcloud is set to use the Google Cloud Platform project you want. You can check the current project using `gcloud config list project` and change it via `gcloud config set project <project-id>`.
|
||||
1. Make sure you have credentials for GCloud by running `gcloud auth login`.
|
||||
1. (Optional) In order to make API calls against GCE, you must also run `gcloud auth application-default login`.
|
||||
1. Make sure you can start up a GCE VM from the command line. At least make sure you can do the [Create an instance](https://cloud.google.com/compute/docs/instances/#startinstancegcloud) part of the GCE Quickstart.
|
||||
1. Make sure you can SSH into the VM without interactive prompts. See the [Log in to the instance](https://cloud.google.com/compute/docs/instances/#sshing) part of the GCE Quickstart.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture steps %}}
|
||||
|
||||
## Starting a cluster
|
||||
|
||||
You can install a client and start a cluster with either one of these commands (we list both in case only one is installed on your machine):
|
||||
|
||||
|
||||
```shell
|
||||
curl -sS https://get.k8s.io | bash
|
||||
```
|
||||
|
||||
or
|
||||
|
||||
```shell
|
||||
wget -q -O - https://get.k8s.io | bash
|
||||
```
|
||||
|
||||
Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster.
|
||||
|
||||
By default, some containers will already be running on your cluster. Containers like `fluentd` provide [logging](/docs/concepts/cluster-administration/logging/), while `heapster` provides [monitoring](https://releases.k8s.io/master/cluster/addons/cluster-monitoring/README.md) services.
|
||||
|
||||
The script run by the commands above creates a cluster with the name/prefix "kubernetes". It defines one specific cluster config, so you can't run it more than once.
|
||||
|
||||
Alternately, you can download and install the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases), then run the `<kubernetes>/cluster/kube-up.sh` script to start the cluster:
|
||||
|
||||
```shell
|
||||
cd kubernetes
|
||||
cluster/kube-up.sh
|
||||
```
|
||||
|
||||
If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `<kubernetes>/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.
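As a sketch, many of these values can also be overridden through environment variables before running the script; the variable names below are assumptions that may differ between releases, so check `config-default.sh` first:

```shell
# Override the worker count and zone for this run only (names taken from config-default.sh).
export NUM_NODES=2
export KUBE_GCE_ZONE=us-central1-b
cluster/kube-up.sh
```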
|
||||
|
||||
If you run into trouble, please see the section on [troubleshooting](/docs/setup/turnkey/gce/#troubleshooting), post to the
|
||||
[Kubernetes Forum](https://discuss.kubernetes.io), or come ask questions on [Slack](/docs/troubleshooting/#slack).
|
||||
|
||||
The next few steps will show you:
|
||||
|
||||
1. How to set up the command line client on your workstation to manage the cluster
|
||||
1. Examples of how to use the cluster
|
||||
1. How to delete the cluster
|
||||
1. How to start clusters with non-default options (like larger clusters)
|
||||
|
||||
## Installing the Kubernetes command line tools on your workstation
|
||||
|
||||
The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation.
|
||||
|
||||
The [kubectl](/docs/user-guide/kubectl/) tool controls the Kubernetes cluster
|
||||
manager. It lets you inspect your cluster resources, create, delete, and update
|
||||
components, and much more. You will use it to look at your new cluster and bring
|
||||
up example apps.
|
||||
|
||||
You can use `gcloud` to install the `kubectl` command-line tool on your workstation:
|
||||
|
||||
```shell
|
||||
gcloud components install kubectl
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
The kubectl version bundled with `gcloud` may be older than the one
|
||||
downloaded by the get.k8s.io install script. See the [Installing kubectl](/docs/tasks/kubectl/install/)
document for how to set up the latest `kubectl` on your workstation.
|
||||
{{< /note >}}
|
||||
|
||||
## Getting started with your cluster
|
||||
|
||||
### Inspect your cluster
|
||||
|
||||
Once `kubectl` is in your path, you can use it to look at your cluster. E.g., running:
|
||||
|
||||
```shell
|
||||
kubectl get --all-namespaces services
|
||||
```
|
||||
|
||||
should show a set of [services](/docs/user-guide/services) that look something like this:
|
||||
|
||||
```shell
|
||||
NAMESPACE NAME TYPE CLUSTER_IP EXTERNAL_IP PORT(S) AGE
|
||||
default kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1d
|
||||
kube-system kube-dns ClusterIP 10.0.0.2 <none> 53/TCP,53/UDP 1d
|
||||
kube-system kube-ui ClusterIP 10.0.0.3 <none> 80/TCP 1d
|
||||
...
|
||||
```
|
||||
|
||||
Similarly, you can take a look at the set of [pods](/docs/user-guide/pods) that were created during cluster startup.
|
||||
You can do this via the
|
||||
|
||||
```shell
|
||||
kubectl get --all-namespaces pods
|
||||
```
|
||||
|
||||
command.
|
||||
|
||||
You'll see a list of pods that looks something like this (the name specifics will be different):
|
||||
|
||||
```shell
|
||||
NAMESPACE NAME READY STATUS RESTARTS AGE
|
||||
kube-system fluentd-cloud-logging-kubernetes-minion-63uo 1/1 Running 0 14m
|
||||
kube-system fluentd-cloud-logging-kubernetes-minion-c1n9 1/1 Running 0 14m
|
||||
kube-system fluentd-cloud-logging-kubernetes-minion-c4og 1/1 Running 0 14m
|
||||
kube-system fluentd-cloud-logging-kubernetes-minion-ngua 1/1 Running 0 14m
|
||||
kube-system kube-dns-v5-7ztia 3/3 Running 0 15m
|
||||
kube-system kube-ui-v1-curt1 1/1 Running 0 15m
|
||||
kube-system monitoring-heapster-v5-ex4u3 1/1 Running 1 15m
|
||||
kube-system monitoring-influx-grafana-v1-piled 2/2 Running 0 15m
|
||||
```
|
||||
|
||||
Some of the pods may take a few seconds to start up (during this time they'll show `Pending`), but check that they all show as `Running` after a short period.
|
||||
|
||||
### Run some examples
|
||||
|
||||
Then, see [a simple nginx example](/docs/user-guide/simple-nginx) to try out your new cluster.
|
||||
|
||||
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/). The [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) is a good "getting started" walkthrough.
|
||||
|
||||
## Tearing down the cluster
|
||||
|
||||
To remove/delete/teardown the cluster, use the `kube-down.sh` script.
|
||||
|
||||
```shell
|
||||
cd kubernetes
|
||||
cluster/kube-down.sh
|
||||
```
|
||||
|
||||
Likewise, the `kube-up.sh` script in the same directory will bring it back up. You do not need to rerun the `curl` or `wget` command: everything needed to set up the Kubernetes cluster is now on your workstation.
|
||||
|
||||
## Customizing
|
||||
|
||||
The script above relies on Google Storage to stage the Kubernetes release. It
|
||||
then will start (by default) a single master VM along with 4 worker VMs. You
|
||||
can tweak some of these parameters by editing `kubernetes/cluster/gce/config-default.sh`.
|
||||
You can view a transcript of a successful cluster creation
|
||||
[here](https://gist.github.com/satnam6502/fc689d1b46db9772adea).
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Project settings
|
||||
|
||||
You need to have the Google Cloud Storage API and the Google Cloud Storage
JSON API enabled. They are activated by default for new projects. Otherwise, you
can enable them in the Google Cloud Console. See the [Google Cloud Storage JSON
|
||||
API Overview](https://cloud.google.com/storage/docs/json_api/) for more
|
||||
details.
|
||||
|
||||
Also ensure that, as listed in the [Prerequisites section](#prerequisites), you've enabled the `Compute Engine Instance Group Manager API`, and can start up a GCE VM from the command line as in the [GCE Quickstart](https://cloud.google.com/compute/docs/quickstart) instructions.
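The relevant APIs can also be enabled from the command line; the service identifiers below are assumptions, so verify them with `gcloud services list --available` before relying on them:

```shell
gcloud services enable compute.googleapis.com storage-api.googleapis.com storage-component.googleapis.com
```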
|
||||
|
||||
### Cluster initialization hang
|
||||
|
||||
If the Kubernetes startup script hangs waiting for the API to be reachable, you can troubleshoot by SSHing into the master and node VMs and looking at logs such as `/var/log/startupscript.log`.
|
||||
|
||||
**Once you fix the issue, you should run `kube-down.sh` to cleanup** after the partial cluster creation, before running `kube-up.sh` to try again.
|
||||
|
||||
### SSH
|
||||
|
||||
If you're having trouble SSHing into your instances, ensure the GCE firewall
|
||||
isn't blocking port 22 to your VMs. By default, this should work but if you
|
||||
have edited firewall rules or created a new non-default network, you'll need to
|
||||
expose it: `gcloud compute firewall-rules create default-ssh --network=<network-name>
|
||||
--description "SSH allowed from anywhere" --allow tcp:22`
|
||||
|
||||
Additionally, your GCE SSH key must either have no passcode or you need to be
|
||||
using `ssh-agent`.
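For example, assuming the default key path that `gcloud compute ssh` generates:

```shell
# Start an agent for this shell and load the GCE key so kube-up.sh can use it non-interactively.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/google_compute_engine
```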
|
||||
|
||||
### Networking
|
||||
|
||||
The instances must be able to connect to each other using their private IP. The
|
||||
script uses the "default" network which should have a firewall rule called
|
||||
"default-allow-internal" which allows traffic on any port on the private IPs.
|
||||
If this rule is missing from the default network or if you change the network
|
||||
being used in `cluster/config-default.sh`, create a new rule with the following
|
||||
field values:
|
||||
|
||||
* Source Ranges: `10.0.0.0/8`
|
||||
* Allowed Protocols and Port: `tcp:1-65535;udp:1-65535;icmp`
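A sketch of creating such a rule with `gcloud` (adjust the network name to match your setup):

```shell
gcloud compute firewall-rules create default-allow-internal \
  --network=<network-name> \
  --source-ranges=10.0.0.0/8 \
  --allow=tcp:1-65535,udp:1-65535,icmp
```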
|
||||
|
||||
## Support Level
|
||||
|
||||
|
||||
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
|
||||
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
|
||||
GCE | Saltstack | Debian | GCE | [docs](/docs/setup/turnkey/gce/) | | Project
|
||||
|
||||
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
|
||||
|
||||
## Further reading
|
||||
|
||||
Please see the [Kubernetes docs](/docs/) for more details on administering
|
||||
and using a Kubernetes cluster.
|
||||
|
||||
{{% /capture %}}
|
|
@ -1,189 +0,0 @@
|
|||
---
|
||||
reviewers:
|
||||
- baldwinspc
|
||||
title: Running Kubernetes on Multiple Clouds with Stackpoint.io
|
||||
content_template: templates/concept
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
[StackPointCloud](https://stackpoint.io/) is the universal control plane for Kubernetes Anywhere. StackPointCloud allows you to deploy and manage a Kubernetes cluster on the cloud provider of your choice in three steps using a web-based interface.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## AWS
|
||||
|
||||
To create a Kubernetes cluster on AWS, you will need an Access Key ID and a Secret Access Key from AWS.
|
||||
|
||||
1. Choose a Provider
|
||||
|
||||
a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account.
|
||||
|
||||
b. Click **+ADD A CLUSTER NOW**.
|
||||
|
||||
c. Click to select Amazon Web Services (AWS).
|
||||
|
||||
1. Configure Your Provider
|
||||
|
||||
a. Add your Access Key ID and a Secret Access Key from AWS. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair.
|
||||
|
||||
b. Click **SUBMIT** to submit the authorization information.
|
||||
|
||||
1. Configure Your Cluster
|
||||
|
||||
Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster.
|
||||
|
||||
1. Run the Cluster
|
||||
|
||||
You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).
|
||||
|
||||
For information on using and managing a Kubernetes cluster on AWS, [consult the Kubernetes documentation](/docs/getting-started-guides/aws/).
|
||||
|
||||
|
||||
## GCE
|
||||
|
||||
To create a Kubernetes cluster on GCE, you will need the Service Account JSON Data from Google.
|
||||
|
||||
1. Choose a Provider
|
||||
|
||||
a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account.
|
||||
|
||||
b. Click **+ADD A CLUSTER NOW**.
|
||||
|
||||
c. Click to select Google Compute Engine (GCE).
|
||||
|
||||
1. Configure Your Provider
|
||||
|
||||
a. Add your Service Account JSON Data from Google. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair.
|
||||
|
||||
b. Click **SUBMIT** to submit the authorization information.
|
||||
|
||||
1. Configure Your Cluster
|
||||
|
||||
Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster.
|
||||
|
||||
1. Run the Cluster
|
||||
|
||||
You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).
|
||||
|
||||
For information on using and managing a Kubernetes cluster on GCE, [consult the Kubernetes documentation](/docs/getting-started-guides/gce/).
|
||||
|
||||
|
||||
## Google Kubernetes Engine
|
||||
|
||||
To create a Kubernetes cluster on Google Kubernetes Engine, you will need the Service Account JSON Data from Google.
|
||||
|
||||
1. Choose a Provider
|
||||
|
||||
a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account.
|
||||
|
||||
b. Click **+ADD A CLUSTER NOW**.
|
||||
|
||||
c. Click to select Google Kubernetes Engine.
|
||||
|
||||
1. Configure Your Provider
|
||||
|
||||
a. Add your Service Account JSON Data from Google. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair.
|
||||
|
||||
b. Click **SUBMIT** to submit the authorization information.
|
||||
|
||||
1. Configure Your Cluster
|
||||
|
||||
Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster.
|
||||
|
||||
1. Run the Cluster
|
||||
|
||||
You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).
|
||||
|
||||
For information on using and managing a Kubernetes cluster on Google Kubernetes Engine, consult [the official documentation](/docs/home/).
|
||||
|
||||
|
||||
## DigitalOcean
|
||||
|
||||
To create a Kubernetes cluster on DigitalOcean, you will need a DigitalOcean API Token.
|
||||
|
||||
1. Choose a Provider
|
||||
|
||||
a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account.
|
||||
|
||||
b. Click **+ADD A CLUSTER NOW**.
|
||||
|
||||
c. Click to select DigitalOcean.
|
||||
|
||||
1. Configure Your Provider
|
||||
|
||||
a. Add your DigitalOcean API Token. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair.
|
||||
|
||||
b. Click **SUBMIT** to submit the authorization information.
|
||||
|
||||
1. Configure Your Cluster
|
||||
|
||||
Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster.
|
||||
|
||||
1. Run the Cluster
|
||||
|
||||
You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).
|
||||
|
||||
For information on using and managing a Kubernetes cluster on DigitalOcean, consult [the official documentation](/docs/home/).
|
||||
|
||||
|
||||
## Microsoft Azure
|
||||
|
||||
To create a Kubernetes cluster on Microsoft Azure, you will need an Azure Subscription ID, Username/Email, and Password.
|
||||
|
||||
1. Choose a Provider
|
||||
|
||||
a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account.
|
||||
|
||||
b. Click **+ADD A CLUSTER NOW**.
|
||||
|
||||
c. Click to select Microsoft Azure.
|
||||
|
||||
1. Configure Your Provider
|
||||
|
||||
a. Add your Azure Subscription ID, Username/Email, and Password. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair.
|
||||
|
||||
b. Click **SUBMIT** to submit the authorization information.
|
||||
|
||||
1. Configure Your Cluster
|
||||
|
||||
Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster.
|
||||
|
||||
1. Run the Cluster
|
||||
|
||||
You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).
|
||||
|
||||
For information on using and managing a Kubernetes cluster on Azure, [consult the Kubernetes documentation](/docs/getting-started-guides/azure/).
|
||||
|
||||
|
||||
## Packet
|
||||
|
||||
To create a Kubernetes cluster on Packet, you will need a Packet API Key.
|
||||
|
||||
1. Choose a Provider
|
||||
|
||||
a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account.
|
||||
|
||||
b. Click **+ADD A CLUSTER NOW**.
|
||||
|
||||
c. Click to select Packet.
|
||||
|
||||
1. Configure Your Provider
|
||||
|
||||
a. Add your Packet API Key. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair.
|
||||
|
||||
b. Click **SUBMIT** to submit the authorization information.
|
||||
|
||||
1. Configure Your Cluster
|
||||
|
||||
Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster.
|
||||
|
||||
1. Run the Cluster
|
||||
|
||||
You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters).
|
||||
|
||||
For information on using and managing a Kubernetes cluster on Packet, consult [the official documentation](/docs/home/).
|
||||
|
||||
{{% /capture %}}
|