Merge branch 'master' into adjust-node-hostname-docs
commit 3dc6120e8e

@@ -235,12 +235,8 @@ toc:
    path: /docs/getting-started-guides/centos/centos_manual_config/
  - title: CoreOS
    path: /docs/getting-started-guides/coreos
- - title: CoreOS with Calico
-   path: /docs/getting-started-guides/coreos/bare_metal_calico/
  - title: Ubuntu
    path: /docs/getting-started-guides/ubuntu/
- - title: Ubuntu Nodes with Calico
-   path: /docs/getting-started-guides/ubuntu-calico/
  - title: Validate Node Setup
    path: /docs/admin/node-conformance
  - title: Portable Multi-Node Cluster

@@ -219,7 +219,7 @@ toc:
  - title: Replication Controller
    path: /docs/user-guide/replication-controller/
  - title: Resource Quotas
-   path: /docs/admin/resource-quota/
+   path: /docs/admin/resourcequota/
  - title: Scheduled Jobs
    path: /docs/user-guide/scheduled-jobs/
  - title: Secrets

@@ -269,6 +269,6 @@ toc:
  - title: Federation Components
    section:
    - title: federation-apiserver
-     path: /docs/admin/federation-apiserver.md
+     path: /docs/admin/federation-apiserver
    - title: federation-controller-manager
-     path: /docs/admin/federation-controller-manager.md
+     path: /docs/admin/federation-controller-manager

@@ -15,7 +15,7 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply

* [Canal](https://github.com/tigera/canal/tree/master/k8s-install/kubeadm) unites Flannel and Calico, providing networking and network policy.
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml) is an overlay network provider that can be used with Kubernetes.
* [Romana](http://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/user-guide/networkpolicies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize).
- * [Weave Net](https://github.com/weaveworks/weave-kube) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.
+ * [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.

## Visualization & Control

@@ -6,7 +6,8 @@ assignees:
- deads2k

---

* TOC
{:toc}

## Users in Kubernetes

@@ -33,7 +34,7 @@ or be treated as an anonymous user.

## Authentication strategies

- Kubernetes uses client certificates, bearer tokens, or HTTP basic auth to
+ Kubernetes uses client certificates, bearer tokens, an authenticating proxy, or HTTP basic auth to
authenticate API requests through authentication plugins. As HTTP requests are
made to the API server, plugins attempt to associate the following attributes
with the request:

@@ -360,6 +361,20 @@ An unsuccessful request would return:

HTTP status codes can be used to supply additional error context.

### Authenticating Proxy

The API server can be configured to identify users from request header values, such as `X-Remote-User`.
It is designed for use in combination with an authenticating proxy, which sets the request header value.
In order to prevent header spoofing, the authenticating proxy is required to present a valid client
certificate to the API server for validation against the specified CA before the request headers are
checked. A sketch of a matching API server invocation follows the flag list below.

* `--requestheader-username-headers` Required, case-insensitive. Header names to check, in order, for the user identity. The first header containing a value is used as the identity.
* `--requestheader-client-ca-file` Required. PEM-encoded certificate bundle. A valid client certificate must be presented and validated against the certificate authorities in the specified file before the request headers are checked for user names.
* `--requestheader-allowed-names` Optional. List of common names (cn). If set, a valid client certificate with a Common Name (cn) in the specified list must be presented before the request headers are checked for user names. If empty, any Common Name is allowed.
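
A minimal sketch of how these flags might be combined on the API server command line (the CA file path and allowed name below are illustrative, not prescribed):

```shell
# Illustrative flags only; combine with your existing apiserver options.
kube-apiserver \
  --requestheader-username-headers=X-Remote-User \
  --requestheader-client-ca-file=/etc/kubernetes/ssl/proxy-ca.pem \
  --requestheader-allowed-names=front-proxy-client
```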

### Keystone Password

Keystone authentication is enabled by passing the `--experimental-keystone-url=<AuthURL>`

@@ -160,6 +160,7 @@ kubectl get pods busybox
```

You should see:

```
NAME      READY     STATUS    RESTARTS   AGE
busybox   1/1       Running   0          <some-time>

@@ -100,16 +100,15 @@ for `${NODE_IP}` on each machine.

#### Validating your cluster

- Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate with
+ Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate on the master with
```shell
- etcdctl member list
+ kubectl exec <pod_name> etcdctl member list
```

and

```shell
- etcdctl cluster-health
+ kubectl exec <pod_name> etcdctl cluster-health
```

You can also validate that this is working with `etcdctl set foo bar` on one node, and `etcdctl get foo` on a different node.
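
For example (a minimal sketch; the pod names are placeholders for your etcd pods):

```shell
# Write a key through the etcd member on one node...
kubectl exec <pod_name> etcdctl set foo bar
# ...then read it back through a member on a different node.
kubectl exec <other_pod_name> etcdctl get foo
```
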
@@ -177,7 +177,7 @@ complicated way to build an overlay network. This is endorsed by several of the

### Project Calico

- [Project Calico](https://github.com/projectcalico/calico-containers/blob/master/docs/cni/kubernetes/README.md) is an open source container networking provider and network policy engine.
+ [Project Calico](http://docs.projectcalico.org/) is an open source container networking provider and network policy engine.

Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet. Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking. Calico also provides fine-grained, intent-based network security policy for Kubernetes pods via its distributed firewall.

@@ -33,7 +33,7 @@ cd kubernetes
make release
```

- For more details on the release process see the [`build/` directory](http://releases.k8s.io/{{page.githubbranch}}/build/)
+ For more details on the release process see the [`build/`](http://releases.k8s.io/{{page.githubbranch}}/build/) directory.

### Download Kubernetes and automatically set up a default cluster

@@ -57,4 +57,4 @@ Possible values for `YOUR_PROVIDER` include:
* `vsphere` - VMware vSphere
* `rackspace` - Rackspace

- For the complete, up-to-date list of providers supported by this script, see [the `/cluster` folder in the main Kubernetes repo](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/cluster), where each folder represents a possible value for `YOUR_PROVIDER`. If you don't see your desired provider, try looking at our [getting started guides](/docs/getting-started-guides); there's a good chance we have docs for them.
+ For the complete, up-to-date list of providers supported by this script, see the [`/cluster`](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/cluster) folder in the main Kubernetes repo, where each folder represents a possible value for `YOUR_PROVIDER`. If you don't see your desired provider, try looking at our [getting started guides](/docs/getting-started-guides); there's a good chance we have docs for them.
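
For instance, a minimal sketch of using one of these values with the setup script (the provider choice below is illustrative; `KUBERNETES_PROVIDER` is the environment variable the script reads):

```shell
# Hypothetical example: select a provider, then run the setup script.
export KUBERNETES_PROVIDER=vsphere
curl -sS https://get.k8s.io | bash
```
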
@@ -1,209 +0,0 @@
---

---

This document describes how to deploy Kubernetes with Calico networking on _bare metal_ CoreOS. For more information on Project Calico, visit [projectcalico.org](http://projectcalico.org) and the [calico-containers repository](https://github.com/projectcalico/calico-containers).

To install Calico on an existing Kubernetes cluster, or for more information on deploying Calico with Kubernetes in a number of other environments, take a look at our supported [deployment guides](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes).

Specifically, this guide will have you do the following:

- Deploy a Kubernetes master node on CoreOS using cloud-config.
- Deploy two Kubernetes compute nodes with Calico Networking using cloud-config.
- Configure `kubectl` to access your cluster.

The resulting cluster will use SSL between Kubernetes components. It will run the SkyDNS service and kube-ui, and be fully conformant with the Kubernetes v1.1 conformance tests.

## Prerequisites and Assumptions

- At least three bare-metal machines (or VMs) to work with. This guide will configure them as follows:
  - 1 Kubernetes Master
  - 2 Kubernetes Nodes
- Your nodes should have IP connectivity to each other and the internet.
- This guide assumes a DHCP server on your network to assign server IPs.
- This guide uses `192.168.0.0/16` as the subnet from which pod IP addresses are assigned. If this overlaps with your host subnet, you will need to configure Calico to use a different [IP pool](https://github.com/projectcalico/calico-containers/blob/master/docs/calicoctl/pool.md#calicoctl-pool-commands).

## Cloud-config

This guide will use [cloud-config](https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/) to configure each of the nodes in our Kubernetes cluster.

We'll use two cloud-config files:

- `master-config.yaml`: cloud-config for the Kubernetes master
- `node-config.yaml`: cloud-config for each Kubernetes node

## Download CoreOS

Download the stable CoreOS bootable ISO from the [CoreOS website](https://coreos.com/docs/running-coreos/platforms/iso/).

## Configure the Kubernetes Master

1. Once you've downloaded the ISO image, burn the ISO to a CD/DVD/USB key and boot from it (if using a virtual machine you can boot directly from the ISO). Once booted, you should be automatically logged in as the `core` user at the terminal. At this point CoreOS is running from the ISO and it hasn't been installed yet.

2. *On another machine*, download the [master cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/master-config-template.yaml) and save it as `master-config.yaml`.

3. Replace the following variables in the `master-config.yaml` file.

   - `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server. See [generating ssh keys](https://help.github.com/articles/generating-ssh-keys/).

4. Copy the edited `master-config.yaml` to your Kubernetes master machine (using a USB stick, for example).

5. The CoreOS bootable ISO comes with a tool called `coreos-install` which will allow us to install CoreOS and configure the machine using a cloud-config file. The following command will download and install stable CoreOS using the `master-config.yaml` file we just created for configuration. Run this on the Kubernetes master.

> **Warning:** this is a destructive operation that erases disk `sda` on your server.

```shell
sudo coreos-install -d /dev/sda -C stable -c master-config.yaml
```

6. Once complete, restart the server and boot from `/dev/sda` (you may need to remove the ISO image). When it comes back up, you should have SSH access as the `core` user using the public key provided in the `master-config.yaml` file.

### Configure TLS

The master requires the CA certificate, `ca.pem`; its own certificate, `apiserver.pem`; and its private key, `apiserver-key.pem`. This [CoreOS guide](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to generate these.

1. Generate the necessary certificates for the master. This [guide for generating Kubernetes TLS Assets](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to use OpenSSL to generate the required assets.

2. Send the three files to your master host (using `scp` for example).

3. Move them to the `/etc/kubernetes/ssl` folder and ensure that only the root user can read the key:

```shell
# Move keys
sudo mkdir -p /etc/kubernetes/ssl/
sudo mv -t /etc/kubernetes/ssl/ ca.pem apiserver.pem apiserver-key.pem

# Set Permissions
sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem
sudo chown root:root /etc/kubernetes/ssl/apiserver-key.pem
```

4. Restart the kubelet to pick up the changes:

```shell
sudo systemctl restart kubelet
```

## Configure the compute nodes

The following steps will set up a single Kubernetes node for use as a compute host. Run these steps to deploy each Kubernetes node in your cluster.

1. Boot up the node machine using the bootable ISO we downloaded earlier. You should be automatically logged in as the `core` user.

2. Make a copy of the [node cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/node-config-template.yaml) for this machine.

3. Replace the following placeholders in the `node-config.yaml` file to match your deployment.

   - `<HOSTNAME>`: Hostname for this node (e.g. kube-node1, kube-node2)
   - `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server.
   - `<KUBERNETES_MASTER>`: The IPv4 address of the Kubernetes master.

4. Replace the following placeholders with the contents of their respective files.

   - `<CA_CERT>`: Complete contents of `ca.pem`
   - `<CA_KEY_CERT>`: Complete contents of `ca-key.pem`

> **Important:** in a production deployment, embedding the secret key in cloud-config is a bad idea! In production you should use an appropriate secret manager.

> **Important:** Make sure you indent the entire file to match the indentation of the placeholder. For example:
>
> ```shell
> - path: /etc/kubernetes/ssl/ca.pem
>   owner: core
>   permissions: 0644
>   content: |
>     <CA_CERT>
> ```
>
> should look like this once the certificate is in place:
>
> ```shell
> - path: /etc/kubernetes/ssl/ca.pem
>   owner: core
>   permissions: 0644
>   content: |
>     -----BEGIN CERTIFICATE-----
>     MIIC9zCCAd+gAwIBAgIJAJMnVnhVhy5pMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV
>     ...<snip>...
>     QHwi1rNc8eBLNrd4BM/A1ZeDVh/Q9KxN+ZG/hHIXhmWKgN5wQx6/81FIFg==
>     -----END CERTIFICATE-----
> ```

5. Move the modified `node-config.yaml` to your Kubernetes node machine and install and configure CoreOS on the node using the following command.

> **Warning:** this is a destructive operation that erases disk `sda` on your server.

```shell
sudo coreos-install -d /dev/sda -C stable -c node-config.yaml
```

6. Once complete, restart the server and boot into `/dev/sda`. When it comes back up, you should have SSH access as the `core` user using the public key provided in the `node-config.yaml` file. It will take some time for the node to be fully configured.

## Configure Kubeconfig

To administer your cluster from a separate host, you will need the client and admin certificates generated earlier (`ca.pem`, `admin.pem`, `admin-key.pem`). With certificates in place, run the following commands with the appropriate filepaths.

```shell
kubectl config set-cluster calico-cluster --server=https://<KUBERNETES_MASTER> --certificate-authority=<CA_CERT_PATH>
kubectl config set-credentials calico-admin --certificate-authority=<CA_CERT_PATH> --client-key=<ADMIN_KEY_PATH> --client-certificate=<ADMIN_CERT_PATH>
kubectl config set-context calico --cluster=calico-cluster --user=calico-admin
kubectl config use-context calico
```

Check your work with `kubectl get nodes`.

## Install the DNS Addon

Most Kubernetes deployments will require the DNS addon for service discovery. To install DNS, create the skydns service and replication controller provided.

```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/dns/skydns.yaml
```

## Install the Kubernetes UI Addon (Optional)

The Kubernetes UI can be installed using `kubectl` to run the following manifest file.

```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kube-ui/kube-ui.yaml
```

## Launch other Services With Calico-Kubernetes

At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.version}}.0/examples/) to set up other services on your cluster.

## Connectivity to outside the cluster

Because containers in this guide have private `192.168.0.0/16` IPs, you will need NAT to allow connectivity between containers and the internet. However, in a production data center deployment, NAT is not always necessary, since Calico can peer with the data center's border routers over BGP.

### NAT on the nodes

The simplest method for enabling connectivity from containers to the internet is to use outgoing NAT on your Kubernetes nodes.

Calico can provide outgoing NAT for containers. To enable it, use the following `calicoctl` command:

```shell
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool add <CONTAINER_SUBNET> --nat-outgoing
```

By default, `<CONTAINER_SUBNET>` will be `192.168.0.0/16`. You can find out which pools have been configured with the following command:

```shell
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool show
```

### NAT at the border router

In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity).

The Calico documentation contains more information on how to configure Calico to [peer with existing infrastructure](https://github.com/projectcalico/calico-containers/blob/master/docs/ExternalConnectivity.md).

## Support Level


IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | CoreOS | CoreOS | Calico | [docs](/docs/getting-started-guides/coreos/bare_metal_calico) | | Community ([@caseydavenport](https://github.com/caseydavenport))


For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.

@@ -43,6 +43,8 @@ clusters.

[KCluster.io](https://kcluster.io) provides highly available and scalable managed Kubernetes clusters for AWS.

+ [Platform9](https://platform9.com/products/kubernetes/) offers managed Kubernetes on-premises or any public cloud, and provides 24/7 health monitoring and alerting.

### Turn-key Cloud Solutions

These solutions allow you to create Kubernetes clusters on a range of Cloud IaaS providers with only a

@@ -123,6 +125,7 @@ GKE | | | GCE | [docs](https://clou
Stackpoint.io | | multi-support | multi-support | [docs](http://www.stackpointcloud.com) | | Commercial
AppsCode.com | Saltstack | Debian | multi-support | [docs](https://appscode.com/products/cloud-deployment/) | | Commercial
KCluster.io | | multi-support | multi-support | [docs](https://kcluster.io) | | Commercial
+ Platform9 | | multi-support | multi-support | [docs](https://platform9.com/products/kubernetes/) | | Commercial
GCE | Saltstack | Debian | GCE | [docs](/docs/getting-started-guides/gce) | [✓][1] | Project
Azure | CoreOS | CoreOS | Weave | [docs](/docs/getting-started-guides/coreos/azure/) | | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin))
Azure | Ignition | Ubuntu | Azure | [docs](/docs/getting-started-guides/azure) | | Community (Microsoft: [@brendandburns](https://github.com/brendandburns), [@colemickens](https://github.com/colemickens))

@@ -140,7 +143,6 @@ AWS | CoreOS | CoreOS | flannel | [docs](/docs/gettin
GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires))
Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles))
Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/bare_metal_offline) | | Community ([@jeffbean](https://github.com/jeffbean))
- Bare-metal | CoreOS | CoreOS | Calico | [docs](/docs/getting-started-guides/coreos/bare_metal_calico) | | Community ([@caseydavenport](https://github.com/caseydavenport))
CloudStack | Ansible | CoreOS | flannel | [docs](/docs/getting-started-guides/cloudstack) | | Community ([@runseb](https://github.com/runseb))
VMware vSphere | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/vsphere) | | Community ([@imkin](https://github.com/imkin))
VMware Photon | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/photon-controller) | | Community ([@alainroy](https://github.com/alainroy))

@@ -150,7 +152,6 @@ OpenStack/HPCloud | Juju | Ubuntu | flannel | [docs](/docs/gettin
Joyent | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/juju) | | [Community](https://github.com/whitmo/bundle-kubernetes) ([@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler))
AWS | Saltstack | Debian | AWS | [docs](/docs/getting-started-guides/aws) | | Community ([@justinsb](https://github.com/justinsb))
AWS | kops | Debian | AWS | [docs](https://github.com/kubernetes/kops) | | Community ([@justinsb](https://github.com/justinsb))
- Bare-metal | custom | Ubuntu | Calico | [docs](/docs/getting-started-guides/ubuntu-calico) | | Community ([@djosborne](https://github.com/djosborne))
Bare-metal | custom | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu) | | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY))
libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](/docs/getting-started-guides/libvirt-coreos) | | Community ([@lhuard1A](https://github.com/lhuard1A))
oVirt | | | | [docs](/docs/getting-started-guides/ovirt) | | Community ([@simon3z](https://github.com/simon3z))

@@ -26,6 +26,12 @@ a building block. kops builds on the kubeadm work.

### (1/5) Install kops

#### Requirements

You must have [kubectl](http://kubernetes.io/docs/getting-started-guides/kubectl/) installed in order for kops to work.
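
For example, a quick hedged check that `kubectl` is available (any recent client should work):

```shell
# Print just the client version; this fails if kubectl is not on your PATH.
kubectl version --client
```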

#### Installation

Download kops from the [releases page](https://github.com/kubernetes/kops/releases) (it is also easy to build from source):

On MacOS:

@@ -121,7 +121,7 @@ setfacl -m g:kvm:--x ~

By default, the libvirt-coreos setup will create a single Kubernetes master and 3 Kubernetes nodes. Because the VM drives use Copy-on-Write and because of memory ballooning and KSM, there is a lot of resource over-allocation.

- There is both an automated way and a manual, customizable way of setting up libvert Kubernetes clusters on CoreOS.
+ There is both an automated way and a manual, customizable way of setting up libvirt Kubernetes clusters on CoreOS.

#### Automated setup

@@ -67,19 +67,19 @@ to run commands against the cluster.

```shell
# linux/amd64
- curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
+ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# linux/386
- curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/386/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
+ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/386/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# linux/arm
- curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
+ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# linux/arm64
- curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
+ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/arm64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# linux/ppc64le
- curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/ppc64le/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
+ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/linux/ppc64le/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# OS X/amd64
- curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
+ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# OS X/386
- curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/386/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
+ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/darwin/386/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
```

For Windows, download [kubectl.exe](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0/bin/windows/amd64/kubectl.exe) and save it to a location on your PATH.

@@ -12,7 +12,7 @@ export KUBE_NODE_OS_DISTRIBUTION=debian
curl -sS https://get.k8s.io | bash
```

- See the [Calico documentation](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes#getting-started) for more options to deploy Calico with Kubernetes.
+ See the [Calico documentation](http://docs.projectcalico.org/) for more options to deploy Calico with Kubernetes.

Once your cluster using Calico is running, you should see a collection of pods running in the `kube-system` Namespace that support Kubernetes NetworkPolicy.
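
For example, a quick hedged way to check (pod names vary by Calico version):

```shell
# List the system pods; look for calico-related entries.
kubectl get pods --namespace=kube-system
```
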
@@ -1,484 +0,0 @@
---

---

This document describes how to deploy Kubernetes with Calico networking from scratch on _bare metal_ Ubuntu. For more information on Project Calico, visit [projectcalico.org](http://projectcalico.org) and the [calico-containers repository](https://github.com/projectcalico/calico-containers).

To install Calico on an existing Kubernetes cluster, or for more information on deploying Calico with Kubernetes in a number of other environments, take a look at our supported [deployment guides](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes).

This guide will set up a simple Kubernetes cluster with a single Kubernetes master and two Kubernetes nodes. We'll run Calico's etcd cluster on the master and install the Calico daemon on the master and nodes.

## Prerequisites and Assumptions

- This guide uses `systemd` for process management. Ubuntu 15.04 supports systemd natively as do a number of other Linux distributions.
- All machines should have Docker >= 1.7.0 installed.
  - To install Docker on Ubuntu, follow [these instructions](https://docs.docker.com/installation/ubuntulinux/)
- All machines should have connectivity to each other and the internet.
- This guide assumes a DHCP server on your network to assign server IPs.
- This guide uses `192.168.0.0/16` as the subnet from which pod IP addresses are assigned. If this overlaps with your host subnet, you will need to configure Calico to use a different [IP pool](https://github.com/projectcalico/calico-containers/blob/master/docs/calicoctl/pool.md#calicoctl-pool-commands).
- This guide assumes that none of the hosts have been configured with any Kubernetes or Calico software.
- This guide will set up a secure, TLS-authenticated API server.

## Set up the master

### Configure TLS

The master requires the root CA public key, `ca.pem`; the apiserver certificate, `apiserver.pem`; and its private key, `apiserver-key.pem`.

1. Create the file `openssl.cnf` with the following contents.

```conf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
IP.1 = 10.100.0.1
IP.2 = ${MASTER_IPV4}
```

> Replace ${MASTER_IPV4} with the Master's IP address on which the Kubernetes API will be accessible.
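>
> For instance, one hedged way to do the substitution in place (the IP below is illustrative):
>
> ```shell
> # Replace the placeholder with your master's IP (10.0.0.10 is an example).
> sed -i 's/${MASTER_IPV4}/10.0.0.10/' openssl.cnf
> ```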

2. Generate the necessary TLS assets.

```shell
# Generate the root CA.
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"

# Generate the API server keypair.
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
```

3. You should now have the following three files: `ca.pem`, `apiserver.pem`, and `apiserver-key.pem`. Send the three files to your master host (using `scp` for example).
4. Move them to the `/etc/kubernetes/ssl` folder and ensure that only the root user can read the key:

```shell
# Move keys
sudo mkdir -p /etc/kubernetes/ssl/
sudo mv -t /etc/kubernetes/ssl/ ca.pem apiserver.pem apiserver-key.pem

# Set permissions
sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem
sudo chown root:root /etc/kubernetes/ssl/apiserver-key.pem
```

### Install Calico's etcd on the master

Calico needs its own etcd cluster to store its state. In this guide we install a single-node cluster on the master server.

> Note: In a production deployment we recommend running a distributed etcd cluster for redundancy. In this guide, we use a single etcd for simplicity.

1. Download the template manifest file:

```shell
wget https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/calico-etcd.manifest
```

2. Replace all instances of `<MASTER_IPV4>` in the `calico-etcd.manifest` file with your master's IP address.

3. Then, move the file to the `/etc/kubernetes/manifests` directory. This will not have any effect until we later run the kubelet, but Calico seems to tolerate the lack of its etcd in the interim.

```shell
sudo mv -f calico-etcd.manifest /etc/kubernetes/manifests
```

### Install Calico on the master

We need to install Calico on the master. This allows the master to route packets to the pods on other nodes.

1. Install the `calicoctl` tool:

```shell
wget https://github.com/projectcalico/calico-containers/releases/download/v0.15.0/calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/bin
```

2. Prefetch the calico/node container (this ensures that the Calico service starts immediately when we enable it):

```shell
sudo docker pull calico/node:v0.15.0
```

3. Download the `network-environment` template from the `calico-kubernetes` repository:

```shell
wget -O network-environment https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/network-environment-template
```

4. Edit `network-environment` to represent this node's settings:

   - Replace `<KUBERNETES_MASTER>` with the IP address of the master. This should be the source IP address used to reach the Kubernetes worker nodes.

5. Move `network-environment` into `/etc`:

```shell
sudo mv -f network-environment /etc
```

6. Install, enable, and start the `calico-node` service:

```shell
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/common/calico-node.service
sudo systemctl enable /etc/systemd/calico-node.service
sudo systemctl start calico-node.service
```

### Install Kubernetes on the Master

We'll use the `kubelet` to bootstrap the Kubernetes master.

1. Download and install the `kubelet` and `kubectl` binaries:

```shell
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubectl
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubelet
sudo chmod +x /usr/bin/kubelet /usr/bin/kubectl
```

2. Install the `kubelet` systemd unit file and start the `kubelet`:

```shell
# Install the unit file
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kubelet.service

# Enable the unit file so that it runs on boot
sudo systemctl enable /etc/systemd/kubelet.service

# Start the kubelet service
sudo systemctl start kubelet.service
```

3. Download and install the master manifest file, which will start the Kubernetes master services automatically:

```shell
sudo mkdir -p /etc/kubernetes/manifests
sudo wget -N -P /etc/kubernetes/manifests https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kubernetes-master.manifest
```

4. Check the progress by running `docker ps`. After a while, you should see the `etcd`, `apiserver`, `controller-manager`, `scheduler`, and `kube-proxy` containers running.

> Note: it may take some time for all the containers to start. Don't worry if `docker ps` doesn't show any containers for a while or if some containers start before others.

## Set up the nodes

The following steps should be run on each Kubernetes node.

### Configure TLS

Worker nodes require three keys: `ca.pem`, `worker.pem`, and `worker-key.pem`. We've already generated
`ca.pem` and `ca-key.pem` for use on the Master. The worker public/private keypair should be generated for each Kubernetes node.

1. Create the file `worker-openssl.cnf` with the following contents.

```conf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = $ENV::WORKER_IP
```

2. Generate the necessary TLS assets for this worker. This relies on the worker's IP address, and the `ca.pem` and `ca-key.pem` files generated earlier in the guide.

```shell
# Export this worker's IP address.
export WORKER_IP=<WORKER_IPV4>
```

```shell
# Generate keys.
openssl genrsa -out worker-key.pem 2048
openssl req -new -key worker-key.pem -out worker.csr -subj "/CN=worker-key" -config worker-openssl.cnf
openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf
```

3. Send the three files (`ca.pem`, `worker.pem`, and `worker-key.pem`) to the host (using scp, for example).

4. Move the files to the `/etc/kubernetes/ssl` folder with the appropriate permissions:

```shell
# Move keys
sudo mkdir -p /etc/kubernetes/ssl/
sudo mv -t /etc/kubernetes/ssl/ ca.pem worker.pem worker-key.pem

# Set permissions
sudo chmod 600 /etc/kubernetes/ssl/worker-key.pem
sudo chown root:root /etc/kubernetes/ssl/worker-key.pem
```

### Configure the kubelet worker

1. With your certs in place, create a kubeconfig for worker authentication in `/etc/kubernetes/worker-kubeconfig.yaml`; replace `<KUBERNETES_MASTER>` with the IP address of the master:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://<KUBERNETES_MASTER>:443
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
```

### Install Calico on the node

On your compute nodes, it is important that you install Calico before Kubernetes. We'll install Calico using the provided `calico-node.service` systemd unit file:

1. Install the `calicoctl` binary:

```shell
wget https://github.com/projectcalico/calico-containers/releases/download/v0.15.0/calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/bin
```

2. Fetch the calico/node container:

```shell
sudo docker pull calico/node:v0.15.0
```

3. Download the `network-environment` template from the `calico-cni` repository:

```shell
wget -O network-environment https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/node/network-environment-template
```

4. Edit `network-environment` to represent this node's settings:

   - Replace `<DEFAULT_IPV4>` with the IP address of the node.
   - Replace `<KUBERNETES_MASTER>` with the IP or hostname of the master.

5. Move `network-environment` into `/etc`:

```shell
sudo mv -f network-environment /etc
```

6. Install the `calico-node` service:

```shell
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/common/calico-node.service
sudo systemctl enable /etc/systemd/calico-node.service
sudo systemctl start calico-node.service
```

7. Install the Calico CNI plugins:

```shell
sudo mkdir -p /opt/cni/bin/
sudo wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v1.0.0/calico
sudo wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v1.0.0/calico-ipam
sudo chmod +x /opt/cni/bin/calico /opt/cni/bin/calico-ipam
```

8. Create a CNI network configuration file, which tells Kubernetes to create a network named `calico-k8s-network` and to use the calico plugins for that network. Create file `/etc/cni/net.d/10-calico.conf` with the following contents, replacing `<KUBERNETES_MASTER>` with the IP of the master (this file should be the same on each node):

```shell
# Make the directory structure.
mkdir -p /etc/cni/net.d

# Make the network configuration file
cat >/etc/cni/net.d/10-calico.conf <<EOF
{
    "name": "calico-k8s-network",
    "type": "calico",
    "etcd_authority": "<KUBERNETES_MASTER>:6666",
    "log_level": "info",
    "ipam": {
        "type": "calico-ipam"
    }
}
EOF
```

Since this is the only network we create, it will be used by default by the kubelet.

9. Verify that Calico started correctly:

```shell
calicoctl status
```

should show that Felix (Calico's per-node agent) is running, and there should be a BGP status line for each other node that you've configured and the master. The "Info" column should show "Established":

```
$ calicoctl status
calico-node container is running. Status: Up 15 hours
Running felix version 1.3.0rc5

IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| Peer address  |     Peer type     | State |  Since   |    Info     |
+---------------+-------------------+-------+----------+-------------+
| 172.18.203.41 | node-to-node mesh |  up   | 17:32:26 | Established |
| 172.18.203.42 | node-to-node mesh |  up   | 17:32:25 | Established |
+---------------+-------------------+-------+----------+-------------+

IPv6 BGP status
+--------------+-----------+-------+-------+------+
| Peer address | Peer type | State | Since | Info |
+--------------+-----------+-------+-------+------+
+--------------+-----------+-------+-------+------+
```

If the "Info" column shows "Active" or some other value then Calico is having difficulty connecting to the other host. Check that the IP address of the peer is correct and that Calico is using the correct local IP address (set in the `network-environment` file above).

### Install Kubernetes on the Node

1. Download and install the kubelet binary:

```shell
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubelet
sudo chmod +x /usr/bin/kubelet
```

2. Install the `kubelet` systemd unit file:

```shell
# Download the unit file.
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/node/kubelet.service

# Enable and start the unit files so that they run on boot
sudo systemctl enable /etc/systemd/kubelet.service
sudo systemctl start kubelet.service
```

3. Download the `kube-proxy` manifest:

```shell
wget https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/node/kube-proxy.manifest
```

4. In that file, replace `<KUBERNETES_MASTER>` with your master's IP. Then move it into place:

```shell
sudo mkdir -p /etc/kubernetes/manifests/
sudo mv kube-proxy.manifest /etc/kubernetes/manifests/
```

## Configure kubectl remote access

To administer your cluster from a separate host (e.g. your laptop), you will need the root CA generated earlier, as well as an admin public/private keypair (`ca.pem`, `admin.pem`, `admin-key.pem`). Run the following steps on the machine which you will use to control your cluster.

1. Download the kubectl binary.

```shell
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubectl
sudo chmod +x /usr/bin/kubectl
```

2. Generate the admin public/private keypair.

3. Export the necessary variables, substituting in correct values for your machine.

```shell
# Export the appropriate paths.
export CA_CERT_PATH=<PATH_TO_CA_PEM>
export ADMIN_CERT_PATH=<PATH_TO_ADMIN_PEM>
export ADMIN_KEY_PATH=<PATH_TO_ADMIN_KEY_PEM>

# Export the Master's IP address.
export MASTER_IPV4=<MASTER_IPV4>
```

4. Configure your host `kubectl` with the admin credentials:

```shell
kubectl config set-cluster calico-cluster --server=https://${MASTER_IPV4} --certificate-authority=${CA_CERT_PATH}
kubectl config set-credentials calico-admin --certificate-authority=${CA_CERT_PATH} --client-key=${ADMIN_KEY_PATH} --client-certificate=${ADMIN_CERT_PATH}
kubectl config set-context calico --cluster=calico-cluster --user=calico-admin
kubectl config use-context calico
```

Check your work with `kubectl get nodes`, which should succeed and display the nodes.

## Install the DNS Addon

Most Kubernetes deployments will require the DNS addon for service discovery. To install DNS, create the skydns service and replication controller provided. This step makes use of the kubectl configuration made above.

```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/dns/skydns.yaml
```

## Install the Kubernetes UI Addon (Optional)

The Kubernetes UI can be installed using `kubectl` to run the following manifest file.

```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kube-ui/kube-ui.yaml
```

Note: The Kubernetes UI addon is deprecated and has been replaced with the Kubernetes Dashboard. You can install it by running:

```shell
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
```

You can find the docs at [Kubernetes Dashboard](https://github.com/kubernetes/dashboard).

## Launch other Services With Calico-Kubernetes

At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.version}}.0/examples/) to set up other services on your cluster.

## Connectivity to outside the cluster

Because containers in this guide have private `192.168.0.0/16` IPs, you will need NAT to allow connectivity between containers and the internet. However, in a production data center deployment, NAT is not always necessary, since Calico can peer with the data center's border routers over BGP.

### NAT on the nodes

The simplest method for enabling connectivity from containers to the internet is to use outgoing NAT on your Kubernetes nodes.

Calico can provide outgoing NAT for containers. To enable it, use the following `calicoctl` command:

```shell
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool add <CONTAINER_SUBNET> --nat-outgoing
```

By default, `<CONTAINER_SUBNET>` will be `192.168.0.0/16`. You can find out which pools have been configured with the following command:

```shell
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool show
```

### NAT at the border router

In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity).

The Calico documentation contains more information on how to configure Calico to [peer with existing infrastructure](https://github.com/projectcalico/calico-containers/blob/master/docs/ExternalConnectivity.md).

## Support Level


IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | custom | Ubuntu | Calico | [docs](/docs/getting-started-guides/ubuntu-calico) | | Community ([@djosborne](https://github.com/djosborne))

For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.

@@ -145,7 +145,7 @@ docker stop hello_tutorial
Now that the image works as intended and is all tagged with your `$PROJECT_ID`, we can push it to the [Google Container Registry](https://cloud.google.com/tools/container-registry/), a private repository for your Docker images accessible from every Google Cloud project (but also from outside Google Cloud Platform):

```shell
- gcloud docker push gcr.io/$PROJECT_ID/hello-node:v1
+ gcloud docker -- push gcr.io/$PROJECT_ID/hello-node:v1
```

If all goes well, you should be able to see the container image listed in the console: *Compute > Container Engine > Container Registry*. We now have a project-wide Docker image available which Kubernetes can access and orchestrate.

@@ -53,18 +53,18 @@ Make sure you review the [beta limitations](https://github.com/kubernetes/contri
A minimal Ingress might look like:

```yaml
- 01. apiVersion: extensions/v1beta1
- 02. kind: Ingress
- 03. metadata:
- 04.   name: test-ingress
- 05. spec:
- 06.   rules:
- 07.   - http:
- 08.       paths:
- 09.       - path: /testpath
- 10.         backend:
- 11.           serviceName: test
- 12.           servicePort: 80
+ apiVersion: extensions/v1beta1
+ kind: Ingress
+ metadata:
+   name: test-ingress
+ spec:
+   rules:
+   - http:
+       paths:
+       - path: /testpath
+         backend:
+           serviceName: test
+           servicePort: 80
```

*POSTing this to the API server will have no effect if you have not configured an [Ingress controller](#ingress-controllers).*

@@ -10,4 +10,4 @@ spec:
  accessModes:
    - ReadWriteOnce
  hostPath:
-   path: "/somepath/data01"
+   path: "/tmp/data01"

@@ -27,7 +27,7 @@ for ease of development and testing. You'll create a local `HostPath` for this
support local storage on the host at this time. There is no guarantee your pod ends up on the correct node where the `HostPath` resides.

```shell
- # This will be nginx's webroot
+ # This will be nginx's webroot; execute this on the node where your pod will run.
$ mkdir /tmp/data01
$ echo 'I love Kubernetes storage!' > /tmp/data01/index.html
```
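
One hedged workaround for the node-placement caveat while testing is to label the node that holds the directory and give the pod a matching `nodeSelector` (the label key and value below are illustrative):

```shell
# Label the node that contains /tmp/data01 so pods can target it
# via a nodeSelector such as {storage: local} in their spec.
kubectl label nodes <node-name> storage=local
```
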
@@ -125,4 +125,4 @@ I love Kubernetes storage!

Hopefully this simple guide is enough to get you started with PersistentVolumes. If you have any questions, join the team on [Slack](/docs/troubleshooting/#slack) and ask!

Enjoy!

@@ -220,7 +220,7 @@ The specification of a pre-stop hook is similar to that of probes, but without t

## Termination message

- In order to achieve a reasonably high level of availability, especially for actively developed applications, it's important to debug failures quickly. Kubernetes can speed debugging by surfacing causes of fatal errors in a way that can be displayed using [`kubectl`](/docs/user-guide/kubectl/kubectl) or the [UI](/docs/user-guide/ui), in addition to general [log collection](/docs/user-guide/logging). It is possible to specify a `terminationMessagePath` where a container will write its 'death rattle', such as assertion failure messages, stack traces, exceptions, and so on. The default path is `/dev/termination-log`.
+ In order to achieve a reasonably high level of availability, especially for actively developed applications, it's important to debug failures quickly. Kubernetes can speed debugging by surfacing causes of fatal errors in a way that can be displayed using [`kubectl`](/docs/user-guide/kubectl/) or the [UI](/docs/user-guide/ui), in addition to general [log collection](/docs/user-guide/logging). It is possible to specify a `terminationMessagePath` where a container will write its 'death rattle', such as assertion failure messages, stack traces, exceptions, and so on. The default path is `/dev/termination-log`.

Here is a toy example (a minimal sketch; the pod name, image, and message below are illustrative):
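
```shell
# A hypothetical pod whose container writes a message to the default
# terminationMessagePath before exiting with a failure.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  containers:
  - name: termination-demo-container
    image: busybox
    command: ["/bin/sh", "-c", "echo 'I died for a reason' > /dev/termination-log; exit 1"]
    terminationMessagePath: /dev/termination-log
EOF
```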