Nips and tucks to Kubernetes 1.2 imports; change config default to 1.2
parent a6f6fd01cd
commit ee01d0661a
@@ -16,8 +16,8 @@ defaults:
   scope:
     path: "docs"
   values:
-    version: "v1.1"
+    version: "v1.2"
     layout: docwithnav
     showedit: true
-    githubbranch: "release-1.1"
+    githubbranch: "release-1.2"
     permalink: pretty
@@ -37,7 +37,7 @@ as well as information about the default services running in the cluster (monito
 tokens are written in `~/.kube/config`, they will be necessary to use the CLI or the HTTP Basic Auth.
 
 By default, the script will provision a new VPC and a 4 node k8s cluster in us-west-2a (Oregon) with EC2 instances running on Ubuntu.
-You can override the variables defined in [config-default.sh](http://releases.k8s.io/release-1.2/cluster/aws/config-default.sh) to change this behavior as follows:
+You can override the variables defined in [config-default.sh](http://releases.k8s.io/{{page.githubbranch}}/cluster/aws/config-default.sh) to change this behavior as follows:
 
 ```shell
 export KUBE_AWS_ZONE=eu-west-1c
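Note: a fuller override before invoking the AWS provider might look like the sketch below; the extra variable names are assumed to match this release's config-default.sh, and the values are purely illustrative.

```shell
# Illustrative overrides (variable names assumed from config-default.sh):
export KUBE_AWS_ZONE=eu-west-1c
export NUM_NODES=2
export NODE_SIZE=m3.medium
export KUBERNETES_PROVIDER=aws
# Then bring the cluster up with the overridden settings:
cluster/kube-up.sh
```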
@@ -50,7 +50,7 @@ Download the stable CoreOS bootable ISO from the [CoreOS website](https://coreos
 
 > **Warning:** this is a destructive operation that erases disk `sda` on your server.
 
-```
+```shell
 sudo coreos-install -d /dev/sda -C stable -c master-config.yaml
 ```
 
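Note: once `coreos-install` finishes, the machine boots from the freshly written `sda` with the supplied cloud-config; a quick post-install check (an assumed follow-up, not part of this diff) might be:

```shell
# Reboot into the freshly written disk:
sudo reboot
# Once back up, confirm the CoreOS release that was installed:
cat /etc/os-release
```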
@@ -105,7 +105,7 @@ The following steps will set up a single Kubernetes node for use as a compute ho
 
 > **Important:** Make sure you indent the entire file to match the indentation of the placeholder. For example:
 >
-> ```
+> ```shell
 >   - path: /etc/kubernetes/ssl/ca.pem
 >     owner: core
 >     permissions: 0644
@@ -115,7 +115,7 @@ The following steps will set up a single Kubernetes node for use as a compute ho
 >
 > should look like this once the certificate is in place:
 >
-> ```
+> ```shell
 >   - path: /etc/kubernetes/ssl/ca.pem
 >     owner: core
 >     permissions: 0644
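Note: the indentation these two hunks discuss is that of a cloud-config `write_files` entry. A minimal sketch of a correctly indented entry follows; the certificate body is a placeholder, not content from the source.

```shell
# Illustrative only: print what a correctly indented write_files entry
# looks like. The certificate body is a placeholder.
cat <<'EOF'
write_files:
  - path: /etc/kubernetes/ssl/ca.pem
    owner: core
    permissions: 0644
    content: |
      -----BEGIN CERTIFICATE-----
      <ca.pem contents here>
      -----END CERTIFICATE-----
EOF
```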
@@ -8,7 +8,7 @@ to run is `${K8S_VERSION}`, which should hold a released version of Kubernetes >
 
 Enviroinment variables used:
 
-```sh
+```shell
 export MASTER_IP=<the_master_ip_here>
 export K8S_VERSION=<your_k8s_version (e.g. 1.2.0-alpha.6)>
 export FLANNEL_VERSION=<your_flannel_version (e.g. 0.5.5)>
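Note: a concrete instantiation of these variables, using the example versions from the hunk itself, could look like this; the `hostname -I` shortcut is an assumption that the first reported address is the one the master should advertise.

```shell
# Example instantiation; hostname -I assumes the first reported address
# is the master's advertised IP (set MASTER_IP explicitly if not).
export MASTER_IP=$(hostname -I | awk '{print $1}')
export K8S_VERSION=1.2.0-alpha.6
export FLANNEL_VERSION=0.5.5
```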
@@ -42,7 +42,7 @@ sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docke
 
 _If you have Docker 1.8.0 or higher run this instead_
 
-```sh
+```shell
 sudo sh -c 'docker daemon -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
 ```
 
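Note: either invocation leaves a second Docker daemon listening on the bootstrap socket; a quick sanity check is to query that socket directly:

```shell
# Confirm the bootstrap daemon is answering on its dedicated socket:
sudo docker -H unix:///var/run/docker-bootstrap.sock info
```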
@@ -114,7 +114,7 @@ Calico needs its own etcd cluster to store its state. In this guide we install
 
 1. Download the template manifest file:
 
-   ```
+   ```shell
    wget https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/calico-etcd.manifest
    ```
 
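Note: the downloaded template still contains a master-IP placeholder that must be filled in before the kubelet can run it as a static pod. A sketch of the follow-up; the placeholder name is assumed, so verify it against the file you fetched.

```shell
# Fill in the master's IP (placeholder name assumed; check the file)
# and hand the manifest to the kubelet as a static pod:
sed -i "s/<MASTER_IPV4>/${MASTER_IP}/" calico-etcd.manifest
sudo mv calico-etcd.manifest /etc/kubernetes/manifests/
```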
@@ -140,13 +140,13 @@ We need to install Calico on the master. This allows the master to route packet
 
 2. Prefetch the calico/node container (this ensures that the Calico service starts immediately when we enable it):
 
-   ```
+   ```shell
    sudo docker pull calico/node:v0.15.0
    ```
 
 3. Download the `network-environment` template from the `calico-kubernetes` repository:
 
-   ```
+   ```shell
    wget -O network-environment https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/network-environment-template
    ```
 
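Note: to confirm the prefetch in step 2 landed before enabling the service, listing the image is enough:

```shell
# The pull in step 2 should leave the image in the local cache:
sudo docker images calico/node
```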
@@ -210,7 +210,7 @@ Worker nodes require three keys: `ca.pem`, `worker.pem`, and `worker-key.pem`.
 
 4. Move the files to the `/etc/kubernetes/ssl` folder with the appropriate permissions:
 
-   ```
+   ```shell
    # Move keys
    sudo mkdir -p /etc/kubernetes/ssl/
    sudo mv -t /etc/kubernetes/ssl/ ca.pem worker.pem worker-key.pem
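Note: the step mentions "appropriate permissions" but the visible context stops at the move; a plausible continuation (assumed, not shown in this hunk) locks down the private key:

```shell
# Assumed continuation: only root should be able to read the private key.
sudo chmod 600 /etc/kubernetes/ssl/worker-key.pem
sudo chown root:root /etc/kubernetes/ssl/worker-key.pem
```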
@@ -224,7 +224,7 @@ Worker nodes require three keys: `ca.pem`, `worker.pem`, and `worker-key.pem`.
 
 1. With your certs in place, create a kubeconfig for worker authentication in `/etc/kubernetes/worker-kubeconfig.yaml`; replace `<KUBERNETES_MASTER>` with the IP address of the master:
 
-   ```
+   ```yaml
    apiVersion: v1
    kind: Config
    clusters:
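Note: the hunk truncates after `clusters:`; for orientation, here is a minimal sketch of a complete worker kubeconfig of this shape. Field values are assumptions based on the cert paths above; keep `<KUBERNETES_MASTER>` as the placeholder to replace.

```shell
# Minimal sketch of a complete worker kubeconfig (field values assumed);
# written via a heredoc so it can be dropped into place directly.
cat <<'EOF' | sudo tee /etc/kubernetes/worker-kubeconfig.yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://<KUBERNETES_MASTER>:443
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
EOF
```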
@@ -17,8 +17,8 @@
 # Elasticsearch and Kibana pods for the GCE platform.
 # For examples of how to observe the ingested logs please
 # see the appropriate getting started guide e.g.
-# Google Cloud Logging: https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/logging.md
-# With Elasticsearch and Kibana logging: https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/logging-elasticsearch.md
+# Google Cloud Logging: http://kubernetes.io/docs/getting-started-guides/logging/
+# With Elasticsearch and Kibana logging: http://kubernetes.io/docs/getting-started-guides/logging-elasticsearch.md
 
 .PHONY: up down logger-up logger-down logger10-up logger10-down
 
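Note: the `.PHONY` line above declares this Makefile's entry points; assumed usage of those targets looks like:

```shell
# Assumed usage of the targets declared in .PHONY above:
make up          # bring the example cluster up
make logger-up   # start the logging pods
make logger-down # stop the logging pods
make down        # tear the cluster back down
```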