Change files in DOS format to Unix format, removing ^M line endings.

pull/1144/head
adieu 2016-09-01 22:25:57 +08:00
parent fda0238a36
commit 838352ff6e
11 changed files with 1595 additions and 1595 deletions


@@ -1,214 +1,214 @@
---
assignees:
- derekwaynecarr
- janetkuo
---
By default, pods run with unbounded CPU and memory limits. This means that any pod in the
system is able to consume as much CPU and memory as is available on the node that executes the pod.
Users may want to impose restrictions on the amount of resources a single pod in the system may consume
for a variety of reasons.
For example:
1. Each node in the cluster has 2GB of memory. The cluster operator does not want to accept pods
that require more than 2GB of memory, since no node in the cluster can support the requirement. To prevent a
pod from remaining permanently unscheduled, the operator instead chooses to reject pods that exceed 2GB
of memory as part of admission control.
2. A cluster is shared by two communities in an organization that run production and development workloads
respectively. Production workloads may consume up to 8GB of memory, but development workloads may consume up
to 512MB of memory. The cluster operator creates a separate namespace for each workload, and applies limits to
each namespace.
3. Users may create a pod which consumes resources just below the capacity of a machine. The leftover space
may be too small to be useful, but big enough for the waste to be costly over the entire cluster. As a result,
the cluster operator may want to require that a pod consume at least 20% of the memory and CPU of the
average node size in order to provide for more uniform scheduling and to limit waste.
This example demonstrates how limits can be applied to a Kubernetes [namespace](/docs/admin/namespaces/walkthrough/) to control
min/max resource limits per pod. In addition, this example demonstrates how you can
apply default resource limits to pods in the absence of an end-user specified value.
See the [LimitRange design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_limit_range.md) for more information. For a detailed description of the Kubernetes resource model, see [Resources](/docs/user-guide/compute-resources/).
## Step 0: Prerequisites
This example requires a running Kubernetes cluster. See the [Getting Started guides](/docs/getting-started-guides/) for how to get started.
Change to the `<kubernetes>` directory if you're not already there.
## Step 1: Create a namespace
This example will work in a custom namespace to demonstrate the concepts involved.
Let's create a new namespace called limit-example:
```shell
$ kubectl create namespace limit-example
namespace "limit-example" created
```
Note that `kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands:
```shell
$ kubectl get namespaces
NAME            STATUS    AGE
default         Active    51s
limit-example   Active    45s
```
## Step 2: Apply a limit to the namespace
Let's create a simple limit in our namespace.
```shell
$ kubectl create -f docs/admin/limitrange/limits.yaml --namespace=limit-example
limitrange "mylimits" created
```
Let's describe the limits that we have imposed in our namespace.
```shell
$ kubectl describe limits mylimits --namespace=limit-example
Name:        mylimits
Namespace:   limit-example
Type        Resource  Min   Max   Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---   ---   ---------------  -------------  -----------------------
Pod         cpu       200m  2     -                -              -
Pod         memory    6Mi   1Gi   -                -              -
Container   cpu       100m  2     200m             300m           -
Container   memory    3Mi   1Gi   100Mi            200Mi          -
```
In this scenario, we have said the following:
1. If a max constraint is specified for a resource (2 CPU and 1Gi memory in this case), then a limit
must be specified for that resource across all containers. Failure to specify a limit will result in
a validation error when attempting to create the pod. Note that a default limit value is set by
*default* in the file `limits.yaml` (300m CPU and 200Mi memory); a sketch of this file appears below.
2. If a min constraint is specified for a resource (100m CPU and 3Mi memory in this case), then a
request must be specified for that resource across all containers. Failure to specify a request will
result in a validation error when attempting to create the pod. Note that a default request value is
set by *defaultRequest* in the file `limits.yaml` (200m CPU and 100Mi memory).
3. For any pod, the sum of all containers' memory requests must be >= 6Mi and the sum of all containers'
memory limits must be <= 1Gi; the sum of all containers' CPU requests must be >= 200m and the sum of all
containers' CPU limits must be <= 2.
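The contents of `docs/admin/limitrange/limits.yaml` are not reproduced in this diff; based on the values described above, it presumably looks roughly like the following sketch (treat the exact file contents as an assumption):
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mylimits
spec:
  limits:
  - type: Pod
    min:
      cpu: 200m
      memory: 6Mi
    max:
      cpu: "2"
      memory: 1Gi
  - type: Container
    min:
      cpu: 100m
      memory: 3Mi
    max:
      cpu: "2"
      memory: 1Gi
    defaultRequest:      # applied when a container omits its request
      cpu: 200m
      memory: 100Mi
    default:             # applied when a container omits its limit
      cpu: 300m
      memory: 200Mi
```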
## Step 3: Enforcing limits at point of creation
The limits enumerated in a namespace are only enforced when a pod is created or updated in
the cluster. If you change the limits to a different value range, it does not affect pods that
were previously created in that namespace.
If a resource (cpu or memory) is being restricted by a limit, the user will get an error at the time
of creation explaining why.
Let's first spin up a [Deployment](/docs/user-guide/deployments) that creates a single container Pod to demonstrate
how default values are applied to each pod.
```shell
$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
deployment "nginx" created
```
Note that `kubectl run` creates a Deployment named "nginx" on Kubernetes clusters version 1.2 and above. If you are running older versions, it creates replication controllers instead.
If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details.
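For example (a sketch reusing only the flags shown above; output is omitted since it varies by version), the equivalent command with the old generator would be:
```shell
$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example --generator=run/v1
```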
The Deployment manages 1 replica of a single-container Pod. Let's take a look at the Pod it manages. First, find the name of the Pod:
```shell
$ kubectl get pods --namespace=limit-example
NAME                     READY     STATUS    RESTARTS   AGE
nginx-2040093540-s8vzu   1/1       Running   0          11s
```
Let's print this Pod in YAML output format (using the `-o yaml` flag), and then `grep` the `resources` field. Note that your pod name will be different.
```shell
$ kubectl get pods nginx-2040093540-s8vzu --namespace=limit-example -o yaml | grep resources -C 8
  resourceVersion: "57"
  selfLink: /api/v1/namespaces/limit-example/pods/nginx-2040093540-ivimu
  uid: 67b20741-f53b-11e5-b066-64510658e388
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    resources:
      limits:
        cpu: 300m
        memory: 200Mi
      requests:
        cpu: 200m
        memory: 100Mi
    terminationMessagePath: /dev/termination-log
    volumeMounts:
```
Note that our nginx container has picked up the namespace default cpu and memory resource *limits* and *requests*.
Let's create a pod that exceeds our allowed limits by giving it a container that requests 3 CPU cores.
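The file `docs/admin/limitrange/invalid-pod.yaml` is likewise not shown in this diff; a minimal pod spec that would trigger the same rejection might look like the following sketch (the container name and image are assumptions):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: invalid-pod
spec:
  containers:
  - name: invalid-container   # name is an assumption
    image: nginx              # image is an assumption
    resources:
      limits:
        cpu: "3"              # exceeds the 2-CPU max allowed per Pod and per Container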
```shell
$ kubectl create -f docs/admin/limitrange/invalid-pod.yaml --namespace=limit-example
Error from server: error when creating "docs/admin/limitrange/invalid-pod.yaml": Pod "invalid-pod" is forbidden: [Maximum cpu usage per Pod is 2, but limit is 3., Maximum cpu usage per Container is 2, but limit is 3.]
```
Let's create a pod that falls within the allowed limit boundaries.
```shell
$ kubectl create -f docs/admin/limitrange/valid-pod.yaml --namespace=limit-example
pod "valid-pod" created
```
Now look at the Pod's resources field:
```shell
$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources
  uid: 3b1bfd7a-f53c-11e5-b066-64510658e388
spec:
  containers:
  - image: gcr.io/google_containers/serve_hostname
    imagePullPolicy: Always
    name: kubernetes-serve-hostname
    resources:
      limits:
        cpu: "1"
        memory: 512Mi
      requests:
        cpu: "1"
        memory: 512Mi
```
Note that this pod specifies explicit resource *limits* and *requests*, so it did not pick up the namespace
default values.
Note: The *limits* for the CPU resource are enforced in the default Kubernetes setup on the physical node
that runs the container unless the administrator deploys the kubelet with the following flag:
```shell
$ kubelet --help
Usage of kubelet
....
  --cpu-cfs-quota[=true]: Enable CPU CFS quota enforcement for containers that specify CPU limits
$ kubelet --cpu-cfs-quota=false ...
```
## Step 4: Cleanup
To remove the resources used by this example, you can just delete the limit-example namespace.
```shell
$ kubectl delete namespace limit-example
namespace "limit-example" deleted
$ kubectl get namespaces
NAME      STATUS    AGE
default   Active    12m
```
## Summary
Cluster operators that want to restrict the amount of resources a single container or pod may consume
are able to define allowable ranges per Kubernetes namespace. In the absence of any explicit assignments,
the Kubernetes system is able to apply default resource *limits* and *requests* if desired in order to
constrain the amount of resources a pod consumes on a node.


@@ -1,106 +1,106 @@
---
assignees:
- davidopp
- lavalamp
---
The Kubernetes cluster can be configured using Salt.
The Salt scripts are shared across multiple hosting providers, so it's important to understand some background information before making a modification, to ensure your changes do not break hosting Kubernetes across multiple environments. Depending on where you host your Kubernetes cluster, you may be using different operating systems and different networking configurations, so take care that your Salt changes do not introduce failures for other hosting providers.
## Salt cluster setup
The **salt-master** service runs on the kubernetes-master [(except on the default GCE setup)](#standalone-salt-configuration-on-gce).
The **salt-minion** service runs on the kubernetes-master and each kubernetes-node in the cluster.
Each salt-minion service is configured to interact with the **salt-master** service hosted on the kubernetes-master via the **master.conf** file [(except on GCE)](#standalone-salt-configuration-on-gce).
```shell
[root@kubernetes-master] $ cat /etc/salt/minion.d/master.conf
master: kubernetes-master
```
Each salt-minion contacts the salt-master and, depending upon the machine information presented, the salt-master provisions the machine as either a kubernetes-master or kubernetes-node with all the required capabilities needed to run Kubernetes.
If you are running the Vagrant based environment, the **salt-api** service is running on the kubernetes-master. It is configured to enable the vagrant user to introspect the salt cluster in order to find out about machines in the Vagrant environment via a REST API.
## Standalone Salt Configuration on GCE
On GCE, the master and nodes are all configured as [standalone minions](http://docs.saltstack.com/en/latest/topics/tutorials/standalone_minion.html). The configuration for each VM is derived from the VM's [instance metadata](https://cloud.google.com/compute/docs/metadata) and then stored in Salt grains (`/etc/salt/minion.d/grains.conf`) and pillars (`/srv/salt-overlay/pillar/cluster-params.sls`) that local Salt uses to enforce state.
All remaining sections that refer to master/minion setups should be ignored for GCE. One fallout of the GCE setup is that the Salt mine doesn't exist - there is no sharing of configuration amongst nodes.
## Salt security
*(Not applicable on default GCE setup.)*
Security is not enabled on the salt-master, and the salt-master is configured to auto-accept incoming requests from minions. It is not recommended to use this security configuration in production environments without deeper study. (In some environments this isn't as bad as it might sound if the salt master port isn't externally accessible and you trust everyone on your network.)
```shell
[root@kubernetes-master] $ cat /etc/salt/master.d/auto-accept.conf
open_mode: True
auto_accept: True
```
## Salt minion configuration
Each minion in the salt cluster has an associated configuration that instructs the salt-master how to provision the required resources on the machine.
An example file is presented below using the Vagrant based environment.
```shell
[root@kubernetes-master] $ cat /etc/salt/minion.d/grains.conf
grains:
  etcd_servers: $MASTER_IP
  cloud: vagrant
  roles:
    - kubernetes-master
```
Each hosting environment has a slightly different grains.conf file that is used to build conditional logic where required in the Salt files.
The following enumerates the set of defined key/value pairs that are supported today. If you add new ones, please make sure to update this list.
Key | Value
----|------
`api_servers` | (Optional) The IP address / host name where a kubelet can get read-only access to kube-apiserver
`cbr-cidr` | (Optional) The minion IP address range used for the docker container bridge.
`cloud` | (Optional) Which IaaS platform is used to host Kubernetes, *gce*, *azure*, *aws*, *vagrant*
`etcd_servers` | (Optional) Comma-delimited list of IP addresses the kube-apiserver and kubelet use to reach etcd. Uses the IP of the first machine in the kubernetes_master role, or 127.0.0.1 on GCE.
`hostnamef` | (Optional) The full host name of the machine, i.e. `uname -n`
`node_ip` | (Optional) The IP address to use to address this node
`hostname_override` | (Optional) Mapped to the kubelet hostname-override
`network_mode` | (Optional) Networking model to use among nodes: *openvswitch*
`networkInterfaceName` | (Optional) Networking interface to use to bind addresses, default value *eth0*
`publicAddressOverride` | (Optional) The IP address the kube-apiserver should use to bind against for external read-only access
`roles` | (Required) 1. `kubernetes-master` means this machine is the master in the Kubernetes cluster. 2. `kubernetes-pool` means this machine is a kubernetes-node. Depending on the role, the Salt scripts will provision different resources on the machine.
These keys may be leveraged by the Salt sls files to branch behavior.
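For illustration only, a grains.conf for a kubernetes-node (`kubernetes-pool` role) might combine several of these keys; the values below are assumptions rather than output from any particular provider:
```shell
[root@kubernetes-node] $ cat /etc/salt/minion.d/grains.conf
grains:
  api_servers: 10.0.0.10           # assumed master IP
  etcd_servers: 10.0.0.10
  cloud: vagrant
  node_ip: 10.0.0.11               # assumed IP of this node
  hostname_override: kubernetes-node-1
  roles:
    - kubernetes-pool
```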
In addition, a cluster may be running a Debian based operating system or a Red Hat based operating system (CentOS, Fedora, RHEL, etc.). As a result, it's sometimes important to distinguish behavior based on operating system using if branches like the following.
```liquid
{% raw %}
{% if grains['os_family'] == 'RedHat' %}
// something specific to a RedHat environment (CentOS, Fedora, RHEL) where you may use yum, systemd, etc.
{% else %}
// something specific to a Debian environment (apt-get, initd)
{% endif %}
{% endraw %}
```
## Best Practices
1. When configuring default arguments for processes, it's best to avoid the use of EnvironmentFiles (Systemd in Red Hat environments) or init.d files (Debian distributions) to hold default values that should be common across operating system environments. This helps keep our Salt template files easy to understand for editors who may not be familiar with the particulars of each distribution.
## Future enhancements (Networking)
Per pod IP configuration is provider-specific, so when making networking changes, it's important to sandbox these, as not all providers use the same mechanisms (iptables, openvswitch, etc.)
We should define a grains.conf key that captures more specifically what network configuration environment is being used to avoid future confusion across providers.
## Further reading
The [cluster/saltbase](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/) tree has more details on the current SaltStack configuration.


@@ -1,182 +1,182 @@
---
assignees:
- lavalamp
- thockin
---
* TOC
{:toc}
## Prerequisites
You need two machines with CentOS installed on them.
## Starting a cluster
This is a getting started guide for CentOS. It is a manual configuration so you understand all the underlying packages / services / ports, etc...
This guide will only get ONE node working. Multiple nodes require a functional [networking configuration](/docs/admin/networking) done outside of Kubernetes, although the additional Kubernetes configuration requirements should be obvious.
The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion, will be the node and run kubelet, proxy, cadvisor and docker.
**System Information:**
Hosts:
Please replace the host IPs with those from your environment.
```conf
centos-master = 192.168.121.9
centos-minion = 192.168.121.65
```
**Prepare the hosts:**
* Create a /etc/yum.repos.d/virt7-docker-common-release.repo on all hosts - centos-{master,minion} - with the following information.
```conf
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
```
* Install Kubernetes and etcd on all hosts - centos-{master,minion}. This will also pull in docker and cadvisor.
```shell
yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd
```
* Add the master and node to /etc/hosts on all machines (not needed if the hostnames are already in DNS)
```shell
echo "192.168.121.9 centos-master
192.168.121.65 centos-minion" >> /etc/hosts
```
* Edit /etc/kubernetes/config, which will be the same on all hosts, to contain:
```shell
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://centos-master:8080"
```
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers
```shell
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
```
**Configure the Kubernetes services on the master.**
* Edit /etc/etcd/etcd.conf to appear as such:
```shell
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
```
* Edit /etc/kubernetes/apiserver to appear as such:
```shell
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
```
* Start the appropriate services on master:
```shell
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
```
**Configure the Kubernetes services on the node.**
***We need to configure the kubelet and start the kubelet and proxy***
* Edit /etc/kubernetes/kubelet to appear as such:
```shell
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=centos-minion"
# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://centos-master:8080"
# Add your own!
KUBELET_ARGS=""
```
* Start the appropriate services on node (centos-minion).
```shell
for SERVICES in kube-proxy kubelet docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
```
*You should be finished!*
* Check to make sure the cluster can see the node (on centos-master)
```shell
$ kubectl get nodes
NAME            LABELS    STATUS
centos-minion   <none>    Ready
```
**The cluster should be running! Launch a test pod.**
You should have a functional cluster, check out [101](/docs/user-guide/walkthrough/)!
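As a quick smoke test before moving on (illustrative only; the pod name and image are assumptions), you can launch a simple pod from centos-master and check that it gets scheduled onto the node:
```shell
# Write a minimal pod manifest and create it against the local apiserver
cat <<EOF > test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-nginx
spec:
  containers:
  - name: nginx
    image: nginx
EOF
kubectl create -f test-pod.yaml
kubectl get pods
```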
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | custom | CentOS | _none_ | [docs](/docs/getting-started-guides/centos/centos_manual_config) | | Community ([@coolsvap](https://github.com/coolsvap))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.


@@ -1,209 +1,209 @@
---
---
This document describes how to deploy Kubernetes with Calico networking on _bare metal_ CoreOS. For more information on Project Calico, visit [projectcalico.org](http://projectcalico.org) and the [calico-containers repository](https://github.com/projectcalico/calico-containers).
To install Calico on an existing Kubernetes cluster, or for more information on deploying Calico with Kubernetes in a number of other environments, take a look at our supported [deployment guides](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes).
Specifically, this guide will have you do the following:
- Deploy a Kubernetes master node on CoreOS using cloud-config.
- Deploy two Kubernetes compute nodes with Calico Networking using cloud-config.
- Configure `kubectl` to access your cluster.
The resulting cluster will use SSL between Kubernetes components. It will run the SkyDNS service and kube-ui, and be fully conformant with the Kubernetes v1.1 conformance tests.
## Prerequisites and Assumptions
- At least three bare-metal machines (or VMs) to work with. This guide will configure them as follows:
    - 1 Kubernetes Master
    - 2 Kubernetes Nodes
- Your nodes should have IP connectivity to each other and the internet.
- This guide assumes a DHCP server on your network to assign server IPs.
- This guide uses `192.168.0.0/16` as the subnet from which pod IP addresses are assigned. If this overlaps with your host subnet, you will need to configure Calico to use a different [IP pool](https://github.com/projectcalico/calico-containers/blob/master/docs/calicoctl/pool.md#calicoctl-pool-commands).
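For example, once the cluster is up you could swap in a non-overlapping pool (illustrative only; the replacement CIDR is an arbitrary assumption, and the exact `calicoctl pool` subcommands may differ between calicoctl versions, so check the pool documentation linked above):
```shell
# Add a pool that does not overlap with your host subnet, then remove the default pool.
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool add 10.244.0.0/16 --nat-outgoing
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool remove 192.168.0.0/16
```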
## Cloud-config
This guide will use [cloud-config](https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/) to configure each of the nodes in our Kubernetes cluster.
We'll use two cloud-config files:
- `master-config.yaml`: cloud-config for the Kubernetes master
- `node-config.yaml`: cloud-config for each Kubernetes node
## Download CoreOS
Download the stable CoreOS bootable ISO from the [CoreOS website](https://coreos.com/docs/running-coreos/platforms/iso/).
## Configure the Kubernetes Master
1. Once you've downloaded the ISO image, burn the ISO to a CD/DVD/USB key and boot from it (if using a virtual machine you can boot directly from the ISO). Once booted, you should be automatically logged in as the `core` user at the terminal. At this point CoreOS is running from the ISO and it hasn't been installed yet.
2. *On another machine*, download the [master cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/master-config-template.yaml) and save it as `master-config.yaml`.
3. Replace the following variables in the `master-config.yaml` file.
   - `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server. See [generating ssh keys](https://help.github.com/articles/generating-ssh-keys/)
4. Copy the edited `master-config.yaml` to your Kubernetes master machine (using a USB stick, for example).
5. The CoreOS bootable ISO comes with a tool called `coreos-install` which will allow us to install CoreOS and configure the machine using a cloud-config file. The following command will download and install stable CoreOS using the `master-config.yaml` file we just created for configuration. Run this on the Kubernetes master.
   > **Warning:** this is a destructive operation that erases disk `sda` on your server.
   ```shell
   sudo coreos-install -d /dev/sda -C stable -c master-config.yaml
   ```
6. Once complete, restart the server and boot from `/dev/sda` (you may need to remove the ISO image). When it comes back up, you should have SSH access as the `core` user using the public key provided in the `master-config.yaml` file.
### Configure TLS
The master requires the CA certificate, `ca.pem`; its own certificate, `apiserver.pem`; and its private key, `apiserver-key.pem`. This [CoreOS guide](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to generate these.
1. Generate the necessary certificates for the master. This [guide for generating Kubernetes TLS Assets](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to use OpenSSL to generate the required assets.
2. Send the three files to your master host (using `scp` for example).
3. Move them to the `/etc/kubernetes/ssl` folder and ensure that only the root user can read the key:
   ```shell
   # Move keys
   sudo mkdir -p /etc/kubernetes/ssl/
   sudo mv -t /etc/kubernetes/ssl/ ca.pem apiserver.pem apiserver-key.pem
   # Set Permissions
   sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem
   sudo chown root:root /etc/kubernetes/ssl/apiserver-key.pem
   ```
4. Restart the kubelet to pick up the changes:
   ```shell
   sudo systemctl restart kubelet
   ```
## Configure the compute nodes
The following steps will set up a single Kubernetes node for use as a compute host. Run these steps to deploy each Kubernetes node in your cluster.
1. Boot up the node machine using the bootable ISO we downloaded earlier. You should be automatically logged in as the `core` user.
2. Make a copy of the [node cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/node-config-template.yaml) for this machine.
3. Replace the following placeholders in the `node-config.yaml` file to match your deployment.
   - `<HOSTNAME>`: Hostname for this node (e.g. kube-node1, kube-node2)
   - `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server.
   - `<KUBERNETES_MASTER>`: The IPv4 address of the Kubernetes master.
4. Replace the following placeholders with the contents of their respective files.
   - `<CA_CERT>`: Complete contents of `ca.pem`
   - `<CA_KEY_CERT>`: Complete contents of `ca-key.pem`
   > **Important:** in a production deployment, embedding the secret key in cloud-config is a bad idea! In production you should use an appropriate secret manager.
   > **Important:** Make sure you indent the entire file to match the indentation of the placeholder. For example:
   >
   > ```shell
   > - path: /etc/kubernetes/ssl/ca.pem
   >   owner: core
   >   permissions: 0644
   >   content: |
   >     <CA_CERT>
   > ```
   >
   > should look like this once the certificate is in place:
   >
   > ```shell
   > - path: /etc/kubernetes/ssl/ca.pem
   >   owner: core
   >   permissions: 0644
   >   content: |
   >     -----BEGIN CERTIFICATE-----
   >     MIIC9zCCAd+gAwIBAgIJAJMnVnhVhy5pMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV
   >     ...<snip>...
   >     QHwi1rNc8eBLNrd4BM/A1ZeDVh/Q9KxN+ZG/hHIXhmWKgN5wQx6/81FIFg==
   >     -----END CERTIFICATE-----
   > ```
5. Move the modified `node-config.yaml` to your Kubernetes node machine and install and configure CoreOS on the node using the following command.
   > **Warning:** this is a destructive operation that erases disk `sda` on your server.
   ```shell
   sudo coreos-install -d /dev/sda -C stable -c node-config.yaml
   ```
6. Once complete, restart the server and boot into `/dev/sda`. When it comes back up, you should have SSH access as the `core` user using the public key provided in the `node-config.yaml` file. It will take some time for the node to be fully configured.
## Configure Kubeconfig
To administer your cluster from a separate host, you will need the client and admin certificates generated earlier (`ca.pem`, `admin.pem`, `admin-key.pem`). With certificates in place, run the following commands with the appropriate filepaths.
```shell
kubectl config set-cluster calico-cluster --server=https://<KUBERNETES_MASTER> --certificate-authority=<CA_CERT_PATH>
kubectl config set-credentials calico-admin --certificate-authority=<CA_CERT_PATH> --client-key=<ADMIN_KEY_PATH> --client-certificate=<ADMIN_CERT_PATH>
kubectl config set-context calico --cluster=calico-cluster --user=calico-admin
kubectl config use-context calico
```
Check your work with `kubectl get nodes`.
## Install the DNS Addon ## Install the DNS Addon
Most Kubernetes deployments will require the DNS addon for service discovery. To install DNS, create the skydns service and replication controller provided. Most Kubernetes deployments will require the DNS addon for service discovery. To install DNS, create the skydns service and replication controller provided.
```shell ```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/dns/skydns.yaml kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/dns/skydns.yaml
``` ```
## Install the Kubernetes UI Addon (Optional) ## Install the Kubernetes UI Addon (Optional)
The Kubernetes UI can be installed with `kubectl` using the following manifest file.
```shell ```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kube-ui/kube-ui.yaml kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kube-ui/kube-ui.yaml
``` ```
## Launch other Services With Calico-Kubernetes ## Launch other Services With Calico-Kubernetes
At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.version}}.0/examples/) to set up other services on your cluster. At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.version}}.0/examples/) to set up other services on your cluster.
## Connectivity to outside the cluster ## Connectivity to outside the cluster
Because containers in this guide have private `192.168.0.0/16` IPs, you will need NAT to allow connectivity between containers and the internet. However, in a production data center deployment, NAT is not always necessary, since Calico can peer with the data center's border routers over BGP. Because containers in this guide have private `192.168.0.0/16` IPs, you will need NAT to allow connectivity between containers and the internet. However, in a production data center deployment, NAT is not always necessary, since Calico can peer with the data center's border routers over BGP.
### NAT on the nodes ### NAT on the nodes
The simplest method for enabling connectivity from containers to the internet is to use outgoing NAT on your Kubernetes nodes. The simplest method for enabling connectivity from containers to the internet is to use outgoing NAT on your Kubernetes nodes.
Calico can provide outgoing NAT for containers. To enable it, use the following `calicoctl` command: Calico can provide outgoing NAT for containers. To enable it, use the following `calicoctl` command:
```shell ```shell
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool add <CONTAINER_SUBNET> --nat-outgoing ETCD_AUTHORITY=<master_ip:6666> calicoctl pool add <CONTAINER_SUBNET> --nat-outgoing
``` ```
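For example, assuming the master's etcd is reachable at `172.18.18.101:6666` (a placeholder address) and you are using the default pool, this might look like:
```shell
ETCD_AUTHORITY=172.18.18.101:6666 calicoctl pool add 192.168.0.0/16 --nat-outgoing
```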
By default, `<CONTAINER_SUBNET>` will be `192.168.0.0/16`. You can find out which pools have been configured with the following command: By default, `<CONTAINER_SUBNET>` will be `192.168.0.0/16`. You can find out which pools have been configured with the following command:
```shell ```shell
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool show ETCD_AUTHORITY=<master_ip:6666> calicoctl pool show
``` ```
### NAT at the border router ### NAT at the border router
In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity). In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity).
The Calico documentation contains more information on how to configure Calico to [peer with existing infrastructure](https://github.com/projectcalico/calico-containers/blob/master/docs/ExternalConnectivity.md). The Calico documentation contains more information on how to configure Calico to [peer with existing infrastructure](https://github.com/projectcalico/calico-containers/blob/master/docs/ExternalConnectivity.md).
## Support Level ## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | CoreOS | CoreOS | Calico | [docs](/docs/getting-started-guides/coreos/bare_metal_calico) | | Community ([@caseydavenport](https://github.com/caseydavenport)) Bare-metal | CoreOS | CoreOS | Calico | [docs](/docs/getting-started-guides/coreos/bare_metal_calico) | | Community ([@caseydavenport](https://github.com/caseydavenport))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.


@ -1,241 +1,241 @@
--- ---
assignees: assignees:
- aveshagarwal - aveshagarwal
- erictune - erictune
--- ---
Configuring Kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort. Configuring Kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort.
* TOC * TOC
{:toc} {:toc}
## Prerequisites ## Prerequisites
1. Host able to run ansible and able to clone the following repo: [kubernetes](https://github.com/kubernetes/kubernetes.git) 1. Host able to run ansible and able to clone the following repo: [kubernetes](https://github.com/kubernetes/kubernetes.git)
2. A Fedora 21+ host to act as cluster master 2. A Fedora 21+ host to act as cluster master
3. As many Fedora 21+ hosts as you would like, that act as cluster nodes 3. As many Fedora 21+ hosts as you would like, that act as cluster nodes
The hosts can be virtual or bare metal. Ansible will take care of the rest of the configuration for you - configuring networking, installing packages, handling the firewall, etc. This example will use one master and two nodes. The hosts can be virtual or bare metal. Ansible will take care of the rest of the configuration for you - configuring networking, installing packages, handling the firewall, etc. This example will use one master and two nodes.
## Architecture of the cluster ## Architecture of the cluster
A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a cluster with three hosts, for example: A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a cluster with three hosts, for example:
```shell ```shell
master,etcd = kube-master.example.com master,etcd = kube-master.example.com
node1 = kube-node-01.example.com node1 = kube-node-01.example.com
node2 = kube-node-02.example.com node2 = kube-node-02.example.com
``` ```
**Make sure your local machine has** **Make sure your local machine has**
- ansible (must be 1.9.0+) - ansible (must be 1.9.0+)
- git - git
- python-netaddr - python-netaddr
If any of these are missing, install them:
```shell ```shell
yum install -y ansible git python-netaddr yum install -y ansible git python-netaddr
``` ```
**Now clone down the Kubernetes repository** **Now clone down the Kubernetes repository**
```shell ```shell
git clone https://github.com/kubernetes/contrib.git git clone https://github.com/kubernetes/contrib.git
cd contrib/ansible cd contrib/ansible
``` ```
**Tell ansible about each machine and its role in your cluster** **Tell ansible about each machine and its role in your cluster**
Get the IP addresses from the master and nodes. Add those to the `~/contrib/ansible/inventory` file on the host running Ansible. Get the IP addresses from the master and nodes. Add those to the `~/contrib/ansible/inventory` file on the host running Ansible.
```shell ```shell
[masters] [masters]
kube-master.example.com kube-master.example.com
[etcd] [etcd]
kube-master.example.com kube-master.example.com
[nodes] [nodes]
kube-node-01.example.com kube-node-01.example.com
kube-node-02.example.com kube-node-02.example.com
``` ```
## Setting up ansible access to your nodes ## Setting up ansible access to your nodes
If you are already running on a machine that has passwordless ssh access to the kube-master and kube-node-{01,02} nodes, as well as 'sudo' privileges, simply set the value of `ansible_ssh_user` in `~/contrib/ansible/group_vars/all.yml` to the username you use to ssh to the nodes (e.g. `fedora`), and proceed to the next step.
*Otherwise*, set up ssh on the machines as follows (you will need to know the root password for all machines in the cluster).
edit: ~/contrib/ansible/group_vars/all.yml edit: ~/contrib/ansible/group_vars/all.yml
```yaml ```yaml
ansible_ssh_user: root ansible_ssh_user: root
``` ```
**Configuring ssh access to the cluster** **Configuring ssh access to the cluster**
If you already have ssh access to every machine using ssh public keys you may skip to [setting up the cluster](#setting-up-the-cluster) If you already have ssh access to every machine using ssh public keys you may skip to [setting up the cluster](#setting-up-the-cluster)
Make sure your local machine (root) has an ssh key pair. If not, generate one:
```shell ```shell
ssh-keygen ssh-keygen
``` ```
Copy the ssh public key to **all** nodes in the cluster Copy the ssh public key to **all** nodes in the cluster
```shell ```shell
for node in kube-master.example.com kube-node-01.example.com kube-node-02.example.com; do for node in kube-master.example.com kube-node-01.example.com kube-node-02.example.com; do
ssh-copy-id ${node} ssh-copy-id ${node}
done done
``` ```
## Setting up the cluster ## Setting up the cluster
The default values of the variables in `~/contrib/ansible/group_vars/all.yml` should be good enough; if not, change them as needed.
```conf ```conf
edit: ~/contrib/ansible/group_vars/all.yml edit: ~/contrib/ansible/group_vars/all.yml
``` ```
**Configure access to kubernetes packages** **Configure access to kubernetes packages**
Modify `source_type` as below to access kubernetes packages through the package manager. Modify `source_type` as below to access kubernetes packages through the package manager.
```yaml ```yaml
source_type: packageManager source_type: packageManager
``` ```
**Configure the IP addresses used for services** **Configure the IP addresses used for services**
Each Kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment. Each Kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment.
```yaml ```yaml
kube_service_addresses: 10.254.0.0/16 kube_service_addresses: 10.254.0.0/16
``` ```
**Managing flannel** **Managing flannel**
Modify `flannel_subnet`, `flannel_prefix` and `flannel_host_prefix` only if defaults are not appropriate for your cluster. Modify `flannel_subnet`, `flannel_prefix` and `flannel_host_prefix` only if defaults are not appropriate for your cluster.
**Managing add on services in your cluster** **Managing add on services in your cluster**
Set `cluster_logging` to false or true (default) to disable or enable logging with elasticsearch. Set `cluster_logging` to false or true (default) to disable or enable logging with elasticsearch.
```yaml ```yaml
cluster_logging: true cluster_logging: true
``` ```
Set `cluster_monitoring` to true (default) or false to enable or disable cluster monitoring with heapster and influxdb.
```yaml ```yaml
cluster_monitoring: true cluster_monitoring: true
``` ```
Set `dns_setup` to true (recommended) or false to enable or disable the cluster DNS configuration.
```yaml ```yaml
dns_setup: true dns_setup: true
``` ```
**Tell ansible to get to work!** **Tell ansible to get to work!**
This will set up your whole Kubernetes cluster for you.
```shell ```shell
cd ~/contrib/ansible/ cd ~/contrib/ansible/
./setup.sh ./setup.sh
``` ```
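If the playbook cannot reach one of the hosts, a quick way to check connectivity is Ansible's built-in `ping` module, run from `~/contrib/ansible/` (an optional sanity check, not part of the original steps):
```shell
# Verify that Ansible can reach and authenticate to every host in the inventory
ansible all -i inventory -m ping
```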
## Testing and using your new cluster ## Testing and using your new cluster
That's all there is to it. It's really that easy. At this point you should have a functioning Kubernetes cluster. That's all there is to it. It's really that easy. At this point you should have a functioning Kubernetes cluster.
**Show kubernetes nodes** **Show kubernetes nodes**
Run the following on the kube-master: Run the following on the kube-master:
```shell ```shell
kubectl get nodes kubectl get nodes
``` ```
**Show services running on masters and nodes** **Show services running on masters and nodes**
```shell ```shell
systemctl | grep -i kube systemctl | grep -i kube
``` ```
**Show firewall rules on the masters and nodes** **Show firewall rules on the masters and nodes**
```shell ```shell
iptables -nvL iptables -nvL
``` ```
**Create /tmp/apache.json on the master with the following contents and deploy pod** **Create /tmp/apache.json on the master with the following contents and deploy pod**
```json ```json
{ {
"kind": "Pod", "kind": "Pod",
"apiVersion": "v1", "apiVersion": "v1",
"metadata": { "metadata": {
"name": "fedoraapache", "name": "fedoraapache",
"labels": { "labels": {
"name": "fedoraapache" "name": "fedoraapache"
} }
}, },
"spec": { "spec": {
"containers": [ "containers": [
{ {
"name": "fedoraapache", "name": "fedoraapache",
"image": "fedora/apache", "image": "fedora/apache",
"ports": [ "ports": [
{ {
"hostPort": 80, "hostPort": 80,
"containerPort": 80 "containerPort": 80
} }
] ]
} }
] ]
} }
} }
``` ```
```shell ```shell
kubectl create -f /tmp/apache.json kubectl create -f /tmp/apache.json
``` ```
**Check where the pod was created** **Check where the pod was created**
```shell ```shell
kubectl get pods kubectl get pods
``` ```
**Check Docker status on nodes** **Check Docker status on nodes**
```shell ```shell
docker ps docker ps
docker images docker images
``` ```
**After the pod is 'Running', check web server access on the node**
```shell ```shell
curl http://localhost curl http://localhost
``` ```
That's it!
## Support Level ## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | Ansible | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/fedora_ansible_config) | | Project Bare-metal | Ansible | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/fedora_ansible_config) | | Project
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.


@ -1,219 +1,219 @@
--- ---
assignees: assignees:
- aveshagarwal - aveshagarwal
- eparis - eparis
- thockin - thockin
--- ---
* TOC * TOC
{:toc} {:toc}
## Prerequisites ## Prerequisites
1. You need 2 or more machines with Fedora installed. 1. You need 2 or more machines with Fedora installed.
## Instructions ## Instructions
This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc... This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc...
This guide will only get ONE node (previously called a minion) working. Multiple nodes require a functional [networking configuration](/docs/admin/networking/) done outside of Kubernetes, although the additional Kubernetes configuration requirements should be obvious.
The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and Kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker. The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and Kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker.
**System Information:** **System Information:**
Hosts: Hosts:
```conf ```conf
fed-master = 192.168.121.9 fed-master = 192.168.121.9
fed-node = 192.168.121.65 fed-node = 192.168.121.65
``` ```
**Prepare the hosts:** **Prepare the hosts:**
* Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond. * Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond.
* The [--enablerepo=updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive. * The [--enablerepo=updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive.
* If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below. * If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below.
* If you are running on AWS EC2 with RHEL 7.2, you need to enable the "extras" repository for yum by editing `/etc/yum.repos.d/redhat-rhui.repo` and changing `enabled=0` to `enabled=1` for extras.
```shell ```shell
yum -y install --enablerepo=updates-testing kubernetes yum -y install --enablerepo=updates-testing kubernetes
``` ```
* Install etcd and iptables * Install etcd and iptables
```shell ```shell
yum -y install etcd iptables yum -y install etcd iptables
``` ```
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping. * Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping.
```shell ```shell
echo "192.168.121.9 fed-master echo "192.168.121.9 fed-master
192.168.121.65 fed-node" >> /etc/hosts 192.168.121.65 fed-node" >> /etc/hosts
``` ```
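For example, from fed-master you can confirm that the node resolves and responds:
```shell
# Send three pings to the node to verify name resolution and connectivity
ping -c 3 fed-node
```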
* Edit /etc/kubernetes/config which will be the same on all hosts (master and node) to contain: * Edit /etc/kubernetes/config which will be the same on all hosts (master and node) to contain:
```shell ```shell
# Comma separated list of nodes in the etcd cluster # Comma separated list of nodes in the etcd cluster
KUBE_MASTER="--master=http://fed-master:8080" KUBE_MASTER="--master=http://fed-master:8080"
# logging to stderr means we get it in the systemd journal # logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true" KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug # journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0" KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers # Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false" KUBE_ALLOW_PRIV="--allow-privileged=false"
``` ```
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers. Please note that iptables-services does not exist on a default Fedora Server install.
```shell ```shell
systemctl disable iptables-services firewalld systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld systemctl stop iptables-services firewalld
``` ```
**Configure the Kubernetes services on the master.** **Configure the Kubernetes services on the master.**
* Edit /etc/kubernetes/apiserver to appear as such. The service-cluster-ip-range IP addresses must be an unused block of addresses, not used anywhere else. They do not need to be routed or assigned to anything. * Edit /etc/kubernetes/apiserver to appear as such. The service-cluster-ip-range IP addresses must be an unused block of addresses, not used anywhere else. They do not need to be routed or assigned to anything.
```shell ```shell
# The address on the local server to listen to. # The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0" KUBE_API_ADDRESS="--address=0.0.0.0"
# Comma separated list of nodes in the etcd cluster # Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:4001" KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:4001"
# Address range to use for services # Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16" KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own! # Add your own!
KUBE_API_ARGS="" KUBE_API_ARGS=""
``` ```
* Edit /etc/etcd/etcd.conf so that etcd listens on all IP addresses instead of only 127.0.0.1; otherwise you will get errors like "connection refused". Note that Fedora 22 ships etcd 2.0, which now uses ports 2379 and 2380 by default (as opposed to etcd 0.4.6, which used 4001 and 7001).
```shell ```shell
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001" ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
``` ```
* Create /var/run/kubernetes on master: * Create /var/run/kubernetes on master:
```shell ```shell
mkdir /var/run/kubernetes mkdir /var/run/kubernetes
chown kube:kube /var/run/kubernetes chown kube:kube /var/run/kubernetes
chmod 750 /var/run/kubernetes chmod 750 /var/run/kubernetes
``` ```
* Start the appropriate services on master: * Start the appropriate services on master:
```shell ```shell
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES systemctl restart $SERVICES
systemctl enable $SERVICES systemctl enable $SERVICES
systemctl status $SERVICES systemctl status $SERVICES
done done
``` ```
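As an optional sanity check (not part of the original steps), you can confirm that the API server is responding on the insecure port 8080 that the nodes will use:
```shell
# The apiserver serves version information unauthenticated on its insecure port
curl http://fed-master:8080/version
```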
* Addition of nodes: * Addition of nodes:
* Create the following node.json file on the Kubernetes master node:
```json ```json
{ {
"apiVersion": "v1", "apiVersion": "v1",
"kind": "Node", "kind": "Node",
"metadata": { "metadata": {
"name": "fed-node", "name": "fed-node",
"labels":{ "name": "fed-node-label"} "labels":{ "name": "fed-node-label"}
}, },
"spec": { "spec": {
"externalID": "fed-node" "externalID": "fed-node"
} }
} }
``` ```
Now create a node object internally in your Kubernetes cluster by running: Now create a node object internally in your Kubernetes cluster by running:
```shell ```shell
$ kubectl create -f ./node.json $ kubectl create -f ./node.json
$ kubectl get nodes $ kubectl get nodes
NAME LABELS STATUS NAME LABELS STATUS
fed-node name=fed-node-label Unknown fed-node name=fed-node-label Unknown
``` ```
Please note that the above only creates a representation of the node _fed-node_ internally; it does not provision the actual _fed-node_. It is also assumed that _fed-node_ (as specified in `name`) can be resolved and is reachable from the Kubernetes master node. This guide will discuss how to provision a Kubernetes node (fed-node) below.
**Configure the Kubernetes services on the node.** **Configure the Kubernetes services on the node.**
***We need to configure the kubelet on the node.*** ***We need to configure the kubelet on the node.***
* Edit /etc/kubernetes/kubelet to appear as such: * Edit /etc/kubernetes/kubelet to appear as such:
```shell ```shell
### ###
# Kubernetes kubelet (node) config # Kubernetes kubelet (node) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces) # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0" KUBELET_ADDRESS="--address=0.0.0.0"
# You may leave this blank to use the actual hostname # You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=fed-node" KUBELET_HOSTNAME="--hostname-override=fed-node"
# location of the api-server # location of the api-server
KUBELET_API_SERVER="--api-servers=http://fed-master:8080" KUBELET_API_SERVER="--api-servers=http://fed-master:8080"
# Add your own! # Add your own!
#KUBELET_ARGS="" #KUBELET_ARGS=""
``` ```
* Start the appropriate services on the node (fed-node). * Start the appropriate services on the node (fed-node).
```shell ```shell
for SERVICES in kube-proxy kubelet docker; do for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES systemctl restart $SERVICES
systemctl enable $SERVICES systemctl enable $SERVICES
systemctl status $SERVICES systemctl status $SERVICES
done done
``` ```
* Check to make sure the cluster can now see fed-node on fed-master, and that its status changes to _Ready_.
```shell ```shell
kubectl get nodes kubectl get nodes
NAME LABELS STATUS NAME LABELS STATUS
fed-node name=fed-node-label Ready fed-node name=fed-node-label Ready
``` ```
* Deletion of nodes: * Deletion of nodes:
To delete _fed-node_ from your Kubernetes cluster, you would run the following on fed-master (do not actually run it now; it is shown for reference only):
```shell ```shell
kubectl delete -f ./node.json kubectl delete -f ./node.json
``` ```
*You should be finished!* *You should be finished!*
**The cluster should be running! Launch a test pod.** **The cluster should be running! Launch a test pod.**
You should have a functional cluster, check out [101](/docs/user-guide/walkthrough/)! You should have a functional cluster, check out [101](/docs/user-guide/walkthrough/)!
## Support Level ## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | custom | Fedora | _none_ | [docs](/docs/getting-started-guides/fedora/fedora_manual_config) | | Project Bare-metal | custom | Fedora | _none_ | [docs](/docs/getting-started-guides/fedora/fedora_manual_config) | | Project
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.


@ -1,191 +1,191 @@
--- ---
assignees: assignees:
- dchen1107 - dchen1107
- erictune - erictune
- thockin - thockin
--- ---
* TOC * TOC
{:toc} {:toc}
This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow the Fedora [getting started guide](/docs/getting-started-guides/fedora/fedora_manual_config/) to set up 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2, and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running the etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and that the nodes are running the docker, kube-proxy and kubelet services. Now install flannel on the Kubernetes nodes. flannel runs on each node and configures an overlay network, assigning each node a unique class-C container subnet that docker uses.
## Prerequisites ## Prerequisites
You need 2 or more machines with Fedora installed. You need 2 or more machines with Fedora installed.
## Master Setup ## Master Setup
**Perform following commands on the Kubernetes master** **Perform following commands on the Kubernetes master**
* Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. Flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose kernel based vxlan backend. The contents of the json are: * Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. Flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose kernel based vxlan backend. The contents of the json are:
```json ```json
{ {
"Network": "18.16.0.0/16", "Network": "18.16.0.0/16",
"SubnetLen": 24, "SubnetLen": 24,
"Backend": { "Backend": {
"Type": "vxlan", "Type": "vxlan",
"VNI": 1 "VNI": 1
} }
} }
``` ```
**NOTE:** Choose an IP range that is *NOT* part of the public IP address range. **NOTE:** Choose an IP range that is *NOT* part of the public IP address range.
Add the configuration to the etcd server on fed-master. Add the configuration to the etcd server on fed-master.
```shell ```shell
etcdctl set /coreos.com/network/config < flannel-config.json etcdctl set /coreos.com/network/config < flannel-config.json
``` ```
* Verify the key exists in the etcd server on fed-master. * Verify the key exists in the etcd server on fed-master.
```shell ```shell
etcdctl get /coreos.com/network/config etcdctl get /coreos.com/network/config
``` ```
## Node Setup ## Node Setup
**Perform following commands on all Kubernetes nodes** **Perform following commands on all Kubernetes nodes**
Edit the flannel configuration file /etc/sysconfig/flanneld as follows: Edit the flannel configuration file /etc/sysconfig/flanneld as follows:
```shell ```shell
# Flanneld configuration options # Flanneld configuration options
# etcd url location. Point this to the server where etcd runs # etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD="http://fed-master:4001" FLANNEL_ETCD="http://fed-master:4001"
# etcd config key. This is the configuration key that flannel queries # etcd config key. This is the configuration key that flannel queries
# For address range assignment # For address range assignment
FLANNEL_ETCD_KEY="/coreos.com/network" FLANNEL_ETCD_KEY="/coreos.com/network"
# Any additional options that you want to pass # Any additional options that you want to pass
FLANNEL_OPTIONS="" FLANNEL_OPTIONS=""
``` ```
**Note:** By default, flannel uses the interface for the default route. If you have multiple interfaces and would like to use an interface other than the default route one, you could add "-iface=" to FLANNEL_OPTIONS, as shown in the example below. For additional options, run `flanneld --help` on the command line.
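For example, to bind flannel to a hypothetical second interface `eth1`, the relevant line in /etc/sysconfig/flanneld would be:
```shell
# Pass the interface to use to flanneld
FLANNEL_OPTIONS="-iface=eth1"
```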
Enable the flannel service. Enable the flannel service.
```shell ```shell
systemctl enable flanneld systemctl enable flanneld
``` ```
If docker is not running, then starting the flannel service is enough; you can skip the next step.
```shell ```shell
systemctl start flanneld systemctl start flanneld
``` ```
If docker is already running, then stop docker, delete docker bridge (docker0), start flanneld and restart docker as follows. Another alternative is to just reboot the system (`systemctl reboot`). If docker is already running, then stop docker, delete docker bridge (docker0), start flanneld and restart docker as follows. Another alternative is to just reboot the system (`systemctl reboot`).
```shell ```shell
systemctl stop docker systemctl stop docker
ip link delete docker0 ip link delete docker0
systemctl start flanneld systemctl start flanneld
systemctl start docker systemctl start docker
``` ```
## **Test the cluster and flannel configuration** ## **Test the cluster and flannel configuration**
Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the ip addresses of docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each Kubernetes node out of the IP range configured above. A working output should look like this: Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the ip addresses of docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each Kubernetes node out of the IP range configured above. A working output should look like this:
```shell ```shell
# ip -4 a|grep inet # ip -4 a|grep inet
inet 127.0.0.1/8 scope host lo inet 127.0.0.1/8 scope host lo
inet 192.168.122.77/24 brd 192.168.122.255 scope global dynamic eth0 inet 192.168.122.77/24 brd 192.168.122.255 scope global dynamic eth0
inet 18.16.29.0/16 scope global flannel.1 inet 18.16.29.0/16 scope global flannel.1
inet 18.16.29.1/24 scope global docker0 inet 18.16.29.1/24 scope global docker0
``` ```
From any node in the cluster, check the cluster members by issuing a query to etcd server via curl (only partial output is shown using `grep -E "\{|\}|key|value"`). If you set up a 1 master and 3 nodes cluster, you should see one block for each node showing the subnets they have been assigned. You can associate those subnets to each node by the MAC address (VtepMAC) and IP address (Public IP) that is listed in the output. From any node in the cluster, check the cluster members by issuing a query to etcd server via curl (only partial output is shown using `grep -E "\{|\}|key|value"`). If you set up a 1 master and 3 nodes cluster, you should see one block for each node showing the subnets they have been assigned. You can associate those subnets to each node by the MAC address (VtepMAC) and IP address (Public IP) that is listed in the output.
```shell ```shell
curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool
``` ```
```json ```json
{ {
"node": { "node": {
"key": "/coreos.com/network/subnets", "key": "/coreos.com/network/subnets",
{ {
"key": "/coreos.com/network/subnets/18.16.29.0-24", "key": "/coreos.com/network/subnets/18.16.29.0-24",
"value": "{\"PublicIP\":\"192.168.122.77\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"46:f1:d0:18:d0:65\"}}" "value": "{\"PublicIP\":\"192.168.122.77\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"46:f1:d0:18:d0:65\"}}"
}, },
{ {
"key": "/coreos.com/network/subnets/18.16.83.0-24", "key": "/coreos.com/network/subnets/18.16.83.0-24",
"value": "{\"PublicIP\":\"192.168.122.36\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"ca:38:78:fc:72:29\"}}" "value": "{\"PublicIP\":\"192.168.122.36\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"ca:38:78:fc:72:29\"}}"
}, },
{ {
"key": "/coreos.com/network/subnets/18.16.90.0-24", "key": "/coreos.com/network/subnets/18.16.90.0-24",
"value": "{\"PublicIP\":\"192.168.122.127\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"92:e2:80:ba:2d:4d\"}}" "value": "{\"PublicIP\":\"192.168.122.127\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"92:e2:80:ba:2d:4d\"}}"
} }
} }
} }
``` ```
From all nodes, review the `/run/flannel/subnet.env` file. This file was generated automatically by flannel. From all nodes, review the `/run/flannel/subnet.env` file. This file was generated automatically by flannel.
```shell ```shell
# cat /run/flannel/subnet.env # cat /run/flannel/subnet.env
FLANNEL_SUBNET=18.16.29.1/24 FLANNEL_SUBNET=18.16.29.1/24
FLANNEL_MTU=1450 FLANNEL_MTU=1450
FLANNEL_IPMASQ=false FLANNEL_IPMASQ=false
``` ```
At this point, we have etcd running on the Kubernetes master, and flannel / docker running on Kubernetes nodes. Next steps are for testing cross-host container communication which will confirm that docker and flannel are configured properly. At this point, we have etcd running on the Kubernetes master, and flannel / docker running on Kubernetes nodes. Next steps are for testing cross-host container communication which will confirm that docker and flannel are configured properly.
Issue the following commands on any 2 nodes: Issue the following commands on any 2 nodes:
```shell ```shell
# docker run -it fedora:latest bash # docker run -it fedora:latest bash
bash-4.3# bash-4.3#
``` ```
This will place you inside the container. Install the iproute and iputils packages to get the ip and ping utilities. Due to a [bug](https://bugzilla.redhat.com/show_bug.cgi?id=1142311), you need to modify the capabilities of the ping binary to work around the "Operation not permitted" error.
```shell ```shell
bash-4.3# yum -y install iproute iputils bash-4.3# yum -y install iproute iputils
bash-4.3# setcap cap_net_raw-ep /usr/bin/ping bash-4.3# setcap cap_net_raw-ep /usr/bin/ping
``` ```
Now note the IP address on the first node: Now note the IP address on the first node:
```shell ```shell
bash-4.3# ip -4 a l eth0 | grep inet bash-4.3# ip -4 a l eth0 | grep inet
inet 18.16.29.4/24 scope global eth0 inet 18.16.29.4/24 scope global eth0
``` ```
And also note the IP address on the other node: And also note the IP address on the other node:
```shell ```shell
bash-4.3# ip a l eth0 | grep inet bash-4.3# ip a l eth0 | grep inet
inet 18.16.90.4/24 scope global eth0 inet 18.16.90.4/24 scope global eth0
``` ```
Now ping from the first node to the other node: Now ping from the first node to the other node:
```shell ```shell
bash-4.3# ping 18.16.90.4 bash-4.3# ping 18.16.90.4
PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data. PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data.
64 bytes from 18.16.90.4: icmp_seq=1 ttl=62 time=0.275 ms 64 bytes from 18.16.90.4: icmp_seq=1 ttl=62 time=0.275 ms
64 bytes from 18.16.90.4: icmp_seq=2 ttl=62 time=0.372 ms 64 bytes from 18.16.90.4: icmp_seq=2 ttl=62 time=0.372 ms
``` ```
The Kubernetes multi-node cluster is now set up with overlay networking provided by flannel.
## Support Level ## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Bare-metal | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) Bare-metal | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))
libvirt | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) libvirt | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))
KVM | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) KVM | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.


@ -1,50 +1,50 @@
--- ---
assignees: assignees:
- caesarxuchao - caesarxuchao
- mikedanese - mikedanese
--- ---
kubectl port-forward forwards connections from a local port to a port on a pod. Its man page is available [here](/docs/user-guide/kubectl/kubectl_port-forward). Compared to [kubectl proxy](/docs/user-guide/accessing-the-cluster/#using-kubectl-proxy), `kubectl port-forward` is more generic as it can forward TCP traffic, while `kubectl proxy` can only forward HTTP traffic. This guide demonstrates how to use `kubectl port-forward` to connect to a Redis database, which may be useful for database debugging.
## Creating a Redis master ## Creating a Redis master
```shell ```shell
$ kubectl create -f examples/redis/redis-master.yaml $ kubectl create -f examples/redis/redis-master.yaml
pods/redis-master pods/redis-master
``` ```
wait until the Redis master pod is Running and Ready, wait until the Redis master pod is Running and Ready,
```shell ```shell
$ kubectl get pods $ kubectl get pods
NAME READY STATUS RESTARTS AGE NAME READY STATUS RESTARTS AGE
redis-master 2/2 Running 0 41s redis-master 2/2 Running 0 41s
``` ```
## Connecting to the Redis master
The Redis master is listening on port 6379. To verify this,
```shell{% raw %} ```shell{% raw %}
$ kubectl get pods redis-master --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}' $ kubectl get pods redis-master --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
6379{% endraw %} 6379{% endraw %}
``` ```
then we forward port 6379 on the local workstation to port 6379 of the redis-master pod,
```shell ```shell
$ kubectl port-forward redis-master 6379:6379 $ kubectl port-forward redis-master 6379:6379
I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:6379 -> 6379 I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:6379 -> 6379
I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:6379 -> 6379 I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:6379 -> 6379
``` ```
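If local port 6379 is already in use on your workstation, you can forward from a different local port instead (7000 below is an arbitrary choice) and point `redis-cli -p 7000` at it:
```shell
# Forward local port 7000 to port 6379 of the redis-master pod
kubectl port-forward redis-master 7000:6379
```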
To verify the connection is successful, we run a redis-cli on the local workstation, To verify the connection is successful, we run a redis-cli on the local workstation,
```shell ```shell
$ redis-cli $ redis-cli
127.0.0.1:6379> ping 127.0.0.1:6379> ping
PONG PONG
``` ```
Now one can debug the database from the local workstation. Now one can debug the database from the local workstation.


@ -1,32 +1,32 @@
--- ---
assignees: assignees:
- caesarxuchao - caesarxuchao
- lavalamp - lavalamp
--- ---
You have seen the [basics](/docs/user-guide/accessing-the-cluster) about `kubectl proxy` and `apiserver proxy`. This guide shows how to use them together to access a service ([kube-ui](/docs/user-guide/ui)) running on the Kubernetes cluster from your workstation.
## Getting the apiserver proxy URL of kube-ui ## Getting the apiserver proxy URL of kube-ui
kube-ui is deployed as a cluster add-on. To find its apiserver proxy URL, kube-ui is deployed as a cluster add-on. To find its apiserver proxy URL,
```shell ```shell
$ kubectl cluster-info | grep "KubeUI" $ kubectl cluster-info | grep "KubeUI"
KubeUI is running at https://173.255.119.104/api/v1/proxy/namespaces/kube-system/services/kube-ui KubeUI is running at https://173.255.119.104/api/v1/proxy/namespaces/kube-system/services/kube-ui
``` ```
if this command does not find the URL, try the steps [here](/docs/user-guide/ui/#accessing-the-ui). if this command does not find the URL, try the steps [here](/docs/user-guide/ui/#accessing-the-ui).
## Connecting to the kube-ui service from your local workstation ## Connecting to the kube-ui service from your local workstation
The proxy URL above provides access to the kube-ui service through the apiserver. To access it, you still need to authenticate to the apiserver; `kubectl proxy` can handle the authentication.
```shell ```shell
$ kubectl proxy --port=8001 $ kubectl proxy --port=8001
Starting to serve on localhost:8001 Starting to serve on localhost:8001
``` ```
Now you can access the kube-ui service on your local workstation at [http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui](http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui) Now you can access the kube-ui service on your local workstation at [http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui](http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui)
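You can also fetch the same proxied URL from the command line with curl (the `-L` flag follows any redirects the proxy may return):
```shell
# Fetch the kube-ui service through the local kubectl proxy
curl -L http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui
```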


@ -1,74 +1,74 @@
--- ---
assignees: assignees:
- caesarxuchao - caesarxuchao
- mikedanese - mikedanese
--- ---
Developers can use `kubectl exec` to run commands in a container. This guide demonstrates two use cases. Developers can use `kubectl exec` to run commands in a container. This guide demonstrates two use cases.
## Using kubectl exec to check the environment variables of a container ## Using kubectl exec to check the environment variables of a container
Kubernetes exposes [services](/docs/user-guide/services/#environment-variables) through environment variables. It is convenient to check these environment variables using `kubectl exec`. Kubernetes exposes [services](/docs/user-guide/services/#environment-variables) through environment variables. It is convenient to check these environment variables using `kubectl exec`.
We first create a pod and a service, We first create a pod and a service,
```shell ```shell
$ kubectl create -f examples/guestbook/redis-master-controller.yaml $ kubectl create -f examples/guestbook/redis-master-controller.yaml
$ kubectl create -f examples/guestbook/redis-master-service.yaml $ kubectl create -f examples/guestbook/redis-master-service.yaml
``` ```
wait until the pod is Running and Ready, wait until the pod is Running and Ready,
```shell ```shell
$ kubectl get pod $ kubectl get pod
NAME READY REASON RESTARTS AGE NAME READY REASON RESTARTS AGE
redis-master-ft9ex 1/1 Running 0 12s redis-master-ft9ex 1/1 Running 0 12s
``` ```
then we can check the environment variables of the pod, then we can check the environment variables of the pod,
```shell ```shell
$ kubectl exec redis-master-ft9ex env $ kubectl exec redis-master-ft9ex env
... ...
REDIS_MASTER_SERVICE_PORT=6379 REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_SERVICE_HOST=10.0.0.219 REDIS_MASTER_SERVICE_HOST=10.0.0.219
... ...
``` ```
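To narrow the output to just the variables for a particular service, pipe it through grep, for example:
```shell
# Show only the redis-master service variables
kubectl exec redis-master-ft9ex env | grep REDIS_MASTER
```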
We can use these environment variables in applications to find the service. We can use these environment variables in applications to find the service.
## Using kubectl exec to check the mounted volumes ## Using kubectl exec to check the mounted volumes
It is convenient to use `kubectl exec` to check if the volumes are mounted as expected. It is convenient to use `kubectl exec` to check if the volumes are mounted as expected.
We first create a Pod with a volume mounted at /data/redis, We first create a Pod with a volume mounted at /data/redis,
```shell ```shell
kubectl create -f docs/user-guide/walkthrough/pod-redis.yaml kubectl create -f docs/user-guide/walkthrough/pod-redis.yaml
``` ```
wait until the pod is Running and Ready, wait until the pod is Running and Ready,
```shell ```shell
$ kubectl get pods $ kubectl get pods
NAME READY REASON RESTARTS AGE NAME READY REASON RESTARTS AGE
storage 1/1 Running 0 1m storage 1/1 Running 0 1m
``` ```
we then use `kubectl exec` to verify that the volume is mounted at /data/redis, we then use `kubectl exec` to verify that the volume is mounted at /data/redis,
```shell ```shell
$ kubectl exec storage ls /data $ kubectl exec storage ls /data
redis redis
``` ```
## Using kubectl exec to open a bash terminal in a pod ## Using kubectl exec to open a bash terminal in a pod
Finally, opening a terminal in a pod is the most direct way to introspect it. Assuming the storage pod is still running, run
```shell ```shell
$ kubectl exec -ti storage -- bash $ kubectl exec -ti storage -- bash
root@storage:/data# root@storage:/data#
``` ```
This gets you a terminal. This gets you a terminal.


@ -1,80 +1,80 @@
--- ---
assignees: assignees:
- mikedanese - mikedanese
--- ---
This page is designed to help you use logs to troubleshoot issues with your Kubernetes solution. This page is designed to help you use logs to troubleshoot issues with your Kubernetes solution.
## Logging by Kubernetes Components ## Logging by Kubernetes Components
Kubernetes components, such as kubelet and apiserver, use the [glog](https://godoc.org/github.com/golang/glog) logging library. Developer conventions for logging severity are described in [docs/devel/logging.md](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/logging.md). Kubernetes components, such as kubelet and apiserver, use the [glog](https://godoc.org/github.com/golang/glog) logging library. Developer conventions for logging severity are described in [docs/devel/logging.md](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/logging.md).
## Examining the logs of running containers ## Examining the logs of running containers
The logs of a running container may be fetched using the command `kubectl logs`. For example, take this pod specification, [counter-pod.yaml](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/blog-logging/counter-pod.yaml), which has a container that writes some text to standard output every second. (You can find different pod specifications [here](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/user-guide/logging-demo/).)
{% include code.html language="yaml" file="counter-pod.yaml" k8slink="/examples/blog-logging/counter-pod.yaml" %}
We can run the pod:
```shell
$ kubectl create -f ./counter-pod.yaml
pods/counter
```
and then fetch the logs:
```shell
$ kubectl logs counter
0: Tue Jun 2 21:37:31 UTC 2015
1: Tue Jun 2 21:37:32 UTC 2015
2: Tue Jun 2 21:37:33 UTC 2015
3: Tue Jun 2 21:37:34 UTC 2015
4: Tue Jun 2 21:37:35 UTC 2015
5: Tue Jun 2 21:37:36 UTC 2015
...
```
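To keep watching the output as new lines arrive, you can stream the logs instead of taking a one-off snapshot:

```shell
# Follow the container's logs; press Ctrl+C to stop streaming.
$ kubectl logs -f counter
```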
If a pod has more than one container, you need to specify which container's logs should
be fetched, e.g.
```shell
$ kubectl logs kube-dns-v3-7r1l9 etcd
2015/06/23 00:43:10 etcdserver: start to snapshot (applied: 30003, lastsnap: 20002)
2015/06/23 00:43:10 etcdserver: compacted log at index 30003
2015/06/23 00:43:10 etcdserver: saved snapshot at index 30003
2015/06/23 02:05:42 etcdserver: start to snapshot (applied: 40004, lastsnap: 30003)
2015/06/23 02:05:42 etcdserver: compacted log at index 40004
2015/06/23 02:05:42 etcdserver: saved snapshot at index 40004
2015/06/23 03:28:31 etcdserver: start to snapshot (applied: 50005, lastsnap: 40004)
2015/06/23 03:28:31 etcdserver: compacted log at index 50005
2015/06/23 03:28:31 etcdserver: saved snapshot at index 50005
2015/06/23 03:28:56 filePurge: successfully removed file default.etcd/member/wal/0000000000000000-0000000000000000.wal
2015/06/23 04:51:03 etcdserver: start to snapshot (applied: 60006, lastsnap: 50005)
2015/06/23 04:51:03 etcdserver: compacted log at index 60006
2015/06/23 04:51:03 etcdserver: saved snapshot at index 60006
...
```
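The container can also be selected with the `-c` flag, which is equivalent to the positional form above:

```shell
# Equivalent to `kubectl logs kube-dns-v3-7r1l9 etcd`.
$ kubectl logs kube-dns-v3-7r1l9 -c etcd
```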
## Cluster level logging to Google Cloud Logging
The getting started guide [Cluster Level Logging to Google Cloud Logging](/docs/getting-started-guides/logging)
explains how container logs are ingested into [Google Cloud Logging](https://cloud.google.com/logging/docs/)
and shows how to query the ingested logs.
## Cluster level logging with Elasticsearch and Kibana
The getting started guide [Cluster Level Logging with Elasticsearch and Kibana](/docs/getting-started-guides/logging-elasticsearch)
describes how to ingest cluster level logs into Elasticsearch and view them using Kibana.
## Ingesting Application Log Files
Cluster level logging only collects the standard output and standard error output of the applications
running in containers. The guide [Collecting log files from within containers with Fluentd and sending them to the Google Cloud Logging service](https://github.com/kubernetes/contrib/blob/master/logging/fluentd-sidecar-gcp/README.md) explains how the log files of applications can also be ingested into Google Cloud Logging.
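The general shape of that approach is a sidecar container that shares a volume with the application and ships the log files it finds there (a minimal sketch only; the image names, volume type, and paths below are assumptions, and the linked guide is authoritative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app
    image: my-app                 # hypothetical application image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app     # the application writes its log files here
  - name: log-forwarder
    image: my-fluentd-sidecar     # hypothetical log-shipping image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app     # the sidecar reads the same files and forwards them
  volumes:
  - name: app-logs
    emptyDir: {}
```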
## Known issues
Kubernetes does log rotation for Kubernetes components and Docker containers. The command `kubectl logs` currently only reads the latest logs, not all historical ones.
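Relatedly, if a container has crashed and been restarted, the logs of its previous instantiation can still be fetched with the `--previous` flag (the `counter` pod is reused here only for illustration):

```shell
# Fetch logs from the previous instance of the container in the pod.
$ kubectl logs --previous counter
```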