Syntax highlighting manual fixes up to mesos.md (next is scratch.md)

pull/43/head
John Mulhausen 2016-02-16 19:28:28 -08:00
parent ee72211075
commit beda532ca4
128 changed files with 1765 additions and 3547 deletions

View File

@ -19,7 +19,7 @@ title: "Kubernetes Documentation: releases.k8s.io/release-1.1"
* An overview of the [Design of Kubernetes](design/)
* There are example files and walkthroughs in the [examples](https://github.com/kubernetes/kubernetes/tree/master/examples)
folder.
* If something went wrong, see the [troubleshooting](troubleshooting) document for how to debug.

View File

@ -127,11 +127,6 @@ Yes.
For Kubernetes 1.0, we strongly recommend running the following set of admission control plug-ins (order matters):
```shell
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
```

View File

@ -88,26 +88,19 @@ To permit an action Policy with an unset namespace applies regardless of namespa
A service account automatically generates a user. The user's name is generated according to the naming convention:
```shell
system:serviceaccount:<namespace>:<serviceaccountname>
```
Creating a new namespace also causes a new service account to be created, of this form:
```shell
system:serviceaccount:<namespace>:default
```
For example, if you wanted to grant the default service account in the kube-system full privilege to the API, you would add this line to your policy file:
```json
{"user":"system:serviceaccount:kube-system:default"}
```
The apiserver will need to be restarted to pick up the new policy lines.
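As a sketch of what such a policy file might look like: in this era of Kubernetes, the ABAC policy file is a sequence of JSON objects, one per line. The entries below are illustrative examples, not a recommended policy:

```json
{"user":"admin"}
{"user":"system:serviceaccount:kube-system:default"}
{"user":"alice", "resource": "pods", "readonly": true}
```

Each line grants one user access; fields such as `resource` and `readonly` narrow the grant, and an absent field matches everything.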
@ -118,11 +111,9 @@ Other implementations can be developed fairly easily.
The APIserver calls the Authorizer interface:
```go
type Authorizer interface {
  Authorize(a Attributes) error
}
```
to determine whether or not to allow each API action.

View File

@ -40,7 +40,7 @@ To prevent memory leaks or other resource issues in [cluster addons](https://rel
For example:
```yaml
containers:
  - image: gcr.io/google_containers/heapster:v0.15.0
    name: heapster
@ -48,8 +48,8 @@ containers:
      limits:
        cpu: 100m
        memory: 200Mi
```
These limits, however, are based on data collected from addons running on 4-node clusters (see [#10335](http://issue.k8s.io/10335#issuecomment-117861225)). The addons consume a lot more resources when running on large deployment clusters (see [#5880](http://issue.k8s.io/5880#issuecomment-113984085)). So, if a large cluster is deployed without adjusting these values, the addons may continuously get killed because they keep hitting the limits.
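For instance, a larger deployment might raise the heapster limits along these lines (the numbers below are purely illustrative, not recommendations; tune them to your own node count and monitoring data):

```yaml
containers:
  - image: gcr.io/google_containers/heapster:v0.15.0
    name: heapster
    resources:
      limits:
        cpu: 500m      # illustrative value for a larger cluster
        memory: 1Gi    # illustrative value for a larger cluster
```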
To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following:

View File

@ -40,24 +40,20 @@ The node upgrade process is user-initiated and is described in the [GKE document
### Upgrading open source Google Compute Engine clusters
Upgrades on open source Google Compute Engine (GCE) clusters are controlled by the `cluster/gce/upgrade.sh` script.
Get its usage by running `cluster/gce/upgrade.sh -h`.
For example, to upgrade just your master to a specific version (v1.0.2):
```shell
cluster/gce/upgrade.sh -M v1.0.2
```
Alternatively, to upgrade your entire cluster to the latest stable release:
```shell
cluster/gce/upgrade.sh release/stable
```
### Other platforms
@ -70,10 +66,8 @@ recommend testing the upgrade on an experimental cluster before performing the u
If your cluster runs short on resources, you can easily add more machines if it is running in [Node self-registration mode](node.html#self-registration-of-nodes).
If you're using GCE or GKE, this is done by resizing the Instance Group managing your Nodes. It can be accomplished by modifying the number of instances on the `Compute > Compute Engine > Instance groups > your group > Edit group` [Google Cloud Console page](https://console.developers.google.com) or using the gcloud CLI:
```shell
gcloud compute instance-groups managed --zone compute-zone resize my-cluster-minion-group --new-size 42
```
The Instance Group will take care of putting the appropriate image on new machines and starting them, while the Kubelet will register its Node with the API server to make it available for scheduling. If you scale the instance group down, the system will randomly choose Nodes to kill.
@ -84,31 +78,32 @@ In other environments you may need to configure the machine yourself and tell th
### Horizontal auto-scaling of nodes (GCE)
If you are using GCE, you can configure your cluster so that the number of nodes will be automatically scaled based on their CPU and memory utilization.
Before setting up the cluster by `kube-up.sh`, you can set the `KUBE_ENABLE_NODE_AUTOSCALER` environment variable to `true` and export it.
The script will create an autoscaler for the instance group managing your nodes.
The autoscaler will try to maintain the average CPU and memory utilization of nodes within the cluster close to the target value.
The target value can be configured by the `KUBE_TARGET_NODE_UTILIZATION` environment variable (default: 0.7) for `kube-up.sh` when creating the cluster.
The node utilization is the total node's CPU/memory usage (OS + k8s + user load) divided by the node's capacity.
If the desired numbers of nodes in the cluster resulting from CPU utilization and memory utilization are different,
the autoscaler will choose the bigger number.
The number of nodes in the cluster set by the autoscaler will be limited from `KUBE_AUTOSCALER_MIN_NODES` (default: 1)
to `KUBE_AUTOSCALER_MAX_NODES` (default: the initial number of nodes in the cluster).
The autoscaler is implemented as a Compute Engine Autoscaler.
The initial values of the autoscaler parameters set by `kube-up.sh` and some more advanced options can be tweaked on
`Compute > Compute Engine > Instance groups > your group > Edit group` [Google Cloud Console page](https://console.developers.google.com)
or using gcloud CLI:
```shell
gcloud preview autoscaler --zone compute-zone <command>
```
Note that autoscaling will work properly only if node metrics are accessible in Google Cloud Monitoring. To make the metrics accessible, you need to create your cluster with `KUBE_ENABLE_CLUSTER_MONITORING` equal to `google` or `googleinfluxdb` (`googleinfluxdb` is the default value).
## Maintenance on a Node
@ -123,9 +118,7 @@ If you want more control over the upgrading process, you may use the following w
Mark the node to be rebooted as unschedulable:
```shell
kubectl replace nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": true}}'
```
This keeps new pods from landing on the node while you are trying to get them off.
@ -135,9 +128,7 @@ Get the pods off the machine, via any of the following strategies:
* Delete pods with:
```shell
kubectl delete pods $PODNAME
```
For pods with a replication controller, the pod will eventually be replaced by a new pod which will be scheduled to a new node. Additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
@ -149,9 +140,7 @@ Perform maintenance work on the node.
Make the node schedulable again:
```shell
kubectl replace nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": false}}'
```
If you deleted the node's VM instance and created a new one, then a new schedulable node resource will
@ -193,11 +182,6 @@ for changes to this variable to take effect.
You can use the `kube-version-change` utility to convert config files between different API versions.
```shell
$ hack/build-go.sh cmd/kube-version-change
$ _output/local/go/bin/kube-version-change -i myPod.v1beta3.yaml -o myPod.v1.yaml
```

View File

@ -13,9 +13,7 @@ The first thing to debug in your cluster is if your nodes are all registered cor
Run
```shell
kubectl get nodes
```
And verify that all of the nodes you expect to see are present and that they are all in the `Ready` state.
@ -108,7 +106,4 @@ Mitigations:
- Mitigates: Kubelet software fault
- Action: [Multiple independent clusters](multi-cluster) (and avoid making risky changes to all clusters at once)
- Mitigates: Everything listed above.

View File

@ -42,10 +42,5 @@ test key. On your master VM (or somewhere with firewalls configured such that
you can talk to your cluster's etcd), try:
```shell
curl -fs -X PUT "http://${host}:${port}/v2/keys/_test"
```

View File

@ -12,8 +12,6 @@ or try [Google Container Engine](https://cloud.google.com/container-engine/) for
Also, at this time high availability support for Kubernetes is not continuously tested in our end-to-end (e2e) testing. We will
be working to add this continuous testing, but for now the single-node master installations are more heavily tested.
{% include pagetoc.html %}
## Overview
@ -85,10 +83,10 @@ a simple cluster set up, using etcd's built in discovery to build our cluster.
First, hit the etcd discovery service to create a new token:
```shell
curl https://discovery.etcd.io/new?size=3
```
On each node, copy the [etcd.yaml](high-availability/etcd.yaml) file into `/etc/kubernetes/manifests/etcd.yaml`.
The kubelet on each node actively monitors the contents of that directory, and it will create an instance of the `etcd`
@ -103,16 +101,16 @@ for `${NODE_IP}` on each machine.
Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate with
```shell
etcdctl member list
```
and
```shell
etcdctl cluster-health
```
You can also validate that this is working with `etcdctl set foo bar` on one node, and `etcdctl get foo`
on a different node.
@ -141,10 +139,10 @@ Once you have replicated etcd set up correctly, we will also install the apiserv
First you need to create the initial log file, so that Docker mounts a file instead of a directory:
```shell
touch /var/log/kube-apiserver.log
```
Next, you need to create a `/srv/kubernetes/` directory on each node. This directory includes:
* basic_auth.csv - basic auth user and password
* ca.crt - Certificate Authority cert
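For reference, `basic_auth.csv` is a plain CSV file; in releases of this era each line takes the form `password,username,uid`. The credentials below are placeholders, not real values:

```
mypassword,admin,1
```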
@ -193,14 +191,13 @@ In the future, we expect to more tightly integrate this lease-locking into the s
First, create empty log files on each node, so that Docker will mount the files rather than creating new directories:
```shell
touch /var/log/kube-scheduler.log
touch /var/log/kube-controller-manager.log
```
Next, set up the descriptions of the scheduler and controller manager pods on each node
by copying [kube-scheduler.yaml](high-availability/kube-scheduler.yaml) and [kube-controller-manager.yaml](high-availability/kube-controller-manager.yaml) into the `/srv/kubernetes/` directory.
### Running the podmaster

View File

@ -40,32 +40,27 @@ This example will work in a custom namespace to demonstrate the concepts involve
Let's create a new namespace called limit-example:
```shell
$ kubectl create -f docs/admin/limitrange/namespace.yaml
namespace "limit-example" created
$ kubectl get namespaces
NAME LABELS STATUS AGE
default <none> Active 5m
limit-example <none> Active 53s
```
## Step 2: Apply a limit to the namespace
Let's create a simple limit in our namespace.
```shell
$ kubectl create -f docs/admin/limitrange/limits.yaml --namespace=limit-example
limitrange "mylimits" created
```
Let's describe the limits that we have imposed in our namespace.
```shell
$ kubectl describe limits mylimits --namespace=limit-example
Name: mylimits
Namespace: limit-example
@ -75,9 +70,8 @@ Pod cpu 200m 2 - - -
Pod memory 6Mi 1Gi - - -
Container cpu 100m 2 200m 300m -
Container memory 3Mi 1Gi 100Mi 200Mi -
```
In this scenario, we have said the following:
1. If a max constraint is specified for a resource (2 CPU and 1Gi memory in this case), then a limit
@ -104,20 +98,17 @@ of creation explaining why.
Let's first spin up a replication controller that creates a single container pod to demonstrate
how default values are applied to each pod.
```shell
$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
replicationcontroller "nginx" created
$ kubectl get pods --namespace=limit-example
NAME READY STATUS RESTARTS AGE
nginx-aq0mf 1/1 Running 0 35s
$ kubectl get pods nginx-aq0mf --namespace=limit-example -o yaml | grep resources -C 8
```
```yaml
resourceVersion: "127"
selfLink: /api/v1/namespaces/limit-example/pods/nginx-aq0mf
uid: 51be42a7-7156-11e5-9921-286ed488f785
spec:
@ -134,33 +125,27 @@ spec:
memory: 100Mi
terminationMessagePath: /dev/termination-log
volumeMounts:
```
Note that our nginx container has picked up the namespace default cpu and memory resource *limits* and *requests*.
Let's create a pod that exceeds our allowed limits by giving it a container that requests 3 cpu cores.
```shell
$ kubectl create -f docs/admin/limitrange/invalid-pod.yaml --namespace=limit-example
Error from server: error when creating "docs/admin/limitrange/invalid-pod.yaml": Pod "invalid-pod" is forbidden: [Maximum cpu usage per Pod is 2, but limit is 3., Maximum cpu usage per Container is 2, but limit is 3.]
```
Let's create a pod that falls within the allowed limit boundaries.
```shell
$ kubectl create -f docs/admin/limitrange/valid-pod.yaml --namespace=limit-example
pod "valid-pod" created
$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources
```
```yaml
uid: 162a12aa-7157-11e5-9921-286ed488f785
spec:
containers:
- image: gcr.io/google_containers/serve_hostname
@ -173,47 +158,36 @@ spec:
requests:
cpu: "1"
memory: 512Mi
```
Note that this pod specifies explicit resource *limits* and *requests* so it did not pick up the namespace
default values.
Note: The *limits* for CPU resource are not enforced in the default Kubernetes setup on the physical node
that runs the container unless the administrator deploys the kubelet with the following flag:
```shell
$ kubelet --help
Usage of kubelet
....
--cpu-cfs-quota[=false]: Enable CPU CFS quota enforcement for containers that specify CPU limits
$ kubelet --cpu-cfs-quota=true ...
```
## Step 4: Cleanup
To remove the resources used by this example, you can just delete the limit-example namespace.
```shell
$ kubectl delete namespace limit-example
namespace "limit-example" deleted
$ kubectl get namespaces
NAME LABELS STATUS AGE
default <none> Active 20m
```
## Summary
Cluster operators that want to restrict the amount of resources a single container or pod may consume
are able to define allowable ranges per Kubernetes namespace. In the absence of any explicit assignments,
the Kubernetes system is able to apply default resource *limits* and *requests* if desired in order to
constrain the amount of resource a pod consumes on a node.

View File

@ -40,32 +40,27 @@ This example will work in a custom namespace to demonstrate the concepts involve
Let's create a new namespace called limit-example:
```shell
$ kubectl create -f docs/admin/limitrange/namespace.yaml
namespace "limit-example" created
$ kubectl get namespaces
NAME LABELS STATUS AGE
default <none> Active 5m
limit-example <none> Active 53s
```
## Step 2: Apply a limit to the namespace
Let's create a simple limit in our namespace.
```shell
$ kubectl create -f docs/admin/limitrange/limits.yaml --namespace=limit-example
limitrange "mylimits" created
```
Let's describe the limits that we have imposed in our namespace.
```shell
$ kubectl describe limits mylimits --namespace=limit-example
Name: mylimits
Namespace: limit-example
@ -75,9 +70,8 @@ Pod cpu 200m 2 - - -
Pod memory 6Mi 1Gi - - -
Container cpu 100m 2 200m 300m -
Container memory 3Mi 1Gi 100Mi 200Mi -
```
In this scenario, we have said the following:
1. If a max constraint is specified for a resource (2 CPU and 1Gi memory in this case), then a limit
@ -104,20 +98,17 @@ of creation explaining why.
Let's first spin up a replication controller that creates a single container pod to demonstrate
how default values are applied to each pod.
```shell
$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
replicationcontroller "nginx" created
$ kubectl get pods --namespace=limit-example
NAME READY STATUS RESTARTS AGE
nginx-aq0mf 1/1 Running 0 35s
$ kubectl get pods nginx-aq0mf --namespace=limit-example -o yaml | grep resources -C 8
```
```yaml
resourceVersion: "127"
selfLink: /api/v1/namespaces/limit-example/pods/nginx-aq0mf
uid: 51be42a7-7156-11e5-9921-286ed488f785
spec:
@ -134,33 +125,27 @@ spec:
memory: 100Mi
terminationMessagePath: /dev/termination-log
volumeMounts:
```
Note that our nginx container has picked up the namespace default cpu and memory resource *limits* and *requests*.
Let's create a pod that exceeds our allowed limits by giving it a container that requests 3 cpu cores.
```shell
$ kubectl create -f docs/admin/limitrange/invalid-pod.yaml --namespace=limit-example
Error from server: error when creating "docs/admin/limitrange/invalid-pod.yaml": Pod "invalid-pod" is forbidden: [Maximum cpu usage per Pod is 2, but limit is 3., Maximum cpu usage per Container is 2, but limit is 3.]
```
Let's create a pod that falls within the allowed limit boundaries.
```shell
$ kubectl create -f docs/admin/limitrange/valid-pod.yaml --namespace=limit-example
pod "valid-pod" created
$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources
```
```yaml
uid: 162a12aa-7157-11e5-9921-286ed488f785
spec:
containers:
- image: gcr.io/google_containers/serve_hostname
@ -173,47 +158,37 @@ spec:
requests:
cpu: "1"
memory: 512Mi
```
Note that this pod specifies explicit resource *limits* and *requests* so it did not pick up the namespace
default values.
Note: The *limits* for CPU resource are not enforced in the default Kubernetes setup on the physical node
that runs the container unless the administrator deploys the kubelet with the following flag:
```shell
$ kubelet --help
Usage of kubelet
....
--cpu-cfs-quota[=false]: Enable CPU CFS quota enforcement for containers that specify CPU limits
$ kubelet --cpu-cfs-quota=true ...
```
## Step 4: Cleanup
To remove the resources used by this example, you can just delete the limit-example namespace.
```shell
$ kubectl delete namespace limit-example
namespace "limit-example" deleted
$ kubectl get namespaces
NAME LABELS STATUS AGE
default <none> Active 20m
```
## Summary
Cluster operators that want to restrict the amount of resources a single container or pod may consume
are able to define allowable ranges per Kubernetes namespace. In the absence of any explicit assignments,
the Kubernetes system is able to apply default resource *limits* and *requests* if desired in order to
constrain the amount of resource a pod consumes on a node.

View File

@ -46,12 +46,10 @@ Look [here](namespaces/) for an in depth example of namespaces.
You can list the current namespaces in a cluster using:
```shell
$ kubectl get namespaces
NAME LABELS STATUS
default <none> Active
kube-system <none> Active
```
Kubernetes starts with two initial namespaces:
@ -61,15 +59,12 @@ Kubernetes starts with two initial namespaces:
You can also get the summary of a specific namespace using:
```shell
$ kubectl get namespaces <name>
```
Or you can get detailed information with:
```shell
$ kubectl describe namespaces <name>
Name: default
Labels: <none>
@ -81,7 +76,6 @@ Resource Limits
Type Resource Min Max Default
---- -------- --- --- ---
Container cpu - - 100m
```
Note that these details show both resource quota (if present) as well as resource limit ranges.
@ -105,12 +99,10 @@ See the [design doc](../design/namespaces.html#phases) for more details.
To create a new namespace, first create a new YAML file called `my-namespace.yaml` with the contents:
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: <insert-namespace-name-here>
```
Note that the name of your namespace must be a DNS compatible label.
@ -120,9 +112,7 @@ More information on the `finalizers` field can be found in the namespace [design
Then run:
```shell
$ kubectl create -f ./my-namespace.yaml
```
### Working in namespaces
@ -135,9 +125,7 @@ and [Setting the namespace preference](/{{page.version}}/docs/user-guide/namespa
You can delete a namespace with
```shell
$ kubectl delete namespaces <insert-some-namespace-name>
```
**WARNING, this deletes _everything_ under the namespace!**
@ -156,7 +144,4 @@ across namespaces, you need to use the fully qualified domain name (FQDN).
## Design
Details of the design of namespaces in Kubernetes, including a [detailed example](../design/namespaces.html#example-openshift-origin-managing-a-kubernetes-namespace)
can be found in the [namespaces design doc](../design/namespaces)

View File

@ -27,12 +27,12 @@ services, and replication controllers used by the cluster.
Assuming you have a fresh cluster, you can introspect the available namespaces by doing the following:
```shell
$ kubectl get namespaces
NAME LABELS
default <none>
```
### Step Two: Create new namespaces
For this exercise, we will create two additional Kubernetes namespaces to hold our content.
@ -54,7 +54,7 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo
<!-- BEGIN MUNGE: EXAMPLE namespace-dev.json -->
```json
{
"kind": "Namespace",
"apiVersion": "v1",
@ -64,35 +64,34 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo
"name": "development"
}
}
}
```
[Download example](namespace-dev.json)
<!-- END MUNGE: EXAMPLE namespace-dev.json -->
Create the development namespace using kubectl.
```shell
$ kubectl create -f docs/admin/namespaces/namespace-dev.json
```
And then let's create the production namespace using kubectl.
```shell
$ kubectl create -f docs/admin/namespaces/namespace-prod.json
```
To be sure things are right, let's list all of the namespaces in our cluster.
```shell
$ kubectl get namespaces
NAME LABELS STATUS
default <none> Active
development name=development Active
production name=production Active
```
### Step Three: Create pods in each namespace
A Kubernetes namespace provides the scope for pods, services, and replication controllers in the cluster.
@ -103,7 +102,7 @@ To demonstrate this, let's spin up a simple replication controller and pod in th
We first check what is the current context:
```yaml
apiVersion: v1
clusters:
- cluster:
@ -127,32 +126,32 @@ users:
- name: lithe-cocoa-92103_kubernetes-basic-auth
user:
password: h5M0FtUUIflBSdI7
username: admin
```
The next step is to define a context for the kubectl client to work in each namespace. The value of "cluster" and "user" fields are copied from the current context.
```shell
$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
```
The above commands provided two request contexts you can alternate between, depending on which namespace you wish to work in.
Let's switch to operate in the development namespace.
```shell
$ kubectl config use-context dev
```
You can verify your current context by doing the following:
```shell
$ kubectl config view
```
```yaml
apiVersion: v1
clusters:
- cluster:
@ -186,20 +185,20 @@ users:
- name: lithe-cocoa-92103_kubernetes-basic-auth
user:
password: h5M0FtUUIflBSdI7
username: admin
```
At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.
Let's create some content.
```shell
$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
```
We have just created a replication controller with a replica size of 2 that runs a pod called snowflake, with a basic container that simply serves the hostname.
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
snowflake snowflake kubernetes/serve_hostname run=snowflake 2
@ -207,30 +206,30 @@ snowflake snowflake kubernetes/serve_hostname run=snowflake 2
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
snowflake-8w0qn 1/1 Running 0 22s
snowflake-jrpzb 1/1 Running 0 22s
```
And this is great: developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.
Let's switch to the production namespace and show how resources in one namespace are hidden from the other.
```shell
$ kubectl config use-context prod
```
The production namespace should be empty.
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
```
Production likes to run cattle, so let's create some cattle pods.
```shell
$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5
$ kubectl get rc
@ -243,9 +242,9 @@ cattle-97rva 1/1 Running 0 12s
cattle-i9ojn 1/1 Running 0 12s
cattle-qj3yv 1/1 Running 0 12s
cattle-yc7vn 1/1 Running 0 12s
cattle-zz7ea 1/1 Running 0 12s
```
At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.
As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different

View File

@ -27,11 +27,9 @@ services, and replication controllers used by the cluster.
Assuming you have a fresh cluster, you can introspect the available namespaces by doing the following:
```shell
$ kubectl get namespaces
NAME LABELS
default <none>
```
### Step Two: Create new namespaces
@ -56,7 +54,6 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo
<!-- BEGIN MUNGE: EXAMPLE namespace-dev.json -->
```json
{
"kind": "Namespace",
"apiVersion": "v1",
@ -67,7 +64,6 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo
}
}
}
```
[Download example](namespace-dev.json)
@ -76,32 +72,25 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo
Create the development namespace using kubectl.
```shell
$ kubectl create -f docs/admin/namespaces/namespace-dev.json
```
And then let's create the production namespace using kubectl.
```shell
$ kubectl create -f docs/admin/namespaces/namespace-prod.json
```
To be sure things are right, let's list all of the namespaces in our cluster.
```shell
$ kubectl get namespaces
NAME LABELS STATUS
default <none> Active
development name=development Active
production name=production Active
```
### Step Three: Create pods in each namespace
A Kubernetes namespace provides the scope for pods, services, and replication controllers in the cluster.
@ -113,7 +102,6 @@ To demonstrate this, let's spin up a simple replication controller and pod in th
We first check what is the current context:
```yaml
apiVersion: v1
clusters:
- cluster:
@ -138,16 +126,13 @@ users:
user:
password: h5M0FtUUIflBSdI7
username: admin
```
The next step is to define a context for the kubectl client to work in each namespace. The value of "cluster" and "user" fields are copied from the current context.
```shell
$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
```
The above commands provided two request contexts you can alternate between, depending on which namespace you
@ -156,21 +141,16 @@ wish to work against.
Let's switch to operate in the development namespace.
```shell
$ kubectl config use-context dev
```
You can verify your current context by doing the following:
```shell
$ kubectl config view
```
```yaml
apiVersion: v1
clusters:
- cluster:
@ -205,7 +185,6 @@ users:
user:
password: h5M0FtUUIflBSdI7
username: admin
```
At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.
@ -213,15 +192,12 @@ At this point, all requests we make to the Kubernetes cluster from the command l
Let's create some content.
```shell
$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
```
We have just created a replication controller with a replica size of 2 that runs a pod called snowflake, with a basic container that simply serves the hostname.
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
snowflake snowflake kubernetes/serve_hostname run=snowflake 2
@ -230,7 +206,6 @@ $ kubectl get pods
NAME READY STATUS RESTARTS AGE
snowflake-8w0qn 1/1 Running 0 22s
snowflake-jrpzb 1/1 Running 0 22s
```
And this is great: developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.
@ -238,27 +213,22 @@ And this is great, developers are able to do what they want, and they do not hav
Let's switch to the production namespace and show how resources in one namespace are hidden from the other.
```shell
$ kubectl config use-context prod
```
The production namespace should be empty.
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
```
Production likes to run cattle, so let's create some cattle pods.
```shell
$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5
$ kubectl get rc
@ -272,13 +242,9 @@ cattle-i9ojn 1/1 Running 0 12s
cattle-qj3yv 1/1 Running 0 12s
cattle-yc7vn 1/1 Running 0 12s
cattle-zz7ea 1/1 Running 0 12s
```
At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.
As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different
authorization rules for each namespace.
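The isolation shown above can be modeled in a few lines. This is a toy sketch, not Kubernetes code: object names are only required to be unique within a namespace, so `development` and `production` never see each other's content.

```python
class Cluster:
    """Toy namespaced object store: names collide only within a namespace."""

    def __init__(self):
        self.objects = {}

    def create(self, namespace, name):
        # Creating "snowflake" in development does not affect production.
        self.objects.setdefault(namespace, set()).add(name)

    def list(self, namespace):
        # Listing is always scoped to a single namespace.
        return sorted(self.objects.get(namespace, set()))
```

For example, after creating pods in `development`, listing `production` still returns nothing.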

View File

@ -108,9 +108,7 @@ on that subnet, and is passed to docker's `--bridge` flag.
We start Docker with:
```shell
DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false"
```
This bridge is created by Kubelet (controlled by the `--configure-cbr0=true`
@ -127,18 +125,14 @@ itself) traffic that is bound for IPs outside the GCE project network
(10.0.0.0/8).
```shell
iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o eth0 -j MASQUERADE
```
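The rule keys purely on the destination address: anything bound for an IP outside the 10.0.0.0/8 project network is masqueraded. The decision can be sketched as follows (the function name is ours, for illustration; only the CIDR comes from the rule above):

```python
import ipaddress

# The project network from the iptables rule above.
CLUSTER_NET = ipaddress.ip_network("10.0.0.0/8")

def needs_masquerade(dst_ip: str) -> bool:
    """Mirror the iptables rule: masquerade only traffic leaving 10.0.0.0/8."""
    return ipaddress.ip_address(dst_ip) not in CLUSTER_NET
```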
Lastly we enable IP forwarding in the kernel (so the kernel will process
packets for bridged containers):
```shell
sysctl net.ipv4.ip_forward=1
```
The result of all this is that all `Pods` can reach each other and can egress
@ -183,7 +177,4 @@ IPs.
The early design of the networking model and its rationale, and some future
plans are described in more detail in the [networking design
document](../design/networking).

View File

@ -57,14 +57,12 @@ Node condition is represented as a json object. For example,
the following conditions mean the node is in sane state:
```json
"conditions": [
  {
    "kind": "Ready",
    "status": "True"
  }
]
```
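A consumer of the API could test for readiness with a helper along these lines. This is a sketch assuming exactly the condition shape shown above; the helper itself is illustrative, not part of any client library:

```python
def is_node_ready(conditions):
    """Return True if the node reports a Ready condition with status "True"."""
    return any(
        c.get("kind") == "Ready" and c.get("status") == "True"
        for c in conditions
    )
```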
If the Status of the Ready condition
@ -91,7 +89,6 @@ After creation, Kubernetes will check whether the node is valid or not.
For example, if you try to create a node from the following content:
```json
{
"kind": "Node",
"apiVersion": "v1",
@ -102,7 +99,6 @@ For example, if you try to create a node from the following content:
}
}
}
```
Kubernetes will create a Node object internally (the representation), and
@ -165,9 +161,7 @@ preparatory step before a node reboot, etc. For example, to mark a node
unschedulable, run this command:
```shell
kubectl replace nodes 10.1.2.3 --patch='{"apiVersion": "v1", "unschedulable": true}'
```
Note that pods which are created by a daemonSet controller bypass the Kubernetes scheduler,
@ -190,7 +184,6 @@ If you want to explicitly reserve resources for non-Pod processes, you can creat
pod. Use the following template:
```yaml
apiVersion: v1
kind: Pod
metadata:
@ -203,7 +196,6 @@ spec:
limits:
cpu: 100m
memory: 100Mi
```
Set the `cpu` and `memory` values to the amount of resources you want to reserve.
@ -216,6 +208,3 @@ on each kubelet where you want to reserve resources.
Node is a top-level resource in the kubernetes REST API. More details about the
API object can be found at: [Node API
object](http://kubernetes.io/v1.1/docs/api-reference/v1/definitions.html#_v1_node).

View File

@ -87,7 +87,6 @@ supply of Pod IPs.
Kubectl supports creating, updating, and viewing quotas:
```shell
$ kubectl namespace myspace
$ cat <<EOF > quota.json
{
@ -122,7 +121,6 @@ pods 5 10
replicationcontrollers 5 20
resourcequotas 1 1
services 3 5
```
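The admission check behind those used/hard columns reduces to a simple comparison: a creation is allowed only if the resulting usage stays at or below the hard limit. A hypothetical sketch (not the real admission controller):

```python
def admit(kind, used, hard, requested=1):
    """Quota-style admission: allow creation only if usage stays within the hard limit.

    Resources with no hard limit configured are unconstrained.
    """
    limit = hard.get(kind, float("inf"))
    return used.get(kind, 0) + requested <= limit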
## Quota and Cluster Capacity
@ -146,11 +144,8 @@ restrictions around nodes: pods from several namespaces may run on the same node
## Example
See a [detailed example for how to use resource quota](resourcequota/).
## Read More
See [ResourceQuota design doc](../design/admission_control_resource_quota) for more information.

View File

@ -13,17 +13,15 @@ This example will work in a custom namespace to demonstrate the concepts involve
Let's create a new namespace called quota-example:
```shell
$ kubectl create -f docs/admin/resourcequota/namespace.yaml
namespace "quota-example" created
$ kubectl get namespaces
NAME LABELS STATUS AGE
default <none> Active 2m
quota-example <none> Active 39s
```
## Step 2: Apply a quota to the namespace
By default, a pod will run with unbounded CPU and memory requests/limits. This means that any pod in the
@ -37,21 +35,18 @@ checks the total resource *requests*, not resource *limits* of all containers/po
Let's create a simple quota in our namespace:
```shell
$ kubectl create -f docs/admin/resourcequota/quota.yaml --namespace=quota-example
resourcequota "quota" created
```
Once your quota is applied to a namespace, the system will restrict any creation of content
in the namespace until the quota usage has been calculated. This should happen quickly.
You can describe your current quota usage to see what resources are being consumed in your
namespace.
```shell
$ kubectl describe quota quota --namespace=quota-example
Name: quota
Namespace: quota-example
@ -65,9 +60,8 @@ replicationcontrollers 0 20
resourcequotas 1 1
secrets 1 10
services 0 5
```
## Step 3: Applying default resource requests and limits
Pod authors rarely specify resource requests and limits for their pods.
@ -77,26 +71,21 @@ cpu and memory by creating an nginx container.
To demonstrate, let's create a replication controller that runs nginx:
```shell
$ kubectl run nginx --image=nginx --replicas=1 --namespace=quota-example
replicationcontroller "nginx" created
```
Now let's look at the pods that were created.
```shell
$ kubectl get pods --namespace=quota-example
NAME READY STATUS RESTARTS AGE
```
What happened? I have no pods! Let's describe the replication controller to get a view of what is happening.
```shell
kubectl describe rc nginx --namespace=quota-example
Name: nginx
Namespace: quota-example
@ -109,16 +98,14 @@ No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
42s 11s 3 {replication-controller } FailedCreate Error creating: Pod "nginx-" is forbidden: Must make a non-zero request for memory since it is tracked by quota.
```
The Kubernetes API server is rejecting the replication controller's requests to create a pod because our pods
do not specify any memory usage *request*.
So let's set some default values for the amount of cpu and memory a pod can consume:
```shell
$ kubectl create -f docs/admin/resourcequota/limits.yaml --namespace=quota-example
limitrange "limits" created
$ kubectl describe limits limits --namespace=quota-example
@ -128,27 +115,23 @@ Type Resource Min Max Request Limit Limit/Request
---- -------- --- --- ------- ----- -------------
Container memory - - 256Mi 512Mi -
Container cpu - - 100m 200m -
```
Now any time a pod is created in this namespace, if it has not specified any resource request/limit, the default
amount of cpu and memory per container will be applied, and the request will be used as part of admission control.
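Conceptually, the defaulting step fills in only what the author left unset. A rough sketch follows; the `DEFAULTS` values mirror the limit range above, but the function is illustrative, not the real admission plug-in:

```python
# Defaults taken from the LimitRange shown above.
DEFAULTS = {
    "requests": {"cpu": "100m", "memory": "256Mi"},
    "limits": {"cpu": "200m", "memory": "512Mi"},
}

def apply_defaults(container, defaults=DEFAULTS):
    """Fill in any resource request/limit the pod author left unset."""
    resources = container.setdefault("resources", {})
    for section, values in defaults.items():
        target = resources.setdefault(section, {})
        for resource, amount in values.items():
            # setdefault leaves author-specified values untouched.
            target.setdefault(resource, amount)
    return container
```

A container that specifies only a cpu request would keep that value and pick up the memory default.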
Now that we have applied default resource *request* for our namespace, our replication controller should be able to
create its pods.
```shell
$ kubectl get pods --namespace=quota-example
NAME READY STATUS RESTARTS AGE
nginx-fca65 1/1 Running 0 1m
```
And if we print out our quota usage in the namespace:
```shell
$ kubectl describe quota quota --namespace=quota-example
Name: quota
Namespace: quota-example
@ -162,9 +145,8 @@ replicationcontrollers 1 20
resourcequotas 1 1
secrets 1 10
services 0 5
```
You can now see the pod that was created is consuming explicit amounts of resources (specified by resource *request*),
and the usage is being tracked by the Kubernetes system properly.
@ -173,8 +155,4 @@ and the usage is being tracked by the Kubernetes system properly.
Actions that consume node resources for cpu and memory can be subject to hard quota limits defined
by the namespace quota. The resource consumption is measured by resource *request* in pod specification.
Any action that consumes those resources can be tweaked, or can pick up namespace level defaults to meet your end goal.

View File

@ -13,17 +13,15 @@ This example will work in a custom namespace to demonstrate the concepts involve
Let's create a new namespace called quota-example:
```shell
$ kubectl create -f docs/admin/resourcequota/namespace.yaml
namespace "quota-example" created
$ kubectl get namespaces
NAME LABELS STATUS AGE
default <none> Active 2m
quota-example <none> Active 39s
```
## Step 2: Apply a quota to the namespace
By default, a pod will run with unbounded CPU and memory requests/limits. This means that any pod in the
@ -37,21 +35,18 @@ checks the total resource *requests*, not resource *limits* of all containers/po
Let's create a simple quota in our namespace:
```shell
$ kubectl create -f docs/admin/resourcequota/quota.yaml --namespace=quota-example
resourcequota "quota" created
```
Once your quota is applied to a namespace, the system will restrict any creation of content
in the namespace until the quota usage has been calculated. This should happen quickly.
You can describe your current quota usage to see what resources are being consumed in your
namespace.
```shell
$ kubectl describe quota quota --namespace=quota-example
Name: quota
Namespace: quota-example
@ -65,9 +60,8 @@ replicationcontrollers 0 20
resourcequotas 1 1
secrets 1 10
services 0 5
```
## Step 3: Applying default resource requests and limits
Pod authors rarely specify resource requests and limits for their pods.
@ -77,26 +71,21 @@ cpu and memory by creating an nginx container.
To demonstrate, let's create a replication controller that runs nginx:
```shell
$ kubectl run nginx --image=nginx --replicas=1 --namespace=quota-example
replicationcontroller "nginx" created
```
Now let's look at the pods that were created.
```shell
$ kubectl get pods --namespace=quota-example
NAME READY STATUS RESTARTS AGE
```
What happened? I have no pods! Let's describe the replication controller to get a view of what is happening.
```shell
kubectl describe rc nginx --namespace=quota-example
Name: nginx
Namespace: quota-example
@ -109,16 +98,14 @@ No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
42s 11s 3 {replication-controller } FailedCreate Error creating: Pod "nginx-" is forbidden: Must make a non-zero request for memory since it is tracked by quota.
```
The Kubernetes API server is rejecting the replication controller's requests to create a pod because our pods
do not specify any memory usage *request*.
So let's set some default values for the amount of cpu and memory a pod can consume:
```shell
$ kubectl create -f docs/admin/resourcequota/limits.yaml --namespace=quota-example
limitrange "limits" created
$ kubectl describe limits limits --namespace=quota-example
@ -128,27 +115,23 @@ Type Resource Min Max Request Limit Limit/Request
---- -------- --- --- ------- ----- -------------
Container memory - - 256Mi 512Mi -
Container cpu - - 100m 200m -
```
Now any time a pod is created in this namespace, if it has not specified any resource request/limit, the default
amount of cpu and memory per container will be applied, and the request will be used as part of admission control.
Now that we have applied default resource *request* for our namespace, our replication controller should be able to
create its pods.
```shell
$ kubectl get pods --namespace=quota-example
NAME READY STATUS RESTARTS AGE
nginx-fca65 1/1 Running 0 1m
```
And if we print out our quota usage in the namespace:
```shell
$ kubectl describe quota quota --namespace=quota-example
Name: quota
Namespace: quota-example
@ -162,19 +145,12 @@ replicationcontrollers 1 20
resourcequotas 1 1
secrets 1 10
services 0 5
```
You can now see the pod that was created is consuming explicit amounts of resources (specified by resource *request*), and the usage is being tracked by the Kubernetes system properly.
## Summary
Actions that consume node resources for cpu and memory can be subject to hard quota limits defined by the namespace quota. The resource consumption is measured by resource *request* in pod specification.
Any action that consumes those resources can be tweaked, or can pick up namespace level defaults to meet your end goal.

View File

@ -13,11 +13,11 @@ The **salt-minion** service runs on the kubernetes-master and each kubernetes-no
Each salt-minion service is configured to interact with the **salt-master** service hosted on the kubernetes-master via the **master.conf** file [(except on GCE)](#standalone-salt-configuration-on-gce).
```shell
[root@kubernetes-master] $ cat /etc/salt/minion.d/master.conf
master: kubernetes-master
```
The salt-master is contacted by each salt-minion and depending upon the machine information presented, the salt-master will provision the machine as either a kubernetes-master or kubernetes-node with all the required capabilities needed to run Kubernetes.
If you are running the Vagrant based environment, the **salt-api** service is running on the kubernetes-master. It is configured to enable the vagrant user to introspect the salt cluster in order to find out about machines in the Vagrant environment via a REST API.
@ -34,27 +34,27 @@ All remaining sections that refer to master/minion setups should be ignored for
Security is not enabled on the salt-master, and the salt-master is configured to auto-accept incoming requests from minions. It is not recommended to use this security configuration in production environments without deeper study. (In some environments this isn't as bad as it might sound if the salt master port isn't externally accessible and you trust everyone on your network.)
```shell
[root@kubernetes-master] $ cat /etc/salt/master.d/auto-accept.conf
open_mode: True
auto_accept: True
```
## Salt minion configuration
Each minion in the salt cluster has an associated configuration that instructs the salt-master how to provision the required resources on the machine.
An example file is presented below using the Vagrant based environment.
```shell
[root@kubernetes-master] $ cat /etc/salt/minion.d/grains.conf
grains:
etcd_servers: $MASTER_IP
cloud_provider: vagrant
roles:
- kubernetes-master
```
Each hosting environment has a slightly different grains.conf file that is used to build conditional logic where required in the Salt files.
The following enumerates the set of defined key/value pairs that are supported today. If you add new ones, please make sure to update this list.
@ -85,8 +85,8 @@ In addition, a cluster may be running a Debian based operating system or Red Hat
// something specific to Debian environment (apt-get, initd)
{% endif %}
{% endraw %}
```
## Best Practices
1. When configuring default arguments for processes, it's best to avoid the use of EnvironmentFiles (Systemd in Red Hat environments) or init.d files (Debian distributions) to hold default values that should be common across operating system environments. This helps keep our Salt template files easy to understand for editors who may not be familiar with the particulars of each distribution.

View File

@ -62,7 +62,6 @@ of type `ServiceAccountToken` with an annotation referencing the service
account, and the controller will update it with a generated token:
```json
secret.json:
{
"kind": "Secret",
@ -75,28 +74,20 @@ secret.json:
},
"type": "kubernetes.io/service-account-token"
}
```
```shell
kubectl create -f ./secret.json
kubectl describe secret mysecretname
```
#### To delete/invalidate a service account token
```shell
kubectl delete secret mysecretname
```
### Service Account Controller
Service Account Controller manages ServiceAccount inside namespaces, and ensures
a ServiceAccount named "default" exists in every active namespace.

View File

@ -20,14 +20,14 @@ For example, this is how to start a simple web server as a static pod:
1. Choose a node where we want to run the static pod. In this example, it's `my-minion1`.
```shell
[joe@host ~] $ ssh my-minion1
```
2. Choose a directory, say `/etc/kubelet.d` and place a web server pod definition there, e.g. `/etc/kubernetes.d/static-web.yaml`:
```shell
[root@my-minion1 ~] $ mkdir /etc/kubernetes.d/
[root@my-minion1 ~] $ cat <<EOF >/etc/kubernetes.d/static-web.yaml
apiVersion: v1
kind: Pod
@ -43,23 +43,22 @@ For example, this is how to start a simple web server as a static pod:
- name: web
containerPort: 80
protocol: tcp
EOF
```
2. Configure your kubelet daemon on the node to use this directory by running it with the `--config=/etc/kubelet.d/` argument. On Fedora 21 with Kubernetes 0.17, edit `/etc/kubernetes/kubelet` to include this line:
```
KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --config=/etc/kubelet.d/"
```
Instructions for other distributions or Kubernetes installations may vary.
3. Restart kubelet. On Fedora 21, this is:
```shell
[root@my-minion1 ~] $ systemctl restart kubelet
```
## Pods created via HTTP
Kubelet periodically downloads a file specified by `--manifest-url=<URL>` argument and interprets it as a json/yaml file with a pod definition. It works the same as `--config=<directory>`, i.e. it's reloaded every now and then and changes are applied to running static pods (see below).
@ -68,50 +67,50 @@ Kubelet periodically downloads a file specified by `--manifest-url=<URL>` argume
When kubelet starts, it automatically starts all pods defined in the directory specified in the `--config=` or `--manifest-url=` arguments, i.e. our static-web. (It may take some time to pull the nginx image, so be patient):
```shell
[joe@my-minion1 ~] $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
f6d05272b57e nginx:latest "nginx" 8 minutes ago Up 8 minutes k8s_web.6f802af4_static-web-fk-minion1_default_67e24ed9466ba55986d120c867395f3c_378e5f3c
```
If we look at our Kubernetes API server (running on host `my-master`), we see that a new mirror-pod was created there too:
```shell
[joe@host ~] $ ssh my-master
[joe@my-master ~] $ kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
static-web-my-minion1 172.17.0.3 my-minion1/192.168.100.71 role=myrole Running 11 minutes
web nginx Running 11 minutes
```
Labels from the static pod are propagated into the mirror-pod and can be used as usual for filtering.
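Filtering by those labels uses ordinary equality-based selectors: a pod matches when every key/value pair in the selector appears in its labels. An illustrative helper, not the actual implementation:

```python
def matches(selector, labels):
    """Equality-based label selector: every selector pair must appear in the labels."""
    return all(labels.get(key) == value for key, value in selector.items())
```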
Notice that we cannot delete the pod via the API server (e.g. with the [`kubectl`](../user-guide/kubectl/kubectl) command); the kubelet simply won't remove it.
```shell
[joe@my-master ~] $ kubectl delete pod static-web-my-minion1
pods/static-web-my-minion1
[joe@my-master ~] $ kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST ...
static-web-my-minion1 172.17.0.3 my-minion1/192.168.100.71 ...
```
Back on our `my-minion1` host, we can try to stop the container manually and see that the kubelet automatically restarts it after a while:
```shell
[joe@host ~] $ ssh my-minion1
[joe@my-minion1 ~] $ docker stop f6d05272b57e
[joe@my-minion1 ~] $ sleep 20
[joe@my-minion1 ~] $ docker ps
CONTAINER ID IMAGE COMMAND CREATED ...
5b920cbaf8b1 nginx:latest "nginx -g 'daemon of 2 seconds ago ...
```
## Dynamic addition and removal of static pods
Running kubelet periodically scans the configured directory (`/etc/kubelet.d` in our example) for changes and adds/removes pods as files appear/disappear in this directory.
```shell
[joe@my-minion1 ~] $ mv /etc/kubernetes.d/static-web.yaml /tmp
[joe@my-minion1 ~] $ sleep 20
[joe@my-minion1 ~] $ docker ps
@ -120,11 +119,5 @@ Running kubelet periodically scans the configured directory (`/etc/kubelet.d` in
[joe@my-minion1 ~] $ sleep 20
[joe@my-minion1 ~] $ docker ps
CONTAINER ID IMAGE COMMAND CREATED ...
e7a62e3427f1 nginx:latest "nginx -g 'daemon of 27 seconds ago
```

View File

@ -120,17 +120,15 @@ Objects that contain both spec and status should not contain additional top-leve
The `FooCondition` type for some resource type `Foo` may include a subset of the following fields, but must contain at least `type` and `status` fields:
```go
Type FooConditionType `json:"type" description:"type of Foo condition"`
Status ConditionStatus `json:"status" description:"status of the condition, one of True, False, Unknown"`
LastHeartbeatTime unversioned.Time `json:"lastHeartbeatTime,omitempty" description:"last time we got an update on a given condition"`
LastTransitionTime unversioned.Time `json:"lastTransitionTime,omitempty" description:"last time the condition transit from one status to another"`
Reason string `json:"reason,omitempty" description:"one-word CamelCase reason for the condition's last transition"`
Message string `json:"message,omitempty" description:"human-readable message indicating details about last transition"`
```
Additional fields may be added in the future.
Conditions should be added to explicitly convey properties that users and components care about rather than requiring those properties to be inferred from other observations.
@ -165,24 +163,20 @@ Discussed in [#2004](http://issue.k8s.io/2004) and elsewhere. There are no maps
For example:
```yaml
ports:
- name: www
containerPort: 80
```
vs.
```yaml
```yaml
www:
containerPort: 80
```
This rule maintains the invariant that all JSON/YAML keys are fields in API objects. The only exceptions are pure maps in the API (currently, labels, selectors, annotations, data), as opposed to sets of subobjects.
#### Constants
@ -236,27 +230,23 @@ The API supports three different PATCH operations, determined by their correspon
In the standard JSON merge patch, JSON objects are always merged but lists are always replaced. Often that isn't what we want. Let's say we start with the following Pod:
```yaml
spec:
containers:
- name: nginx
image: nginx-1.0
```
...and we POST that to the server (as JSON). Then let's say we want to *add* a container to this Pod.
```yaml
PATCH /api/v1/namespaces/default/pods/pod-name
spec:
containers:
- name: log-tailer
image: log-tailer-1.0
```
If we were to use standard Merge Patch, the entire container list would be replaced with the single log-tailer container. However, our intent is for the container lists to merge together based on the `name` field.
To solve this problem, Strategic Merge Patch uses metadata attached to the API objects to determine what lists should be merged and which ones should not. Currently the metadata is available as struct tags on the API objects themselves, but will become available to clients as Swagger annotations in the future. In the above example, the `patchStrategy` metadata for the `containers` field would be `merge` and the `patchMergeKey` would be `name`.
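The list-merge behavior can be sketched as follows, assuming a `name` merge key. This is illustrative only; the real logic lives in the strategic-merge-patch machinery:

```python
def strategic_merge(current, patch, merge_key="name"):
    """Merge two lists of objects by merge key instead of wholesale replacement."""
    merged = {item[merge_key]: dict(item) for item in current}
    for item in patch:
        # Existing entries with the same key are updated; new keys are appended.
        merged.setdefault(item[merge_key], {}).update(item)
    return list(merged.values())
```

Applying the log-tailer patch above would yield a two-container list rather than replacing the nginx container.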
@ -269,52 +259,43 @@ Strategic Merge Patch also supports special operations as listed below.
To override the container list to be strictly replaced, regardless of the default:
```yaml
containers:
- name: nginx
image: nginx-1.0
- $patch: replace # any further $patch operations nested in this list will be ignored
```
To delete an element of a list that should be merged:
```yaml
containers:
- name: nginx
image: nginx-1.0
- $patch: delete
name: log-tailer # merge key and value goes here
```
### Map Operations
To indicate that a map should not be merged and instead should be taken literally:
```yaml
$patch: replace # recursive and applies to all fields of the map it's in
containers:
- name: nginx
image: nginx-1.0
```
To delete a field of a map:
```yaml
name: nginx
image: nginx-1.0
labels:
live: null # set the value of the map key to null
```
## Idempotency
All compatible Kubernetes APIs MUST support "name idempotency" and respond with an HTTP status code 409 when a request is made to POST an object that has the same name as an existing object in the system. See [docs/user-guide/identifiers.md](../user-guide/identifiers) for details.
@ -371,15 +352,13 @@ The only way for a client to know the expected value of resourceVersion is to ha
In the case of a conflict, the correct client action at this point is to GET the resource again, apply the changes afresh, and try submitting again. This mechanism can be used to prevent races like the following:
```shell
Client #1 Client #2
GET Foo GET Foo
Set Foo.Bar = "one" Set Foo.Baz = "two"
PUT Foo PUT Foo
```
When these sequences occur in parallel, either the change to Foo.Bar or the change to Foo.Baz can be lost.
On the other hand, when specifying the resourceVersion, one of the PUTs will fail, since whichever write succeeds changes the resourceVersion for Foo.
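This conflict detection can be sketched as a compare-and-swap keyed on `resourceVersion`. A toy model: real resourceVersions are opaque strings, and `Conflict` here stands in for the HTTP 409 response.

```python
class Conflict(Exception):
    """Stand-in for the HTTP 409 Conflict response."""

def put(store, name, obj, expected_version):
    """Accept the write only if the caller saw the latest resourceVersion."""
    current = store[name]
    if current["resourceVersion"] != expected_version:
        raise Conflict("resourceVersion mismatch for %s" % name)
    # Simplification: bump an integer version; real versions are opaque.
    new = dict(obj, resourceVersion=current["resourceVersion"] + 1)
    store[name] = new
    return new
```

In the race above, whichever client PUTs second sees a Conflict and must GET, re-apply, and retry.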
@ -501,8 +480,7 @@ The status object is encoded as JSON and provided as the body of the response.
**Example:**
```shell
$ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https://10.240.122.184:443/api/v1/namespaces/default/pods/grafana
> GET /api/v1/namespaces/default/pods/grafana HTTP/1.1
@ -530,9 +508,8 @@ $ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https:/
},
"code": 404
}
```
`status` field contains one of two possible values:
* `Success`
* `Failure`
@ -674,8 +651,4 @@ Other advice regarding use of labels, annotations, and other generic map keys by
- Key names should be all lowercase, with words separated by dashes, such as `desired-replicas`
- Prefix the key with `kubernetes.io/` or `foo.kubernetes.io/`, preferably the latter if the label/annotation is specific to `foo`
- For instance, prefer `service-account.kubernetes.io/name` over `kubernetes.io/service-account.name`
- Use annotations to store API extensions that the controller responsible for the resource doesn't need to know about, experimental fields that aren't intended to be generally used API fields, etc. Beware that annotations aren't automatically handled by the API conversion machinery.

View File

@ -92,27 +92,23 @@ Let's consider some examples. In a hypothetical API (assume we're at version
v6), the `Frobber` struct looks something like this:
```go
// API v6.
type Frobber struct {
Height int `json:"height"`
Param string `json:"param"`
}
```
You want to add a new `Width` field. It is generally safe to add new fields
without changing the API version, so you can simply change it to:
```go
// Still API v6.
type Frobber struct {
Height int `json:"height"`
Width int `json:"width"`
Param string `json:"param"`
}
```
The onus is on you to define a sane default value for `Width` such that rule #1
@ -124,7 +120,6 @@ simply change `Param string` to `Params []string` (without creating a whole new
API version) - that fails rules #1 and #2. You can instead do something like:
```go
// Still API v6, but kind of clumsy.
type Frobber struct {
Height int `json:"height"`
@ -132,7 +127,6 @@ type Frobber struct {
Param string `json:"param"` // the first param
ExtraParams []string `json:"params"` // additional params
}
```
Now you can satisfy the rules: API calls that provide the old style `Param`
@ -144,14 +138,12 @@ distinct from any one version is to handle growth like this. The internal
representation can be implemented as:
```go
// Internal, soon to be v7beta1.
type Frobber struct {
Height int
Width int
Params []string
}
```
The code that converts to/from versioned APIs can decode this into the somewhat
@ -175,14 +167,12 @@ you add units to `height` and `width`. You implement this by adding duplicate
fields:
```go
type Frobber struct {
Height *int `json:"height"`
Width *int `json:"width"`
HeightInInches *int `json:"heightInInches"`
WidthInInches *int `json:"widthInInches"`
}
```
You convert all of the fields to pointers in order to distinguish between unset and
@ -198,38 +188,32 @@ in the case of an old client that was only aware of the old field (e.g., `height
Say the client creates:
```json
{
"height": 10,
"width": 5
}
```
and GETs:
```json
{
"height": 10,
"heightInInches": 10,
"width": 5,
"widthInInches": 5
}
```
then PUTs back:
```json
{
"height": 13,
"heightInInches": 10,
"width": 5,
"widthInInches": 5
}
```
The update should not fail, because it would have worked before `heightInInches` was added.
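One way a server could honor that rule is to treat the legacy field as authoritative whenever the duplicated field was left untouched. This is an assumption about a possible policy, not the documented mechanism:

```python
def reconcile(update, stored, old_field="height", new_field="heightInInches"):
    """Assumed policy: if the client changed only the legacy field, refresh the
    duplicated field from it rather than failing the PUT."""
    update = dict(update)
    if (update.get(old_field) != stored.get(old_field)
            and update.get(new_field) == stored.get(new_field)):
        # Old client changed only `height`; propagate to `heightInInches`
        # (assuming, as in the example, the two fields share a unit).
        update[new_field] = update[old_field]
    return update
```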
@ -401,11 +385,9 @@ regenerate auto-generated ones. To regenerate them:
- run
```shell
hack/update-generated-conversions.sh
```
If running the above script is impossible due to compile errors, the easiest
workaround is to comment out the code causing errors and let the script
regenerate it. If the auto-generated conversion methods are not used by the
@ -429,9 +411,7 @@ To regenerate them:
- run
```shell
hack/update-generated-deep-copies.sh
```
## Edit json (un)marshaling code
@ -447,9 +427,7 @@ To regenerate them:
- run
```shell
hack/update-codecgen.sh
```
## Making a new API Group
@ -532,9 +510,7 @@ an example to illustrate your change.
Make sure you update the swagger API spec by running:
```shell
hack/update-swagger-spec.sh
```
The API spec changes should be in a commit separate from your other changes.
@ -605,6 +581,4 @@ New feature development proceeds through a series of stages of increasing maturi
software releases
- Cluster Reliability: high
- Support: API version will continue to be present for many subsequent software releases;
- Recommended Use Cases: any

View File

@ -20,13 +20,11 @@ for kubernetes.
The submit-queue does the following:
```go
for _, pr := range readyToMergePRs() {
if testsAreStable() {
mergePR(pr)
}
}
```
The status of the submit-queue is [online.](http://submit-queue.k8s.io/)
@ -101,6 +99,4 @@ Right now you have to ask a contributor (this may be you!) to re-run the test wi
### How can I kick Shippable to re-test on a failure?
Right now the easiest way is to close and then immediately re-open the PR.

View File

@ -9,9 +9,7 @@ Kubernetes projects.
Any contributor can propose a cherry pick of any pull request, like so:
```shell
hack/cherry_pick_pull.sh upstream/release-3.14 98765
```
This will walk you through the steps to propose an automated cherry pick of pull
@ -32,7 +30,4 @@ conflict***.
Now that we've structured cherry picks as PRs, searching for all cherry-picks
against a release is a GitHub query: For example,
[this query is all of the v0.21.x cherry-picks](https://github.com/kubernetes/kubernetes/pulls?utf8=%E2%9C%93&q=is%3Apr+%22automated+cherry+pick%22+base%3Arelease-0.21)

View File

@ -18,12 +18,10 @@ Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve
By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-minion-1). Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run:
```shell
cd kubernetes
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
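For example, a quick hypothetical check (not part of the cluster scripts themselves) that mirrors the fallback behavior described above:

```shell
# Print the provider the cluster scripts will use; "gce" is assumed when unset.
echo "Provider: ${KUBERNETES_PROVIDER:-gce}"
```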
@ -31,11 +29,9 @@ The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster
If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default) environment variable:
```shell
export VAGRANT_DEFAULT_PROVIDER=parallels
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine.
@ -45,25 +41,20 @@ By default, each VM in the cluster is running Fedora, and all of the Kubernetes
To access the master or any node:
```shell
vagrant ssh master
vagrant ssh minion-1
```
If you are running more than one node, you can access the others by:
```shell
vagrant ssh minion-2
vagrant ssh minion-3
```
To view the service status and/or logs on the kubernetes-master:
```shell
$ vagrant ssh master
[vagrant@kubernetes-master ~] $ sudo systemctl status kube-apiserver
[vagrant@kubernetes-master ~] $ sudo journalctl -r -u kube-apiserver
@ -73,19 +64,16 @@ $ vagrant ssh master
[vagrant@kubernetes-master ~] $ sudo systemctl status etcd
[vagrant@kubernetes-master ~] $ sudo systemctl status nginx
```
To view the services on any of the nodes:
```shell
$ vagrant ssh minion-1
[vagrant@kubernetes-minion-1] $ sudo systemctl status docker
[vagrant@kubernetes-minion-1] $ sudo journalctl -r -u docker
[vagrant@kubernetes-minion-1] $ sudo systemctl status kubelet
[vagrant@kubernetes-minion-1] $ sudo journalctl -r -u kubelet
```
### Interacting with your Kubernetes cluster with Vagrant.
@ -95,26 +83,20 @@ With your Kubernetes cluster up, you can manage the nodes in your cluster with t
To push updates to new Kubernetes code after making source changes:
```shell
./cluster/kube-push.sh
```
To stop and then restart the cluster:
```shell
vagrant halt
./cluster/kube-up.sh
```
To destroy the cluster:
```shell
vagrant destroy
```
Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script.
@ -122,14 +104,12 @@ Once your Vagrant machines are up and provisioned, the first thing to do is to c
You may need to build the binaries first; you can do this with `make`
```shell
$ ./cluster/kubectl.sh get nodes
NAME LABELS STATUS
kubernetes-minion-0whl kubernetes.io/hostname=kubernetes-minion-0whl Ready
kubernetes-minion-4jdf kubernetes.io/hostname=kubernetes-minion-4jdf Ready
kubernetes-minion-epbe kubernetes.io/hostname=kubernetes-minion-epbe Ready
```
### Interacting with your Kubernetes cluster with the `kube-*` scripts.
@ -139,41 +119,31 @@ Alternatively to using the vagrant commands, you can also use the `cluster/kube-
All of these commands assume you have set `KUBERNETES_PROVIDER` appropriately:
```shell
export KUBERNETES_PROVIDER=vagrant
```
Bring up a vagrant cluster
```shell
./cluster/kube-up.sh
```
Destroy the vagrant cluster
```shell
./cluster/kube-down.sh
```
Update the vagrant cluster after you make changes (only works when building your own releases locally):
```shell
./cluster/kube-push.sh
```
Interact with the cluster
```shell
./cluster/kubectl.sh
```
### Authenticating with your master
@ -181,7 +151,6 @@ Interact with the cluster
When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future.
```shell
$ cat ~/.kubernetes_vagrant_auth
{ "User": "vagrant",
"Password": "vagrant"
@ -189,15 +158,12 @@ $ cat ~/.kubernetes_vagrant_auth
"CertFile": "/home/k8s_user/.kubecfg.vagrant.crt",
"KeyFile": "/home/k8s_user/.kubecfg.vagrant.key"
}
```
You should now be set to use the `cluster/kubectl.sh` script. For example try to list the nodes that you have started with:
```shell
./cluster/kubectl.sh get nodes
```
### Running containers
@ -205,14 +171,12 @@ You should now be set to use the `cluster/kubectl.sh` script. For example try to
Your cluster is running, you can list the nodes in your cluster:
```shell
$ ./cluster/kubectl.sh get nodes
NAME LABELS STATUS
kubernetes-minion-0whl kubernetes.io/hostname=kubernetes-minion-0whl Ready
kubernetes-minion-4jdf kubernetes.io/hostname=kubernetes-minion-4jdf Ready
kubernetes-minion-epbe kubernetes.io/hostname=kubernetes-minion-epbe Ready
```
Now start running some containers!
@ -221,7 +185,6 @@ You can now use any of the cluster/kube-*.sh commands to interact with your VM m
Before starting a container there will be no pods, services and replication controllers.
```shell
$ cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
@ -230,59 +193,49 @@ NAME LABELS SELECTOR IP(S) PORT(S)
$ cluster/kubectl.sh get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
```
Start a container running nginx with a replication controller and three replicas
```shell
$ cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
my-nginx my-nginx nginx run=my-nginx 3
```
When listing the pods, you will see that three containers have been started and are in Waiting state:
```shell
$ cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-389da 1/1 Waiting 0 33s
my-nginx-kqdjk 1/1 Waiting 0 33s
my-nginx-nyj3x 1/1 Waiting 0 33s
```
You need to wait for the provisioning to complete; you can monitor the nodes by running:
```shell
$ sudo salt '*minion-1' cmd.run 'docker images'
kubernetes-minion-1:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 96864a7d2df3 26 hours ago 204.4 MB
kubernetes/pause latest 6c4579af347b 8 weeks ago 239.8 kB
```
Once the docker image for nginx has been downloaded, the container will start and you can list it:
```shell
$ sudo salt '*minion-1' cmd.run 'docker ps'
kubernetes-minion-1:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dbe79bf6e25b nginx:latest "nginx" 21 seconds ago Up 19 seconds k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f
fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b
```
Going back to listing the pods, services and replicationcontrollers, you now have:
```shell
$ cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-389da 1/1 Running 0 33s
@ -295,21 +248,18 @@ NAME LABELS SELECTOR IP(S) PORT(S)
$ cluster/kubectl.sh get rc
NAME IMAGE(S) SELECTOR REPLICAS
my-nginx nginx run=my-nginx 3
```
We did not start any services, hence there are none listed. But we see three replicas displayed properly.
Check the [guestbook](https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook/README) application to learn how to create a service.
You can already play with scaling the replicas with:
```shell
$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-kqdjk 1/1 Running 0 13m
my-nginx-nyj3x 1/1 Running 0 13m
```
Congratulations!
@ -319,9 +269,7 @@ Congratulations!
The following will run all of the end-to-end testing scenarios assuming you set your environment in `cluster/kube-env.sh`:
```shell
NUM_MINIONS=3 hack/e2e-test.sh
```
### Troubleshooting
@ -331,12 +279,10 @@ NUM_MINIONS=3 hack/e2e-test.sh
By default the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh`:
```shell
export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box
export KUBERNETES_BOX_URL=path_of_your_kuber_box
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
#### I just created the cluster, but I am getting authorization errors!
@ -344,21 +290,17 @@ export KUBERNETES_PROVIDER=vagrant
You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster you are attempting to contact.
```shell
rm ~/.kubernetes_vagrant_auth
```
After using kubectl.sh make sure that the correct credentials are set:
```shell
$ cat ~/.kubernetes_vagrant_auth
{
"User": "vagrant",
"Password": "vagrant"
}
```
#### I just created the cluster, but I do not see my container running!
@ -379,9 +321,7 @@ Are you sure you built a release first? Did you install `net-tools`? For more cl
You can control the number of nodes that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this by setting `NUM_MINIONS` to 1, like so:
```shell
export NUM_MINIONS=1
```
#### I want my VMs to have more memory!
@ -390,23 +330,16 @@ You can control the memory allotted to virtual machines with the `KUBERNETES_MEM
Just set it to the number of megabytes you would like the machines to have. For example:
```shell
export KUBERNETES_MEMORY=2048
```
If you need more granular control, you can set the amount of memory for the master and nodes independently. For example:
```shell
export KUBERNETES_MASTER_MEMORY=1536
export KUBERNETES_MINION_MEMORY=2048
```
#### I ran vagrant suspend and nothing works!
`vagrant suspend` seems to mess up the network. It's not supported at this time.

View File

@ -27,49 +27,39 @@ Below, we outline one of the more common git workflows that core developers use.
The commands below require that you have $GOPATH set ([$GOPATH docs](https://golang.org/doc/code.html#GOPATH)). We highly recommend you put Kubernetes' code into your GOPATH. Note: the commands below will not work if there is more than one directory in your `$GOPATH`.
```shell
mkdir -p $GOPATH/src/k8s.io
cd $GOPATH/src/k8s.io
# Replace "$YOUR_GITHUB_USERNAME" below with your github username
git clone https://github.com/$YOUR_GITHUB_USERNAME/kubernetes.git
cd kubernetes
git remote add upstream 'https://github.com/kubernetes/kubernetes.git'
```
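Since the commands above assume a single-directory `$GOPATH`, a small hypothetical check (not part of the repo) can catch the multi-directory case early:

```shell
# GOPATH entries are ':'-separated; more than one entry breaks the assumption above.
case "${GOPATH:-}" in
  *:*) echo "multiple GOPATH entries" ;;
  "")  echo "GOPATH is not set" ;;
  *)   echo "single GOPATH: $GOPATH" ;;
esac
```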
### Create a branch and make changes
```shell
git checkout -b myfeature
# Make your code changes
```
### Keeping your development fork in sync
```shell
git fetch upstream
git rebase upstream/master
```
Note: If you have write access to the main repository at github.com/kubernetes/kubernetes, you should modify your git configuration so that you can't accidentally push to upstream:
```shell
git remote set-url --push upstream no_push
```
### Committing changes to your fork
```shell
git commit
git push -f origin myfeature
```
### Creating a pull request
@ -107,20 +97,16 @@ directly from mercurial.
2) Create a new GOPATH for your tools and install godep:
```shell
export GOPATH=$HOME/go-tools
mkdir -p $GOPATH
go get github.com/tools/godep
```
3) Add $GOPATH/bin to your path. Typically you'd add this to your ~/.profile:
```shell
export GOPATH=$HOME/go-tools
export PATH=$PATH:$GOPATH/bin
```
### Using godep
@ -132,52 +118,43 @@ Here's a quick walkthrough of one way to use godeps to add or update a Kubernete
_Devoting a separate directory is not required, but it is helpful to separate dependency updates from other changes._
```shell
export KPATH=$HOME/code/kubernetes
mkdir -p $KPATH/src/k8s.io/kubernetes
cd $KPATH/src/k8s.io/kubernetes
git clone https://path/to/your/fork .
# Or copy your existing local repo here. IMPORTANT: making a symlink doesn't work.
```
2) Set up your GOPATH.
```shell
# Option A: this will let your builds see packages that exist elsewhere on your system.
export GOPATH=$KPATH:$GOPATH
# Option B: This will *not* let your local builds see packages that exist elsewhere on your system.
export GOPATH=$KPATH
# Option B is recommended if you're going to mess with the dependencies.
```
3) Populate your new GOPATH.
```shell
cd $KPATH/src/k8s.io/kubernetes
godep restore
```
4) Next, you can either add a new dependency or update an existing one.
```shell
# To add a new dependency, do:
cd $KPATH/src/k8s.io/kubernetes
go get path/to/dependency
# Change code in Kubernetes to use the dependency.
godep save ./...
# To update an existing dependency, do:
cd $KPATH/src/k8s.io/kubernetes
go get -u path/to/dependency
# Change code in Kubernetes accordingly if necessary.
godep update path/to/dependency/...
```
_If `go get -u path/to/dependency` fails with compilation errors, instead try `go get -d -u path/to/dependency`._
@ -199,41 +176,33 @@ Before committing any changes, please link/copy these hooks into your .git
directory. This will keep you from accidentally committing non-gofmt'd go code.
```shell
cd kubernetes/.git/hooks/
ln -s ../../hooks/pre-commit .
```
## Unit tests
```shell
cd kubernetes
hack/test-go.sh
```
Alternatively, you could also run:
```shell
cd kubernetes
godep go test ./...
```
If you only want to run unit tests in one package, you could run `godep go test` under the package directory. For example, the following commands will run all unit tests in package kubelet:
```shell
$ cd kubernetes # step into the kubernetes directory.
$ cd pkg/kubelet
$ godep go test
# some output from unit tests
PASS
ok k8s.io/kubernetes/pkg/kubelet 0.317s
```
## Coverage
@ -243,10 +212,8 @@ Currently, collecting coverage is only supported for the Go unit tests.
To run all unit tests and generate an HTML coverage report, run the following:
```shell
cd kubernetes
KUBE_COVER=y hack/test-go.sh
```
At the end of the run, an HTML report will be generated and its path printed to stdout.
@ -254,10 +221,8 @@ At the end of the run, an the HTML report will be generated with the path printe
To run tests and collect coverage in only one package, pass its relative path under the `kubernetes` directory as an argument, for example:
```shell
cd kubernetes
KUBE_COVER=y hack/test-go.sh pkg/kubectl
```
Multiple arguments can be passed, in which case the coverage results will be combined for all tests run.
@ -269,10 +234,8 @@ Coverage results for the project can also be viewed on [Coveralls](https://cover
You need an [etcd](https://github.com/coreos/etcd/releases/tag/v2.0.0) binary in your path; please make sure it is installed and in your `$PATH`.
```shell
cd kubernetes
hack/test-integration.sh
```
## End-to-End tests
@ -280,18 +243,14 @@ hack/test-integration.sh
You can run an end-to-end test which will bring up a master and two nodes, perform some tests, and then tear everything down. Make sure you have followed the getting started steps for your chosen cloud platform (which might involve changing the `KUBERNETES_PROVIDER` environment variable to something other than "gce").
```shell
cd kubernetes
hack/e2e-test.sh
```
Pressing control-C should result in an orderly shutdown but if something goes wrong and you still have some VMs running you can force a cleanup with this command:
```shell
go run hack/e2e.go --down
```
### Flag options
@ -299,54 +258,40 @@ go run hack/e2e.go --down
See the flag definitions in `hack/e2e.go` for more options, such as reusing an existing cluster, here is an overview:
```shell
# Build binaries for testing
go run hack/e2e.go --build
# Create a fresh cluster. Deletes a cluster first, if it exists
go run hack/e2e.go --up
# Create a fresh cluster at a specific release version.
go run hack/e2e.go --up --version=0.7.0
# Test if a cluster is up.
go run hack/e2e.go --isup
# Push code to an existing cluster
go run hack/e2e.go --push
# Push to an existing cluster, or bring up a cluster if it's down.
go run hack/e2e.go --pushup
# Run all tests
go run hack/e2e.go --test
# Run tests matching the regex "Pods.*env"
go run hack/e2e.go -v -test --test_args="--ginkgo.focus=Pods.*env"
# Alternately, if you have the e2e cluster up and no desire to see the event stream, you can run ginkgo-e2e.sh directly:
hack/ginkgo-e2e.sh --ginkgo.focus=Pods.*env
```
### Combining flags
```shell
# Flags can be combined, and their actions will take place in this order:
# -build, -push|-up|-pushup, -test|-tests=..., -down
# e.g.:
go run hack/e2e.go -build -pushup -test -down
# -v (verbose) can be added if you want streaming output instead of only
# seeing the output of failed commands.
# -ctl can be used to quickly call kubectl against your e2e cluster. Useful for
# cleaning up after a failed test or viewing logs. Use -v to avoid suppressing
# kubectl output.
go run hack/e2e.go -v -ctl='get events'
go run hack/e2e.go -v -ctl='delete pod foobar'
```
## Conformance testing
@ -368,10 +313,5 @@ See [conformance-test.sh](http://releases.k8s.io/release-1.1/hack/conformance-te
## Regenerating the CLI documentation
```shell
hack/update-generated-docs.sh
```

View File

@ -27,8 +27,7 @@ The output for the end-2-end tests will be a single binary called `e2e.test` und
For the purposes of brevity, we will look at a subset of the options, which are listed below:
```shell
-ginkgo.dryRun=false: If set, ginkgo will walk the test hierarchy without actually running anything. Best paired with -v.
-ginkgo.failFast=false: If set, ginkgo will stop running a test suite after a failure occurs.
-ginkgo.failOnPending=false: If set, ginkgo will mark the test suite as failed if any specs are pending.
@ -41,31 +40,34 @@ For the purposes of brevity, we will look at a subset of the options, which are
-prom-push-gateway="": The URL to prometheus gateway, so that metrics can be pushed during e2es and scraped by prometheus. Typically something like 127.0.0.1:9091.
-provider="": The name of the Kubernetes provider (gce, gke, local, vagrant, etc.)
-repo-root="../../": Root directory of kubernetes repository, for finding test files.
```
Prior to running the tests, it is recommended that you first create a simple auth file in your home directory, e.g. `$HOME/.kubernetes_auth`, with the following:
```json
{
"User": "root",
"Password": ""
}
```
Next, you will need a cluster that you can test against. As mentioned earlier, you will want to execute `sudo ./hack/local-up-cluster.sh`. To get a sense of what tests exist, you may want to run:
```shell
e2e.test --host="127.0.0.1:8080" --provider="local" --ginkgo.v=true -ginkgo.dryRun=true --kubeconfig="$HOME/.kubernetes_auth" --repo-root="$KUBERNETES_SRC_PATH"
```
If you wish to execute a specific set of tests you can use the `-ginkgo.focus=` regex, e.g.:
```shell
e2e.test ... --ginkgo.focus="DNS|(?i)nodeport(?-i)|kubectl guestbook"
```
Conversely, if you wish to exclude a set of tests, you can run:
```shell
e2e.test ... --ginkgo.skip="Density|Scale"
```
As mentioned earlier there are a host of other options that are available, but are left to the developer
@ -91,9 +93,9 @@ For developers who are interested in doing their own performance analysis, we re
For more accurate measurements, you may wish to set up prometheus external to kubernetes in an environment where it can access the major system components (api-server, controller-manager, scheduler). This is especially useful when attempting to gather metrics in a load-balanced api-server environment, because all api-servers can be analyzed independently as well as collectively. On startup, a configuration file is passed to prometheus that specifies the endpoints that prometheus will scrape, as well as the sampling interval.
**prometheus.conf**

```conf
job: {
name: "kubernetes"
scrape_interval: "1s"
@ -105,13 +107,8 @@ job: {
# controller-manager
target: "http://localhost:10252/metrics"
}
```
Once prometheus is scraping the kubernetes endpoints, that data can then be plotted using promdash, and alerts can be created against the assortment of metrics that kubernetes provides.
**HAPPY TESTING!**

View File

@ -14,7 +14,6 @@ There is a testing image `brendanburns/flake` up on the docker hub. We will use
Create a replication controller with the following config:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
@ -34,15 +33,12 @@ spec:
value: pkg/tools
- name: REPO_SPEC
value: https://github.com/kubernetes/kubernetes
```
Note that we omit the labels and the selector fields of the replication controller, because they will be populated from the labels field of the pod template by default.
```shell
kubectl create -f ./controller.yaml
```
This will spin up 24 instances of the test. They will run to completion, then exit, and the kubelet will restart them, accumulating more and more runs of the test.
@ -50,7 +46,6 @@ You can examine the recent runs of the test by calling `docker ps -a` and lookin
You can use this script to automate checking for failures, assuming your cluster is running on GCE and has four nodes:
```shell
echo "" > output.txt
for i in {1..4}; do
echo "Checking kubernetes-minion-${i}"
@ -58,20 +53,14 @@ for i in {1..4}; do
gcloud compute ssh "kubernetes-minion-${i}" --command="sudo docker ps -a" >> output.txt
done
grep "Exited ([^0])" output.txt
```
Eventually you will have sufficient runs for your purposes. At that point you can stop and delete the replication controller by running:
```shell
kubectl stop replicationcontroller flakecontroller
```
If you do a final check for flakes with `docker ps -a`, ignore tasks that exited -1, since that's what happens when you stop the replication controller.
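For that final pass, one way to keep the non-zero exit codes while dropping the -1 rows is an extra `grep -v` over the `output.txt` gathered earlier (a sketch; the trailing `|| true` keeps the pipeline from aborting when nothing matches):

```shell
# Non-zero exit codes, minus the "Exited (-1)" rows caused by stopping the
# replication controller.
grep "Exited ([^0])" output.txt | grep -v "Exited (-1)" || true
```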
Happy flake hunting!

View File

@ -8,37 +8,26 @@ Run `./hack/get-build.sh -h` for its usage.
For example, to get a build at a specific version (v1.0.2):
```shell
./hack/get-build.sh v1.0.2
```
Alternatively, to get the latest stable release:
```shell
./hack/get-build.sh release/stable
```
Finally, you can just print the latest or stable version:
```shell
./hack/get-build.sh -v ci/latest
```
You can also use the gsutil tool to explore the Google Cloud Storage release buckets. Here are some examples:
```shell
gsutil cat gs://kubernetes-release/ci/latest.txt # output the latest ci version number
gsutil cat gs://kubernetes-release/ci/latest-green.txt # output the latest ci version number that passed gce e2e
gsutil ls gs://kubernetes-release/ci/v0.20.0-29-g29a55cc/ # list the contents of a ci release
gsutil ls gs://kubernetes-release/release # list all official releases and rcs
```

View File

@ -13,9 +13,7 @@ Find the most-recent PR that was merged with the current .0 release. Remember t
### 2) Run the release-notes tool
```shell
${KUBERNETES_ROOT}/build/make-release-notes.sh $LASTPR $CURRENTPR
```
### 3) Trim the release notes

View File

@ -12,11 +12,9 @@ Go comes with inbuilt 'net/http/pprof' profiling library and profiling web servi
TL;DR: Add lines:
```go
m.mux.HandleFunc("/debug/pprof/", pprof.Index)
m.mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
m.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
```
to the `init(c *Config)` method in `pkg/master/master.go` and import the `net/http/pprof` package.
@ -28,17 +26,13 @@ In most use cases to use profiler service it's enough to do 'import _ net/http/p
Even with the profiler running, I found it not entirely straightforward to use `go tool pprof`. The problem is that, at least for dev purposes, the certificates generated for the apiserver are not signed by anyone trusted, and because secureServer serves only secure traffic it isn't straightforward to connect to the service. The best workaround I found is to create an ssh tunnel from the kubernetes_master open unsecured port to some external server, and use this server as a proxy. To save everyone looking for the correct ssh flags, it is done by running:
```shell
ssh kubernetes_master -L<local_port>:localhost:8080
```
or an analogous one for your cloud provider. Afterwards you can, e.g., run
```shell
go tool pprof http://localhost:<local_port>/debug/pprof/profile
```
to get a 30-second CPU profile.

View File

@ -34,9 +34,7 @@ can find the Git hash for a build by looking at the "Console Log", then look for
`githash=`. You should see a line like:
```shell
+ githash=v0.20.2-322-g974377b
```
Because Jenkins builds frequently, if you're looking between jobs
@ -51,9 +49,7 @@ oncall.
Before proceeding to the next step:
```shell
export BRANCHPOINT=v0.20.2-322-g974377b
```
Where `v0.20.2-322-g974377b` is the git hash you decided on. This will become
@ -203,12 +199,10 @@ We are using `pkg/version/base.go` as the source of versioning in absence of
information from git. Here is a sample of that file's contents:
```go
var (
gitVersion string = "v0.4-dev" // version from git, output of $(git describe)
gitCommit string = "" // sha1 from git, output of $(git rev-parse HEAD)
)
```
This means a build with `go install` or `go get` or a build from a tarball will
@ -288,7 +282,6 @@ As an example, Docker commit a327d9b91edf has a `v1.1.1-N-gXXX` label but it is
not present in Docker `v1.2.0`:
```shell
$ git describe a327d9b91edf
v1.1.1-822-ga327d9b91edf
@ -296,7 +289,6 @@ $ git log --oneline v1.2.0..a327d9b91edf
a327d9b91edf Fix data space reporting from Kb/Mb to KB/MB
(Non-empty output here means the commit is not present on v1.2.0.)
```
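The same ancestry check can be scripted; here is a hedged sketch using `git merge-base --is-ancestor` (the commit and tag names are the placeholders from the example above):

```shell
# Exits 0 when the commit is already contained in the tag, non-zero otherwise;
# stderr is suppressed so the check degrades gracefully outside a repo.
if git merge-base --is-ancestor a327d9b91edf v1.2.0 2>/dev/null; then
  echo "commit is in the release"
else
  echo "commit is not in the release"
fi
```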
## Release Notes

View File

@ -35,7 +35,7 @@ the policies used are selected by the functions `defaultPredicates()` and `defau
However, the choice of policies
can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON
file specifying which scheduling policies to use. See
[examples/scheduler-policy-config.json](https://github.com/kubernetes/kubernetes/tree/master/examples/scheduler-policy-config.json) for an example
config file. (Note that the config file format is versioned; the API is defined in
[plugin/pkg/scheduler/api](http://releases.k8s.io/release-1.1/plugin/pkg/scheduler/api/)).
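As a sketch, a minimal policy file for `--policy-config-file` might look like the following; the predicate and priority names are illustrative picks from the default set, not a definitive configuration:

```shell
# Write a hypothetical policy file and check that it is valid JSON; the
# scheduler would then be started with --policy-config-file pointing at it.
cat > /tmp/scheduler-policy.json <<'EOF'
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsPorts"},
    {"name": "MatchNodeSelector"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1}
  ]
}
EOF
python3 -m json.tool /tmp/scheduler-policy.json > /dev/null && echo "valid JSON"
```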
Thus to add a new scheduling policy, you should modify predicates.go or priorities.go,

View File

@ -14,22 +14,21 @@ title: "Getting started on AWS EC2"
NOTE: This script uses the 'default' AWS profile by default.
You may explicitly set the AWS profile to use with the `AWS_DEFAULT_PROFILE` environment variable:
```shell
export AWS_DEFAULT_PROFILE=myawsprofile
```
## Cluster turnup
### Supported procedure: `get-kube`
```shell
#Using wget
export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash
#Using cURL
export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash
```
NOTE: This script calls [cluster/kube-up.sh](http://releases.k8s.io/release-1.1/cluster/kube-up.sh)
which in turn calls [cluster/aws/util.sh](http://releases.k8s.io/release-1.1/cluster/aws/util.sh)
using [cluster/aws/config-default.sh](http://releases.k8s.io/release-1.1/cluster/aws/config-default.sh).
@ -41,16 +40,16 @@ tokens are written in `~/.kube/config`, they will be necessary to use the CLI or
By default, the script will provision a new VPC and a 4 node k8s cluster in us-west-2a (Oregon) with `t2.micro` instances running on Ubuntu.
You can override the variables defined in [config-default.sh](http://releases.k8s.io/release-1.1/cluster/aws/config-default.sh) to change this behavior as follows:
```shell
export KUBE_AWS_ZONE=eu-west-1c
export NUM_MINIONS=2
export MINION_SIZE=m3.medium
export AWS_S3_REGION=eu-west-1
export AWS_S3_BUCKET=mycompany-kubernetes-artifacts
export INSTANCE_PREFIX=k8s
...
```
It will also try to create or reuse a keypair called "kubernetes", and IAM profiles called "kubernetes-master" and "kubernetes-minion".
If these already exist, make sure you want them to be used here.
@ -70,14 +69,13 @@ Alternately, you can download the latest Kubernetes release from [this page](htt
Next, add the appropriate binary folder to your `PATH` to access kubectl:
```shell
# OS X
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
# Linux
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
```
An up-to-date documentation page for this tool is available here: [kubectl manual](/{{page.version}}/docs/user-guide/kubectl/kubectl)
By default, `kubectl` will use the `kubeconfig` file generated during the cluster startup for authenticating against the API.
@ -87,23 +85,20 @@ For more information, please read [kubeconfig files](/{{page.version}}/docs/user
See [a simple nginx example](/{{page.version}}/docs/user-guide/simple-nginx) to try out your new cluster.
The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook/)
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/master/examples/)
## Tearing down the cluster
Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the
`kubernetes` directory:
```shell
cluster/kube-down.sh
```
## Further reading
Please see the [Kubernetes docs](/{{page.version}}/docs/) for more details on administering
and using a Kubernetes cluster.

View File

@ -16,11 +16,9 @@ Get the Kubernetes source. If you are simply building a release from source the
Building a release is simple.
```shell
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make release
```
For more details on the release process see the [`build/` directory](http://releases.k8s.io/release-1.1/build/)

View File

@ -21,55 +21,49 @@ The Kubernetes package provides a few services: kube-apiserver, kube-scheduler,
Hosts:
```
centos-master = 192.168.121.9
centos-minion = 192.168.121.65
```
**Prepare the hosts:**
* Create virt7-testing repo on all hosts - centos-{master,minion} with following information.
```
[virt7-testing]
name=virt7-testing
baseurl=http://cbs.centos.org/repos/virt7-testing/x86_64/os/
gpgcheck=0
```
* Install Kubernetes on all hosts - centos-{master,minion}. This will also pull in etcd, docker, and cadvisor.
```shell
yum -y install --enablerepo=virt7-testing kubernetes
```
In the current virt7-testing repo, the etcd package is updated, which causes a service failure. To avoid this:
```shell
yum erase etcd
```
This will uninstall the currently available etcd package.
```shell
yum install http://cbs.centos.org/kojifiles/packages/etcd/0.4.6/7.el7.centos/x86_64/etcd-0.4.6-7.el7.centos.x86_64.rpm
yum -y install --enablerepo=virt7-testing kubernetes
```
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS)
```shell
echo "192.168.121.9 centos-master
192.168.121.65 centos-minion" >> /etc/hosts
```
* Edit /etc/kubernetes/config which will be the same on all hosts to contain:
```shell
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:4001"
@ -81,20 +75,18 @@ KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
```
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers
```shell
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
```
**Configure the Kubernetes services on the master.**
* Edit /etc/kubernetes/apiserver to appear as such:
```shell
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
@ -112,25 +104,23 @@ KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
```
* Start the appropriate services on master:
```shell
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
**Configure the Kubernetes services on the node.**
***We need to configure the kubelet and start the kubelet and proxy***
* Edit /etc/kubernetes/kubelet to appear as such:
```shell
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"
@ -145,28 +135,25 @@ KUBELET_API_SERVER="--api-servers=http://centos-master:8080"
# Add your own!
KUBELET_ARGS=""
```
* Start the appropriate services on node (centos-minion).
```shell
for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
*You should be finished!*
* Check to make sure the cluster can see the node (on centos-master)
```shell
$ kubectl get nodes
NAME LABELS STATUS
centos-minion <none> Ready
```
**The cluster should be running! Launch a test pod.**
You should now have a functional cluster; check out [101](/{{page.version}}/docs/user-guide/walkthrough/README)!

View File

@ -18,76 +18,68 @@ In this guide I will demonstrate how to deploy a Kubernetes cluster to Azure clo
To get started, you need to check out the code:
```shell
git clone https://github.com/kubernetes/kubernetes
cd kubernetes/docs/getting-started-guides/coreos/azure/
```
You will need to have [Node.js installed](http://nodejs.org/download/) on your machine. If you have previously used the Azure CLI, you should have it already.
First, you need to install some of the dependencies with
```shell
npm install
```
Now, all you need to do is:
```shell
./azure-login.js -u <your_username>
./create-kubernetes-cluster.js
```
This script will provision a cluster suitable for production use, with a ring of 3 dedicated etcd nodes: 1 Kubernetes master and 2 Kubernetes nodes. The `kube-00` VM will be the master; your workloads should be deployed only on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, so that a user of the free tier can reproduce it without paying extra. I will show how to add larger VMs later.
![VMs in Azure](initial_cluster.png)
Once the creation of Azure VMs has finished, you should see the following:
```shell
...
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_1c1496016083b4_ssh_conf <hostname>`
azure_wrapper/info: The hosts in this deployment are:
[ 'etcd-00', 'etcd-01', 'etcd-02', 'kube-00', 'kube-01', 'kube-02' ]
azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.yml`
```
Let's log in to the master node like so:
```shell
ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
```
> Note: the config file name will be different; make sure to use the one you see.
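Since each deployment writes a new, uniquely named SSH config into `./output`, you can pick the newest one automatically (a sketch; the helper name is ours):

```shell
# Hypothetical helper: print the most recently written SSH config in ./output.
latest_ssh_conf() {
  ls -t "${1:-./output}"/*_ssh_conf 2>/dev/null | head -n 1
}

# Usage:
# ssh -F "$(latest_ssh_conf)" kube-00
```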
Check there are 2 nodes in the cluster:
```shell
core@kube-00 ~ $ kubectl get nodes
NAME LABELS STATUS
kube-01 kubernetes.io/hostname=kube-01 Ready
kube-02 kubernetes.io/hostname=kube-02 Ready
```
## Deploying the workload
Let's follow the Guestbook example now:
```shell
kubectl create -f ~/guestbook-example
```
You need to wait for the pods to be deployed. Run the following and wait for `STATUS` to change from `Pending` to `Running`.
```shell
kubectl get pods --watch
```
> Note: most of the time will be spent downloading Docker container images on each of the nodes.
Eventually you should see:
```shell
NAME READY STATUS RESTARTS AGE
frontend-0a9xi 1/1 Running 0 4m
frontend-4wahe 1/1 Running 0 4m
@ -95,8 +87,7 @@ frontend-6l36j 1/1 Running 0 4m
redis-master-talmr 1/1 Running 0 4m
redis-slave-12zfd 1/1 Running 0 4m
redis-slave-3nbce 1/1 Running 0 4m
```
## Scaling
Two single-core nodes are certainly not enough for a production system of today. Let's scale the cluster by adding a couple of bigger nodes.
@ -105,13 +96,12 @@ You will need to open another terminal window on your machine and go to the same
First, let's set the size of the new VMs:
```shell
export AZ_VM_SIZE=Large
```
Now, run the scale script with the state file of the previous deployment and the number of nodes to add:
```shell
core@kube-00 ~ $ ./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2
...
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_8f984af944f572_ssh_conf <hostname>`
@ -125,69 +115,63 @@ azure_wrapper/info: The hosts in this deployment are:
'kube-03',
'kube-04' ]
azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.yml`
```
> Note: this step has created new files in `./output`.
Back on `kube-00`:
```shell
core@kube-00 ~ $ kubectl get nodes
NAME LABELS STATUS
kube-01 kubernetes.io/hostname=kube-01 Ready
kube-02 kubernetes.io/hostname=kube-02 Ready
kube-03 kubernetes.io/hostname=kube-03 Ready
kube-04 kubernetes.io/hostname=kube-04 Ready
```
You can see that two more nodes joined happily. Let's scale the number of Guestbook instances now.
First, double-check how many replication controllers there are:
```shell
core@kube-00 ~ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 3
redis-master master redis name=redis-master 1
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 2
```
As there are 4 nodes, let's scale proportionally:
```shell
core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave
scaled
core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend
scaled
```
Check what you have now:
```shell
core@kube-00 ~ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 4
redis-master master redis name=redis-master 1
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 4
```
You will now have more instances of the front-end Guestbook app and Redis slaves; if you look up all pods labeled `name=frontend`, you should see one running on each node.
```shell
core@kube-00 ~/guestbook-example $ kubectl get pods -l name=frontend
NAME READY STATUS RESTARTS AGE
frontend-0a9xi 1/1 Running 0 22m
frontend-4wahe 1/1 Running 0 22m
frontend-6l36j 1/1 Running 0 22m
frontend-z9oxo 1/1 Running 0 41s
```
## Exposing the app to the outside world
There is no native Azure load-balancer support in Kubernetes 1.0, however here is how you can expose the Guestbook app to the Internet.
```
./expose_guestbook_app_port.sh ./output/kube_1c1496016083b4_ssh_conf
Guestbook app is on port 31605, will map it to port 80 on kube-00
info: Executing command vm endpoint create
@ -203,8 +187,7 @@ data: Protcol : tcp
data: Virtual IP Address : 137.117.156.164
data: Direct server return : Disabled
info: vm endpoint show command OK
```
You should then be able to access it from anywhere via the Azure virtual IP for `kube-00` displayed above, i.e. `http://137.117.156.164/` in my case.
## Next steps
@ -217,10 +200,9 @@ You should probably try deploy other [example apps](../../../../examples/) or wr
If you don't want to keep paying the Azure bill, you can tear down the cluster. It's easy to redeploy it, as you can see.
```shell
./destroy-cluster.js ./output/kube_8f984af944f572_deployment.yml
```
> Note: make sure to use the _latest state file_, as after scaling there is a new one.
By the way, with the scripts shown, you can deploy multiple clusters, if you like :)

View File

@ -28,32 +28,26 @@ master and etcd nodes, and show how to scale the cluster with ease.
To get started, you need to check out the code:
```shell
git clone https://github.com/kubernetes/kubernetes
cd kubernetes/docs/getting-started-guides/coreos/azure/
```
You will need to have [Node.js installed](http://nodejs.org/download/) on your machine. If you have previously used the Azure CLI, you should have it already.
First, you need to install some of the dependencies with
```shell
npm install
```
Now, all you need to do is:
```shell
./azure-login.js -u <your_username>
./create-kubernetes-cluster.js
```
This script will provision a cluster suitable for production use, with a ring of 3 dedicated etcd nodes: 1 Kubernetes master and 2 Kubernetes nodes.
The `kube-00` VM will be the master; your workloads should be deployed only on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, so
that a user of the free tier can reproduce it without paying extra. I will show how to add larger VMs later.
@ -62,61 +56,50 @@ ensure a user of the free tier can reproduce it without paying extra. I will sho
Once the creation of Azure VMs has finished, you should see the following:
```shell
...
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_1c1496016083b4_ssh_conf <hostname>`
azure_wrapper/info: The hosts in this deployment are:
[ 'etcd-00', 'etcd-01', 'etcd-02', 'kube-00', 'kube-01', 'kube-02' ]
azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.yml`
```
Let's log in to the master node like so:
```shell
ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
```
> Note: the config file name will be different; make sure to use the one you see.
Check there are 2 nodes in the cluster:
```shell
core@kube-00 ~ $ kubectl get nodes
NAME LABELS STATUS
kube-01 kubernetes.io/hostname=kube-01 Ready
kube-02 kubernetes.io/hostname=kube-02 Ready
```
## Deploying the workload
Let's follow the Guestbook example now:
```shell
kubectl create -f ~/guestbook-example
```
You need to wait for the pods to be deployed. Run the following and wait for `STATUS` to change from `Pending` to `Running`.
```shell
kubectl get pods --watch
```
> Note: most of the time will be spent downloading Docker container images on each of the nodes.
Eventually you should see:
```shell
NAME READY STATUS RESTARTS AGE
frontend-0a9xi 1/1 Running 0 4m
frontend-4wahe 1/1 Running 0 4m
@ -125,8 +108,7 @@ redis-master-talmr 1/1 Running 0 4m
redis-slave-12zfd 1/1 Running 0 4m
redis-slave-3nbce 1/1 Running 0 4m
```
## Scaling
Two single-core nodes are certainly not enough for a production system of today. Let's scale the cluster by adding a couple of bigger nodes.
@ -135,16 +117,13 @@ You will need to open another terminal window on your machine and go to the same
First, let's set the size of the new VMs:
```shell
export AZ_VM_SIZE=Large
```
Now, run the scale script with the state file of the previous deployment and the number of nodes to add:
```shell
core@kube-00 ~ $ ./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2
...
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_8f984af944f572_ssh_conf <hostname>`
@ -159,14 +138,12 @@ azure_wrapper/info: The hosts in this deployment are:
'kube-04' ]
azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.yml`
```
> Note: this step has created new files in `./output`.
Back on `kube-00`:
```shell
core@kube-00 ~ $ kubectl get nodes
NAME LABELS STATUS
kube-01 kubernetes.io/hostname=kube-01 Ready
@ -174,50 +151,42 @@ kube-02 kubernetes.io/hostname=kube-02 Ready
kube-03 kubernetes.io/hostname=kube-03 Ready
kube-04 kubernetes.io/hostname=kube-04 Ready
```
You can see that two more nodes joined happily. Let's scale the number of Guestbook instances now.
First, double-check how many replication controllers there are:
```shell
core@kube-00 ~ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 3
redis-master master redis name=redis-master 1
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 2
```
As there are 4 nodes, let's scale proportionally:
```shell
core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave
scaled
core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend
scaled
```
Check what you have now:
```shell
core@kube-00 ~ $ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 4
redis-master master redis name=redis-master 1
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 4
```
You will now have more instances of the front-end Guestbook app and Redis slaves; if you look up all pods labeled `name=frontend`, you should see one running on each node.
```shell
core@kube-00 ~/guestbook-example $ kubectl get pods -l name=frontend
NAME READY STATUS RESTARTS AGE
frontend-0a9xi 1/1 Running 0 22m
@ -225,14 +194,12 @@ frontend-4wahe 1/1 Running 0 22m
frontend-6l36j 1/1 Running 0 22m
frontend-z9oxo 1/1 Running 0 41s
```
## Exposing the app to the outside world
There is no native Azure load-balancer support in Kubernetes 1.0, however here is how you can expose the Guestbook app to the Internet.
```
./expose_guestbook_app_port.sh ./output/kube_1c1496016083b4_ssh_conf
Guestbook app is on port 31605, will map it to port 80 on kube-00
info: Executing command vm endpoint create
@ -249,8 +216,7 @@ data: Virtual IP Address : 137.117.156.164
data: Direct server return : Disabled
info: vm endpoint show command OK
```
You should then be able to access it from anywhere via the Azure virtual IP for `kube-00` displayed above, i.e. `http://137.117.156.164/` in my case.
## Next steps
@ -263,12 +229,10 @@ You should probably try deploy other [example apps](../../../../examples/) or wr
If you don't want to keep paying the Azure bill, you can tear down the cluster. It's easy to redeploy it, as you can see.
```shell
./destroy-cluster.js ./output/kube_8f984af944f572_deployment.yml
```
> Note: make sure to use the _latest state file_, as after scaling there is a new one.
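Since each scaling run saves a new state file into `./output`, the latest one can be selected programmatically (a sketch; the helper name is ours):

```shell
# Hypothetical helper: print the most recent deployment state file in ./output.
latest_state_file() {
  ls -t "${1:-./output}"/*_deployment.yml 2>/dev/null | head -n 1
}

# Usage:
# ./destroy-cluster.js "$(latest_state_file)"
```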
By the way, with the scripts shown, you can deploy multiple clusters, if you like :)

View File

@ -30,26 +30,22 @@ In the next few steps you will be asked to configure these files and host them o
To get the Kubernetes source, clone the GitHub repo, and build the binaries.
```
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
./build/release.sh
```
Once the binaries are built, host the entire `<kubernetes>/_output/dockerized/bin/<OS>/<ARCHITECTURE>/` folder on an accessible HTTP server so it can be reached from the cloud-config. You'll point your cloud-config files at this HTTP server later.
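For a quick test environment, Python's built-in HTTP server is one way to host that folder (a sketch; it assumes Python 3 on the server, and the helper name, path, and port are examples):

```shell
# Hypothetical helper: serve a directory over HTTP in the background,
# printing the server's PID so it can be stopped later.
serve_dir() {
  cd "$1" || return 1
  python3 -m http.server "$2" >/dev/null 2>&1 &
  echo $!
}

# Usage:
# PID=$(serve_dir _output/dockerized/bin/linux/amd64 8080)
# ...later: kill "$PID"
```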
## Download CoreOS
Let's download the CoreOS bootable ISO. We'll use this image to boot and install CoreOS on each server.
```
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_iso_image.iso
```
You can also download the ISO from the [CoreOS website](https://coreos.com/docs/running-coreos/platforms/iso/).
## Configure the Kubernetes Master
@ -58,13 +54,11 @@ Once you've downloaded the image, use it to boot your Kubernetes Master server.
Let's get the master-config.yaml and fill in the necessary variables. Run the following commands on your HTTP server to get the cloud-config files.
```
git clone https://github.com/Metaswitch/calico-kubernetes-demo.git
cd calico-kubernetes-demo/coreos
```
You'll need to replace the following variables in the `master-config.yaml` file to match your deployment.
- `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server.
- `<KUBERNETES_LOC>`: The address used to get the kubernetes binaries over HTTP.
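Placeholders like these can be filled in with `sed`; a sketch (the helper name and example values are ours, not part of the guide):

```shell
# Hypothetical helper: replace a literal placeholder in a file, in place.
fill_placeholder() {
  # fill_placeholder FILE PLACEHOLDER VALUE
  sed -i "s|$2|$3|g" "$1"
}

# Example:
# fill_placeholder master-config.yaml '<KUBERNETES_LOC>' 'http://10.0.0.20:8080'
```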
@ -75,12 +69,10 @@ Host the modified `master-config.yaml` file and pull it on to your Kubernetes Ma
The CoreOS bootable ISO comes with a tool called `coreos-install` which will allow us to install CoreOS to disk and configure the install using cloud-config. The following command will download and install stable CoreOS, using the master-config.yaml file for configuration.
```
sudo coreos-install -d /dev/sda -C stable -c master-config.yaml
```
Once complete, eject the bootable ISO and restart the server. When it comes back up, you should have SSH access as the `core` user using the public key provided in the master-config.yaml file.
## Configure the compute hosts
@ -102,32 +94,26 @@ You'll need to replace the following variables in the `node-config.yaml` file to
Host the modified `node-config.yaml` file and pull it on to your Kubernetes node.
```
wget http://<http_server_ip>/node-config.yaml
```
Install and configure CoreOS on the node using the following command.
```
sudo coreos-install -d /dev/sda -C stable -c node-config.yaml
```
Once complete, restart the server. When it comes back up, you should have SSH access as the `core` user using the public key provided in the `node-config.yaml` file. It will take some time for the node to be fully configured. Once fully configured, you can check that the node is running with the following command on the Kubernetes master.
```
/home/core/kubectl get nodes
```
## Testing the Cluster
You should now have a functional bare-metal Kubernetes cluster with one master and two compute hosts.
Try running the [guestbook demo](../../../examples/guestbook/) to test out your new cluster!
Try running the [guestbook demo](https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook/) to test out your new cluster!

View File

@ -24,13 +24,13 @@ whether for testing a POC before the real deal, or you are restricted to be tota
## This Guide's variables
| Node Description | MAC | IP |
| :---------------------------- | :---------------: | :---------: |
| CoreOS/etcd/Kubernetes Master | d0:00:67:13:0d:00 | 10.20.30.40 |
| CoreOS Slave 1 | d0:00:67:13:0d:01 | 10.20.30.41 |
| CoreOS Slave 2 | d0:00:67:13:0d:02 | 10.20.30.42 |
## Setup PXELINUX CentOS
To set up a CentOS PXELINUX environment, there is a complete [guide here](http://docs.fedoraproject.org/en-US/Fedora/7/html/Installation_Guide/ap-pxe-server). This section is the abbreviated version.
@ -98,7 +98,7 @@ This section describes how to setup the CoreOS images to live alongside a pre-ex
gpg --verify coreos_production_pxe_image.cpio.gz.sig
4. Edit the menu `vi /tftpboot/pxelinux.cfg/default` again
default menu.c32
prompt 0
timeout 300
@ -221,358 +221,357 @@ To make the setup work, you need to replace a few placeholders:
On the PXE server, create the file and fill in the variables: `vi /var/www/html/coreos/pxe-cloud-config-master.yml`.
```yaml
#cloud-config
---
write_files:
- path: /opt/bin/waiter.sh
owner: root
content: |
#! /usr/bin/bash
until curl http://127.0.0.1:4001/v2/machines; do sleep 2; done
- path: /opt/bin/kubernetes-download.sh
owner: root
permissions: 0755
content: |
#! /usr/bin/bash
/usr/bin/wget -N -P "/opt/bin" "http://<PXE_SERVER_IP>/kubectl"
/usr/bin/wget -N -P "/opt/bin" "http://<PXE_SERVER_IP>/kubernetes"
/usr/bin/wget -N -P "/opt/bin" "http://<PXE_SERVER_IP>/kubecfg"
chmod +x /opt/bin/*
- path: /etc/profile.d/opt-path.sh
owner: root
permissions: 0755
content: |
#! /usr/bin/bash
PATH=$PATH/opt/bin
coreos:
units:
- name: 10-eno1.network
runtime: true
content: |
[Match]
Name=eno1
[Network]
DHCP=yes
- name: 20-nodhcp.network
runtime: true
content: |
[Match]
Name=en*
[Network]
DHCP=none
- name: get-kube-tools.service
runtime: true
command: start
content: |
[Service]
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStart=/opt/bin/kubernetes-download.sh
RemainAfterExit=yes
Type=oneshot
- name: setup-network-environment.service
command: start
content: |
[Unit]
Description=Setup Network Environment
Documentation=https://github.com/kelseyhightower/setup-network-environment
Requires=network-online.target
After=network-online.target
[Service]
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/setup-network-environment
ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
ExecStart=/opt/bin/setup-network-environment
RemainAfterExit=yes
Type=oneshot
- name: etcd.service
command: start
content: |
[Unit]
Description=etcd
Requires=setup-network-environment.service
After=setup-network-environment.service
[Service]
EnvironmentFile=/etc/network-environment
User=etcd
PermissionsStartOnly=true
ExecStart=/usr/bin/etcd \
--name ${DEFAULT_IPV4} \
--addr ${DEFAULT_IPV4}:4001 \
--bind-addr 0.0.0.0 \
--cluster-active-size 1 \
--data-dir /var/lib/etcd \
--http-read-timeout 86400 \
--peer-addr ${DEFAULT_IPV4}:7001 \
--snapshot true
Restart=always
RestartSec=10s
- name: fleet.socket
command: start
content: |
[Socket]
ListenStream=/var/run/fleet.sock
- name: fleet.service
command: start
content: |
[Unit]
Description=fleet daemon
Wants=etcd.service
After=etcd.service
Wants=fleet.socket
After=fleet.socket
[Service]
Environment="FLEET_ETCD_SERVERS=http://127.0.0.1:4001"
Environment="FLEET_METADATA=role=master"
ExecStart=/usr/bin/fleetd
Restart=always
RestartSec=10s
- name: etcd-waiter.service
command: start
content: |
[Unit]
Description=etcd waiter
Wants=network-online.target
Wants=etcd.service
After=etcd.service
After=network-online.target
Before=flannel.service
Before=setup-network-environment.service
[Service]
ExecStartPre=/usr/bin/chmod +x /opt/bin/waiter.sh
ExecStart=/usr/bin/bash /opt/bin/waiter.sh
RemainAfterExit=true
Type=oneshot
- name: flannel.service
command: start
content: |
[Unit]
Wants=etcd-waiter.service
After=etcd-waiter.service
Requires=etcd.service
After=etcd.service
After=network-online.target
Wants=network-online.target
Description=flannel is an etcd backed overlay network for containers
[Service]
Type=notify
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/flanneld
ExecStartPre=/usr/bin/chmod +x /opt/bin/flanneld
ExecStartPre=-/usr/bin/etcdctl mk /coreos.com/network/config '{"Network":"10.100.0.0/16", "Backend": {"Type": "vxlan"}}'
ExecStart=/opt/bin/flanneld
- name: kube-apiserver.service
command: start
content: |
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
Requires=etcd.service
After=etcd.service
[Service]
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-apiserver
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver
ExecStart=/opt/bin/kube-apiserver \
--address=0.0.0.0 \
--port=8080 \
--service-cluster-ip-range=10.100.0.0/16 \
--etcd-servers=http://127.0.0.1:4001 \
--logtostderr=true
Restart=always
RestartSec=10
- name: kube-controller-manager.service
command: start
content: |
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
Requires=kube-apiserver.service
After=kube-apiserver.service
[Service]
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-controller-manager
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager
ExecStart=/opt/bin/kube-controller-manager \
--master=127.0.0.1:8080 \
--logtostderr=true
Restart=always
RestartSec=10
- name: kube-scheduler.service
command: start
content: |
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
Requires=kube-apiserver.service
After=kube-apiserver.service
[Service]
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-scheduler
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler
ExecStart=/opt/bin/kube-scheduler --master=127.0.0.1:8080
Restart=always
RestartSec=10
- name: kube-register.service
command: start
content: |
[Unit]
Description=Kubernetes Registration Service
Documentation=https://github.com/kelseyhightower/kube-register
Requires=kube-apiserver.service
After=kube-apiserver.service
Requires=fleet.service
After=fleet.service
[Service]
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-register
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-register
ExecStart=/opt/bin/kube-register \
--metadata=role=node \
--fleet-endpoint=unix:///var/run/fleet.sock \
--healthz-port=10248 \
--api-endpoint=http://127.0.0.1:8080
Restart=always
RestartSec=10
update:
group: stable
reboot-strategy: off
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAAD...
```
### node.yml
On the PXE server, create the file and fill in the variables: `vi /var/www/html/coreos/pxe-cloud-config-slave.yml`.
```yaml
#cloud-config
---
write_files:
- path: /etc/default/docker
content: |
DOCKER_EXTRA_OPTS='--insecure-registry="rdocker.example.com:5000"'
coreos:
units:
- name: 10-eno1.network
runtime: true
content: |
[Match]
Name=eno1
[Network]
DHCP=yes
- name: 20-nodhcp.network
runtime: true
content: |
[Match]
Name=en*
[Network]
DHCP=none
- name: etcd.service
mask: true
- name: docker.service
drop-ins:
- name: 50-insecure-registry.conf
content: |
[Service]
Environment="HTTP_PROXY=http://rproxy.example.com:3128/" "NO_PROXY=localhost,127.0.0.0/8,rdocker.example.com"
- name: fleet.service
command: start
content: |
[Unit]
Description=fleet daemon
Wants=fleet.socket
After=fleet.socket
[Service]
Environment="FLEET_ETCD_SERVERS=http://<MASTER_SERVER_IP>:4001"
Environment="FLEET_METADATA=role=node"
ExecStart=/usr/bin/fleetd
Restart=always
RestartSec=10s
- name: flannel.service
command: start
content: |
[Unit]
After=network-online.target
Wants=network-online.target
Description=flannel is an etcd backed overlay network for containers
[Service]
Type=notify
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/flanneld
ExecStartPre=/usr/bin/chmod +x /opt/bin/flanneld
ExecStart=/opt/bin/flanneld -etcd-endpoints http://<MASTER_SERVER_IP>:4001
- name: docker.service
command: start
content: |
[Unit]
After=flannel.service
Wants=flannel.service
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
[Service]
EnvironmentFile=-/etc/default/docker
EnvironmentFile=/run/flannel/subnet.env
ExecStartPre=/bin/mount --make-rprivate /
ExecStart=/usr/bin/docker -d --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} -s=overlay -H fd:// ${DOCKER_EXTRA_OPTS}
[Install]
WantedBy=multi-user.target
- name: setup-network-environment.service
command: start
content: |
[Unit]
Description=Setup Network Environment
Documentation=https://github.com/kelseyhightower/setup-network-environment
Requires=network-online.target
After=network-online.target
[Service]
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/setup-network-environment
ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
ExecStart=/opt/bin/setup-network-environment
RemainAfterExit=yes
Type=oneshot
- name: kube-proxy.service
command: start
content: |
[Unit]
Description=Kubernetes Proxy
Documentation=https://github.com/kubernetes/kubernetes
Requires=setup-network-environment.service
After=setup-network-environment.service
[Service]
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-proxy
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy
ExecStart=/opt/bin/kube-proxy \
--etcd-servers=http://<MASTER_SERVER_IP>:4001 \
--logtostderr=true
Restart=always
RestartSec=10
- name: kube-kubelet.service
command: start
content: |
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
Requires=setup-network-environment.service
After=setup-network-environment.service
[Service]
EnvironmentFile=/etc/network-environment
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kubelet
ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet
ExecStart=/opt/bin/kubelet \
--address=0.0.0.0 \
--port=10250 \
--hostname-override=${DEFAULT_IPV4} \
--api-servers=<MASTER_SERVER_IP>:8080 \
--healthz-bind-address=0.0.0.0 \
--healthz-port=10248 \
--logtostderr=true
Restart=always
RestartSec=10
update:
group: stable
reboot-strategy: off
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAAD...
Environment="HTTP_PROXY=http://rproxy.example.com:3128/" "NO_PROXY=localhost,127.0.0.0/8,rdocker.example.com"
- name: fleet.service
command: start
content: |
[Unit]
Description=fleet daemon
Wants=fleet.socket
After=fleet.socket
[Service]
Environment="FLEET_ETCD_SERVERS=http://<MASTER_SERVER_IP>:4001"
Environment="FLEET_METADATA=role=node"
ExecStart=/usr/bin/fleetd
Restart=always
RestartSec=10s
- name: flannel.service
command: start
content: |
[Unit]
After=network-online.target
Wants=network-online.target
Description=flannel is an etcd backed overlay network for containers
[Service]
Type=notify
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/flanneld
ExecStartPre=/usr/bin/chmod +x /opt/bin/flanneld
ExecStart=/opt/bin/flanneld -etcd-endpoints http://<MASTER_SERVER_IP>:4001
- name: docker.service
command: start
content: |
[Unit]
After=flannel.service
Wants=flannel.service
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
[Service]
EnvironmentFile=-/etc/default/docker
EnvironmentFile=/run/flannel/subnet.env
ExecStartPre=/bin/mount --make-rprivate /
ExecStart=/usr/bin/docker -d --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} -s=overlay -H fd:// ${DOCKER_EXTRA_OPTS}
[Install]
WantedBy=multi-user.target
- name: setup-network-environment.service
command: start
content: |
[Unit]
Description=Setup Network Environment
Documentation=https://github.com/kelseyhightower/setup-network-environment
Requires=network-online.target
After=network-online.target
[Service]
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/setup-network-environment
ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
ExecStart=/opt/bin/setup-network-environment
RemainAfterExit=yes
Type=oneshot
- name: kube-proxy.service
command: start
content: |
[Unit]
Description=Kubernetes Proxy
Documentation=https://github.com/kubernetes/kubernetes
Requires=setup-network-environment.service
After=setup-network-environment.service
[Service]
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-proxy
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy
ExecStart=/opt/bin/kube-proxy \
--etcd-servers=http://<MASTER_SERVER_IP>:4001 \
--logtostderr=true
Restart=always
RestartSec=10
- name: kube-kubelet.service
command: start
content: |
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
Requires=setup-network-environment.service
After=setup-network-environment.service
[Service]
EnvironmentFile=/etc/network-environment
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kubelet
ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet
ExecStart=/opt/bin/kubelet \
--address=0.0.0.0 \
--port=10250 \
--hostname-override=${DEFAULT_IPV4} \
--api-servers=<MASTER_SERVER_IP>:8080 \
--healthz-bind-address=0.0.0.0 \
--healthz-port=10248 \
--logtostderr=true
Restart=always
RestartSec=10
update:
group: stable
reboot-strategy: off
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAAD...
```
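Before serving this file over PXE, it can help to confirm that no `<MASTER_SERVER_IP>` or `<PXE_SERVER_IP>` placeholders were left unreplaced. A self-contained sketch, using a throwaway file in place of the real cloud-config:

```shell
# Scan a cloud-config for unreplaced <...> placeholders; in practice,
# point "file" at /var/www/html/coreos/pxe-cloud-config-slave.yml.
file=$(mktemp)
cat > "$file" <<'EOF'
ExecStart=/opt/bin/flanneld -etcd-endpoints http://<MASTER_SERVER_IP>:4001
EOF
grep -c '<[A-Z_]*_IP>' "$file"   # → 1, i.e. one placeholder still present
```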
## New pxelinux.cfg file
Create a pxelinux target file for a _slave_ node: `vi /tftpboot/pxelinux.cfg/coreos-node-slave`
@ -621,7 +620,7 @@ Now that the CoreOS with Kubernetes installed is up and running lets spin up som
See [a simple nginx example](/{{page.version}}/docs/user-guide/simple-nginx) to try out your new cluster.
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/master/examples/).
## Helpful commands for debugging
View File
@ -21,16 +21,13 @@ Use the [master.yaml](cloud-configs/master.yaml) and [node.yaml](cloud-configs/n
#### Provision the Master
```shell
aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group"
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes
```
```shell
aws ec2 run-instances \
--image-id <ami_image_id> \
--key-name <keypair> \
@ -40,15 +37,12 @@ aws ec2 run-instances \
--user-data file://master.yaml
```
#### Capture the private IP address
```shell
aws ec2 describe-instances --instance-id <master-instance-id>
```
#### Edit node.yaml
Edit `node.yaml` and replace all instances of `<master-private-ip>` with the private IP address of the master node.
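The substitution can also be scripted with `sed`; a minimal sketch (the address is an example value — substitute the private IP captured in the previous step):

```shell
# Replace every <master-private-ip> placeholder in node.yaml in place,
# keeping a .bak backup. 10.0.0.5 is an example address.
MASTER_PRIVATE_IP=10.0.0.5
sed -i.bak "s/<master-private-ip>/${MASTER_PRIVATE_IP}/g" node.yaml
```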
@ -56,7 +50,6 @@ Edit `node.yaml` and replace all instances of `<master-private-ip>` with the pri
#### Provision worker nodes
```shell
aws ec2 run-instances \
--count 1 \
--image-id <ami_image_id> \
@ -67,7 +60,6 @@ aws ec2 run-instances \
--user-data file://node.yaml
```
### Google Compute Engine (GCE)
*Attention:* Replace `<gce_image_id>` below with a [suitable version of CoreOS image for Google Compute Engine](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/).
@ -75,7 +67,6 @@ aws ec2 run-instances \
#### Provision the Master
```shell
gcloud compute instances create master \
--image-project coreos-cloud \
--image <gce_image_id> \
@ -85,15 +76,12 @@ gcloud compute instances create master \
--metadata-from-file user-data=master.yaml
```
#### Capture the private IP address
```shell
gcloud compute instances list
```
#### Edit node.yaml
Edit `node.yaml` and replace all instances of `<master-private-ip>` with the private IP address of the master node.
@ -101,7 +89,6 @@ Edit `node.yaml` and replace all instances of `<master-private-ip>` with the pri
#### Provision worker nodes
```shell
gcloud compute instances create node1 \
--image-project coreos-cloud \
--image <gce_image_id> \
@ -111,7 +98,6 @@ gcloud compute instances create node1 \
--metadata-from-file user-data=node.yaml
```
#### Establish network connectivity
Next, set up an ssh tunnel to the master so you can run kubectl from your local host.
@ -128,7 +114,6 @@ These instructions were tested on the Ice House release on a Metacloud distribut
Make sure the OpenStack environment variables are set, such as:
```shell
OS_TENANT_ID
OS_PASSWORD
OS_AUTH_URL
OS_USERNAME
OS_TENANT_NAME
```
Test this works with something like:
```shell
nova list
```
#### Get a Suitable CoreOS Image
You'll need a [suitable version of the CoreOS image for OpenStack](https://coreos.com/os/docs/latest/booting-on-openstack).
Once you download that, upload it to glance. An example is shown below:
```shell
glance image-create --name CoreOS723 \
--container-format bare --disk-format qcow2 \
--file coreos_production_openstack_image.img \
--is-public True
```
#### Create security group
```shell
nova secgroup-create kubernetes "Kubernetes Security Group"
nova secgroup-add-rule kubernetes tcp 22 22 0.0.0.0/0
nova secgroup-add-rule kubernetes tcp 80 80 0.0.0.0/0
```
#### Provision the Master
```shell
nova boot \
--image <image_name> \
--key-name <my_key> \
@ -182,39 +159,42 @@ nova boot \
kube-master
```
```<image_name>``` is the CoreOS image name. In our example we can use the image we created in the previous step and put in 'CoreOS723'
```<my_key>``` is the keypair name that you already generated to access the instance.
```<flavor_id>``` is the flavor ID you use to size the instance. Run ```nova flavor-list``` to get the IDs. 3 on the system this was tested with gives the m1.large size.
The important part is to ensure you have files/master.yml, as this is what does all the post-boot configuration. The path is relative, so in this example we assume you are running the nova command in a directory with a subdirectory called files that contains the master.yml file. Absolute paths also work.
Next, assign it a public IP address:
```shell
nova floating-ip-list
```
Get an IP address that's free and run:
```shell
nova floating-ip-associate kube-master <ip address>
```
where ```<ip address>``` is the IP address that was available from the ```nova floating-ip-list``` command.
#### Provision Worker Nodes
Edit ```node.yaml``` and replace all instances of ```<master-private-ip>``` with the private IP address of the master node. You can get this by running ```nova show kube-master``` assuming you named your instance kube master. This is not the floating IP address you just assigned it.
```shell
nova boot \
--image <image_name> \
--key-name <my_key> \
@ -224,7 +204,6 @@ nova boot \
minion01
```
This is basically the same as the master nodes but with the node.yaml post-boot script instead of the master.
View File
@ -27,7 +27,7 @@ Explore the following resources for more information about Kubernetes, Kubernete
- [DCOS Documentation](https://docs.mesosphere.com/)
- [Managing DCOS Services](https://docs.mesosphere.com/services/kubernetes/)
- [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/master/examples/README)
- [Kubernetes on Mesos Documentation](https://releases.k8s.io/release-1.1/contrib/mesos/README.md)
- [Kubernetes on Mesos Release Notes](https://github.com/mesosphere/kubernetes-mesos/releases)
- [Kubernetes on DCOS Package Source](https://github.com/mesosphere/kubernetes-mesos)
@ -45,73 +45,66 @@ Explore the following resources for more information about Kubernetes, Kubernete
1. Configure and validate the [Mesosphere Multiverse](https://github.com/mesosphere/multiverse) as a package source repository
```shell
$ dcos config prepend package.sources https://github.com/mesosphere/multiverse/archive/version-1.x.zip
$ dcos package update --validate
```
2. Install etcd
By default, the Kubernetes DCOS package starts a single-node etcd. In order to avoid state loss in the event of Kubernetes component container failure, install an HA [etcd-mesos](https://github.com/mesosphere/etcd-mesos) cluster on DCOS.
```shell
$ dcos package install etcd
```
3. Verify that etcd is installed and healthy
The etcd cluster takes a short while to deploy. Verify that `/etcd` is healthy before going on to the next step.
```shell
$ dcos marathon app list
ID      MEM   CPUS  TASKS  HEALTH  DEPLOYMENT  CONTAINER  CMD
/etcd   128   0.2    1/1    1/1       ---       DOCKER    None
```
4. Create Kubernetes installation configuration
Configure Kubernetes to use the HA etcd installed on DCOS.
```shell
$ cat >/tmp/options.json <<EOF
{
  "kubernetes": {
    "etcd-mesos-framework-name": "etcd"
  }
}
EOF
```
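Before moving on to the install, a quick self-contained check that the options file landed as intended (the key name is the one used above):

```shell
# Write the options file as above, then verify the key we care about
# is present before running the install.
cat >/tmp/options.json <<EOF
{
  "kubernetes": {
    "etcd-mesos-framework-name": "etcd"
  }
}
EOF
grep -c '"etcd-mesos-framework-name": "etcd"' /tmp/options.json   # → 1
```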
5. Install Kubernetes
```shell
$ dcos package install --options=/tmp/options.json kubernetes
```
6. Verify that Kubernetes is installed and healthy
The Kubernetes cluster takes a short while to deploy. Verify that `/kubernetes` is healthy before going on to the next step.
```shell
$ dcos marathon app list
ID            MEM   CPUS  TASKS  HEALTH  DEPLOYMENT  CONTAINER  CMD
/etcd         128   0.2    1/1    1/1       ---       DOCKER    None
/kubernetes   768   1      1/1    1/1       ---       DOCKER    None
```
7. Verify that Kube-DNS & Kube-UI are deployed, running, and ready
```shell
$ dcos kubectl get pods --namespace=kube-system
NAME                 READY     STATUS    RESTARTS   AGE
kube-dns-v8-tjxk9    4/4       Running   0          1m
kube-ui-v2-tjq7b     1/1       Running   0          1m
```
Names and ages may vary.
Now that Kubernetes is installed on DCOS, you may wish to explore the [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/master/examples/) or the [Kubernetes User Guide](../user-guide/README).
## Uninstall
@ -120,19 +113,17 @@ Now that Kubernetes is installed on DCOS, you may wish to explore the [Kubernete
Before uninstalling Kubernetes, destroy all the pods and replication controllers. The uninstall process will try to do this itself, but by default it times out quickly and may leave your cluster in a dirty state.
```shell
$ dcos kubectl delete rc,pods --all --namespace=default
$ dcos kubectl delete rc,pods --all --namespace=kube-system
```
2. Validate that all pods have been deleted
```shell
$ dcos kubectl get pods --all-namespaces
```
3. Uninstall Kubernetes
```shell
$ dcos package uninstall kubernetes
```
View File
@ -39,10 +39,10 @@ it is still useful to use containers for deployment and management, so we create
You can specify the k8s version on every node before install:
```shell
export K8S_VERSION=<your_k8s_version (e.g. 1.0.3)>
```
Otherwise, we'll use the latest `hyperkube` image as the default k8s version.
## Master Node
@ -51,13 +51,13 @@ The first step in the process is to initialize the master node.
Clone the Kubernetes repo, and run [master.sh](/{{page.version}}/docs/getting-started-guides/docker-multinode/master.sh) on the master machine with root:
```shell
cd kubernetes/docs/getting-started-guides/docker-multinode/
./master.sh
```
`Master done!`
See [here](/{{page.version}}/docs/getting-started-guides/docker-multinode/master) for a detailed explanation of these instructions.
## Adding a worker node
@ -66,13 +66,12 @@ Once your master is up and running you can add one or more workers on different
Clone the Kubernetes repo, and run [worker.sh](/{{page.version}}/docs/getting-started-guides/docker-multinode/worker.sh) on the worker machine with root:
```shell
export MASTER_IP=<your_master_ip (e.g. 1.2.3.4)>
cd kubernetes/docs/getting-started-guides/docker-multinode/
./worker.sh
```
`Worker done!`
See [here](/{{page.version}}/docs/getting-started-guides/docker-multinode/worker) for a detailed explanation of these instructions.
@ -84,4 +83,4 @@ See [here](/{{page.version}}/docs/getting-started-guides/docker-multinode/deploy
Once your cluster has been created you can [test it out](/{{page.version}}/docs/getting-started-guides/docker-multinode/testing)
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/master/examples/)
View File
@ -14,7 +14,6 @@ First of all, download the template dns rc and svc file from
Then you need to set the `DNS_REPLICAS`, `DNS_DOMAIN`, `DNS_SERVER_IP`, and `KUBE_SERVER` environment variables.
```shell
$ export DNS_REPLICAS=1
$ export DNS_DOMAIN=cluster.local # specify in startup parameter `--cluster-domain` for containerized kubelet
@ -24,28 +23,23 @@ $ export DNS_SERVER_IP=10.0.0.10 # specify in startup parameter `--cluster-dns`
$ export KUBE_SERVER=10.10.103.250 # your master server ip, you may change it
```
### Replace the corresponding value in the template.
```shell
$ sed -e "s/{{ pillar\['dns_replicas'\] }}/${DNS_REPLICAS}/g;s/{{ pillar\['dns_domain'\] }}/${DNS_DOMAIN}/g;s/{kube_server_url}/${KUBE_SERVER}/g;" skydns-rc.yaml.in > ./skydns-rc.yaml
$ sed -e "s/{{ pillar\['dns_server'\] }}/${DNS_SERVER_IP}/g" skydns-svc.yaml.in > ./skydns-svc.yaml
```
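To see what the substitution does, here is a self-contained sketch of the same `sed` pattern applied to a one-line toy template (the real templates live alongside the rc/svc files):

```shell
# Toy template exercising the same BRE pattern as the commands above.
# DNS_REPLICAS is set here only so the sketch stands alone.
printf "replicas: {{ pillar['dns_replicas'] }}\n" > /tmp/toy-rc.yaml.in
export DNS_REPLICAS=1
sed -e "s/{{ pillar\['dns_replicas'\] }}/${DNS_REPLICAS}/g" /tmp/toy-rc.yaml.in
# → replicas: 1
```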
### Use `kubectl` to create skydns rc and service
```shell
$ kubectl -s "$KUBE_SERVER:8080" --namespace=kube-system create -f ./skydns-rc.yaml
$ kubectl -s "$KUBE_SERVER:8080" --namespace=kube-system create -f ./skydns-svc.yaml
```
### Test if DNS works
Follow [this link](https://releases.k8s.io/release-1.1/cluster/addons/dns#how-do-i-test-if-it-is-working) to check it out.
View File
@ -23,11 +23,9 @@ Docker containers themselves. To achieve this, we need a separate "bootstrap" i
Run:
```shell
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_Important Note_:
If you are running this on a long running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted
across reboots and failures.
@ -38,20 +36,15 @@ across reboots and failures.
Run:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
```
Next, you need to set a CIDR range for flannel. This CIDR should be chosen to be non-overlapping with any existing network you are using:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host gcr.io/google_containers/etcd:2.0.12 etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
```
### Set up Flannel on the master node
Flannel is a network abstraction layer built by CoreOS; we will use it to provide simplified networking between our pods of containers.
@ -65,19 +58,15 @@ To re-configure Docker to use flannel, we need to take docker down, run flannel
Turning down Docker is system-dependent; it may be:
```shell
sudo /etc/init.d/docker stop
```
or
```shell
sudo systemctl stop docker
```
or it may be something else.
#### Run flannel
@ -85,21 +74,17 @@ or it may be something else.
Now run flanneld itself:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.5.0
```
The previous command should have printed a really long hash; copy it.
Now get the subnet settings from flannel:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```
#### Edit the docker configuration
You now need to edit the docker configuration to activate new flags. Again, this is system specific.
@ -109,22 +94,18 @@ This may be in `/etc/default/docker` or `/etc/systemd/service/docker.service` or
Regardless, you need to add the following to the docker command line:
```shell
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```
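The two flags come straight from the `subnet.env` file flannel writes. A self-contained sketch of how they are derived (the values and the throwaway path are examples; the real file is `/run/flannel/subnet.env`):

```shell
# Example of what flannel writes to /run/flannel/subnet.env
# (illustrative values, throwaway path).
cat > /tmp/subnet.env <<'EOF'
FLANNEL_SUBNET=10.1.15.1/24
FLANNEL_MTU=1472
EOF
. /tmp/subnet.env
echo "--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
# → --bip=10.1.15.1/24 --mtu=1472
```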
#### Remove the existing Docker bridge
Docker creates a bridge named `docker0` by default. You need to remove this:
```shell
sudo /sbin/ifconfig docker0 down
sudo brctl delbr docker0
```
You may need to install the `bridge-utils` package for the `brctl` binary.
#### Restart Docker
@ -132,25 +113,20 @@ You may need to install the `bridge-utils` package for the `brctl` binary.
Again, this is system-dependent; it may be:
```shell
sudo /etc/init.d/docker start
```
it may be:
```shell
systemctl start docker
```
## Starting the Kubernetes Master
OK, now that your networking is set up, you can start up Kubernetes. This is the same as the single-node case; we will use the "main" instance of the Docker daemon for the Kubernetes components.
```shell
sudo docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
@ -165,17 +141,14 @@ sudo docker run \
gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube kubelet --api-servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable-server --hostname-override=127.0.0.1 --config=/etc/kubernetes/manifests-multi --cluster-dns=10.0.0.10 --cluster-domain=cluster.local
```
> Note that `--cluster-dns` and `--cluster-domain` are used to deploy DNS; feel free to omit them if DNS is not needed.
### Also run the service proxy
```shell
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
```
### Test it out
At this point, you should have a functioning 1-node cluster. Let's test it out!
@ -187,20 +160,16 @@ Download the kubectl binary and make it available by editing your PATH ENV.
List the nodes
```shell
kubectl get nodes
```
This should print:
```shell
NAME          LABELS                              STATUS
127.0.0.1     kubernetes.io/hostname=127.0.0.1    Ready
```
If the status of the node is `NotReady` or `Unknown`, please check that all of the containers you created are successfully running.
If all else fails, ask questions on [Slack](../../troubleshooting.html#slack).
View File
@ -4,65 +4,51 @@ title: "Testing your Kubernetes cluster."
To validate that your node(s) have been added, run:
```shell
kubectl get nodes
```
That should show something like:
```shell
NAME           LABELS                                 STATUS
10.240.99.26   kubernetes.io/hostname=10.240.99.26    Ready
127.0.0.1      kubernetes.io/hostname=127.0.0.1       Ready
```
If the status of any node is `Unknown` or `NotReady`, your cluster is broken; double-check that all containers are running properly, and if all else fails, contact us on [Slack](../../troubleshooting.html#slack).
### Run an application
```shell
kubectl -s http://localhost:8080 run nginx --image=nginx --port=80
```
Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to get pulled.
### Expose it as a service
```shell
kubectl expose rc nginx --port=80
```
Run the following command to obtain the IP of this service we just created. There are two IPs, the first one is internal (CLUSTER_IP), and the second one is the external load-balanced IP.
```shell
kubectl get svc nginx
```
Alternatively, you can obtain only the first IP (CLUSTER_IP) by running:
```shell
kubectl get svc nginx --template={{.spec.clusterIP}}
```
Hit the webserver with the first IP (CLUSTER_IP):
```shell
curl <insert-cluster-ip-here>
```
Note that you will need to run this curl command on your boot2docker VM if you are running on OS X.
### Scaling
@ -70,19 +56,15 @@ Note that you will need run this curl command on your boot2docker VM if you are
Now try to scale up the nginx you created before:
```shell
kubectl scale rc nginx --replicas=3
```
And list the pods
```shell
kubectl get pods
```
You should see pods landing on the newly added machine.
View File
@ -26,11 +26,9 @@ As previously, we need a second instance of the Docker daemon running to bootstr
Run:
```shell
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_Important Note_:
If you are running this on a long running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted
across reboots and failures.
@ -42,19 +40,15 @@ To re-configure Docker to use flannel, we need to take docker down, run flannel
Turning down Docker is system-dependent; it may be:
```shell
sudo /etc/init.d/docker stop
```
or
```shell
sudo systemctl stop docker
```
or it may be something else.
#### Run flannel
@ -62,22 +56,17 @@ or it may be something else.
Now run flanneld itself. This call is slightly different from the one above, since we point it at the etcd instance on the master.
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.5.0 /opt/bin/flanneld --etcd-endpoints=http://${MASTER_IP}:4001
```
The previous command should have printed a really long hash; copy it.
Now get the subnet settings from flannel:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```
#### Edit the docker configuration
You now need to edit the docker configuration to activate new flags. Again, this is system specific.
@ -87,22 +76,18 @@ This may be in `/etc/default/docker` or `/etc/systemd/service/docker.service` or
Regardless, you need to add the following to the docker command line:
```shell
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```
#### Remove the existing Docker bridge
Docker creates a bridge named `docker0` by default. You need to remove this:
```shell
sudo /sbin/ifconfig docker0 down
sudo brctl delbr docker0
```
You may need to install the `bridge-utils` package for the `brctl` binary.
#### Restart Docker
@ -110,19 +95,15 @@ You may need to install the `bridge-utils` package for the `brctl` binary.
Again, this is system-dependent; it may be:
```shell
sudo /etc/init.d/docker start
```
it may be:
```shell
systemctl start docker
```
### Start Kubernetes on the worker node
#### Run the kubelet
@ -130,7 +111,6 @@ systemctl start docker
Again this is similar to the above, but the `--api-servers` now points to the master we set up in the beginning.
```shell
sudo docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
@ -145,17 +125,14 @@ sudo docker run \
gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube kubelet --api-servers=http://${MASTER_IP}:8080 --v=2 --address=0.0.0.0 --enable-server --hostname-override=$(hostname -i) --cluster-dns=10.0.0.10 --cluster-domain=cluster.local
```
#### Run the service proxy
The service proxy provides load-balancing between groups of containers defined by Kubernetes `Services`.
```shell
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube proxy --master=http://${MASTER_IP}:8080 --v=2
```
### Next steps
Move on to [testing your cluster](testing) or [adding another node](#adding-a-kubernetes-worker-node-via-docker).
View File
@ -10,8 +10,6 @@ Here's a diagram of what the final result will look like:
![Kubernetes Single Node on Docker](/{{page.version}}/docs/getting-started-guides/k8s-singlenode-docker.png)
{% include pagetoc.html %}
## Prerequisites
@ -20,40 +18,39 @@ Here's a diagram of what the final result will look like:
2. Your kernel should support memory and swap accounting. Ensure that the
following configs are turned on in your linux kernel:
```shell
CONFIG_RESOURCE_COUNTERS=y
CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_SWAP_ENABLED=y
CONFIG_MEMCG_KMEM=y
```
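One way to check these options is to scan the kernel config, which is often available at `/boot/config-$(uname -r)` or `/proc/config.gz`. A self-contained sketch against a sample config file:

```shell
# Scan a saved kernel config for the required options. A throwaway
# sample file is used here; point "cfg" at /boot/config-$(uname -r)
# in practice.
cfg=/tmp/sample-kernel-config
cat > "$cfg" <<'EOF'
CONFIG_RESOURCE_COUNTERS=y
CONFIG_MEMCG=y
EOF
for opt in CONFIG_RESOURCE_COUNTERS CONFIG_MEMCG CONFIG_MEMCG_SWAP; do
  if grep -q "^${opt}=y" "$cfg"; then
    echo "${opt}: ok"
  else
    echo "${opt}: MISSING"
  fi
done
```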
3. Enable the memory and swap accounting in the kernel, at boot, as command line
parameters as follows:
```shell
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
```
NOTE: The above is specifically for GRUB2.
You can check the command line parameters passed to your kernel by looking at the
output of /proc/cmdline:
```shell
$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.18.4-aufs root=/dev/sda5 ro cgroup_enable=memory
swapaccount=1
```
### Step One: Run etcd
```shell
docker run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
```
### Step Two: Run the master
```shell
docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
@ -66,17 +63,17 @@ docker run \
--privileged=true \
-d \
gcr.io/google_containers/hyperkube:v1.0.1 \
/hyperkube kubelet --containerized --hostname-override="127.0.0.1" --address="0.0.0.0" --api-servers=http://localhost:8080 --config=/etc/kubernetes/manifests
```
This actually runs the kubelet, which in turn runs a [pod](../user-guide/pods) that contains the other master components.
### Step Three: Run the service proxy
```shell
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
```
### Test it out
At this point you should have a running Kubernetes cluster. You can test this by downloading the kubectl
@ -87,57 +84,57 @@ binary
*Note:*
On OS X you will need to set up port forwarding via ssh:
```shell
boot2docker ssh -L8080:localhost:8080
```
List the nodes in your cluster by running:
```shell
kubectl get nodes
```
This should print:
```shell
NAME LABELS STATUS
127.0.0.1 <none> Ready
```
If you are running different Kubernetes clusters, you may need to specify `-s http://localhost:8080` to select the local cluster.
### Run an application
```shell
kubectl -s http://localhost:8080 run nginx --image=nginx --port=80
```
Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to get pulled.
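Since the image pull can take a while, a small polling helper is handy. This is only a sketch (the helper name is hypothetical); the `docker ps | grep nginx` probe shown in the comment is the intended real-world use:

```shell
# Hypothetical retry helper: run a command until it succeeds or we give up.
retry() {
  attempts=$1; shift
  i=0
  until "$@"; do
    i=$((i+1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep 1
  done
}

# On a real host you would poll for the container, e.g.:
#   retry 30 sh -c 'docker ps | grep -q nginx'
retry 3 true && echo "command succeeded"
```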
### Expose it as a service
```shell
kubectl expose rc nginx --port=80
```
Run the following command to obtain the IP of the service we just created. There are two IPs: the first is internal (CLUSTER_IP), and the second is the external load-balanced IP.
```shell
kubectl get svc nginx
```
Alternatively, you can obtain only the first IP (CLUSTER_IP) by running:
```shell
kubectl get svc nginx --template={{.spec.clusterIP}}
```
Hit the webserver with the first IP (CLUSTER_IP):
```shell
curl <insert-cluster-ip-here>
```
Note that you will need to run this curl command on your boot2docker VM if you are running on OS X.
### A note on turning down your cluster
@ -45,12 +45,10 @@ so that all hosts in the cluster can hostname-resolve one another to this interf
all Kubernetes and Calico services will be running on it.**
```
echo "10.134.251.56 kube-master" >> /etc/hosts
echo "10.134.251.55 kube-node-1" >> /etc/hosts
```
>Make sure that communication works between kube-master and each kube-node by using a utility such as ping.
## Setup Master
@ -60,44 +58,35 @@ echo "10.134.251.55 kube-node-1" >> /etc/hosts
* Both Calico and Kubernetes use etcd as their datastore. We will run etcd on Master and point all Kubernetes and Calico services at it.
```
yum -y install etcd
```
* Edit `/etc/etcd/etcd.conf`
```
ETCD_LISTEN_CLIENT_URLS="http://kube-master:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://kube-master:4001"
```
### Install Kubernetes
* Run the following command on Master to install the latest Kubernetes (as well as docker):
```
yum -y install kubernetes
```
* Edit `/etc/kubernetes/config `
```
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://kube-master:8080"
```
* Edit `/etc/kubernetes/apiserver`
```
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
@ -107,21 +96,17 @@ KUBE_ETCD_SERVERS="--etcd-servers=http://kube-master:4001"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
```
* Create /var/run/kubernetes on master:
```
mkdir /var/run/kubernetes
chown kube:kube /var/run/kubernetes
chmod 750 /var/run/kubernetes
```
* Start the appropriate services on master:
```
for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICE
systemctl enable $SERVICE
@ -129,24 +114,20 @@ for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler; do
done
```
### Install Calico
Next, we'll launch Calico on Master to allow communication between Pods and any services running on the Master.
* Install calicoctl, the calico configuration tool.
```
wget https://github.com/Metaswitch/calico-docker/releases/download/v0.5.5/calicoctl
chmod +x ./calicoctl
sudo mv ./calicoctl /usr/bin
```
* Create `/etc/systemd/system/calico-node.service`
```
[Unit]
Description=calicoctl node
Requires=docker.service
@ -163,18 +144,15 @@ ExecStart=/usr/bin/calicoctl node --ip=10.134.251.56 --detach=false
WantedBy=multi-user.target
```
>Be sure to substitute `--ip=10.134.251.56` with your Master's eth1 IP Address.
* Start Calico
```
systemctl enable calico-node.service
systemctl start calico-node.service
```
>Starting calico for the first time may take a few minutes as the calico-node docker image is downloaded.
## Setup Node
@ -187,7 +165,6 @@ In order to set our own address range, we will create a new virtual interface ca
* Add a virtual interface by creating `/etc/sysconfig/network-scripts/ifcfg-cbr0`:
```
DEVICE=cbr0
TYPE=Bridge
IPADDR=192.168.1.1
@ -196,60 +173,48 @@ ONBOOT=yes
BOOTPROTO=static
```
>**Note for Multi-Node Clusters:** Each node should be assigned an IP address on a unique subnet. In this example, node-1 is using 192.168.1.1/24,
so node-2 should be assigned another pool on the 192.168.x.0/24 subnet, e.g. 192.168.2.1/24.
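The per-node numbering can be generated rather than chosen by hand. A sketch (the node indices and the 192.168.0.0/16 carve-up match this example, but are otherwise arbitrary):

```shell
# Assign each node index its own /24 out of 192.168.0.0/16 for cbr0.
for i in 1 2; do
  echo "node-$i: IPADDR=192.168.$i.1 NETMASK=255.255.255.0"
done
```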
* Ensure that your system has bridge-utils installed. Then, restart the networking daemon to activate the new interface.
```
systemctl restart network.service
```
### Install Docker
* Install Docker
```
yum -y install docker
```
* Configure docker to run on `cbr0` by editing `/etc/sysconfig/docker-network`:
```
DOCKER_NETWORK_OPTIONS="--bridge=cbr0 --iptables=false --ip-masq=false"
```
* Start docker
```
systemctl start docker
```
### Install Calico
* Install calicoctl, the calico configuration tool.
```
wget https://github.com/Metaswitch/calico-docker/releases/download/v0.5.5/calicoctl
chmod +x ./calicoctl
sudo mv ./calicoctl /usr/bin
```
* Create `/etc/systemd/system/calico-node.service`
```
[Unit]
Description=calicoctl node
Requires=docker.service
@ -266,18 +231,15 @@ ExecStart=/usr/bin/calicoctl node --ip=10.134.251.55 --detach=false --kubernetes
WantedBy=multi-user.target
```
> Note: You must replace the IP address with your node's eth1 IP Address!
* Start Calico
```
systemctl enable calico-node.service
systemctl start calico-node.service
```
* Configure the IP Address Pool
Most Kubernetes application deployments will require communication between Pods and the kube-apiserver on Master. On a standard Digital
@ -285,37 +247,30 @@ Ocean Private Network, requests sent from Pods to the kube-apiserver will not be
destined for any 192.168.0.0/16 address. To resolve this, you can have calicoctl add a masquerade rule to all outgoing traffic on the node:
```
ETCD_AUTHORITY=kube-master:4001 calicoctl pool add 192.168.0.0/16 --nat-outgoing
```
### Install Kubernetes
* First, install Kubernetes.
```
yum -y install kubernetes
```
* Edit `/etc/kubernetes/config`
```
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://kube-master:8080"
```
* Edit `/etc/kubernetes/kubelet`
We'll pass in an extra parameter - `--network-plugin=calico` to tell the Kubelet to use the Calico networking plugin. Additionally, we'll add two
environment variables that will be used by the Calico networking plugin.
```
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
@ -333,11 +288,9 @@ ETCD_AUTHORITY="kube-master:4001"
KUBE_API_ROOT="http://kube-master:8080/api/v1"
```
* Start Kubernetes on the node.
```
for SERVICE in kube-proxy kubelet; do
systemctl restart $SERVICE
systemctl enable $SERVICE
@ -345,18 +298,13 @@ for SERVICE in kube-proxy kubelet; do
done
```
## Check Running Cluster
The cluster should be running! Check that your nodes are reporting as such:
```
kubectl get nodes
NAME LABELS STATUS
kube-node-1 kubernetes.io/hostname=kube-node-1 Ready
```
@ -20,12 +20,11 @@ The hosts can be virtual or bare metal. Ansible will take care of the rest of th
A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a cluster with three hosts, for example:
```shell
master,etcd = kube-master.example.com
node1 = kube-node-01.example.com
node2 = kube-node-02.example.com
```
**Make sure your local machine has**
- ansible (must be 1.9.0+)
@ -34,22 +33,20 @@ A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a c
If not:
```shell
yum install -y ansible git python-netaddr
```
**Now clone down the Kubernetes repository**
```shell
git clone https://github.com/kubernetes/contrib.git
cd contrib/ansible
```
**Tell ansible about each machine and its role in your cluster**
Get the IP addresses from the master and nodes. Add those to the `~/contrib/ansible/inventory` file on the host running Ansible.
```shell
[masters]
kube-master.example.com
@ -59,8 +56,7 @@ kube-master.example.com
[nodes]
kube-node-01.example.com
kube-node-02.example.com
```
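If you manage many hosts, the inventory can be generated from a host list. A sketch — the `[etcd]` group name is an assumption based on the cluster layout described above; check `~/contrib/ansible/inventory` in your checkout for the groups it actually expects:

```shell
# Hypothetical inventory generator; it writes to a temp file rather than
# ~/contrib/ansible/inventory so it is safe to experiment with.
masters="kube-master.example.com"
nodes="kube-node-01.example.com kube-node-02.example.com"
inv=$(mktemp)
{
  echo "[masters]"
  for h in $masters; do echo "$h"; done
  echo ""
  echo "[etcd]"          # assumption: etcd runs on the master
  for h in $masters; do echo "$h"; done
  echo ""
  echo "[nodes]"
  for h in $nodes; do echo "$h"; done
} > "$inv"
cat "$inv"
```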
## Setting up ansible access to your nodes
If you are already running on a machine which has passwordless ssh access to the kube-master and kube-node-{01,02} nodes, and 'sudo' privileges, simply set the value of `ansible_ssh_user` in `~/contrib/ansible/group_vars/all.yaml` to the username which you use to ssh to the nodes (e.g. `fedora`), and proceed to the next step...
@ -69,28 +65,25 @@ If you already are running on a machine which has passwordless ssh access to the
edit: ~/contrib/ansible/group_vars/all.yml
```yaml
ansible_ssh_user: root
```
**Configuring ssh access to the cluster**
If you already have ssh access to every machine using ssh public keys you may skip to [setting up the cluster](#setting-up-the-cluster)
Make sure your local machine (root) has an ssh key pair. If not:
```shell
ssh-keygen
```
Copy the ssh public key to **all** nodes in the cluster
```shell
for node in kube-master.example.com kube-node-01.example.com kube-node-02.example.com; do
ssh-copy-id ${node}
done
```
## Setting up the cluster
The default values of the variables in `~/contrib/ansible/group_vars/all.yml` should be good enough; if not, change them as needed.
@ -101,18 +94,16 @@ edit: ~/contrib/ansible/group_vars/all.yml
Modify `source_type` as below to access kubernetes packages through the package manager.
```yaml
source_type: packageManager
```
**Configure the IP addresses used for services**
Each Kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment.
```yaml
kube_service_addresses: 10.254.0.0/16
```
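Whatever range you pick, it helps to sanity-check candidate IPs against it. A minimal sketch for the /16 case above (the helper name is hypothetical; a general CIDR test would need real address arithmetic):

```shell
# Crude membership test for 10.254.0.0/16: with a /16 prefix, matching the
# first two octets is sufficient.
in_service_range() {
  case "$1" in
    10.254.*) return 0 ;;
    *)        return 1 ;;
  esac
}

in_service_range 10.254.0.10 && echo "10.254.0.10 is in range"
in_service_range 10.0.0.1    || echo "10.0.0.1 is outside the range"
```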
**Managing flannel**
Modify `flannel_subnet`, `flannel_prefix` and `flannel_host_prefix` only if defaults are not appropriate for your cluster.
@ -122,32 +113,28 @@ Modify `flannel_subnet`, `flannel_prefix` and `flannel_host_prefix` only if defa
Set `cluster_logging` to false or true (default) to disable or enable logging with elasticsearch.
```yaml
cluster_logging: true
```
Turn `cluster_monitoring` to true (default) or false to enable or disable cluster monitoring with heapster and influxdb.
```yaml
cluster_monitoring: true
```
Turn `dns_setup` to true (recommended) or false to enable or disable whole DNS configuration.
```yaml
dns_setup: true
```
**Tell ansible to get to work!**
This will finally setup your whole Kubernetes cluster for you.
```shell
cd ~/contrib/ansible/
./setup.sh
```
## Testing and using your new cluster
That's all there is to it. It's really that easy. At this point you should have a functioning Kubernetes cluster.
@ -156,25 +143,22 @@ That's all there is to it. It's really that easy. At this point you should hav
Run the following on the kube-master:
```shell
kubectl get nodes
```
**Show services running on masters and nodes**
```shell
systemctl | grep -i kube
```
**Show firewall rules on the masters and nodes**
```shell
iptables -nvL
```
**Create /tmp/apache.json on the master with the following contents and deploy pod**
```json
{
"kind": "Pod",
"apiVersion": "v1",
@ -199,29 +183,24 @@ iptables -nvL
]
}
}
```
```shell
kubectl create -f /tmp/apache.json
```
**Check where the pod was created**
```shell
kubectl get pods
```
**Check Docker status on nodes**
```shell
docker ps
docker images
```
**After the pod is 'Running', check web server access on the node**
```shell
curl http://localhost
```
That's it!
@ -24,13 +24,11 @@ that _etcd_ and Kubernetes master run on the same host). The remaining host, fe
Hosts:
```
fed-master = 192.168.121.9
fed-node = 192.168.121.65
```
**Prepare the hosts:**
* Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master.
@ -43,33 +41,26 @@ fed-node = 192.168.121.65
Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum
install command below.
```shell
yum -y install --enablerepo=updates-testing kubernetes
```
* Install etcd and iptables
```shell
yum -y install etcd iptables
```
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping.
```shell
echo "192.168.121.9 fed-master
192.168.121.65 fed-node" >> /etc/hosts
```
* Edit /etc/kubernetes/config which will be the same on all hosts (master and node) to contain:
```shell
# Comma separated list of nodes in the etcd cluster
KUBE_MASTER="--master=http://fed-master:8080"
@ -82,24 +73,20 @@ KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
```
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers. Please note that iptables-services does not exist on default fedora server install.
```shell
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
```
**Configure the Kubernetes services on the master.**
* Edit /etc/kubernetes/apiserver to appear as such. The service-cluster-ip-range IP addresses must be an unused block of addresses, not used anywhere else.
They do not need to be routed or assigned to anything.
```shell
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
@ -112,44 +99,36 @@ KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
```
* Edit /etc/etcd/etcd.conf so that etcd listens on all IP addresses instead of only 127.0.0.1; otherwise, you will get "connection refused" errors. Note that Fedora 22 uses etcd 2.0, which now uses ports 2379 and 2380 (as opposed to etcd 0.4.6, which used 4001 and 7001).
```shell
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
```
* Create /var/run/kubernetes on master:
```shell
mkdir /var/run/kubernetes
chown kube:kube /var/run/kubernetes
chmod 750 /var/run/kubernetes
```
* Start the appropriate services on master:
```shell
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
* Addition of nodes:
* Create the following node.json file on the Kubernetes master node:
```json
{
"apiVersion": "v1",
"kind": "Node",
@ -162,20 +141,17 @@ done
}
}
```
Now create a node object internally in your Kubernetes cluster by running:
```shell
$ kubectl create -f ./node.json
$ kubectl get nodes
NAME LABELS STATUS
fed-node name=fed-node-label Unknown
```
Please note that in the above, it only creates a representation for the node
_fed-node_ internally. It does not provision the actual _fed-node_. Also, it
is assumed that _fed-node_ (as specified in `name`) can be resolved and is
@ -188,8 +164,7 @@ a Kubernetes node (fed-node) below.
* Edit /etc/kubernetes/kubelet to appear as such:
```shell
###
# Kubernetes kubelet (node) config
@ -205,40 +180,33 @@ KUBELET_API_SERVER="--api-servers=http://fed-master:8080"
# Add your own!
#KUBELET_ARGS=""
```
* Start the appropriate services on the node (fed-node).
```shell
for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
```
* Check to make sure that the cluster can now see fed-node on fed-master, and that its status changes to _Ready_.
```shell
kubectl get nodes
NAME LABELS STATUS
fed-node name=fed-node-label Ready
```
* Deletion of nodes:
To delete _fed-node_ from your Kubernetes cluster, run the following on fed-master (please do not actually do this now; it is just for information):
```shell
kubectl delete -f ./node.json
```
*You should be finished!*
**The cluster should be running! Launch a test pod.**
@ -19,7 +19,7 @@ This document describes how to deploy Kubernetes on multiple hosts to set up a m
* Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose kernel based vxlan backend. The contents of the json are:
```json
{
"Network": "18.16.0.0/16",
"SubnetLen": 24,
@ -28,29 +28,26 @@ This document describes how to deploy Kubernetes on multiple hosts to set up a m
"VNI": 1
}
}
```
**NOTE:** Choose an IP range that is *NOT* part of the public IP address range.
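With `Network` set to a /16 and `SubnetLen` set to 24, flannel can carve out 2^(24-16) = 256 distinct /24 subnets, one per node. The arithmetic, as a quick sketch:

```shell
# Number of per-node subnets available given the flannel config above.
network_prefix=16   # from "Network": "18.16.0.0/16"
subnet_len=24       # from "SubnetLen": 24
echo $(( 1 << (subnet_len - network_prefix) ))   # prints 256
```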
* Add the configuration to the etcd server on fed-master.
```shell
etcdctl set /coreos.com/network/config < flannel-config.json
```
* Verify the key exists in the etcd server on fed-master.
```shell
etcdctl get /coreos.com/network/config
```
## Node Setup
**Perform following commands on all Kubernetes nodes**
* Edit the flannel configuration file /etc/sysconfig/flanneld as follows:
```shell
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
@ -62,52 +59,46 @@ FLANNEL_ETCD_KEY="/coreos.com/network"
# Any additional options that you want to pass
FLANNEL_OPTIONS=""
```
**Note:** By default, flannel uses the interface for the default route. If you have multiple interfaces and would like to use one other than the default-route interface, you could add "-iface=" to FLANNEL_OPTIONS. For additional options, run `flanneld --help` on the command line.
* Enable the flannel service.
```shell
systemctl enable flanneld
```
* If docker is not running, starting the flannel service is enough; skip the next step.
```shell
systemctl start flanneld
```
* If docker is already running, then stop docker, delete docker bridge (docker0), start flanneld and restart docker as follows. Another alternative is to just reboot the system (`systemctl reboot`).
```shell
systemctl stop docker
ip link delete docker0
systemctl start flanneld
systemctl start docker
```
***
## **Test the cluster and flannel configuration**
* Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the ip addresses of docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each Kubernetes node out of the IP range configured above. A working output should look like this:
```shell
# ip -4 a|grep inet
inet 127.0.0.1/8 scope host lo
inet 192.168.122.77/24 brd 192.168.122.255 scope global dynamic eth0
inet 18.16.29.0/16 scope global flannel.1
inet 18.16.29.1/24 scope global docker0
```
* From any node in the cluster, check the cluster members by issuing a query to etcd server via curl (only partial output is shown using `grep -E "\{|\}|key|value"`). If you set up a 1 master and 3 nodes cluster, you should see one block for each node showing the subnets they have been assigned. You can associate those subnets to each node by the MAC address (VtepMAC) and IP address (Public IP) that is listed in the output.
```shell
curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool
```
```json
{
"node": {
"key": "/coreos.com/network/subnets",
@ -125,54 +116,47 @@ curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjso
}
}
}
```
* From all nodes, review the `/run/flannel/subnet.env` file. This file was generated automatically by flannel.
```shell
# cat /run/flannel/subnet.env
FLANNEL_SUBNET=18.16.29.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
```
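The values in `subnet.env` are meant to be consumed by other services. A self-contained sketch of reading them — the file is created locally here so the example runs anywhere; on a node you would source `/run/flannel/subnet.env` directly, and the mapping to docker's `--bip`/`--mtu` options is stated as an assumption about how these values are typically used:

```shell
# Self-contained stand-in for /run/flannel/subnet.env.
env_file=$(mktemp)
cat > "$env_file" <<'EOF'
FLANNEL_SUBNET=18.16.29.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
EOF

# Source it and derive docker daemon options from the values.
. "$env_file"
echo "docker options: --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
```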
* At this point, we have etcd running on the Kubernetes master, and flannel / docker running on Kubernetes nodes. Next steps are for testing cross-host container communication which will confirm that docker and flannel are configured properly.
* Issue the following commands on any 2 nodes:
```shell
# docker run -it fedora:latest bash
bash-4.3#
```
* This will place you inside the container. Install the iproute and iputils packages to get the ip and ping utilities. Due to a [bug](https://bugzilla.redhat.com/show_bug.cgi?id=1142311), you must modify the capabilities of the ping binary to work around the "Operation not permitted" error.
```shell
bash-4.3# yum -y install iproute iputils
bash-4.3# setcap cap_net_raw-ep /usr/bin/ping
```
* Now note the IP address on the first node:
```shell
bash-4.3# ip -4 a l eth0 | grep inet
inet 18.16.29.4/24 scope global eth0
```
* And also note the IP address on the other node:
```shell
bash-4.3# ip a l eth0 | grep inet
inet 18.16.90.4/24 scope global eth0
```
* Now ping from the first node to the other node:
```shell
bash-4.3# ping 18.16.90.4
PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data.
64 bytes from 18.16.90.4: icmp_seq=1 ttl=62 time=0.275 ms
64 bytes from 18.16.90.4: icmp_seq=2 ttl=62 time=0.372 ms
```
* The Kubernetes multi-node cluster is now set up, with overlay networking provided by flannel.
@ -4,8 +4,6 @@ title: "Getting started on Google Compute Engine"
The example below creates a Kubernetes cluster with 4 worker node Virtual Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient).
{% include pagetoc.html %}
### Before you start
@ -29,16 +27,16 @@ If you want to use custom binaries or pure open source Kubernetes, please contin
You can install a client and start a cluster with either one of these commands (we list both in case only one is installed on your machine):
```shell
curl -sS https://get.k8s.io | bash
```
or
```shell
wget -q -O - https://get.k8s.io | bash
```
Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster.
By default, some containers will already be running on your cluster. Containers like `kibana` and `elasticsearch` provide [logging](logging), while `heapster` provides [monitoring](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/README.md) services.
@ -47,11 +45,11 @@ The script run by the commands above creates a cluster with the name/prefix "kub
Alternately, you can download and install the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases), then run the `<kubernetes>/cluster/kube-up.sh` script to start the cluster:
```shell
cd kubernetes
cluster/kube-up.sh
```
If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `<kubernetes>/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.
If you run into trouble, please see the section on [troubleshooting](gce.html#troubleshooting), post to the
@ -74,14 +72,13 @@ You will use it to look at your new cluster and bring up example apps.
Add the appropriate binary folder to your `PATH` to access kubectl:
```shell
# OS X
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
# Linux
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
```
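Rather than remembering which folder applies, the platform can be selected from `uname`. A sketch assuming the standard release layout shown above (the `KUBE_DIR` placeholder must be adjusted for your checkout):

```shell
# Pick the platform folder for kubectl based on the current OS.
KUBE_DIR="<path/to/kubernetes-directory>"
case "$(uname -s)" in
  Darwin) PLATFORM="darwin/amd64" ;;
  Linux)  PLATFORM="linux/amd64" ;;
  *)      PLATFORM="unknown" ;;
esac
echo "export PATH=${KUBE_DIR}/platforms/${PLATFORM}:\$PATH"
```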
**Note**: gcloud also ships with `kubectl`, which by default is added to your path.
However the gcloud bundled kubectl version may be older than the one downloaded by the
get.k8s.io install script. We recommend you use the downloaded binary to avoid
@ -91,18 +88,18 @@ potential issues with client/server version skew.
You may find it useful to enable `kubectl` bash completion:
```
$ source ./contrib/completions/bash/kubectl
```
**Note**: This will last for the duration of your bash session. If you want to make it permanent, add this line to your bash profile.
Alternatively, on most linux distributions you can also move the completions file to your bash_completions.d like this:
```
$ cp ./contrib/completions/bash/kubectl /etc/bash_completion.d/
```
but then you have to update it when you update kubectl.
### Getting started with your cluster
@ -111,32 +108,32 @@ but then you have to update it when you update kubectl.
Once `kubectl` is in your path, you can use it to look at your cluster. E.g., running:
```shell
$ kubectl get --all-namespaces services
```
should show a set of [services](../user-guide/services) that look something like this:
```shell
NAMESPACE NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
default kubernetes 10.0.0.1 <none> 443/TCP <none> 1d
kube-system kube-dns 10.0.0.2 <none> 53/TCP,53/UDP k8s-app=kube-dns 1d
kube-system kube-ui 10.0.0.3 <none> 80/TCP k8s-app=kube-ui 1d
...
```
Similarly, you can take a look at the set of [pods](../user-guide/pods) that were created during cluster startup.
You can do this via the
```shell
$ kubectl get --all-namespaces pods
```
command.
You'll see a list of pods that looks something like this (the name specifics will be different):
```shell
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system fluentd-cloud-logging-kubernetes-minion-63uo 1/1 Running 0 14m
kube-system fluentd-cloud-logging-kubernetes-minion-c1n9 1/1 Running 0 14m
@ -145,26 +142,26 @@ kube-system fluentd-cloud-logging-kubernetes-minion-ngua 1/1 Running
kube-system kube-dns-v5-7ztia 3/3 Running 0 15m
kube-system kube-ui-v1-curt1 1/1 Running 0 15m
kube-system monitoring-heapster-v5-ex4u3 1/1 Running 1 15m
kube-system monitoring-influx-grafana-v1-piled 2/2 Running 0 15m
```
Some of the pods may take a few seconds to start up (during this time they'll show `Pending`), but check that they all show as `Running` after a short period.
#### Run some examples
Then, see [a simple nginx example](/{{page.version}}/docs/user-guide/simple-nginx) to try out your new cluster.
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/master/examples/). The [guestbook example](https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook/) is a good "getting started" walkthrough.
### Tearing down the cluster
To remove/delete/teardown the cluster, use the `kube-down.sh` script.
```shell
cd kubernetes
cluster/kube-down.sh
```
Likewise, the `kube-up.sh` in the same directory will bring it back up. You do not need to rerun the `curl` or `wget` command: everything needed to setup the Kubernetes cluster is now on your workstation.
### Customizing
@ -6,8 +6,6 @@ Kubernetes by provisioning, installing and configuring all the systems in
the cluster. Once deployed the cluster can easily scale up with one command
to increase the cluster size.
{% include pagetoc.html %}
## Prerequisites
@ -20,26 +18,31 @@ to increase the cluster size.
[Install the Juju client](https://jujucharms.com/get-started) on your
local Ubuntu system:
```shell
sudo add-apt-repository ppa:juju/stable
sudo apt-get update
sudo apt-get install juju-core juju-quickstart
```
### With Docker
If you are not using Ubuntu or prefer the isolation of Docker, you may
run the following:
```shell
mkdir ~/.juju
sudo docker run -v ~/.juju:/home/ubuntu/.juju -ti jujusolutions/jujubox:latest
```
At this point from either path you will have access to the `juju
quickstart` command.
To set up the credentials for your chosen cloud run:
```shell
juju quickstart --constraints="mem=3.75G" -i
```
> The `constraints` flag is optional, it changes the size of virtual machines
> that Juju will generate when it requests a new machine. Larger machines
@ -54,9 +57,11 @@ interface.
You will need to export the `KUBERNETES_PROVIDER` environment variable before
bringing up the cluster.
```shell
export KUBERNETES_PROVIDER=juju
cluster/kube-up.sh
```
If this is your first time running the `kube-up.sh` script, it will install
the required dependencies to get started with Juju, additionally it will
@ -71,22 +76,25 @@ communicate with each other.
## Exploring the cluster
The `juju status` command provides information about each unit in the cluster:
```shell
$ juju status --format=oneline
- docker/0: 52.4.92.78 (started)
- flannel-docker/0: 52.4.92.78 (started)
- kubernetes/0: 52.4.92.78 (started)
- docker/1: 52.6.104.142 (started)
- flannel-docker/1: 52.6.104.142 (started)
- kubernetes/1: 52.6.104.142 (started)
- etcd/0: 52.5.216.210 (started) 4001/tcp
- juju-gui/0: 52.5.205.174 (started) 80/tcp, 443/tcp
- kubernetes-master/0: 52.6.19.238 (started) 8080/tcp
```
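The oneline format is easy to post-process: the unit's public address is the third space-separated field, which is the same `cut` trick used later in this guide. A sketch against a sample line standing in for live output:

```shell
# Extract a unit's public address from one line of `juju status --format=oneline`.
# The sample line stands in for live output.
line="- kubernetes-master/0: 52.6.19.238 (started) 8080/tcp"
ip=$(echo "$line" | cut -d' ' -f3)
echo "$ip"   # prints 52.6.19.238
```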
You can use `juju ssh` to access any of the units:
```shell
juju ssh kubernetes-master/0
```
## Run some containers!
@ -95,48 +103,52 @@ launch some containers, but one could use `kubectl` locally by setting
`KUBERNETES_MASTER` to point at the ip address of "kubernetes-master/0".
No pods will be available before starting a container:
```shell
kubectl get pods
NAME READY STATUS RESTARTS AGE
kubectl get replicationcontrollers
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
```
We'll follow the aws-coreos example. Create a pod manifest: `pod.json`
```json
```json
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"name": "hello",
"labels": {
"name": "hello",
"environment": "testing"
}
"name": "hello",
"labels": {
"name": "hello",
"environment": "testing"
}
},
"spec": {
"containers": [{
"name": "hello",
"image": "quay.io/kelseyhightower/hello",
"ports": [{
"containerPort": 80,
"hostPort": 80
}]
}]
"containers": [{
"name": "hello",
"image": "quay.io/kelseyhightower/hello",
"ports": [{
"containerPort": 80,
"hostPort": 80
}]
}]
}
}
}
```
Create the pod with kubectl:
```shell
kubectl create -f pod.json
```
Get info on the pod:
```shell
kubectl get pods
```
To test the hello app, we need to locate which node is hosting
the container. Better tooling for using Juju to introspect container
@ -144,51 +156,63 @@ is in the works but we can use `juju run` and `juju status` to find
our hello app.
Exit out of our ssh session and run:
```shell
juju run --unit kubernetes/0 "docker ps -n=1"
...
juju run --unit kubernetes/1 "docker ps -n=1"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
02beb61339d8 quay.io/kelseyhightower/hello:latest /hello About an hour ago Up About an hour k8s_hello....
```
We see "kubernetes/1" has our container, we can open port 80:
```shell
juju run --unit kubernetes/1 "open-port 80"
juju expose kubernetes
sudo apt-get install curl
curl $(juju status --format=oneline kubernetes/1 | cut -d' ' -f3)
```
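The `cut -d' ' -f3` in the curl command above works because the third space-separated field of a `juju status --format=oneline` line is the unit's public address; a minimal sketch using a sample line from the earlier status output:

```shell
# Sample status line copied from the `juju status --format=oneline` output above
line="- kubernetes/1: 52.6.104.142 (started)"
# The third space-separated field is the unit's public address
addr=$(echo "$line" | cut -d' ' -f3)
echo "$addr"
```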
Finally delete the pod:
```shell
juju ssh kubernetes-master/0
kubectl delete pods hello
```
## Scale out cluster
We can add node units like so:
```shell
juju add-unit docker # creates unit docker/2, kubernetes/2, docker-flannel/2
```
## Launch the "k8petstore" example app
The [k8petstore example](https://github.com/kubernetes/kubernetes/tree/master/examples/k8petstore/) is available as a
[juju action](https://jujucharms.com/docs/devel/actions).
```shell
juju action do kubernetes-master/0
```
> Note: this example includes curl statements to exercise the app, which
> automatically generates "petstore" transactions written to redis, and allows
> you to visualize the throughput in your browser.
## Tear down cluster
```shell
./kube-down.sh
```
or destroy your current Juju environment (using the `juju env` command):
```shell
juju destroy-environment --force `juju env`
```
## More Info
@ -205,16 +229,18 @@ github.com:
### Cloud compatibility
Juju runs natively against a variety of public cloud providers. Juju currently
works with:
- [Amazon Web Service](https://jujucharms.com/docs/stable/config-aws)
- [Windows Azure](https://jujucharms.com/docs/stable/config-azure)
- [DigitalOcean](https://jujucharms.com/docs/stable/config-digitalocean)
- [Google Compute Engine](https://jujucharms.com/docs/stable/config-gce)
- [HP Public Cloud](https://jujucharms.com/docs/stable/config-hpcloud)
- [Joyent](https://jujucharms.com/docs/stable/config-joyent)
- [LXC](https://jujucharms.com/docs/stable/config-LXC)
- Any [OpenStack](https://jujucharms.com/docs/stable/config-openstack) deployment
- [Vagrant](https://jujucharms.com/docs/stable/config-vagrant)
- [Vmware vSphere](https://jujucharms.com/docs/stable/config-vmware)
If you do not see your favorite cloud provider listed, many clouds can be
configured for [manual provisioning](https://jujucharms.com/docs/stable/config-manual).
@ -49,16 +49,19 @@ On the other hand, `libvirt-coreos` might be useful for people investigating low
You can test it with the following command:
```shell
virsh -c qemu:///system pool-list
```
If you see access-denied error messages, please read https://libvirt.org/acl.html and https://libvirt.org/aclpolkit.html.
In short, if your libvirt has been compiled with Polkit support (e.g. Arch, Fedora 21), you can create `/etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules` as follows to grant full access to libvirt to `$USER`
```shell
sudo /bin/sh -c "cat - > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules" << EOF
```
```conf
polkit.addRule(function(action, subject) {
if (action.id == "org.libvirt.unix.manage" &&
subject.user == "$USER") {
@ -67,20 +70,18 @@ polkit.addRule(function(action, subject) {
polkit.log("subject=" + subject);
}
});
EOF
```
If your libvirt has not been compiled with Polkit (ex: Ubuntu 14.04.1 LTS), check the permissions on the libvirt unix socket:
```shell
$ ls -l /var/run/libvirt/libvirt-sock
srwxrwx--- 1 root libvirtd 0 févr. 12 16:03 /var/run/libvirt/libvirt-sock
$ usermod -a -G libvirtd $USER
# $USER needs to logout/login to have the new group be taken into account
```
(Replace `$USER` with your login name)
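The substitution the parenthetical asks for can also be scripted; a sketch using `sed` (the login name `alice` is only an illustrative placeholder):

```shell
# One line of the polkit rule above, with the $USER placeholder substituted.
# "alice" is a stand-in; use your own login name.
rule='subject.user == "$USER"'
substituted=$(echo "$rule" | sed 's/\$USER/alice/')
echo "$substituted"
```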
@ -92,10 +93,10 @@ As we're using the `qemu:///system` instance of libvirt, qemu will run with a sp
If your `$HOME` is world readable, everything is fine. If your `$HOME` is private, `cluster/kube-up.sh` will fail with an error message like:
```shell
error: Cannot access storage file '$HOME/.../kubernetes/cluster/libvirt-coreos/libvirt_storage_pool/kubernetes_master.img' (as uid:99, gid:78): Permission denied
```
In order to fix that issue, you have several possibilities:
* set `POOL_PATH` inside `cluster/libvirt-coreos/config-default.sh` to a directory:
* backed by a filesystem with a lot of free disk space
@ -105,23 +106,23 @@ In order to fix that issue, you have several possibilities:
On Arch:
```shell
setfacl -m g:kvm:--x ~
```
### Setup
By default, the libvirt-coreos setup will create a single Kubernetes master and 3 Kubernetes nodes. Because the VM drives use Copy-on-Write and because of memory ballooning and KSM, there is a lot of resource over-allocation.
To start your local cluster, open a shell and run:
```shell
cd kubernetes
export KUBERNETES_PROVIDER=libvirt-coreos
cluster/kube-up.sh
```
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
The `NUM_MINIONS` environment variable may be set to specify the number of nodes to start. If it is not set, the number of nodes defaults to 3.
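Putting the two variables together, a minimal sketch of the environment you would set up before `cluster/kube-up.sh` (the script invocation itself is omitted here since it needs a working libvirt setup):

```shell
# The two variables described above, as exported before running kube-up.sh.
export KUBERNETES_PROVIDER=libvirt-coreos
export NUM_MINIONS=5   # defaults to 3 when unset
echo "provider=${KUBERNETES_PROVIDER} nodes=${NUM_MINIONS}"
```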
@ -133,26 +134,26 @@ The `KUBE_PUSH` environment variable may be set to specify which Kubernetes bina
You can check that your machines are there and running with:
```shell
$ virsh -c qemu:///system list
Id Name State
----------------------------------------------------
15 kubernetes_master running
16 kubernetes_minion-01 running
17 kubernetes_minion-02 running
18 kubernetes_minion-03 running
```
You can check that the Kubernetes cluster is working with:
```shell
$ kubectl get nodes
NAME LABELS STATUS
192.168.10.2 <none> Ready
192.168.10.3 <none> Ready
192.168.10.4 <none> Ready
```
The VMs are running [CoreOS](https://coreos.com/).
Your ssh keys have already been pushed to the VM. (It looks for ~/.ssh/id_*.pub)
The user to use to connect to the VM is `core`.
@ -161,110 +162,113 @@ The IPs to connect to the nodes are 192.168.10.2 and onwards.
Connect to `kubernetes_master`:
```shell
ssh core@192.168.10.1
```
Connect to `kubernetes_minion-01`:
```shell
ssh core@192.168.10.2
```
### Interacting with your Kubernetes cluster with the `kube-*` scripts.
All of the following commands assume you have set `KUBERNETES_PROVIDER` appropriately:
```shell
export KUBERNETES_PROVIDER=libvirt-coreos
```
Bring up a libvirt-CoreOS cluster of 5 nodes
```shell
NUM_MINIONS=5 cluster/kube-up.sh
```
Destroy the libvirt-CoreOS cluster
```shell
cluster/kube-down.sh
```
Update the libvirt-CoreOS cluster with a new Kubernetes release produced by `make release` or `make release-skip-tests`:
```shell
cluster/kube-push.sh
```
Update the libvirt-CoreOS cluster with the locally built Kubernetes binaries produced by `make`:
```shell
KUBE_PUSH=local cluster/kube-push.sh
```
Interact with the cluster
```shell
kubectl ...
```
### Troubleshooting
#### !!! Cannot find kubernetes-server-linux-amd64.tar.gz
Build the release tarballs:
```shell
make release
```
#### Can't find virsh in PATH, please fix and retry.
Install libvirt
On Arch:
```shell
pacman -S qemu libvirt
```
On Ubuntu 14.04.1:
```shell
aptitude install qemu-system-x86 libvirt-bin
```
On Fedora 21:
```shell
yum install qemu libvirt
```
#### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
Start the libvirt daemon
On Arch:
```shell
systemctl start libvirtd
```
On Ubuntu 14.04.1:
```shell
service libvirt-bin start
```
#### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied
Fix libvirt access permission (Remember to adapt `$USER`)
On Arch and Fedora 21:
```shell
cat > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules <<EOF
```
```conf
polkit.addRule(function(action, subject) {
if (action.id == "org.libvirt.unix.manage" &&
subject.user == "$USER") {
@ -273,15 +277,15 @@ polkit.addRule(function(action, subject) {
polkit.log("subject=" + subject);
}
});
EOF
```
On Ubuntu:
```shell
usermod -a -G libvirtd $USER
```
#### error: Out of memory initializing network (virsh net-create...)
Ensure libvirtd has been restarted since ebtables was installed.
@ -30,11 +30,11 @@ You need [go](https://golang.org/doc/install) at least 1.3+ in your path, please
In a separate tab of your terminal, run the following (since one needs sudo access to start/stop Kubernetes daemons, it is easier to run the entire script as root):
```shell
cd kubernetes
hack/local-up-cluster.sh
```
This will build and start a lightweight local cluster, consisting of a master
and a single node. Type Control-C to shut it down.
@ -48,13 +48,12 @@ Your cluster is running, and you want to start running containers!
You can now use any of the cluster/kubectl.sh commands to interact with your local setup.
```shell
cluster/kubectl.sh get pods
cluster/kubectl.sh get services
cluster/kubectl.sh get replicationcontrollers
cluster/kubectl.sh run my-nginx --image=nginx --replicas=2 --port=80
## begin wait for provision to complete, you can monitor the docker pull by opening a new terminal
sudo docker images
## you should see it pulling the nginx image, once the above command returns it
@ -66,10 +65,9 @@ cluster/kubectl.sh run my-nginx --image=nginx --replicas=2 --port=80
## introspect Kubernetes!
cluster/kubectl.sh get pods
cluster/kubectl.sh get services
cluster/kubectl.sh get replicationcontrollers
```
### Running a user defined pod
Note the difference between a [container](../user-guide/containers)
@ -78,10 +76,10 @@ However you cannot view the nginx start page on localhost. To verify that nginx
You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein:
```shell
cluster/kubectl.sh create -f docs/user-guide/pod.yaml
```
Congratulations!
### Troubleshooting
@ -104,12 +102,12 @@ You are running a single node setup. This has the limitation of only supporting
#### I changed Kubernetes code, how do I run it?
```shell
cd kubernetes
hack/build-go.sh
hack/local-up-cluster.sh
```
#### kubectl claims to start a container but `get pods` and `docker ps` don't show it.
One or more of the Kubernetes daemons might have crashed. Tail the logs of each in /tmp.
@ -9,9 +9,7 @@ alternative to Google Cloud Logging.
To use Elasticsearch and Kibana for cluster logging you should set the following environment variable as shown below:
```shell
KUBE_LOGGING_DESTINATION=elasticsearch
```
You should also ensure that `KUBE_ENABLE_NODE_LOGGING=true` (which is the default for the GCE platform).
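Both settings together, as they would be exported before creating the cluster (a sketch; the variable names are the ones used in this section):

```shell
# Export both logging variables before running the cluster creation scripts.
export KUBE_LOGGING_DESTINATION=elasticsearch
export KUBE_ENABLE_NODE_LOGGING=true   # already the default on GCE
echo "destination=${KUBE_LOGGING_DESTINATION} node-logging=${KUBE_ENABLE_NODE_LOGGING}"
```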
@ -20,7 +18,6 @@ Now when you create a cluster a message will indicate that the Fluentd node-leve
will target Elasticsearch:
```shell
$ cluster/kube-up.sh
...
Project: kubernetes-satnam
@ -38,14 +35,12 @@ NAME ZONE SIZE_GB TYPE STATUS
kubernetes-master-pd us-central1-b 20 pd-ssd READY
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/regions/us-central1/addresses/kubernetes-master-ip].
+++ Logging using Fluentd to elasticsearch
```
The node-level Fluentd collector pods, the Elasticsearch pods used to ingest cluster logs, and the Kibana
viewer pod should be running in the kube-system namespace soon after the cluster comes to life.
```shell
$ kubectl get pods --namespace=kube-system
NAME READY REASON RESTARTS AGE
elasticsearch-logging-v1-78nog 1/1 Running 0 2h
@ -58,7 +53,6 @@ kibana-logging-v1-bhpo8 1/1 Running 0 2h
kube-dns-v3-7r1l9 3/3 Running 0 2h
monitoring-heapster-v4-yl332 1/1 Running 1 2h
monitoring-influx-grafana-v1-o79xf 2/2 Running 0 2h
```
Here we see that for a four node cluster there is a `fluent-elasticsearch` pod running which gathers
@ -68,7 +62,6 @@ accessed via a Kubernetes service definition.
```shell
$ kubectl get services --namespace=kube-system
NAME LABELS SELECTOR IP(S) PORT(S)
elasticsearch-logging k8s-app=elasticsearch-logging,kubernetes.io/cluster-service=true,kubernetes.io/name=Elasticsearch k8s-app=elasticsearch-logging 10.0.222.57 9200/TCP
@ -79,13 +72,11 @@ kubernetes component=apiserver,provider=kubernetes
monitoring-grafana kubernetes.io/cluster-service=true,kubernetes.io/name=Grafana k8s-app=influxGrafana 10.0.167.139 80/TCP
monitoring-heapster kubernetes.io/cluster-service=true,kubernetes.io/name=Heapster k8s-app=heapster 10.0.208.221 80/TCP
monitoring-influxdb kubernetes.io/cluster-service=true,kubernetes.io/name=InfluxDB k8s-app=influxGrafana 10.0.188.57 8083/TCP
```
By default two Elasticsearch replicas are created and one Kibana replica is created.
```shell
$ kubectl get rc --namespace=kube-system
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
elasticsearch-logging-v1 elasticsearch-logging gcr.io/google_containers/elasticsearch:1.4 k8s-app=elasticsearch-logging,version=v1 2
@ -96,7 +87,6 @@ kube-dns-v3 etcd gcr.io/google_containers/
monitoring-heapster-v4 heapster gcr.io/google_containers/heapster:v0.14.3 k8s-app=heapster,version=v4 1
monitoring-influx-grafana-v1 influxdb gcr.io/google_containers/heapster_influxdb:v0.3 k8s-app=influxGrafana,version=v1 1
grafana gcr.io/google_containers/heapster_grafana:v0.7
```
The Elasticsearch and Kibana services are not directly exposed via a publicly reachable IP address. Instead,
@ -104,7 +94,6 @@ they can be accessed via the service proxy running at the master. The URLs for a
and Kibana via the service proxy can be found using the `kubectl cluster-info` command.
```shell
$ kubectl cluster-info
Kubernetes master is running at https://146.148.94.154
Elasticsearch is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
@ -114,14 +103,12 @@ KubeUI is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/
Grafana is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
Heapster is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster
InfluxDB is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
```
Before accessing the logs ingested into Elasticsearch using a browser and the service proxy URL we need to find out
the `admin` password for the cluster using `kubectl config view`.
```shell
$ kubectl config view
...
- name: kubernetes-satnam_kubernetes-basic-auth
@ -129,7 +116,6 @@ $ kubectl config view
password: 7GlspJ9Q43OnGIJO
username: admin
...
```
The first time you try to access the cluster from a browser a dialog box appears asking for the username and password.
@ -143,8 +129,10 @@ You can now type Elasticsearch queries directly into the browser. Alternatively
from your local machine using `curl` but first you need to know what your bearer token is:
```shell
$ kubectl config view --minify
```
```conf
apiVersion: v1
clusters:
- cluster:
@ -165,14 +153,15 @@ users:
client-certificate-data: REDACTED
client-key-data: REDACTED
token: JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp
```
Now you can issue requests to Elasticsearch:
```shell
$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/
```
```json
{
"status" : 200,
"name" : "Vance Astrovik",
@ -186,14 +175,15 @@ $ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insec
},
"tagline" : "You Know, for Search"
}
```
Note that you need the trailing slash at the end of the service proxy URL. Here is an example of a search:
```shell
$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_search?pretty=true
```
```json
{
"took" : 7,
"timed_out" : false,
@ -227,7 +217,6 @@ $ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insec
} ]
}
}
```
The Elasticsearch website contains information about [URI search queries](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-uri-request) which can be used to extract the required logs.
@ -244,10 +233,8 @@ Another way to access Elasticsearch and Kibana in the cluster is to use `kubectl
a local proxy to the remote master:
```shell
$ kubectl proxy
Starting to serve on localhost:8001
```
Now you can visit the URL [http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging](http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging) to contact Elasticsearch and [http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging](http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging) to access the Kibana viewer.
@ -7,7 +7,6 @@ Cluster level logging for Kubernetes allows us to collect logs which persist bey
logging and DNS resolution for names of Kubernetes services:
```shell
$ kubectl get pods --namespace=kube-system
NAME READY REASON RESTARTS AGE
fluentd-cloud-logging-kubernetes-minion-0f64 1/1 Running 0 32m
@ -16,22 +15,20 @@ fluentd-cloud-logging-kubernetes-minion-pk22 1/1 Running 0 31
fluentd-cloud-logging-kubernetes-minion-20ej 1/1 Running 0 31m
kube-dns-v3-pk22 3/3 Running 0 32m
monitoring-heapster-v1-20ej 0/1 Running 9 32m
```
Here is the same information in a picture which shows how the pods might be placed on specific nodes.
![Cluster](https://github.com/kubernetes/kubernetes/tree/master/examples/blog-logging/diagrams/cloud-logging.png)
This diagram shows four nodes created on a Google Compute Engine cluster with the name of each VM node on a purple background. The internal and public IPs of each node are shown on gray boxes and the pods running in each node are shown in green boxes. Each pod box shows the name of the pod and the namespace it runs in, the IP address of the pod and the images which are run as part of the pod's execution. Here we see that every node is running a fluentd-cloud-logging pod which is collecting the log output of the containers running on the same node and sending them to Google Cloud Logging. A pod which provides the
[cluster DNS service](../admin/dns) runs on one of the nodes and a pod which provides monitoring support runs on another node.
To help explain how cluster level logging works let's start off with a synthetic log generator pod specification [counter-pod.yaml](https://github.com/kubernetes/kubernetes/tree/master/examples/blog-logging/counter-pod.yaml):
<!-- BEGIN MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->
```yaml
apiVersion: v1
kind: Pod
metadata:
@ -42,42 +39,36 @@ spec:
image: ubuntu:14.04
args: [bash, -c,
'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
```
[Download example](https://github.com/kubernetes/kubernetes/tree/master/examples/blog-logging/counter-pod.yaml)
<!-- END MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->
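The container's counter loop can be tried locally; a finite sketch of the bash command from the manifest above (the in-pod version loops forever):

```shell
# Three iterations of the counter loop from the pod spec (bash syntax, as in
# the manifest); the pod itself runs indefinitely, one line per second.
out=$(for ((i = 0; i < 3; i++)); do echo "$i: $(date)"; done)
echo "$out"
```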
This pod specification has one container which runs a bash script when the container is born. This script simply writes out the value of a counter and the date once per second and runs indefinitely. Let's create the pod in the default
namespace.
```shell
$ kubectl create -f examples/blog-logging/counter-pod.yaml
pods/counter
```
We can observe the running pod:
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
counter 1/1 Running 0 5m
```
This step may take a few minutes while the ubuntu:14.04 image is downloaded, during which the pod status will be shown as `Pending`.
One of the nodes is now running the counter pod:
![Counter Pod](https://github.com/kubernetes/kubernetes/tree/master/examples/blog-logging/diagrams/27gf-counter.png)
When the pod status changes to `Running` we can use the kubectl logs command to view the output of this counter pod.
```shell
$ kubectl logs counter
0: Tue Jun 2 21:37:31 UTC 2015
1: Tue Jun 2 21:37:32 UTC 2015
@ -86,13 +77,11 @@ $ kubectl logs counter
4: Tue Jun 2 21:37:35 UTC 2015
5: Tue Jun 2 21:37:36 UTC 2015
...
```
This command fetches the log text from the Docker log file for the image that is running in this container. We can connect to the running container and observe the running counter bash script.
```shell
$ kubectl exec -i counter bash
ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
@ -100,31 +89,25 @@ root 1 0.0 0.0 17976 2888 ? Ss 00:02 0:00 bash -c for ((i
root 468 0.0 0.0 17968 2904 ? Ss 00:05 0:00 bash
root 479 0.0 0.0 4348 812 ? S 00:05 0:00 sleep 1
root 480 0.0 0.0 15572 2212 ? R 00:05 0:00 ps aux
```
What happens if for any reason the image in this pod is killed off and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines for the started container? Or will we lose the log lines from the original container's execution and only see the log lines for the new container? Let's find out. First let's stop the currently running counter.
```shell
$ kubectl stop pod counter
pods/counter
```
Now let's restart the counter.
```shell
$ kubectl create -f examples/blog-logging/counter-pod.yaml
pods/counter
```
Let's wait for the container to restart and get the log lines again.
```shell
$ kubectl logs counter
0: Tue Jun 2 21:51:40 UTC 2015
1: Tue Jun 2 21:51:41 UTC 2015
@ -135,7 +118,6 @@ $ kubectl logs counter
6: Tue Jun 2 21:51:46 UTC 2015
7: Tue Jun 2 21:51:47 UTC 2015
8: Tue Jun 2 21:51:48 UTC 2015
```
We've lost the log lines from the first invocation of the container in this pod! Ideally, we want to preserve all the log lines from each invocation of each container in the pod. Furthermore, even if the pod is restarted we would still like to preserve all the log lines that were ever emitted by the containers in the pod. But don't fear, this is the functionality provided by cluster level logging in Kubernetes. When a cluster is created, the standard output and standard error output of each container can be ingested using a [Fluentd](http://www.fluentd.org/) agent running on each node into either [Google Cloud Logging](https://cloud.google.com/logging/docs/) or into Elasticsearch and viewed with Kibana.
@ -147,7 +129,6 @@ This log collection pod has a specification which looks something like this:
<!-- BEGIN MUNGE: EXAMPLE ../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml -->
```yaml
apiVersion: v1
kind: Pod
metadata:
@ -178,7 +159,6 @@ spec:
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
```
[Download example](https://releases.k8s.io/release-1.1/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)
@ -202,11 +182,9 @@ Note the first container counted to 108 and then it was terminated. When the nex
We could query the ingested logs from BigQuery using the SQL query which reports the counter log lines showing the newest lines first:
```shell
SELECT metadata.timestamp, structPayload.log
FROM [mylogs.kubernetes_counter_default_count_20150611]
ORDER BY metadata.timestamp DESC
```
Here is some sample output:
@ -217,15 +195,12 @@ We could also fetch the logs from Google Cloud Storage buckets to our desktop or
```shell
$ gsutil -m cp -r gs://myproject/kubernetes.counter_default_count/2015/06/11 .
```
Now we can run queries over the ingested logs. The example below uses the [jq](http://stedolan.github.io/jq/) program to extract just the log lines.
```shell
$ cat 21\:00\:00_21\:59\:59_S0.json | jq '.structPayload.log'
"0: Thu Jun 11 21:39:38 UTC 2015\n"
"1: Thu Jun 11 21:39:39 UTC 2015\n"
@ -236,12 +211,8 @@ $ cat 21\:00\:00_21\:59\:59_S0.json | jq '.structPayload.log'
"6: Thu Jun 11 21:39:44 UTC 2015\n"
"7: Thu Jun 11 21:39:45 UTC 2015\n"
...
```
This page has touched briefly on the underlying mechanisms that support gathering cluster level logs on a Kubernetes deployment. The approach here only works for gathering the standard output and standard error output of the processes running in the pod's containers. To gather other logs that are stored in files one can use a sidecar container to gather the required files as described at the page [Collecting log files within containers with Fluentd](http://releases.k8s.io/release-1.1/contrib/logging/fluentd-sidecar-gcp/README.md) and sending them to the Google Cloud Logging service.
Some of the material in this section also appears in the blog article [Cluster Level Logging with Kubernetes](http://blog.kubernetes.io/2015/06/cluster-level-logging-with-kubernetes).
@ -185,7 +185,7 @@ host machine (mac).
To learn more about Pods, Volumes, Labels, Services, and Replication Controllers, start with the
[Kubernetes Walkthrough](../user-guide/walkthrough/).
To skip to a more advanced example, see the [Guestbook Example](https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook/)
1. Destroy cluster
@ -43,81 +43,81 @@ Further information is available in the Kubernetes on Mesos [contrib directory][
Log into the future Kubernetes *master node* over SSH, replacing the placeholder below with the correct IP address.
```shell
ssh jclouds@${ip_address_of_master_node}
```
Build Kubernetes-Mesos.
```shell
git clone https://github.com/kubernetes/kubernetes
cd kubernetes
export KUBERNETES_CONTRIB=mesos
make
```
Set some environment variables.
The internal IP address of the master may be obtained via `hostname -i`.
```shell
export KUBERNETES_MASTER_IP=$(hostname -i)
export KUBERNETES_MASTER=http://${KUBERNETES_MASTER_IP}:8888
```
Note that KUBERNETES_MASTER is used as the API endpoint. If you have an existing `~/.kube/config` that points to another endpoint, you need to add the option `--server=${KUBERNETES_MASTER}` to kubectl in later steps.
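For illustration, here is how that option would be composed (the IP address below is a placeholder; on the real master it comes from `hostname -i`):

```shell
# Placeholder address for illustration only; on the real master use:
#   export KUBERNETES_MASTER_IP=$(hostname -i)
export KUBERNETES_MASTER_IP=10.2.3.4
export KUBERNETES_MASTER=http://${KUBERNETES_MASTER_IP}:8888
# The flag to add when ~/.kube/config points at another endpoint:
cmd="kubectl --server=${KUBERNETES_MASTER} get pods"
echo "$cmd"
```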
### Deploy etcd
Start etcd and verify that it is running:
```shell
sudo docker run -d --hostname $(uname -n) --name etcd \
-p 4001:4001 -p 7001:7001 quay.io/coreos/etcd:v2.0.12 \
--listen-client-urls http://0.0.0.0:4001 \
--advertise-client-urls http://${KUBERNETES_MASTER_IP}:4001
```
```shell
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fd7bac9e2301 quay.io/coreos/etcd:v2.0.12 "/etcd" 5s ago Up 3s 2379/tcp, 2380/... etcd
```
It's also a good idea to ensure your etcd instance is reachable by testing it
```shell
curl -L http://${KUBERNETES_MASTER_IP}:4001/v2/keys/
```
If connectivity is OK, you will see an output of the available keys in etcd (if any).
### Start Kubernetes-Mesos Services
Update your PATH to more easily run the Kubernetes-Mesos binaries:
```shell
export PATH="$(pwd)/_output/local/go/bin:$PATH"
```
Identify your Mesos master: depending on your Mesos installation this is either a `host:port` like `mesos-master:5050` or a ZooKeeper URL like `zk://zookeeper:2181/mesos`.
In order to let Kubernetes survive Mesos master changes, the ZooKeeper URL is recommended for production environments.
```shell
export MESOS_MASTER=<host:port or zk:// url>
```
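The two accepted forms look like this (the hostnames are illustrative placeholders, not real endpoints):

```shell
# host:port form, for a single static Mesos master:
MESOS_MASTER=mesos-master:5050
# ZooKeeper URL form, recommended for production since it survives master changes:
MESOS_MASTER=zk://zookeeper:2181/mesos
echo "$MESOS_MASTER"
```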
Create a cloud config file `mesos-cloud.conf` in the current directory with the following contents:
```shell
$ cat <<EOF >mesos-cloud.conf
[mesos-cloud]
mesos-master = ${MESOS_MASTER}
EOF
```
Now start the kubernetes-mesos API server, controller manager, and scheduler on the master node:
```shell
$ km apiserver \
--address=${KUBERNETES_MASTER_IP} \
--etcd-servers=http://${KUBERNETES_MASTER_IP}:4001 \
@ -142,38 +142,38 @@ $ km scheduler \
--api-servers=${KUBERNETES_MASTER_IP}:8888 \
--cluster-dns=10.10.10.10 \
--cluster-domain=cluster.local \
--v=2 >scheduler.log 2>&1 &
```
Disown your background jobs so that they'll stay running if you log out.
```shell
disown -a
```
#### Validate KM Services
Add the appropriate binary folder to your `PATH` to access kubectl:
```shell
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
```
Interact with the kubernetes-mesos framework via `kubectl`:
```shell
$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
```
```shell
# NOTE: your service IPs will likely differ
$ kubectl get services
NAME             LABELS                                    SELECTOR   IP(S)          PORT(S)
k8sm-scheduler   component=scheduler,provider=k8sm        <none>     10.10.10.113   10251/TCP
kubernetes       component=apiserver,provider=kubernetes  <none>     10.10.10.1     443/TCP
```
Lastly, look for Kubernetes in the Mesos web GUI by pointing your browser to
`http://<mesos-master-ip:port>`. Make sure you have an active VPN connection.
Go to the Frameworks tab, and look for an active framework named "Kubernetes".
@ -182,11 +182,11 @@ Go to the Frameworks tab, and look for an active framework named "Kubernetes".
Write a JSON pod description to a local file:
```shell
$ cat <<EOPOD >nginx.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
@ -197,25 +197,25 @@ spec:
image: nginx
ports:
- containerPort: 80
EOPOD
```
Send the pod description to Kubernetes using the `kubectl` CLI:
```shell
$ kubectl create -f ./nginx.yaml
pods/nginx
```
Wait a minute or two while `dockerd` downloads the image layers from the internet.
We can use the `kubectl` interface to monitor the status of our pod:
```shell
$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          14s
```
Verify that the pod task is running in the Mesos web GUI. Click on the
Kubernetes framework. The next screen should show the running Mesos task that
started the Kubernetes pod.
@ -251,31 +251,31 @@ In addition the service template at [cluster/addons/dns/skydns-svc.yaml.in][12]
To do this automatically:
```shell
sed -e "s/{{ pillar\['dns_replicas'\] }}/1/g;"\
"s,\(command = \"/kube2sky\"\),\\1\\"$'\n'" - --kube_master_url=${KUBERNETES_MASTER},;"\
"s/{{ pillar\['dns_domain'\] }}/cluster.local/g" \
cluster/addons/dns/skydns-rc.yaml.in > skydns-rc.yaml
sed -e "s/{{ pillar\['dns_server'\] }}/10.10.10.10/g" \
cluster/addons/dns/skydns-svc.yaml.in > skydns-svc.yaml
```
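The `sed` invocations above fill in Salt-style `{{ pillar[...] }}` placeholders. A minimal reproduction on a toy one-line template (example values only) shows the mechanics, including the bracket escaping the pattern needs:

```shell
# Toy template standing in for skydns-rc.yaml.in (illustrative only).
template="replicas: {{ pillar['dns_replicas'] }}"

# The [ and ] must be escaped so sed treats them literally,
# exactly as in the command above.
rendered=$(echo "$template" | sed -e "s/{{ pillar\['dns_replicas'\] }}/1/g")
echo "$rendered"   # prints: replicas: 1
```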
Now the kube-dns pod and service are ready to be launched:
```shell
kubectl create -f ./skydns-rc.yaml
kubectl create -f ./skydns-svc.yaml
```
Check with `kubectl get pods --namespace=kube-system` that 3/3 containers of the pods are eventually up and running. Note that the kube-dns pods run in the `kube-system` namespace, not in `default`.
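"Eventually" can be scripted with a simple poll. The helper below is illustrative only (not part of the deployment scripts), and the `kubectl` line in the comment assumes a running cluster:

```shell
# Illustrative retry helper: run a command until it succeeds or the
# attempt budget runs out, sleeping one second between attempts.
retry_until() {
  attempts=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$attempts" ]; then
      return 1
    fi
    sleep 1
  done
}

# Against a real cluster one might poll, for example:
#   retry_until 60 sh -c 'kubectl get pods --namespace=kube-system | grep -q " 3/3 "'
```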
To check that the new DNS service in the cluster works, we start a busybox pod and use that to do a DNS lookup. First create the `busybox.yaml` pod spec:
```shell
cat <<EOF >busybox.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
@ -290,31 +290,31 @@ spec:
imagePullPolicy: IfNotPresent
name: busybox
restartPolicy: Always
EOF
```
Then start the pod:
```shell
kubectl create -f ./busybox.yaml
```
When the pod is up and running, start a lookup for the Kubernetes master service, made available on 10.10.10.1 by default:
```shell
kubectl exec busybox -- nslookup kubernetes
```
If everything works fine, you will get this output:
```shell
Server:    10.10.10.10
Address 1: 10.10.10.10
Name:      kubernetes
Address 1: 10.10.10.1
```
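A scripted version of the same check could grep the lookup output for the expected service address. The helper and sample text below are examples, not part of the cluster tooling:

```shell
# Example check: the lookup is considered good if the output maps the
# name to the expected service IP (anchored so 10.10.10.10 won't match).
dns_lookup_ok() {
  printf '%s\n' "$1" | grep -q 'Address 1: 10\.10\.10\.1$'
}

sample_output='Server:    10.10.10.10
Address 1: 10.10.10.10
Name:      kubernetes
Address 1: 10.10.10.1'

dns_lookup_ok "$sample_output" && echo "DNS OK"   # prints: DNS OK
```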
## What next?
Try out some of the standard [Kubernetes examples][9].
@ -338,7 +338,4 @@ Future work will add instructions to this guide to enable support for Kubernetes
[10]: http://open.mesosphere.com/getting-started/cloud/google/mesosphere/#vpn-setup
[11]: https://releases.k8s.io/release-1.1/cluster/addons/dns/skydns-rc.yaml.in
[12]: https://releases.k8s.io/release-1.1/cluster/addons/dns/skydns-svc.yaml.in
[13]: https://releases.k8s.io/release-1.1/contrib/mesos/README.md

View File

@ -23,50 +23,40 @@ If you are using the [hack/local-up-cluster.sh](https://releases.k8s.io/release-
set these flags:
```shell
$ export CONTAINER_RUNTIME=rkt
$ export RKT_PATH=$PATH_TO_RKT_BINARY
$ export RKT_STAGE1_IMAGE=PATH=$PATH_TO_STAGE1_IMAGE
```
Then we can launch the local cluster using the script:
```shell
$ hack/local-up-cluster.sh
```
### CoreOS cluster on Google Compute Engine (GCE)
To use rkt as the container runtime for your CoreOS cluster on GCE, you need to specify the OS distribution, project, image:
```shell
$ export KUBE_OS_DISTRIBUTION=coreos
$ export KUBE_GCE_MINION_IMAGE=<image_id>
$ export KUBE_GCE_MINION_PROJECT=coreos-cloud
$ export KUBE_CONTAINER_RUNTIME=rkt
```
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
```shell
$ export KUBE_RKT_VERSION=0.8.0
```
Then you can launch the cluster by:
```shell
$ kube-up.sh
```
Note that we are still working on making all of the containerized master components run smoothly in rkt; until then, the master node cannot yet be run with rkt.
### CoreOS cluster on AWS
@ -74,37 +64,29 @@ Note that we are still working on making all containerized the master components
To use rkt as the container runtime for your CoreOS cluster on AWS, you need to specify the provider and OS distribution:
```shell
$ export KUBERNETES_PROVIDER=aws
$ export KUBE_OS_DISTRIBUTION=coreos
$ export KUBE_CONTAINER_RUNTIME=rkt
```
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
```shell
$ export KUBE_RKT_VERSION=0.8.0
```
You can optionally choose the CoreOS channel by setting `COREOS_CHANNEL`:
```shell
$ export COREOS_CHANNEL=stable
```
Then you can launch the cluster by:
```shell
$ kube-up.sh
```
Note: CoreOS is not supported as the master using the automated launch
scripts. The master node is always Ubuntu.
@ -112,7 +94,7 @@ scripts. The master node is always Ubuntu.
See [a simple nginx example](/{{page.version}}/docs/user-guide/simple-nginx) to try out your new cluster.
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/master/examples/).
### Debugging
@ -138,21 +120,17 @@ using `journalctl`:
- Check the running state of the systemd service:
```shell
$ sudo journalctl -u $SERVICE_FILE
```
where `$SERVICE_FILE` is the name of the service file created for the pod, you can find it in the kubelet logs.
##### Check the log of the container in the pod:
```shell
$ sudo journalctl -M rkt-$UUID -u $CONTAINER_NAME
```
where `$UUID` is the rkt pod's UUID, which you can find via `rkt list --full`, and `$CONTAINER_NAME` is the container's name.
##### Check Kubernetes events, logs.

View File

@ -258,7 +258,7 @@ many distinct files to make:
You can make the files by copying the `$HOME/.kube/config`, by following the code
in `cluster/gce/configure-vm.sh` or by using the following template:
```yaml
apiVersion: v1
kind: Config
users:
@ -275,8 +275,7 @@ contexts:
user: kubelet
name: service-account-context
current-context: service-account-context
```
Put the kubeconfig(s) on every node. The examples later in this
guide assume that there are kubeconfigs in `/var/lib/kube-proxy/kubeconfig` and
`/var/lib/kubelet/kubeconfig`.
@ -305,12 +304,11 @@ If you previously had Docker installed on a node without setting Kubernetes-spec
options, you may have a Docker-created bridge and iptables rules. You may want to remove these
as follows before proceeding to configure Docker for Kubernetes.
```shell
iptables -t nat -F
ifconfig docker0 down
brctl delbr docker0
```
The way you configure docker will depend in whether you have chosen the routable-vip or overlay-network approaches for your network.
Some suggested docker options:
- create your own bridge for the per-node CIDR ranges, call it cbr0, and set `--bridge=cbr0` option on docker.
@ -412,10 +410,9 @@ If you have turned off Docker's IP masquerading to allow pods to talk to each
other, then you may need to do masquerading just for destination IPs outside
the cluster network. For example:
```shell
iptables -w -t nat -A POSTROUTING -o eth0 -j MASQUERADE \! -d ${CLUSTER_SUBNET}
```
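The `\!` escaping guards against history expansion in interactive shells. As a dry run (no iptables rules are touched), the rule can be assembled from an example subnet to inspect what would be installed; the CIDR below is a placeholder for your cluster's value:

```shell
# Dry run only: build the rule as a string instead of installing it.
CLUSTER_SUBNET="10.244.0.0/16"   # example value; substitute your cluster CIDR
rule="iptables -w -t nat -A POSTROUTING -o eth0 -j MASQUERADE ! -d ${CLUSTER_SUBNET}"
echo "$rule"
```

In a non-interactive script the `!` needs no backslash, since history expansion is off.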
This will rewrite the source address from
the PodIP to the Node IP for traffic bound outside the cluster, and kernel
[connection tracking](http://www.iptables.info/en/connection-state)
@ -483,7 +480,7 @@ For each of these components, the steps to start them running are similar:
#### Apiserver pod template
```json
{
"kind": "Pod",
"apiVersion": "v1",
@ -554,8 +551,7 @@ For each of these components, the steps to start them running are similar:
]
}
}
```
Here are some apiserver flags you may need to set:
- `--cloud-provider=` see [cloud providers](#cloud-providers)
@ -614,8 +610,7 @@ Some cloud providers require a config file. If so, you need to put config file i
Complete this template for the scheduler pod:
```json
{
"kind": "Pod",
"apiVersion": "v1",
@ -650,8 +645,7 @@ Complete this template for the scheduler pod:
}
}
```
Typically, no additional flags are required for the scheduler.
Optionally, you may want to mount `/var/log` as well and redirect output there.
@ -660,8 +654,7 @@ Optionally, you may want to mount `/var/log` as well and redirect output there.
Template for controller manager pod:
```json
{
"kind": "Pod",
"apiVersion": "v1",
@ -721,8 +714,7 @@ Template for controller manager pod:
}
}
```
Flags to consider using with controller manager:
- `--cluster-name=$CLUSTER_NAME`
- `--cluster-cidr=`
@ -742,14 +734,13 @@ controller manager will retry reaching the apiserver until it is up.
Use `ps` or `docker ps` to verify that each process has started. For example, verify that kubelet has started a container for the apiserver like this:
```shell
$ sudo docker ps | grep apiserver:
5783290746d5 gcr.io/google_containers/kube-apiserver:e36bf367342b5a80d7467fd7611ad873 "/bin/sh -c '/usr/lo'" 10 seconds ago Up 9 seconds k8s_kube-apiserver.feb145e7_kube-apiserver-kubernetes-master_default_eaebc600cf80dae59902b44225f2fc0a_225a4695
```
Then try to connect to the apiserver:
```shell
$ echo $(curl -s http://localhost:8080/healthz)
ok
$ curl -s http://localhost:8080/api
@ -758,8 +749,7 @@ $ curl -s http://localhost:8080/api
"v1"
]
}
```
If you have selected the `--register-node=true` option for kubelets, they will now begin self-registering with the apiserver.
You should soon be able to see all your nodes by running the `kubectl get nodes` command.
Otherwise, you will need to manually create node objects.

View File

@ -32,12 +32,10 @@ On each Node:
First, get the sample configurations for this tutorial
```
wget https://github.com/Metaswitch/calico-kubernetes-ubuntu-demo/archive/master.tar.gz
tar -xvf master.tar.gz
```
### Setup environment variables for systemd services on Master
Many of the sample systemd services provided rely on environment variables on a per-node basis. Here we'll edit those environment variables and move them into place.
@ -45,27 +43,22 @@ Many of the sample systemd services provided rely on environment variables on a
1.) Copy the network-environment-template from the `master` directory for editing.
```
cp calico-kubernetes-ubuntu-demo-master/master/network-environment-template network-environment
```
2.) Edit `network-environment` to represent your current host's settings.
3.) Move the `network-environment` into `/etc`
```
sudo mv -f network-environment /etc
```
### Install Kubernetes on Master
1.) Build & Install Kubernetes binaries
```
# Get the Kubernetes Source
wget https://github.com/kubernetes/kubernetes/releases/download/v1.0.3/kubernetes.tar.gz
@ -79,11 +72,9 @@ sudo cp -f binaries/master/* /usr/bin
sudo cp -f binaries/kubectl /usr/bin
```
2.) Install the sample systemd processes settings for launching kubernetes services
```
sudo cp -f calico-kubernetes-ubuntu-demo-master/master/*.service /etc/systemd
sudo systemctl enable /etc/systemd/etcd.service
sudo systemctl enable /etc/systemd/kube-apiserver.service
@ -91,24 +82,20 @@ sudo systemctl enable /etc/systemd/kube-controller-manager.service
sudo systemctl enable /etc/systemd/kube-scheduler.service
```
3.) Launch the processes.
```
sudo systemctl start etcd.service
sudo systemctl start kube-apiserver.service
sudo systemctl start kube-controller-manager.service
sudo systemctl start kube-scheduler.service
```
### Install Calico on Master
In order to allow the master to route to pods on our nodes, we will launch the calico-node daemon on our master. This will allow it to learn routes over BGP from the other calico-node daemons in the cluster. The docker daemon should already be running before calico is started.
```
# Install the calicoctl binary, which will be used to launch calico
wget https://github.com/projectcalico/calico-docker/releases/download/v0.5.5/calicoctl
chmod +x calicoctl
@ -120,7 +107,6 @@ sudo systemctl enable /etc/systemd/calico-node.service
sudo systemctl start calico-node.service
```
>Note: calico-node may take a few minutes on first boot while it downloads the calico-node docker image.
## Setup Nodes
@ -132,30 +118,24 @@ Perform these steps **once on each node**, ensuring you appropriately set the en
1.) Get the sample configurations for this tutorial
```
wget https://github.com/Metaswitch/calico-kubernetes-ubuntu-demo/archive/master.tar.gz
tar -xvf master.tar.gz
```
2.) Copy the network-environment-template from the `node` directory
```
cp calico-kubernetes-ubuntu-demo-master/node/network-environment-template network-environment
```
3.) Edit `network-environment` to represent your current host's settings.
4.) Move `network-environment` into `/etc`
```
sudo mv -f network-environment /etc
```
### Configure Docker on the Node
#### Create the veth
@ -163,14 +143,12 @@ sudo mv -f network-environment /etc
Instead of using Docker's default interface (docker0), we will configure a new one to use the desired IP ranges.
```
sudo apt-get install -y bridge-utils
sudo brctl addbr cbr0
sudo ifconfig cbr0 up
sudo ifconfig cbr0 <IP>/24
```
> Replace \<IP\> with the subnet for this host's containers. Example topology:
Node | cbr0 IP
@ -190,18 +168,15 @@ The Docker daemon must be started and told to use the already configured cbr0 in
3.) Reload systemctl and restart docker.
```
sudo systemctl daemon-reload
sudo systemctl restart docker
```
### Install Calico on the Node
1.) Install Calico
```
# Get the calicoctl binary
wget https://github.com/projectcalico/calico-docker/releases/download/v0.5.5/calicoctl
chmod +x calicoctl
@ -213,24 +188,20 @@ sudo systemctl enable /etc/systemd/calico-node.service
sudo systemctl start calico-node.service
```
>The calico-node service will automatically get the kubernetes-calico plugin binary and install it on the host system.
2.) Use calicoctl to add an IP pool. We must specify the IP and port that the master's etcd is listening on.
**NOTE: This step only needs to be performed once per Kubernetes deployment, as it covers all the node's IP ranges.**
```
ETCD_AUTHORITY=<MASTER_IP>:4001 calicoctl pool add 192.168.0.0/16
```
### Install Kubernetes on the Node
1.) Build & Install Kubernetes binaries
```
# Get the Kubernetes Source
wget https://github.com/kubernetes/kubernetes/releases/download/v1.0.3/kubernetes.tar.gz
@ -248,11 +219,9 @@ sudo cp kube-proxy /usr/bin/
sudo chmod +x /usr/bin/kube-proxy
```
2.) Install and launch the sample systemd processes settings for launching Kubernetes services
```
sudo cp calico-kubernetes-ubuntu-demo-master/node/kube-proxy.service /etc/systemd/
sudo cp calico-kubernetes-ubuntu-demo-master/node/kube-kubelet.service /etc/systemd/
sudo systemctl enable /etc/systemd/kube-proxy.service
@ -261,7 +230,6 @@ sudo systemctl start kube-proxy.service
sudo systemctl start kube-kubelet.service
```
>*You may want to consider checking their status after to ensure everything is running*
## Install the DNS Addon
@ -274,7 +242,7 @@ Replace `<MASTER_IP>` in `calico-kubernetes-ubuntu-demo-master/dns/skydns-rc.yam
## Launch other Services With Calico-Kubernetes
At this point, you have a fully functioning cluster running on kubernetes with a master and 2 nodes networked with Calico. You can now follow any of the [standard documentation](../../examples/) to set up other services on your cluster.
At this point, you have a fully functioning cluster running on kubernetes with a master and 2 nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/master/examples/) to set up other services on your cluster.
## Connectivity to outside the cluster
@ -290,7 +258,6 @@ We need to NAT traffic that has a destination outside of the cluster. Internal t
- `HOST_INTERFACE`: The interface on the Kubernetes node which is used for external connectivity. The above example uses `eth0`
```
sudo iptables -t nat -N KUBE-OUTBOUND-NAT
sudo iptables -t nat -A KUBE-OUTBOUND-NAT -d <CONTAINER_SUBNET> -o <HOST_INTERFACE> -j RETURN
sudo iptables -t nat -A KUBE-OUTBOUND-NAT -d <KUBERNETES_HOST_SUBNET> -o <HOST_INTERFACE> -j RETURN
@ -298,7 +265,6 @@ sudo iptables -t nat -A KUBE-OUTBOUND-NAT -j MASQUERADE
sudo iptables -t nat -A POSTROUTING -j KUBE-OUTBOUND-NAT
```
This chain should be applied on the master and all nodes. In production, these rules should be persisted, e.g. with `iptables-persistent`.
### NAT at the border router

View File

@ -31,20 +31,16 @@ Ubuntu 15 which use systemd instead of upstart. We are working around fixing thi
First clone the kubernetes github repo
```shell
$ git clone https://github.com/kubernetes/kubernetes.git
```
Then download all the needed binaries into the given directory (`cluster/ubuntu/binaries`):
```shell
$ cd kubernetes/cluster/ubuntu
$ ./build.sh
```
You can customize your etcd version, flannel version, k8s version by changing corresponding variables
`ETCD_VERSION` , `FLANNEL_VERSION` and `KUBE_VERSION` in build.sh, by default etcd version is 2.0.12,
flannel version is 0.4.0 and k8s version is 1.0.3.
@ -68,7 +64,6 @@ An example cluster is listed below:
First configure the cluster information in cluster/ubuntu/config-default.sh, below is a simple sample.
```shell
export nodes="vcap@10.10.103.250 vcap@10.10.103.162 vcap@10.10.103.223"
export role="ai i i"
@ -80,7 +75,6 @@ export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24
export FLANNEL_NET=172.16.0.0/16
```
The first variable, `nodes`, defines all your cluster nodes, with the master node first, separated by spaces, like `<user_1@ip_1> <user_2@ip_2> <user_3@ip_3>`.
@ -115,21 +109,17 @@ The scripts automatically scp binaries and config files to all the machines and
The only thing you need to do is to type the sudo password when prompted.
```shell
Deploying minion on machine 10.10.103.223
...
[sudo] password to copy files and start minion:
```
If all goes well, you will see the following message on the console, indicating that Kubernetes is up.
```shell
Cluster validation succeeded
```
### Test it out
You can use `kubectl` command to check if the newly created k8s is working correctly.
@ -139,7 +129,6 @@ You can make it available via PATH, then you can use the below command smoothly.
For example, use `$ kubectl get nodes` to see if all of your nodes are ready.
```shell
$ kubectl get nodes
NAME LABELS STATUS
10.10.103.162 kubernetes.io/hostname=10.10.103.162 Ready
@ -147,8 +136,7 @@ NAME LABELS STATUS
10.10.103.250 kubernetes.io/hostname=10.10.103.250 Ready
```
You can also run the Kubernetes [guestbook example](https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook/) to build a Redis backend cluster on Kubernetes.
### Deploy addons
@ -159,7 +147,6 @@ and UI onto the existing cluster.
The configuration of DNS is configured in cluster/ubuntu/config-default.sh.
```shell
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
DNS_SERVER_IP="192.168.3.10"
@ -169,27 +156,22 @@ DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
```
`DNS_SERVER_IP` defines the IP of the DNS server, which must fall within the `SERVICE_CLUSTER_IP_RANGE`.
`DNS_REPLICAS` sets how many DNS pods run in the cluster.
By default, we also take care of kube-ui addon.
```shell
ENABLE_CLUSTER_UI="${KUBE_ENABLE_CLUSTER_UI:-true}"
```
After all the above variables have been set, just type the following command.
```shell
$ cd cluster/ubuntu
$ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh
```
After some time, you can use `$ kubectl get pods --namespace=kube-system` to see that the DNS and UI pods are running in the cluster.
### On going
@ -218,17 +200,14 @@ Please try:
```sh
ETCD_OPTS="-name infra1 -initial-advertise-peer-urls <http://ip_of_this_node:2380> -listen-peer-urls <http://ip_of_this_node:2380> -initial-cluster-token etcd-cluster-1 -initial-cluster infra1=<http://ip_of_this_node:2380>,infra2=<http://ip_of_another_node:2380>,infra3=<http://ip_of_another_node:2380> -initial-cluster-state new"
```
3. You may find the following commands useful: the former brings the cluster down, while the latter starts it up again.
```shell
$ KUBERNETES_PROVIDER=ubuntu ./kube-down.sh
$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
```
4. You can also customize your own settings in `/etc/default/{component_name}`.
@ -238,16 +217,13 @@ If you already have a kubernetes cluster, and want to upgrade to a new version,
you can use following command in cluster/ directory to update the whole cluster or a specified node to a new version.
```shell
$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh [-m|-n <node id>] <version>
```
It can be done for all components (by default), for the master (`-m`), or for a specified node (`-n`).
If the version is not specified, the script will try to use local binaries. You should ensure all the binaries are prepared in the path `cluster/ubuntu/binaries`.
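That fallback can be sketched as a tiny argument check. The function below is illustrative only, not the actual `kube-push.sh` logic:

```shell
# Illustrative only: mirror the documented fallback behavior.
resolve_version() {
  if [ -z "${1:-}" ]; then
    echo "local binaries from cluster/ubuntu/binaries"
  else
    echo "release $1"
  fi
}

resolve_version 1.0.5   # prints: release 1.0.5
resolve_version         # prints: local binaries from cluster/ubuntu/binaries
```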
```shell
$ tree cluster/ubuntu/binaries
binaries/
├── kubectl
@ -264,15 +240,12 @@ binaries/
└── kube-proxy
```
Upgrading a single node is currently experimental. You can use the following command to get help.
```shell
$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh -h
```
Some examples are as follows:
* upgrade master to version 1.0.5: `$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh -m 1.0.5`

View File

@ -22,20 +22,18 @@ Running Kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve
Setting up a cluster is as simple as running:
```shell
export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash
```
Alternatively, you can download [Kubernetes release](https://github.com/kubernetes/kubernetes/releases) and extract the archive. To start your local cluster, open a shell and run:
```shell
cd kubernetes
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
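The GCE fallback is the standard shell default-expansion pattern; a minimal illustration (`gce` is the provider name the cluster scripts default to):

```shell
# Illustration of the fallback: when KUBERNETES_PROVIDER is unset,
# the scripts behave as if it were "gce".
unset KUBERNETES_PROVIDER
provider="${KUBERNETES_PROVIDER:-gce}"
echo "$provider"   # prints: gce
```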
By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-minion-1). Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space).
@ -44,35 +42,32 @@ Vagrant will provision each machine in the cluster with all the necessary compon
If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default) environment variable:
```shell
export VAGRANT_DEFAULT_PROVIDER=parallels
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
By default, each VM in the cluster is running Fedora.
To access the master or any node:
```shell
vagrant ssh master
vagrant ssh minion-1
```
If you are running more than one node, you can access the others by:
```shell
vagrant ssh minion-2
vagrant ssh minion-3
```
Each node in the cluster installs the docker daemon and the kubelet.
The master node instantiates the Kubernetes master components as pods on the machine.
To view the service status and/or logs on the kubernetes-master:
```shell
[vagrant@kubernetes-master ~] $ vagrant ssh master
[vagrant@kubernetes-master ~] $ sudo su
@ -85,11 +80,10 @@ To view the service status and/or logs on the kubernetes-master:
[root@kubernetes-master ~] $ tail -f /var/log/kube-apiserver.log
[root@kubernetes-master ~] $ tail -f /var/log/kube-controller-manager.log
[root@kubernetes-master ~] $ tail -f /var/log/kube-scheduler.log
```
To view the services on any of the nodes:
```shell
[vagrant@kubernetes-master ~] $ vagrant ssh minion-1
[vagrant@kubernetes-master ~] $ sudo su
@ -98,86 +92,77 @@ To view the services on any of the nodes:
[root@kubernetes-master ~] $ systemctl status docker
[root@kubernetes-master ~] $ journalctl -ru docker
```
### Interacting with your Kubernetes cluster with Vagrant.
With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands.
To push updates to new Kubernetes code after making source changes:
```shell
./cluster/kube-push.sh
```
To stop and then restart the cluster:
```shell
vagrant halt
./cluster/kube-up.sh
```
To destroy the cluster:
```shell
vagrant destroy
```
Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script.
You may need to build the binaries first; you can do this with `make`.
```shell
$ ./cluster/kubectl.sh get nodes
NAME         LABELS
10.245.1.4   <none>
10.245.1.5   <none>
10.245.1.3   <none>
```
### Authenticating with your master
When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future.
```shell
cat ~/.kubernetes_vagrant_auth
```

```json
{ "User": "vagrant",
"Password": "vagrant",
"CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt",
"CertFile": "/home/k8s_user/.kubecfg.vagrant.crt",
"KeyFile": "/home/k8s_user/.kubecfg.vagrant.key"
}
```
You should now be set to use the `cluster/kubectl.sh` script. For example try to list the nodes that you have started with:
```shell
./cluster/kubectl.sh get nodes
```
### Running containers
Your cluster is running; you can list the nodes in your cluster:
```shell
$ ./cluster/kubectl.sh get nodes
NAME         LABELS
10.245.2.4   <none>
10.245.2.3   <none>
10.245.2.2   <none>
```
Now start running some containers!
You can now use any of the `cluster/kube-*.sh` commands to interact with your VM machines.
Before starting a container there will be no pods, services and replication controllers.
```shell
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
@ -186,38 +171,34 @@ NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR
$ ./cluster/kubectl.sh get replicationcontrollers
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
```
Start a container running nginx with a replication controller and three replicas
```shell
$ ./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
```
When listing the pods, you will see that three containers have been started and are in Waiting state:
```shell
$ ./cluster/kubectl.sh get pods
NAME             READY     STATUS    RESTARTS   AGE
my-nginx-5kq0g   0/1       Pending   0          10s
my-nginx-gr3hh   0/1       Pending   0          10s
my-nginx-xql4j   0/1       Pending   0          10s
```
You need to wait for the provisioning to complete; you can monitor the nodes by doing:
```shell
$ vagrant ssh minion-1 -c 'sudo docker images'
kubernetes-minion-1:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 96864a7d2df3 26 hours ago 204.4 MB
google/cadvisor latest e0575e677c50 13 days ago 12.64 MB
kubernetes/pause latest 6c4579af347b 8 weeks ago 239.8 kB
```
Once the docker image for nginx has been downloaded, the container will start and you can list it:
```shell
$ vagrant ssh minion-1 -c 'sudo docker ps'
kubernetes-minion-1:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
@ -225,11 +206,10 @@ kubernetes-minion-1:
fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b
aa2ee3ed844a google/cadvisor:latest "/usr/bin/cadvisor" 38 minutes ago Up 38 minutes k8s--cadvisor.9e90d182--cadvisor_-_agent.file--4626b3a2
65a3a926f357 kubernetes/pause:latest "/pause" 39 minutes ago Up 39 minutes 0.0.0.0:4194->8080/tcp k8s--net.c5ba7f0e--cadvisor_-_agent.file--342fd561
```
Going back to listing the pods, services and replicationcontrollers, you now have:
```shell
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-5kq0g 1/1 Running 0 1m
@ -239,20 +219,18 @@ my-nginx-xql4j 1/1 Running 0 1m
$ ./cluster/kubectl.sh get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
my-nginx 10.0.0.1 <none> 80/TCP run=my-nginx 1h
```
We did not start any services, hence there are none listed. But we see three replicas displayed properly.
Check the [guestbook](https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook/README) application to learn how to create a service.
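The guestbook walkthrough covers this in detail; as a minimal sketch, a Service manifest selecting the pods started above (which carry the `run=my-nginx` label, as the SELECTOR column earlier shows) might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  ports:
  - port: 80
  selector:
    run: my-nginx
```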
You can already play with scaling the replicas with:
```shell
$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2
$ ./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-5kq0g 1/1 Running 0 2m
my-nginx-gr3hh 1/1 Running 0 2m
```
Congratulations!
### Troubleshooting
@ -261,34 +239,30 @@ Congratulations!
By default the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh`
```shell
export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box
export KUBERNETES_BOX_URL=path_of_your_kuber_box
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```
#### I just created the cluster, but I am getting authorization errors!
You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster you are attempting to contact.
```shell
rm ~/.kubernetes_vagrant_auth
```
After using kubectl.sh make sure that the correct credentials are set:
```shell
cat ~/.kubernetes_vagrant_auth
```
```json
{
"User": "vagrant",
"Password": "vagrant"
}
```
#### I just created the cluster, but I do not see my container running!
If this is your first time creating the cluster, the kubelet on each node schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned.
@ -305,26 +279,23 @@ Log on to one of the nodes (`vagrant ssh minion-1`) and inspect the salt minion
You can control the number of nodes that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this by setting `NUM_MINIONS` to 1 like so:
```shell
export NUM_MINIONS=1
```
#### I want my VMs to have more memory!
You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable.
Just set it to the number of megabytes you would like the machines to have. For example:
```shell
export KUBERNETES_MEMORY=2048
```
If you need more granular control, you can set the amount of memory for the master and nodes independently. For example:
```shell
export KUBERNETES_MASTER_MEMORY=1536
export KUBERNETES_MINION_MEMORY=2048
```
#### I ran vagrant suspend and nothing works!
`vagrant suspend` seems to mess up the network. This is not supported at this time.
@ -333,6 +304,6 @@ export KUBERNETES_MINION_MEMORY=2048
You can ensure that Vagrant uses NFS to sync folders with virtual machines by setting the `KUBERNETES_VAGRANT_USE_NFS` environment variable to `true`. NFS is faster than VirtualBox or VMware's 'shared folders' and does not require guest additions. See the [vagrant docs](http://docs.vagrantup.com/v2/synced-folders/nfs) for details on configuring NFS on the host. This setting has no effect on the libvirt provider, which uses NFS by default. For example:
```shell
export KUBERNETES_VAGRANT_USE_NFS=true
```
@ -16,47 +16,42 @@ convenient).
2. You must have Go (version 1.2 or later) installed: [www.golang.org](http://www.golang.org).
3. You must have your `GOPATH` set up and include `$GOPATH/bin` in your `PATH`.
```shell
export GOPATH=$HOME/src/go
mkdir -p $GOPATH
export PATH=$PATH:$GOPATH/bin
```
4. Install the govc tool to interact with ESXi/vCenter:
```shell
go get github.com/vmware/govmomi/govc
```
5. Get or build a [binary release](binary_release)
### Setup
Download a prebuilt Debian 7.7 VMDK that we'll use as a base image:
```shell
curl --remote-name-all https://storage.googleapis.com/govmomi/vmdk/2014-11-11/kube.vmdk.gz{,.md5}
md5sum -c kube.vmdk.gz.md5
gzip -d kube.vmdk.gz
```
Import this VMDK into your vSphere datastore:
```shell
export GOVC_URL='user:pass@hostname'
export GOVC_INSECURE=1 # If the host above uses a self-signed cert
export GOVC_DATASTORE='target datastore'
export GOVC_RESOURCE_POOL='resource pool or cluster with access to datastore'
govc import.vmdk kube.vmdk ./kube/
```
Verify that the VMDK was correctly uploaded and expanded to ~3GiB:
```shell
govc datastore.ls ./kube/
```
Take a look at the file `cluster/vsphere/config-common.sh` and fill in the required
parameters. The guest login for the image that you imported is `kube:kube`.
@ -65,12 +60,11 @@ parameters. The guest login for the image that you imported is `kube:kube`.
Now, let's continue with deploying Kubernetes.
This process takes about 10 minutes.
```shell
cd kubernetes # Extracted binary release OR repository root
export KUBERNETES_PROVIDER=vsphere
cluster/kube-up.sh
```
Refer to the top level README and the getting started guide for Google Compute
Engine. Once you have successfully reached this point, your vSphere Kubernetes
deployment works just as any other one!
@ -19,7 +19,7 @@ title: "Kubernetes Documentation: releases.k8s.io/release-1.1"
* An overview of the [Design of Kubernetes](design/)
* There are example files and walkthroughs in the [examples](https://github.com/kubernetes/kubernetes/tree/master/examples)
folder.
* If something went wrong, see the [troubleshooting](troubleshooting) document for how to debug.
@ -20,11 +20,10 @@ or someone else setup the cluster and provided you with credentials and a locati
Check the location and credentials that kubectl knows about with this command:
```shell
$ kubectl config view
```
Many of the [examples](https://github.com/kubernetes/kubernetes/tree/master/examples/) provide an introduction to using
kubectl and complete documentation is found in the [kubectl manual](kubectl/kubectl).
### Directly accessing the REST API
@ -49,29 +48,27 @@ The following command runs kubectl in a mode where it acts as a reverse proxy.
locating the apiserver and authenticating.
Run it like this:
```shell
$ kubectl proxy --port=8080 &
```
See [kubectl proxy](kubectl/kubectl_proxy) for more details.
Then you can explore the API with curl, wget, or a browser, like so:
```shell
$ curl http://localhost:8080/api/
{
"versions": [
"v1"
]
}
```
#### Without kubectl proxy
It is also possible to avoid using kubectl proxy by passing an authentication token
directly to the apiserver, like this:
```shell
$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
$ TOKEN=$(kubectl config view | grep token | cut -f 2 -d ":" | tr -d " ")
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
@ -80,8 +77,7 @@ $ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
"v1"
]
}
```
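If you want to see what that `grep`/`cut`/`tr` pipeline actually extracts, here is a self-contained sketch run against a made-up `kubectl config view` excerpt (the host and token values are invented):

```shell
# Simulated excerpt of `kubectl config view` output (values invented).
config='    server: https://104.197.5.247
    token: abc123token'

# Same extraction pipeline as above.
APISERVER=$(echo "$config" | grep server | cut -f 2- -d ":" | tr -d " ")
TOKEN=$(echo "$config" | grep token | cut -f 2 -d ":" | tr -d " ")

echo "$APISERVER"   # https://104.197.5.247
echo "$TOKEN"       # abc123token
```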
The above example uses the `--insecure` flag. This leaves it subject to MITM
attacks. When kubectl accesses the cluster it uses a stored root certificate
and client certificates to access the server. (These are installed in the
@ -125,7 +121,7 @@ From within a pod the recommended ways to connect to API are:
process within a container. This proxies the
Kubernetes API to the localhost interface of the pod, so that other processes
in any container of the pod can access it. See this [example of using kubectl proxy
in a pod](../../examples/kubectl-container/).
in a pod](https://github.com/kubernetes/kubernetes/tree/master/examples/kubectl-container/).
- use the Go client library, and create a client using the `client.NewInCluster()` factory.
This handles locating and authenticating to the apiserver.
In each case, the credentials of the pod are used to communicate securely with the apiserver.
@ -173,7 +169,7 @@ You have several options for connecting to nodes, pods and services from outside
Typically, there are several services which are started on a cluster by kube-system. Get a list of these
with the `kubectl cluster-info` command:
```shell
$ kubectl cluster-info
Kubernetes master is running at https://104.197.5.247
@ -182,8 +178,7 @@ $ kubectl cluster-info
kube-dns is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/kube-dns
grafana is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
heapster is running at https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster
```
This shows the proxy-verb URL for accessing each service.
For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached
at `https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/` if suitable credentials are passed, or through a kubectl proxy at, for example:
@ -202,8 +197,8 @@ about namespaces? 'proxy' verb? -->
* To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use: `http://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_search?q=user:kimchy`
* To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use: `https://104.197.5.247/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cluster/health?pretty=true`
```json
{
"cluster_name" : "kubernetes_logging",
"status" : "yellow",
"timed_out" : false,
@ -215,8 +210,7 @@ about namespaces? 'proxy' verb? -->
"initializing_shards" : 0,
"unassigned_shards" : 5
}
```
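The proxy URLs above all follow the same pattern, so you can compose them mechanically. A sketch (host, namespace, and service name taken from the examples above):

```shell
# Compose an apiserver proxy URL for a service endpoint.
APISERVER=https://104.197.5.247
NAMESPACE=kube-system
SERVICE=elasticsearch-logging
SUFFIX='_cluster/health?pretty=true'

URL="$APISERVER/api/v1/proxy/namespaces/$NAMESPACE/services/$SERVICE/$SUFFIX"
echo "$URL"
```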
#### Using web browsers to access services running on the cluster
You may be able to put an apiserver proxy url into the address bar of a browser. However:
@ -8,14 +8,12 @@ It is also useful to be able to attach arbitrary non-identifying metadata, for r
Like labels, annotations are key-value maps.
```json
"annotations": {
"key1" : "value1",
"key2" : "value2"
}
```
Possible information that could be recorded in annotations:
* fields managed by a declarative configuration layer, to distinguish them from client- and/or server-set default values and other auto-generated fields, fields set by auto-sizing/auto-scaling systems, etc., in order to facilitate merging
@ -24,11 +24,9 @@ your Service?
The first step in debugging a Pod is taking a look at it. Check the current state of the Pod and recent events with the following command:
```shell
$ kubectl describe pods ${POD_NAME}
```
Look at the state of the containers in the pod. Are they all `Running`? Have there been recent restarts?
Continue debugging depending on the state of the pods.
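As a quick sketch of the "are they all `Running`?" check: given `kubectl get pods` output like the sample below (invented), you can filter for pods whose READY count is short:

```shell
# Sample `kubectl get pods` output (invented for illustration).
pods='NAME             READY     STATUS    RESTARTS   AGE
my-nginx-5kq0g   0/1       Pending   0          10s
my-nginx-gr3hh   1/1       Running   0          10s'

# Print pods where the ready count does not match the container count.
not_ready=$(echo "$pods" | awk 'NR > 1 { split($2, a, "/"); if (a[1] != a[2]) print $1 }')
echo "$not_ready"   # my-nginx-5kq0g
```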
@ -62,38 +60,29 @@ First, take a look at the logs of
the current container:
```shell
$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}
```
If your container has previously crashed, you can access the previous container's crash log with:
```shell
$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
```
Alternately, you can run commands inside that container with `exec`:
```shell
$ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
```
Note that `-c ${CONTAINER_NAME}` is optional and can be omitted for Pods that only contain a single container.
As an example, to look at the logs from a running Cassandra pod, you might run
```shell
$ kubectl exec cassandra -- cat /var/log/cassandra/system.log
```
If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host,
but this should generally not be necessary given tools in the Kubernetes API. Therefore, if you find yourself needing to ssh into a machine, please file a
feature request on GitHub describing your use case and why these tools are insufficient.
@ -112,13 +101,11 @@ For example, run `kubectl create --validate -f mypod.yaml`.
If you misspelled `command` as `commnd`, you will get an error like this:
```
I0805 10:43:25.129850 46757 schema.go:126] unknown field: commnd
I0805 10:43:25.129973 46757 schema.go:129] this may be a false alarm, see https://github.com/kubernetes/kubernetes/issues/6842
pods/mypod
```
<!-- TODO: Now that #11914 is merged, this advice may need to be updated -->
The next thing to check is whether the pod on the apiserver
@ -148,11 +135,9 @@ First, verify that there are endpoints for the service. For every Service object
You can view this resource with:
```shell
$ kubectl get endpoints ${SERVICE_NAME}
```
Make sure that the endpoints match up with the number of containers that you expect to be a member of your service.
For example, if your Service is for an nginx container with 3 replicas, you would expect to see three different
IP addresses in the Service's endpoints.
@ -163,7 +148,6 @@ If you are missing endpoints, try listing pods using the labels that Service use
a Service where the labels are:
```yaml
...
spec:
- selector:
@ -171,15 +155,12 @@ spec:
type: frontend
```
You can use:
```shell
$ kubectl get pods --selector=name=nginx,type=frontend
```
to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service.
If the list of pods matches expectations, but your endpoints are still empty, it's possible that you don't
@ -42,7 +42,6 @@ be said to have a request of 0.5 core and 128 MiB of memory and a limit of 1 cor
memory.
```yaml
apiVersion: v1
kind: Pod
metadata:
@ -69,7 +68,6 @@ spec:
cpu: "500m"
```
## How Pods with Resource Requests are Scheduled
When a pod is created, the Kubernetes scheduler selects a node for the pod to
@ -123,7 +121,6 @@ until a place can be found. An event will be produced each time the scheduler
place for the pod, like this:
```shell
$ kubectl describe pod frontend | grep -A 3 Events
Events:
FirstSeen LastSeen Count From Subobject PathReason Message
@ -131,7 +128,6 @@ Events:
```
In the case shown above, the pod "frontend" fails to be scheduled due to insufficient
CPU resource on the node. Similar error messages can also suggest failure due to insufficient
memory (PodExceedsFreeMemory). In general, if a pod or pods are pending with this message and
@ -145,7 +141,6 @@ You can check node capacities and amounts allocated with the `kubectl describe n
For example:
```shell
$ kubectl describe nodes gke-cluster-4-386701dd-node-ww4p
Name: gke-cluster-4-386701dd-node-ww4p
[ ... lines removed for clarity ...]
@ -169,7 +164,6 @@ TotalResourceLimits:
[ ... lines removed for clarity ...]
```
Here you can see from the `Allocated resources` section that a pod which asks for more than
90 millicpus or more than 1341MiB of memory will not be able to fit on this node.
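The 90-millicpu figure falls out of simple arithmetic on the `describe nodes` output; as a sketch (a 1000-millicpu capacity with 910 millicpus already requested is assumed, matching the numbers above):

```shell
capacity_millicpu=1000    # node CPU capacity (assumed)
requested_millicpu=910    # sum of existing pods' CPU requests (assumed)

free=$((capacity_millicpu - requested_millicpu))
echo "$free millicpus free"
```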
@ -185,7 +179,6 @@ Your container may be terminated because it's resource-starved. To check if a co
on the pod you are interested in:
```shell
[12:54:41] $ ./cluster/kubectl.sh describe pod simmemleak-hra99
Name: simmemleak-hra99
Namespace: default
@ -223,19 +216,16 @@ Events:
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-minion-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a
```
The `Restart Count: 5` indicates that the `simmemleak` container in this pod was terminated and restarted 5 times.
You can call `get pod` with the `-o go-template=...` option to fetch the status of previously terminated containers:
```shell
[13:59:01] $ ./cluster/kubectl.sh get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-60xbc
Container Name: simmemleak
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]][13:59:03] clusterScaleDoc ~/go/src/github.com/kubernetes/kubernetes $
```
We can see that this container was terminated because `reason:OOM Killed`, where *OOM* stands for Out Of Memory.
## Planned Improvements
@ -14,7 +14,6 @@ In the declarative style, all configuration is stored in YAML or JSON configurat
Kubernetes executes containers in [*Pods*](pods). A pod containing a simple Hello World container can be specified in YAML as follows:
```yaml
apiVersion: v1
kind: Pod
metadata:
@ -27,7 +26,6 @@ spec: # specification of the pod's contents
command: ["/bin/echo","hello","world"]
```
The value of `metadata.name`, `hello-world`, will be the name of the pod resource created, and must be unique within the cluster, whereas `containers[0].name` is just a nickname for the container within that pod. `image` is the name of the Docker image, which Kubernetes expects to be able to pull from a registry, the [Docker Hub](https://registry.hub.docker.com/) by default.
`restartPolicy: Never` indicates that we just want to run the container once and then terminate the pod.
@ -35,21 +33,17 @@ The value of `metadata.name`, `hello-world`, will be the name of the pod resourc
The [`command`](containers.html#containers-and-commands) overrides the Docker container's `Entrypoint`. Command arguments (corresponding to Docker's `Cmd`) may be specified using `args`, as follows:
```yaml
command: ["/bin/echo"]
args: ["hello","world"]
```
This pod can be created using the `create` command:
```shell
$ kubectl create -f ./hello-world.yaml
pods/hello-world
```
`kubectl` prints the resource type and name of the resource created when successful.
## Validating configuration
@ -57,21 +51,17 @@ pods/hello-world
If you're not sure you specified the resource correctly, you can ask `kubectl` to validate it for you:
```shell
$ kubectl create -f ./hello-world.yaml --validate
```
Let's say you specified `entrypoint` instead of `command`. You'd see output as follows:
```shell
I0709 06:33:05.600829 14160 schema.go:126] unknown field: entrypoint
I0709 06:33:05.600988 14160 schema.go:129] this may be a false alarm, see http://issue.k8s.io/6842
pods/hello-world
```
`kubectl create --validate` currently warns about problems it detects, but creates the resource anyway, unless a required field is absent or a field value is invalid. Unknown API fields are ignored, so be careful. This pod was created, but with no `command`, which is an optional field, since the image may specify an `Entrypoint`.
View the [Pod API
object](http://kubernetes.io/v1.1/docs/api-reference/v1/definitions.html#_v1_pod)
@ -82,7 +72,6 @@ to see the list of valid fields.
Kubernetes [does not automatically run commands in a shell](https://github.com/kubernetes/kubernetes/wiki/User-FAQ#use-of-environment-variables-on-the-command-line) (not all images contain shells). If you would like to run your command in a shell, such as to expand environment variables (specified using `env`), you could do the following:
```yaml
apiVersion: v1
kind: Pod
metadata:
@ -99,16 +88,13 @@ spec: # specification of the pod's contents
args: ["/bin/echo \"${MESSAGE}\""]
```
However, a shell isn't necessary just to expand environment variables. Kubernetes will do it for you if you use [`$(ENVVAR)` syntax](/{{page.version}}/docs/design/expansion):
```yaml
command: ["/bin/echo"]
args: ["$(MESSAGE)"]
```
## Viewing pod status
You can see the pod you created (actually all of your cluster's pods) using the `get` command.
@ -116,70 +102,58 @@ You can see the pod you created (actually all of your cluster's pods) using the
If you're quick, it will look as follows:
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world 0/1 Pending 0 0s
```
Initially, a newly created pod is unscheduled -- no node has been selected to run it. Scheduling happens after creation, but is fast, so you normally shouldn't see pods in an unscheduled state unless there's a problem.
After the pod has been scheduled, the image may need to be pulled to the node on which it was scheduled, if it hadn't been pulled already. After a few seconds, you should see the container running:
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world 1/1 Running 0 5s
```
The `READY` column shows how many containers in the pod are running.
Almost immediately after it starts running, this command will terminate. `kubectl` shows that the container is no longer running and displays the exit status:
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world 0/1 ExitCode:0 0 15s
```
## Viewing pod output
You probably want to see the output of the command you ran. As with [`docker logs`](https://docs.docker.com/userguide/usingdocker/), `kubectl logs` will show you the output:
```shell
$ kubectl logs hello-world
hello world
```
## Deleting pods
When you're done looking at the output, you should delete the pod:
```shell
$ kubectl delete pod hello-world
pods/hello-world
```
As with `create`, `kubectl` prints the resource type and name of the resource deleted when successful.
You can also use the resource/name format to specify the pod:
```shell
$ kubectl delete pods/hello-world
pods/hello-world
```
Terminated pods aren't currently automatically deleted, so that you can observe their final status, so be sure to clean up your dead pods.
On the other hand, containers and their logs are eventually deleted automatically in order to free up disk space on the nodes.
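Since dead pods stick around, a cleanup pass has to find them first. A sketch of extracting terminated pod names from `kubectl get pods` output (the sample output is invented; the final `kubectl delete` line is what you would run against a real cluster):

```shell
# Sample `kubectl get pods` output (invented).
pods='NAME          READY     STATUS       RESTARTS   AGE
hello-world   0/1       ExitCode:0   0          15s
my-nginx      1/1       Running      0          2h'

# Pods whose STATUS starts with ExitCode have terminated.
dead=$(echo "$pods" | awk 'NR > 1 && $3 ~ /^ExitCode/ { print $1 }')
echo "$dead"   # hello-world

# Against a real cluster you would then run: kubectl delete pods $dead
```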
@ -18,7 +18,6 @@ This guide uses a simple nginx server to demonstrate proof of concept. The same
We did this in a previous example, but lets do it once again and focus on the networking perspective. Create an nginx pod, and note that it has a container port specification:
```yaml
$ cat nginxrc.yaml
apiVersion: v1
kind: ReplicationController
@ -38,28 +37,23 @@ spec:
- containerPort: 80
```
This makes it accessible from any node in your cluster. Check the nodes the pod is running on:
```shell
$ kubectl create -f ./nginxrc.yaml
$ kubectl get pods -l app=nginx -o wide
my-nginx-6isf4 1/1 Running 0 2h e2e-test-beeps-minion-93ly
my-nginx-t26zt 1/1 Running 0 2h e2e-test-beeps-minion-93ly
```
Check your pods' IPs:
```shell
$ kubectl get pods -l app=nginx -o json | grep podIP
"podIP": "10.245.0.15",
"podIP": "10.245.0.14",
```
You should be able to ssh into any node in your cluster and curl both IPs. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same containerPort and access them from any other pod or node in your cluster using IP. Like Docker, ports can still be published to the host node's interface(s), but the need for this is radically diminished because of the networking model.
You can read more about [how we achieve this](../admin/networking.html#how-to-achieve-this) if you're curious.
@ -73,7 +67,6 @@ A Kubernetes Service is an abstraction which defines a logical set of Pods runni
You can create a Service for your 2 nginx replicas with the following yaml:
```yaml
$ cat nginxsvc.yaml
apiVersion: v1
kind: Service
@ -89,23 +82,19 @@ spec:
app: nginx
```
This specification will create a Service which targets TCP port 80 on any Pod with the `app=nginx` label, and expose it on an abstracted Service port (`targetPort` is the port the container accepts traffic on; `port` is the abstracted Service port, which can be any port other pods use to access the Service). View the [service API object](http://kubernetes.io/v1.1/docs/api-reference/v1/definitions.html#_v1_service) to see the list of supported fields in the service definition.
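As a sketch of that distinction, the ports section of such a Service might read (values assumed):

```yaml
ports:
- port: 80        # abstracted Service port, used by other pods
  targetPort: 80  # port the nginx container accepts traffic on
  protocol: TCP
```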
Check your Service:
```shell
$ kubectl get svc
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kubernetes 10.179.240.1 <none> 443/TCP <none> 8d
nginxsvc 10.179.252.126 122.222.183.144 80/TCP,81/TCP,82/TCP run=nginx2 11m
```
As mentioned previously, a Service is backed by a group of pods. These pods are exposed through `endpoints`. The Service's selector will be evaluated continuously and the results will be POSTed to an Endpoints object also named `nginxsvc`. When a pod dies, it is automatically removed from the endpoints, and new pods matching the Service's selector will automatically get added to the endpoints. Check the endpoints, and note that the IPs are the same as the pods created in the first step:
```shell
$ kubectl describe svc nginxsvc
Name: nginxsvc
Namespace: default
@ -123,7 +112,6 @@ NAME ENDPOINTS
nginxsvc 10.245.0.14:80,10.245.0.15:80
```
You should now be able to curl the nginx Service on `10.0.116.146:80` from any node in your cluster. Note that the Service IP is completely virtual; it never hits the wire. If you're curious about how this works, you can read more about the [service proxy](services.html#virtual-ips-and-service-proxies).
## Accessing the Service
@ -135,17 +123,14 @@ Kubernetes supports 2 primary modes of finding a Service - environment variables
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. This introduces an ordering problem. To see why, inspect the environment of your running nginx pods:
```shell
$ kubectl exec my-nginx-6isf4 -- printenv | grep SERVICE
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBERNETES_SERVICE_PORT=443
```
Note there's no mention of your Service. This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both pods on the same machine, which will take your entire Service down if that machine dies. We can do this the right way by killing the 2 pods and waiting for the replication controller to recreate them. This time around the Service exists *before* the replicas. This will give you scheduler-level Service spreading of your pods (provided all your nodes have equal capacity), as well as the right environment variables:
```shell
$ kubectl scale rc my-nginx --replicas=0; kubectl scale rc my-nginx --replicas=2;
$ kubectl get pods -l app=nginx -o wide
NAME READY STATUS RESTARTS AGE NODE
@ -159,23 +144,19 @@ KUBERNETES_SERVICE_HOST=10.0.0.1
NGINXSVC_SERVICE_PORT=80
```
### DNS
Kubernetes offers a DNS cluster addon Service that uses skydns to automatically assign dns names to other Services. You can check if it's running on your cluster:
```shell
$ kubectl get services kube-dns --namespace=kube-system
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kube-dns 10.179.240.10 <none> 53/UDP,53/TCP k8s-app=kube-dns 8d
```
If it isn't running, you can [enable it](http://releases.k8s.io/release-1.1/cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long lived IP (nginxsvc), and a dns server that has assigned a name to that IP (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let's create another pod to test this:
```yaml
$ cat curlpod.yaml
apiVersion: v1
kind: Pod
@ -192,11 +173,9 @@ spec:
restartPolicy: Always
```
And perform a lookup of the nginx Service
```shell
$ kubectl create -f ./curlpod.yaml
default/curlpod
$ kubectl get pods curlpod
@ -210,7 +189,6 @@ Name: nginxsvc
Address 1: 10.0.116.146
```
## Securing the Service
Till now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need:
@ -218,10 +196,9 @@ Till now we have only accessed the nginx server from within the cluster. Before
* An nginx server configured to use the certificates
* A [secret](secrets) that makes the certificates accessible to pods
You can acquire all these from the [nginx https example](https://github.com/kubernetes/kubernetes/tree/master/examples/https-nginx/README), in short:
```shell
$ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json
$ kubectl create -f /tmp/secret.json
secrets/nginxsecret
@ -231,11 +208,9 @@ default-token-il9rc kubernetes.io/service-account-token 1
nginxsecret Opaque 2
```
Now modify your nginx replicas to start a https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):
```yaml
$ cat nginx-app.yaml
apiVersion: v1
kind: Service
@ -282,14 +257,12 @@ spec:
name: secret-volume
```
Noteworthy points about the nginx-app manifest:
- It contains both rc and service specification in the same file
- The [nginx server](https://github.com/kubernetes/kubernetes/tree/master/examples/https-nginx/default.conf) serves http traffic on port 80 and https traffic on 443, and nginx Service exposes both ports.
- Each container has access to the keys through a volume mounted at /etc/nginx/ssl. This is setup *before* the nginx server is started.
```shell
$ kubectl delete rc,svc -l app=nginx; kubectl create -f ./nginx-app.yaml
replicationcontrollers/my-nginx
services/nginxsvc
@ -297,11 +270,9 @@ services/nginxsvc
replicationcontrollers/my-nginx
```
At this point you can reach the nginx server from any node.
```shell
$ kubectl get pods -o json | grep -i podip
"podIP": "10.1.0.80",
node $ curl -k https://10.1.0.80
@ -309,13 +280,11 @@ node $ curl -k https://10.1.0.80
<h1>Welcome to nginx!</h1>
```
Note how we supplied the `-k` parameter to curl in the last step; this is because we don't know anything about the pods running nginx at certificate generation time,
so we have to tell curl to ignore the CName mismatch. By creating a Service we linked the CName used in the certificate with the actual DNS name used by pods during Service lookup.
Let's test this from a pod (the same secret is being reused for simplicity; the pod only needs nginx.crt to access the Service):
```shell
$ cat curlpod.yaml
apiVersion: v1
kind: ReplicationController
@ -355,13 +324,11 @@ $ kubectl exec curlpod -- curl https://nginxsvc --cacert /etc/nginx/ssl/nginx.cr
...
```
## Exposing the Service
For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the last section already used `NodePort`, so your nginx https replica is ready to serve traffic on the internet if your node has a public IP.
```shell
$ kubectl get svc nginxsvc -o json | grep -i nodeport -C 5
{
"name": "http",
@ -394,11 +361,9 @@ $ curl https://104.197.63.17:30645 -k
<h1>Welcome to nginx!</h1>
```
Let's now recreate the Service to use a cloud load balancer; just change the `Type` of the Service in nginx-app.yaml from `NodePort` to `LoadBalancer`:
```shell
$ kubectl delete rc,svc -l app=nginx
$ kubectl create -f ./nginx-app.yaml
$ kubectl get svc nginxsvc
@ -410,7 +375,6 @@ $ curl https://162.22.184.144 -k
<title>Welcome to nginx!</title>
```
The IP address in the `EXTERNAL_IP` column is the one that is available on the public internet. The `CLUSTER_IP` is only available inside your
cluster/private cloud network.
@ -5,44 +5,37 @@ kubectl port-forward forwards connections to a local port to a port on a pod. It
## Creating a Redis master
```shell
$ kubectl create -f examples/redis/redis-master.yaml
pods/redis-master
```
Wait until the Redis master pod is Running and Ready:
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-master 2/2 Running 0 41s
```
## Connecting to the Redis master
The Redis master is listening on port 6379. To verify this:
```shell
$ kubectl get pods redis-master -t='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
6379
```
Then we forward port 6379 on the local workstation to port 6379 of the redis-master pod:
```shell
$ kubectl port-forward redis-master 6379:6379
I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:6379 -> 6379
I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:6379 -> 6379
```
To verify the connection is successful, we run `redis-cli` on the local workstation:
```shell
$ redis-cli
127.0.0.1:6379> ping
PONG
```
Now one can debug the database from the local workstation.

View File

@ -8,11 +8,10 @@ You have seen the [basics](accessing-the-cluster) about `kubectl proxy` and `api
kube-ui is deployed as a cluster add-on. To find its apiserver proxy URL,
```shell
$ kubectl cluster-info | grep "KubeUI"
KubeUI is running at https://173.255.119.104/api/v1/proxy/namespaces/kube-system/services/kube-ui
```
If this command does not find the URL, try the steps [here](ui.html#accessing-the-ui).
@ -20,9 +19,8 @@ if this command does not find the URL, try the steps [here](ui.html#accessing-th
The above proxy URL provides access to the kube-ui service via the apiserver. To access it, you still need to authenticate to the apiserver. `kubectl proxy` can handle the authentication.
```shell
$ kubectl proxy --port=8001
Starting to serve on localhost:8001
```
Now you can access the kube-ui service on your local workstation at [http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui](http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui)

View File

@ -34,12 +34,10 @@ Currently the list of all services that are running at the time when the contain
For a service named **foo** that maps to a container port named **bar**, the following variables are defined:
```shell
FOO_SERVICE_HOST=<the host the service is running on>
FOO_SERVICE_PORT=<the port the service is running on>
```
Services have dedicated IP addresses, and are also surfaced to the container via DNS (if the [DNS addon](http://releases.k8s.io/release-1.1/cluster/addons/dns/) is enabled). Of course, DNS is still not an enumerable protocol, so we will continue to provide environment variables so that containers can do discovery.
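As a quick sketch of the naming convention above (the host and port values here are made up for illustration), a container script for a service named `redis-master` could assemble the service address from its environment variables:

```shell
# Illustrative only: the kubelet would normally inject these variables
# for a service named "redis-master" (dashes become underscores).
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379

# A container entrypoint can assemble the address without knowing it
# ahead of time:
redis_addr="${REDIS_MASTER_SERVICE_HOST}:${REDIS_MASTER_SERVICE_PORT}"
echo "$redis_addr"
```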
## Container Hooks

View File

@ -19,30 +19,24 @@ clear what is expected, this document will use the following conventions.
If the command "COMMAND" is expected to run in a `Pod` and produce "OUTPUT":
```shell
u@pod$ COMMAND
OUTPUT
```
If the command "COMMAND" is expected to run on a `Node` and produce "OUTPUT":
```shell
u@node$ COMMAND
OUTPUT
```
If the command is "kubectl ARGS":
```shell
$ kubectl ARGS
OUTPUT
```
## Running commands in a Pod
For many steps here you will want to see what a `Pod` running in the cluster
@ -50,7 +44,6 @@ sees. Kubernetes does not directly support interactive `Pod`s (yet), but you ca
approximate it:
```shell
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
@ -67,25 +60,20 @@ EOF
pods/busybox-sleep
```
Now, when you need to run a command (even an interactive shell) in a `Pod`-like
context, use:
```shell
$ kubectl exec busybox-sleep -- <COMMAND>
```
or
```shell
$ kubectl exec -ti busybox-sleep sh
/ #
```
## Setup
For the purposes of this walk-through, let's run some `Pod`s. Since you're
@ -93,7 +81,6 @@ probably debugging your own `Service` you can substitute your own details, or yo
can follow along and get a second data point.
```shell
$ kubectl run hostnames --image=gcr.io/google_containers/serve_hostname \
--labels=app=hostnames \
--port=9376 \
@ -102,12 +89,10 @@ CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR
hostnames hostnames gcr.io/google_containers/serve_hostname app=hostnames 3
```
Note that this is the same as if you had started the `ReplicationController` with
the following YAML:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
@ -129,11 +114,9 @@ spec:
protocol: TCP
```
Confirm your `Pod`s are running:
```shell
$ kubectl get pods -l app=hostnames
NAME READY STATUS RESTARTS AGE
hostnames-0uton 1/1 Running 0 12s
@ -141,7 +124,6 @@ hostnames-bvc05 1/1 Running 0 12s
hostnames-yp2kp 1/1 Running 0 12s
```
## Does the Service exist?
The astute reader will have noticed that we did not actually create a `Service`
@ -153,53 +135,42 @@ have another `Pod` that consumes this `Service` by name you would get something
like:
```shell
u@pod$ wget -qO- hostnames
wget: bad address 'hostname'
```
or:
```shell
u@pod$ echo $HOSTNAMES_SERVICE_HOST
```
So the first thing to check is whether that `Service` actually exists:
```shell
$ kubectl get svc hostnames
Error from server: service "hostnames" not found
```
So we have a culprit; let's create the `Service`. As before, this is for the
walk-through - you can use your own `Service`'s details here.
```shell
$ kubectl expose rc hostnames --port=80 --target-port=9376
service "hostnames" exposed
```
And read it back, just to be sure:
```shell
$ kubectl get svc hostnames
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
hostnames 10.0.0.1 <none> 80/TCP run=hostnames 1h
```
As before, this is the same as if you had started the `Service` with YAML:
```yaml
apiVersion: v1
kind: Service
metadata:
@ -214,7 +185,6 @@ spec:
targetPort: 9376
```
Now you can confirm that the `Service` exists.
## Does the Service work by DNS?
@ -222,7 +192,6 @@ Now you can confirm that the `Service` exists.
From a `Pod` in the same `Namespace`:
```shell
u@pod$ nslookup hostnames
Server: 10.0.0.10
Address: 10.0.0.10#53
@ -231,12 +200,10 @@ Name: hostnames
Address: 10.0.1.175
```
If this fails, perhaps your `Pod` and `Service` are in different
`Namespace`s, try a namespace-qualified name:
```shell
u@pod$ nslookup hostnames.default
Server: 10.0.0.10
Address: 10.0.0.10#53
@ -245,12 +212,10 @@ Name: hostnames.default
Address: 10.0.1.175
```
If this works, you'll need to ensure that `Pod`s and `Service`s run in the same
`Namespace`. If this still fails, try a fully-qualified name:
```shell
u@pod$ nslookup hostnames.default.svc.cluster.local
Server: 10.0.0.10
Address: 10.0.0.10#53
@ -259,7 +224,6 @@ Name: hostnames.default.svc.cluster.local
Address: 10.0.1.175
```
Note the suffix here: "default.svc.cluster.local". The "default" is the
`Namespace` we're operating in. The "svc" denotes that this is a `Service`.
The "cluster.local" is your cluster domain.
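Putting those pieces together, the fully-qualified name is just `<service>.<namespace>.svc.<cluster-domain>`; a minimal sketch, assuming the default `cluster.local` domain:

```shell
service=hostnames
namespace=default
cluster_domain=cluster.local   # assumed default; yours may differ

# Assemble the fully-qualified service name used by DNS.
fqdn="${service}.${namespace}.svc.${cluster_domain}"
echo "$fqdn"
```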
@ -268,7 +232,6 @@ You can also try this from a `Node` in the cluster (note: 10.0.0.10 is my DNS
`Service`):
```shell
u@node$ nslookup hostnames.default.svc.cluster.local 10.0.0.10
Server: 10.0.0.10
Address: 10.0.0.10#53
@ -277,7 +240,6 @@ Name: hostnames.default.svc.cluster.local
Address: 10.0.1.175
```
If you are able to do a fully-qualified name lookup but not a relative one, you
need to check that your `kubelet` is running with the right flags.
The `--cluster-dns` flag needs to point to your DNS `Service`'s IP and the
@ -292,7 +254,6 @@ can take a step back and see what else is not working. The Kubernetes master
`Service` should always work:
```shell
u@pod$ nslookup kubernetes.default
Server: 10.0.0.10
Address 1: 10.0.0.10
@ -301,7 +262,6 @@ Name: kubernetes
Address 1: 10.0.0.1
```
If this fails, you might need to go to the kube-proxy section of this doc, or
even go back to the top of this document and start over, but instead of
debugging your own `Service`, debug DNS.
@ -312,7 +272,6 @@ The next thing to test is whether your `Service` works at all. From a
`Node` in your cluster, access the `Service`'s IP (from `kubectl get` above).
```shell
u@node$ curl 10.0.1.175:80
hostnames-0uton
@ -323,7 +282,6 @@ u@node$ curl 10.0.1.175:80
hostnames-bvc05
```
If your `Service` is working, you should get correct responses. If not, there
are a number of things that could be going wrong. Read on.
@ -334,7 +292,6 @@ It might sound silly, but you should really double and triple check that your
verify it:
```shell
$ kubectl get service hostnames -o json
{
"kind": "Service",
@ -373,7 +330,6 @@ $ kubectl get service hostnames -o json
}
```
Is the port you are trying to access in `spec.ports[]`? Is the `targetPort`
correct for your `Pod`s? If you meant it to be a numeric port, is it a number
(9376) or a string "9376"? If you meant it to be a named port, do your `Pod`s
@ -389,7 +345,6 @@ actually being selected by the `Service`.
Earlier we saw that the `Pod`s were running. We can re-check that:
```shell
$ kubectl get pods -l app=hostnames
NAME READY STATUS RESTARTS AGE
hostnames-0uton 1/1 Running 0 1h
@ -397,7 +352,6 @@ hostnames-bvc05 1/1 Running 0 1h
hostnames-yp2kp 1/1 Running 0 1h
```
The "AGE" column says that these `Pod`s are about an hour old, which implies that
they are running fine and not crashing.
@ -406,13 +360,11 @@ has. Inside the Kubernetes system is a control loop which evaluates the
selector of every `Service` and saves the results into an `Endpoints` object.
```shell
$ kubectl get endpoints hostnames
NAME ENDPOINTS
hostnames 10.244.0.5:9376,10.244.0.6:9376,10.244.0.7:9376
```
This confirms that the control loop has found the correct `Pod`s for your
`Service`. If the `hostnames` row is blank, you should check that the
`spec.selector` field of your `Service` actually selects for `metadata.labels`
@ -425,7 +377,6 @@ Let's check that the `Pod`s are actually working - we can bypass the `Service`
mechanism and go straight to the `Pod`s.
```shell
u@pod$ wget -qO- 10.244.0.5:9376
hostnames-0uton
@ -436,7 +387,6 @@ u@pod$ wget -qO- 10.244.0.7:9376
hostnames-yp2kp
```
We expect each `Pod` in the `Endpoints` list to return its own hostname. If
this is not what happens (or whatever the correct behavior is for your own
`Pod`s), you should investigate what's happening there. You might find
@ -455,12 +405,10 @@ Confirm that `kube-proxy` is running on your `Node`s. You should get something
like the below:
```shell
u@node$ ps auxw | grep kube-proxy
root 4194 0.4 0.1 101864 17696 ? Sl Jul04 25:43 /usr/local/bin/kube-proxy --master=https://kubernetes-master --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2
```
Next, confirm that it is not failing something obvious, like contacting the
master. To do this, you'll have to look at the logs. Accessing the logs
depends on your `Node` OS. On some OSes it is a file, such as
@ -468,7 +416,6 @@ depends on your `Node` OS. On some OSes it is a file, such as
should see something like:
```shell
I0707 17:34:53.945651 30031 server.go:88] Running in resource-only container "/kube-proxy"
I0707 17:34:53.945921 30031 proxier.go:121] Setting proxy IP to 10.240.115.247 and initializing iptables
I0707 17:34:54.053023 30031 roundrobin.go:262] LoadBalancerRR: Setting endpoints for default/kubernetes: to [10.240.169.188:443]
@ -489,7 +436,6 @@ I0707 17:35:46.015868 30031 proxysocket.go:246] New UDP connection from 10.244
I0707 17:35:46.017061 30031 proxysocket.go:246] New UDP connection from 10.244.3.2:55471
```
If you see error messages about not being able to contact the master, you
should double-check your `Node` configuration and installation steps.
@ -500,13 +446,11 @@ rules which implement `Service`s. Let's check that those rules are getting
written.
```shell
u@node$ iptables-save | grep hostnames
-A KUBE-PORTALS-CONTAINER -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j REDIRECT --to-ports 48577
-A KUBE-PORTALS-HOST -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j DNAT --to-destination 10.240.115.247:48577
```
There should be 2 rules for each port on your `Service` (just one in this
example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST". If you do
not see these, try restarting `kube-proxy` with the `-V` flag set to 4, and
@ -517,32 +461,26 @@ then look at the logs again.
Assuming you do see the above rules, try again to access your `Service` by IP:
```shell
u@node$ curl 10.0.1.175:80
hostnames-0uton
```
If this fails, we can try accessing the proxy directly. Look back at the
`iptables-save` output above, and extract the port number that `kube-proxy` is
using for your `Service`. In the above examples it is "48577". Now connect to
that:
```shell
u@node$ curl localhost:48577
hostnames-yp2kp
```
If this still fails, look at the `kube-proxy` logs for specific lines like:
```shell
Setting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376]
```
If you don't see those, try restarting `kube-proxy` with the `-V` flag set to 4, and
then look at the logs again.

View File

@ -14,7 +14,6 @@ A replication controller simply ensures that a specified number of pod "replicas
The replication controller created to run nginx by `kubectl run` in the [Quick start](quick-start) could be specified using YAML as follows:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
@ -33,7 +32,6 @@ spec:
- containerPort: 80
```
Some differences compared to specifying just a pod are that the `kind` is `ReplicationController`, the number of `replicas` desired is specified, and the pod specification is under the `template` field. The names of the pods don't need to be specified explicitly because they are generated from the name of the replication controller.
View the [replication controller API
object](http://kubernetes.io/v1.1/docs/api-reference/v1/definitions.html#_v1_replicationcontroller)
@ -42,12 +40,10 @@ to view the list of supported fields.
This replication controller can be created using `create`, just as with pods:
```shell
$ kubectl create -f ./nginx-rc.yaml
replicationcontrollers/my-nginx
```
Unlike in the case where you directly create pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure. For this reason, we recommend that you use a replication controller for a continuously running application even if your application requires only a single pod, in which case you can omit `replicas` and it will default to a single replica.
## Viewing replication controller status
@ -55,37 +51,31 @@ Unlike in the case where you directly create pods, a replication controller repl
You can view the replication controller you created using `get`:
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
my-nginx nginx nginx app=nginx 2
```
This tells you that your controller will ensure that you have two nginx replicas.
You can see those replicas using `get`, just as with pods you created directly:
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-065jq 1/1 Running 0 51s
my-nginx-buaiq 1/1 Running 0 51s
```
## Deleting replication controllers
When you want to kill your application, delete your replication controller, as in the [Quick start](quick-start):
```shell
$ kubectl delete rc my-nginx
replicationcontrollers/my-nginx
```
By default, this will also cause the pods managed by the replication controller to be deleted. If there are a large number of pods, this may take a while to complete. If you want to leave the pods running, specify `--cascade=false`.
If you try to delete the pods before deleting the replication controller, it will just replace them, as it is supposed to do.
@ -95,33 +85,27 @@ If you try to delete the pods before deleting the replication controller, it wil
Kubernetes uses user-defined key-value attributes called [*labels*](labels) to categorize and identify sets of resources, such as pods and replication controllers. The example above specified a single label in the pod template, with key `app` and value `nginx`. All pods created carry that label, which can be viewed using `-L`:
```shell
$ kubectl get pods -L app
NAME READY STATUS RESTARTS AGE APP
my-nginx-afv12 0/1 Running 0 3s nginx
my-nginx-lg99z 0/1 Running 0 3s nginx
```
The labels from the pod template are copied to the replication controller's labels by default, as well -- all resources in Kubernetes support labels:
```shell
$ kubectl get rc my-nginx -L app
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS APP
my-nginx nginx nginx app=nginx 2 nginx
```
More importantly, the pod template's labels are used to create a [`selector`](labels.html#label-selectors) that will match pods carrying those labels. You can see this field by requesting it using the [Go template output format of `kubectl get`](kubectl/kubectl_get):
```shell
$ kubectl get rc my-nginx -o template --template="{{.spec.selector}}"
map[app:nginx]
```
You could also specify the `selector` explicitly, such as if you wanted to specify labels in the pod template that you didn't want to select on, but you should ensure that the selector will match the labels of the pods created from the pod template, and that it won't match pods created by other replication controllers. The most straightforward way to ensure the latter is to create a unique label value for the replication controller, and to specify it in both the pod template's labels and in the selector.
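The matching rule itself is simple: a pod is selected when every key/value pair in the selector appears among the pod's labels. A toy sketch of that check in shell (not the real controller logic; the label values are invented):

```shell
# One selector entry and a pod's label set, as comma-separated pairs.
selector="app=nginx"
pod_labels="app=nginx,env=prod"

# The pod matches if the selector pair appears among its labels.
case ",${pod_labels}," in
  *",${selector},"*) result=match ;;
  *) result=no-match ;;
esac
echo "$result"
```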
## What's next?

View File

@ -37,7 +37,6 @@ bring up 3 nginx pods.
<!-- BEGIN MUNGE: EXAMPLE nginx-deployment.yaml -->
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
@ -56,56 +55,48 @@ spec:
- containerPort: 80
```
[Download example](nginx-deployment.yaml)
<!-- END MUNGE: EXAMPLE nginx-deployment.yaml -->
Run the example by downloading the example file and then running this command:
```shell
$ kubectl create -f docs/user-guide/nginx-deployment.yaml
deployment "nginx-deployment" created
```
Running a get immediately will give:
```shell
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 0/3 8s
```
This indicates that the deployment is trying to update 3 replicas, but has not
updated any of them yet.
Running a get again after a minute will give:
```shell
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 3/3 1m
```
This indicates that the deployment has created all 3 replicas.
Running ```kubectl get rc``` and ```kubectl get pods``` will show the replication controller (RC) and pods created.
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
deploymentrc-1975012602 nginx nginx:1.7.9 deployment.kubernetes.io/podTemplateHash=1975012602,app=nginx 3 2m
```
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
deploymentrc-1975012602-4f2tb 1/1 Running 0 1m
@ -113,7 +104,6 @@ deploymentrc-1975012602-j975u 1/1 Running 0 1m
deploymentrc-1975012602-uashb 1/1 Running 0 1m
```
The created RC will ensure that there are 3 nginx pods at all time.
## Updating a Deployment
@ -125,7 +115,6 @@ For this, we update our deployment to be as follows:
<!-- BEGIN MUNGE: EXAMPLE new-nginx-deployment.yaml -->
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
@ -144,68 +133,57 @@ spec:
- containerPort: 80
```
[Download example](new-nginx-deployment.yaml)
<!-- END MUNGE: EXAMPLE new-nginx-deployment.yaml -->
```shell
$ kubectl apply -f docs/user-guide/new-nginx-deployment.yaml
deployment "nginx-deployment" configured
```
Running a get immediately will still give:
```shell
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 3/3 8s
```
This indicates that deployment status has not been updated yet (it is still
showing old status).
Running a get again after a minute will give:
```shell
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 1/3 1m
```
This indicates that the deployment has updated one of the three pods that it needs
to update.
Eventually, it will get around to updating all the pods.
```shell
$ kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 3/3 3m
```
We can run ```kubectl get rc``` to see that the deployment updated the pods by creating a new RC,
which it scaled up to 3, and scaled down the old RC to 0.
```shell
kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
deploymentrc-1562004724 nginx nginx:1.9.1 deployment.kubernetes.io/podTemplateHash=1562004724,app=nginx 3 5m
deploymentrc-1975012602 nginx nginx:1.7.9 deployment.kubernetes.io/podTemplateHash=1975012602,app=nginx 0 7m
```
Running ```kubectl get pods``` will show only the new pods.
```shell
kubectl get pods
NAME READY STATUS RESTARTS AGE
deploymentrc-1562004724-0tgk5 1/1 Running 0 9m
@ -213,7 +191,6 @@ deploymentrc-1562004724-1rkfl 1/1 Running 0 8m
deploymentrc-1562004724-6v702 1/1 Running 0 8m
```
Next time we want to update pods, we can just update the deployment again.
Deployment ensures that not all pods are down while they are being updated. By
@ -223,7 +200,6 @@ it first created a new pod, then deleted some old pods and created new ones. It
does not kill old pods until a sufficient number of new pods have come up.
```shell
$ kubectl describe deployments
Name: nginx-deployment
Namespace: default
@ -245,7 +221,6 @@ Events:
1m 1m 1 {deployment-controller } ScalingRC Scaled down rc deploymentrc-1975012602 to 0
```
Here we see that when we first created the deployment, it created an RC and scaled it up to 3 replicas directly.
When we updated the deployment, it created a new RC and scaled it up to 1 and then scaled down the old RC by 1, so that at least 2 pods were available at all times.
It then scaled up the new RC to 3 and when those pods were ready, it scaled down the old RC to 0.

View File

@ -12,7 +12,6 @@ How do I run an nginx container and expose it to the world? Checkout [kubectl ru
With docker:
```shell
$ docker run -d --restart=always -e DOMAIN=cluster --name nginx-app -p 80:80 nginx
a9ec34d9878748d2f33dc20cb25c714ff21da8d40558b45bfaec9955859075d0
$ docker ps
@ -20,11 +19,9 @@ CONTAINER ID IMAGE COMMAND CREATED
a9ec34d98787 nginx "nginx -g 'daemon of 2 seconds ago Up 2 seconds 0.0.0.0:80->80/tcp, 443/tcp nginx-app
```
With kubectl:
```shell
# start the pod running nginx
$ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
replicationcontroller "nginx-app" created
@ -32,17 +29,14 @@ replicationcontroller "nginx-app" created
$ kubectl expose rc nginx-app --port=80 --name=nginx-http
```
With kubectl, we create a [replication controller](replication-controller) which will make sure that N pods are running nginx (where N is the number of replicas stated in the spec, which defaults to 1). We also create a [service](services) with a selector that matches the replication controller's selector. See the [Quick start](quick-start) for more information.
By default images are run in the background, similar to `docker run -d ...`; if you want to run things in the foreground, use:
```shell
kubectl run [-i] [--tty] --attach <name> --image=<image>
```
Unlike `docker run ...`, if `--attach` is specified, we attach to `stdin`, `stdout` and `stderr`; there is no ability to control which streams are attached (`docker -a ...`).
Because we start a replication controller for your container, it will be restarted if you terminate the attached process (e.g. `ctrl-c`); this is different from `docker run -it`.
@ -55,23 +49,19 @@ How do I list what is currently running? Checkout [kubectl get](kubectl/kubectl_
With docker:
```shell
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of About an hour ago Up About an hour 0.0.0.0:80->80/tcp, 443/tcp nginx-app
```
With kubectl:
```shell
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-app-5jyvm 1/1 Running 0 1h
```
#### docker attach
How do I attach to a process that is already running in a container? Checkout [kubectl attach](kubectl/kubectl_attach)
@ -79,7 +69,6 @@ How do I attach to a process that is already running in a container? Checkout [
With docker:
```shell
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of 8 minutes ago Up 8 minutes 0.0.0.0:80->80/tcp, 443/tcp nginx-app
@ -87,11 +76,9 @@ $ docker attach -it a9ec34d98787
...
```
With kubectl:
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-app-5jyvm 1/1 Running 0 10m
@ -100,7 +87,6 @@ $ kubectl attach -it nginx-app-5jyvm
```
#### docker exec
How do I execute a command in a container? Checkout [kubectl exec](kubectl/kubectl_exec).
@ -108,8 +94,6 @@ How do I execute a command in a container? Checkout [kubectl exec](kubectl/kubec
With docker:
```shell
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of 8 minutes ago Up 8 minutes 0.0.0.0:80->80/tcp, 443/tcp nginx-app
@ -118,12 +102,9 @@ a9ec34d98787
```
With kubectl:
```shell
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-app-5jyvm 1/1 Running 0 10m
@ -132,34 +113,27 @@ nginx-app-5jyvm
```
What about interactive commands?
With docker:
```shell
$ docker exec -ti a9ec34d98787 /bin/sh
# exit
```
With kubectl:
```shell
$ kubectl exec -ti nginx-app-5jyvm -- /bin/sh
# exit
```
For more information see [Getting into containers](getting-into-containers).
#### docker logs
@ -170,39 +144,30 @@ How do I follow stdout/stderr of a running process? Checkout [kubectl logs](kube
With docker:
```shell
$ docker logs -f a9e
192.168.9.1 - - [14/Jul/2015:01:04:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-"
192.168.9.1 - - [14/Jul/2015:01:04:03 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-"
```
With kubectl:
```shell
$ kubectl logs -f nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
```
Now's a good time to mention a slight difference between pods and containers; by default pods will not terminate if their processes exit. Instead, the pod will restart the process. This is similar to the docker run option `--restart=always`, with one major difference. In docker, the output for each invocation of the process is concatenated, but for Kubernetes, each invocation is separate. To see the output from a previous run in Kubernetes, do this:
```shell
$ kubectl logs --previous nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
```
See [Logging](logging) for more information.
#### docker stop and docker rm
@ -212,8 +177,6 @@ How do I stop and delete a running process? Checkout [kubectl delete](kubectl/ku
With docker
```shell
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9ec34d98787 nginx "nginx -g 'daemon of 22 hours ago Up 22 hours 0.0.0.0:80->80/tcp, 443/tcp nginx-app
@ -224,12 +187,9 @@ a9ec34d98787
```
With kubectl:
```shell
$ kubectl get rc nginx-app
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
nginx-app nginx-app nginx run=nginx-app 1
@ -244,7 +204,6 @@ NAME READY STATUS RESTARTS AGE
```
Notice that we don't delete the pod directly. With kubectl we want to delete the replication controller that owns the pod. If we delete the pod directly, the replication controller will recreate the pod.
#### docker login
@ -258,8 +217,6 @@ How do I get the version of my client and server? Checkout [kubectl version](kub
With docker:
```shell
$ docker version
Client version: 1.7.0
Client API version: 1.19
@ -274,19 +231,15 @@ OS/Arch (server): linux/amd64
```
With kubectl:
```shell
$ kubectl version
Client Version: version.Info{Major:"0", Minor:"20.1", GitVersion:"v0.20.1", GitCommit:"", GitTreeState:"not a git tree"}
Server Version: version.Info{Major:"0", Minor:"21+", GitVersion:"v0.21.1-411-g32699e873ae1ca-dirty", GitCommit:"32699e873ae1caa01812e41de7eab28df4358ee4", GitTreeState:"dirty"}
```
#### docker info
How do I get miscellaneous info about my environment and configuration? Checkout [kubectl cluster-info](kubectl/kubectl_cluster-info).
@ -294,8 +247,6 @@ How do I get miscellaneous info about my environment and configuration? Checkout
With docker:
```shell
$ docker info
Containers: 40
Images: 168
@ -316,12 +267,9 @@ WARNING: No swap limit support
```
With kubectl:
```shell
$ kubectl cluster-info
Kubernetes master is running at https://108.59.85.141
KubeDNS is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system/services/kube-dns
@ -332,6 +280,3 @@ InfluxDB is running at https://108.59.85.141/api/v1/proxy/namespaces/kube-system
```

View File

@ -51,7 +51,6 @@ downward API:
<!-- BEGIN MUNGE: EXAMPLE downward-api/dapi-pod.yaml -->
```yaml
apiVersion: v1
kind: Pod
metadata:
@ -77,7 +76,6 @@ spec:
restartPolicy: Never
```
[Download example](downward-api/dapi-pod.yaml)
<!-- END MUNGE: EXAMPLE downward-api/dapi-pod.yaml -->
@ -92,12 +90,10 @@ volume type and the different items represent the files to be created. `fieldPat
The downward API volume permits storing more complex data such as [`metadata.labels`](labels) and [`metadata.annotations`](annotations). Currently, key/value-pair fields are saved using the `key="value"` format:
```
key1="value1"
key2="value2"
```
In the future, it will be possible to specify an output format option.
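Because the files use plain `key="value"` lines, a container can read a single field back with standard shell tools. A minimal sketch (the file path and keys here are assumed for illustration):

```shell
# Write a sample file in the same key="value" format the volume uses.
cat > /tmp/labels <<'EOF'
key1="value1"
key2="value2"
EOF

# Extract the value for key2, stripping the surrounding quotes.
value=$(sed -n 's/^key2="\(.*\)"$/\1/p' /tmp/labels)
echo "$value"
```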
Downward API volumes can expose:
@ -118,7 +114,6 @@ This is an example of a pod that consumes its labels and annotations via the dow
<!-- BEGIN MUNGE: EXAMPLE downward-api/volume/dapi-volume.yaml -->
```yaml
apiVersion: v1
kind: Pod
metadata:
@ -151,7 +146,6 @@ spec:
fieldPath: metadata.annotations
```
[Download example](downward-api/volume/dapi-volume.yaml)
<!-- END MUNGE: EXAMPLE downward-api/volume/dapi-volume.yaml -->

View File

@ -19,24 +19,18 @@ Use the [`examples/downward-api/dapi-pod.yaml`](dapi-pod.yaml) file to create a
downward API.
```shell
$ kubectl create -f docs/user-guide/downward-api/dapi-pod.yaml
```
### Examine the logs
This pod runs the `env` command in a container that consumes the downward API. You can grep
through the pod logs to see that the pod was injected with the correct values:
```shell
$ kubectl logs dapi-test-pod | grep POD_
2015-04-30T20:22:18.568024817Z MY_POD_NAME=dapi-test-pod
2015-04-30T20:22:18.568087688Z MY_POD_NAMESPACE=default
2015-04-30T20:22:18.568092435Z MY_POD_IP=10.0.1.6
```

View File

@ -19,24 +19,18 @@ Use the [`examples/downward-api/dapi-pod.yaml`](dapi-pod.yaml) file to create a
downward API.
```shell
$ kubectl create -f docs/user-guide/downward-api/dapi-pod.yaml
```
### Examine the logs
This pod runs the `env` command in a container that consumes the downward API. You can grep
through the pod logs to see that the pod was injected with the correct values:
```shell
$ kubectl logs dapi-test-pod | grep POD_
2015-04-30T20:22:18.568024817Z MY_POD_NAME=dapi-test-pod
2015-04-30T20:22:18.568087688Z MY_POD_NAMESPACE=default
2015-04-30T20:22:18.568092435Z MY_POD_IP=10.0.1.6
```

View File

@ -13,24 +13,22 @@ Supported metadata fields:
### Step Zero: Prerequisites
This example assumes you have a Kubernetes cluster installed and running, and the ```kubectl``` command line tool somewhere in your path. Please see the [gettingstarted](..//{{page.version}}/docs/getting-started-guides/) for installation instructions for your platform.
### Step One: Create the pod
Use the `docs/user-guide/downward-api/dapi-volume.yaml` file to create a Pod with a downward API volume which stores pod labels and pod annotations to `/etc/labels` and `/etc/annotations` respectively.
```shell
$ kubectl create -f docs/user-guide/downward-api/volume/dapi-volume.yaml
```
### Step Two: Examine pod/container output
The pod displays (every 5 seconds) the content of the dump files, which can be viewed via the usual `kubectl logs` command
```shell
$ kubectl logs kubernetes-downwardapi-volume-example
cluster="test-cluster1"
rack="rack-22"
@ -41,13 +39,11 @@ kubernetes.io/config.seen="2015-08-24T13:47:23.432459138Z"
kubernetes.io/config.source="api"
```
### Internals
In the pod's `/etc` directory one may find the file created by the plugin (system files elided):
```shell
$ kubectl exec kubernetes-downwardapi-volume-example -i -t -- sh
/ # ls -laR /etc
/etc:
@ -68,5 +64,4 @@ drwxrwxrwt 3 0 0 180 Aug 24 13:03 ..
/ #
```
The file `labels` is stored in a temporary directory (`..2015_08_24_13_03_44259413923` in the example above) which is symlinked to by `..downwardapi`. Symlinks for annotations and labels in `/etc` point to files containing the actual metadata through the `..downwardapi` indirection. This structure allows for dynamic atomic refresh of the metadata: updates are written to a new temporary directory, and the `..downwardapi` symlink is updated atomically using `rename(2)`.
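The atomic-refresh pattern described above is easy to reproduce by hand. The sketch below is hypothetical (invented paths, plain shell on Linux, GNU `mv -T` issuing the underlying `rename(2)`), not the volume plugin's actual code:

```shell
# Work in a scratch directory (layout loosely mimicking the volume plugin).
cd "$(mktemp -d)"

# Initial state: data dir, indirection symlink, and the user-visible file.
mkdir ..data_v1
echo 'rack="rack-22"' > ..data_v1/labels
ln -s ..data_v1 ..downwardapi
ln -s ..downwardapi/labels labels

# Refresh: write the new metadata to a fresh dir, then atomically repoint
# the symlink with rename(2); readers never observe a partial update.
mkdir ..data_v2
echo 'rack="rack-23"' > ..data_v2/labels
ln -s ..data_v2 ..downwardapi.tmp
mv -T ..downwardapi.tmp ..downwardapi

cat labels   # prints: rack="rack-23"
```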


View File

@ -44,8 +44,7 @@ your service.
Run `curl <public ip>:80` to query the service. You should get
something like this back:
```
```
Pod Name: show-rc-xxu6i
Pod Namespace: default
USER_VAR: important information
@ -66,8 +65,7 @@ Backend Container
Backend Pod Name: backend-rc-6qiya
Backend Namespace: default
```
```
First the frontend pod's information is printed. The pod name and
[namespace](/{{page.version}}/docs/design/namespaces) are retrieved from the
[Downward API](/{{page.version}}/docs/user-guide/downward-api). Next, `USER_VAR` is the name of


View File

@ -10,29 +10,26 @@ Kubernetes exposes [services](services.html#environment-variables) through envir
We first create a pod and a service,
```shell
```shell
$ kubectl create -f examples/guestbook/redis-master-controller.yaml
$ kubectl create -f examples/guestbook/redis-master-service.yaml
```
```
wait until the pod is Running and Ready,
```shell
```shell
$ kubectl get pod
NAME READY REASON RESTARTS AGE
redis-master-ft9ex 1/1 Running 0 12s
```
```
then we can check the environment variables of the pod,
```shell
```shell
$ kubectl exec redis-master-ft9ex env
...
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_SERVICE_HOST=10.0.0.219
...
```
```
We can use these environment variables in applications to find the service.
@ -41,32 +38,28 @@ We can use these environment variables in applications to find the service.
It is convenient to use `kubectl exec` to check if the volumes are mounted as expected.
We first create a Pod with a volume mounted at /data/redis,
```shell
```shell
kubectl create -f docs/user-guide/walkthrough/pod-redis.yaml
```
```
wait until the pod is Running and Ready,
```shell
```shell
$ kubectl get pods
NAME READY REASON RESTARTS AGE
storage 1/1 Running 0 1m
```
```
we then use `kubectl exec` to verify that the volume is mounted at /data/redis,
```shell
```shell
$ kubectl exec storage ls /data
redis
```
```
## Using kubectl exec to open a bash terminal in a pod
After all, opening a terminal in a pod is the most direct way to introspect it. Assuming pod/storage is still running, run
```shell
```shell
$ kubectl exec -ti storage -- bash
root@storage:/data#
```
```
This gets you a terminal.

View File

@ -27,7 +27,6 @@ First, we will start a replication controller running the image and expose it as
<a name="kubectl-run"></a>
```shell
$ kubectl run php-apache --image=gcr.io/google_containers/hpa-example --requests=cpu=200m
replicationcontroller "php-apache" created
@ -35,11 +34,9 @@ $ kubectl expose rc php-apache --port=80 --type=LoadBalancer
service "php-apache" exposed
```
Now, we will wait some time and verify that both the replication controller and the service were correctly created and are running. We will also determine the IP address of the service:
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
php-apache-wa3t1 1/1 Running 0 12m
@ -48,21 +45,17 @@ $ kubectl describe services php-apache | grep "LoadBalancer Ingress"
LoadBalancer Ingress: 146.148.24.244
```
We may now check that php-apache server works correctly by calling ``curl`` with the service's IP:
```shell
$ curl http://146.148.24.244
OK!
```
Please notice that when exposing the service we assumed that our cluster runs on a provider which supports load balancers (e.g. on GCE).
If load balancers are not supported (e.g. on Vagrant), we can expose the php-apache service as ``ClusterIP`` and connect to it using the proxy on the master:
```shell
$ kubectl expose rc php-apache --port=80 --type=ClusterIP
service "php-apache" exposed
@ -73,15 +66,12 @@ $ curl -k -u <admin>:<password> https://146.148.6.215/api/v1/proxy/namespaces/de
OK!
```
## Step Two: Create horizontal pod autoscaler
Now that the server is running, we will create a horizontal pod autoscaler for it.
To create it, we will use the [hpa-php-apache.yaml](hpa-php-apache.yaml) file, which looks like this:
```yaml
apiVersion: extensions/v1beta1
kind: HorizontalPodAutoscaler
metadata:
@ -98,7 +88,6 @@ spec:
targetPercentage: 50
```
This defines a horizontal pod autoscaler that maintains between 1 and 10 replicas of the Pods
controlled by the php-apache replication controller we created in the first step of these instructions.
Roughly speaking, the horizontal autoscaler will increase and decrease the number of replicas
@ -109,32 +98,26 @@ See [here](/{{page.version}}/docs/design/horizontal-pod-autoscaler.html#autoscal
We will create the autoscaler by executing the following command:
```shell
$ kubectl create -f docs/user-guide/horizontal-pod-autoscaling/hpa-php-apache.yaml
horizontalpodautoscaler "php-apache" created
```
Alternatively, we can create the autoscaler using [kubectl autoscale](../kubectl/kubectl_autoscale).
The following command will create the equivalent autoscaler as defined in the [hpa-php-apache.yaml](hpa-php-apache.yaml) file:
```
$ kubectl autoscale rc php-apache --cpu-percent=50 --min=1 --max=10
replicationcontroller "php-apache" autoscaled
```
We may check the current status of the autoscaler by running:
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 0% 1 10 27s
```
Please note that the current CPU consumption is 0% as we are not sending any requests to the server
(the ``CURRENT`` column shows the average across all the pods controlled by the corresponding replication controller).
@ -144,44 +127,35 @@ Now, we will see how the autoscaler reacts on the increased load of the server.
We will start an infinite loop of queries to our server (please run it in a different terminal):
```shell
$ while true; do curl http://146.148.6.244; done
```
We may examine how CPU load has increased (the results should be visible after about 3-4 minutes) by executing:
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 305% 1 10 4m
```
In the case presented here, it bumped CPU consumption to 305% of the request.
As a result, the replication controller was resized to 7 replicas:
```shell
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 7 18m
```
Now, we may increase the load even more by running yet another infinite loop of queries (in yet another terminal):
```shell
$ while true; do curl http://146.148.6.244; done
```
In the case presented here, it increased the number of serving pods to 10:
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 65% 1 10 14m
@ -191,14 +165,12 @@ CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 10 24m
```
## Step Four: Stop load
We will finish our example by stopping the user load.
We will terminate both infinite ``while`` loops sending requests to the server and verify the result state:
```shell
$ kubectl get hpa
NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
php-apache ReplicationController/default/php-apache/ 50% 0% 1 10 21m
@ -208,7 +180,6 @@ CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR
php-apache php-apache gcr.io/google_containers/hpa-example run=php-apache 1 31m
```
As we see, in the presented case CPU utilization dropped to 0, and the number of replicas dropped to 1.
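The replica counts observed above follow the autoscaler's rule of thumb: the desired number of pods is the current consumption divided by the target, rounded up. A toy sketch of that arithmetic (not the controller's actual code; `desired_replicas` is a name invented here):

```shell
# desired = ceil(currentReplicas * currentUtilization / targetUtilization)
desired_replicas() {
  local current=$1 usage=$2 target=$3
  echo $(( (current * usage + target - 1) / target ))
}

desired_replicas 1 305 50   # prints 7  (cf. the resize to 7 replicas above)
desired_replicas 7 65 50    # prints 10
```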


View File

@ -74,7 +74,6 @@ example, run these on your desktop/laptop:
Verify by creating a pod that uses a private image, e.g.:
```yaml
$ cat <<EOF > /tmp/private-image-test-1.yaml
apiVersion: v1
kind: Pod
@ -92,26 +91,20 @@ pods/private-image-test-1
$
```
If everything is working, then, after a few moments, you should see:
```shell
$ kubectl logs private-image-test-1
SUCCESS
```
If it failed, then you will see:
```shell
$ kubectl describe pods/private-image-test-1 | grep "Failed"
Fri, 26 Jun 2015 15:36:13 -0700 Fri, 26 Jun 2015 15:39:13 -0700 19 {kubelet node-i2hq} spec.containers{uses-private-image} failed Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
```
You must ensure all nodes in the cluster have the same `.dockercfg`. Otherwise, pods will run on
some nodes and fail to run on others. For example, if you use node autoscaling, then each instance
template needs to include the `.dockercfg` or mount a drive that contains it.
@ -153,7 +146,6 @@ First, create a `.dockercfg`, such as running `docker login <registry.domain>`.
Then put the resulting `.dockercfg` file into a [secret resource](secrets). For example:
```shell
$ docker login
Username: janedoe
Password: ********
@ -182,7 +174,6 @@ secrets/myregistrykey
$
```
If you get the error message `error: no objects passed to create`, it may mean the base64 encoded string is invalid.
If you get an error message like `Secret "myregistrykey" is invalid: data[.dockercfg]: invalid value ...` it means
the data was successfully un-base64 encoded, but could not be parsed as a dockercfg file.
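One way to rule out the first error before creating the secret is to check that the payload survives a base64 round trip. A sketch with a stand-in `.dockercfg` payload (real ones come from `docker login`; assumes GNU coreutils `base64 -w0`):

```shell
# Hypothetical .dockercfg contents, stood in for the real file.
data='{"https://index.docker.io/v1/": {"auth": "ZmFrZXRva2Vu", "email": "jdoe@example.com"}}'

encoded="$(printf '%s' "$data" | base64 -w0)"

# If this prints the original JSON intact, the encoded string is valid base64.
printf '%s' "$encoded" | base64 -d
```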
@ -193,7 +184,6 @@ Now, you can create pods which reference that secret by adding an `imagePullSecr
section to a pod definition.
```yaml
apiVersion: v1
kind: Pod
metadata:
@ -206,7 +196,6 @@ spec:
- name: myregistrykey
```
This needs to be done for each pod that is using a private registry.
However, setting this field can be automated by configuring `imagePullSecrets`
in a [serviceAccount](service-accounts) resource.

View File

@ -18,26 +18,22 @@ Throughout this doc you will see a few terms that are sometimes used interchangeably:
Typically, services and pods have IPs only routable by the cluster network. All traffic that ends up at an edge router is either dropped or forwarded elsewhere. Conceptually, this might look like:
```
internet
internet
|
------------
[ Services ]
```
An Ingress is a collection of rules that allow inbound connections to reach the cluster services.
```
internet
internet
|
[ Ingress ]
--|-----|--
[ Services ]
```
It can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, offer name-based virtual hosting, etc. Users request ingress by POSTing the Ingress resource to the API server. An [Ingress controller](#ingress-controllers) is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic in an HA manner.
## Prerequisites
@ -53,7 +49,6 @@ Before you start using the Ingress resource, there are a few things you should u
A minimal Ingress might look like:
```yaml
01. apiVersion: extensions/v1beta1
02. kind: Ingress
03. metadata:
@ -68,7 +63,6 @@ A minimal Ingress might look like:
12. servicePort: 80
```
*POSTing this to the API server will have no effect if you have not configured an [Ingress controller](#ingress-controllers).*
__Lines 1-4__: As with all other Kubernetes config, an Ingress needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see [here](simple-yaml), [here](configuring-containers), and [here](working-with-resources).
@ -94,7 +88,6 @@ There are existing Kubernetes concepts that allow you to expose a single service
<!-- BEGIN MUNGE: EXAMPLE ingress.yaml -->
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
@ -105,20 +98,17 @@ spec:
servicePort: 80
```
[Download example](ingress.yaml)
<!-- END MUNGE: EXAMPLE ingress.yaml -->
If you create it using `kubectl -f` you should see:
```shell
$ kubectl get ing
NAME RULE BACKEND ADDRESS
test-ingress - testsvc:80 107.178.254.228
```
Where `107.178.254.228` is the IP allocated by the Ingress controller to satisfy this Ingress. The `RULE` column shows that all traffic sent to the IP is directed to the Kubernetes Service listed under `BACKEND`.
### Simple fanout
@ -126,16 +116,13 @@ Where `107.178.254.228` is the IP allocated by the Ingress controller to satisfy
As described previously, pods within Kubernetes have IPs only visible on the cluster network, so we need something at the edge accepting ingress traffic and proxying it to the right endpoints. This component is usually a highly available load balancer. An Ingress allows you to keep the number of load balancers down to a minimum, for example, a setup like:
```
foo.bar.com -> 178.91.123.132 -> / foo s1:80
/ bar s2:80
```
would require an Ingress such as:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
@ -155,11 +142,9 @@ spec:
servicePort: 80
```
When you create the Ingress with `kubectl create -f`:
```
$ kubectl get ing
NAME RULE BACKEND ADDRESS
test -
@ -168,7 +153,6 @@ test -
/bar s2:80
```
The Ingress controller will provision an implementation-specific load balancer that satisfies the Ingress, as long as the services (s1, s2) exist. When it has done so, you will see the address of the load balancer under the last column of the Ingress.
### Name based virtual hosting
@ -176,18 +160,14 @@ The Ingress controller will provision an implementation specific loadbalancer th
Name-based virtual hosts use multiple host names for the same IP address.
```
foo.bar.com --| |-> foo.bar.com s1:80
| 178.91.123.132 |
bar.foo.com --| |-> bar.foo.com s2:80
```
The following Ingress tells the backing loadbalancer to route requests based on the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4).
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
@ -208,8 +188,6 @@ spec:
servicePort: 80
```
__Default Backends__: An Ingress with no rules, like the one shown in the previous section, sends all traffic to a single default backend. You can use the same technique to tell a load balancer where to find your website's 404 page, by specifying a set of rules *and* a default backend. Traffic is routed to your default backend if none of the Hosts in your Ingress match the Host in the request header, and/or none of the paths match the URL of the request.
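That fallback behaviour is essentially a first-match lookup. A toy sketch in plain shell (invented service names; the real matching is done by the Ingress controller, not your shell):

```shell
# Route a request's Host header to a backend, falling through to the default.
route() {
  case "$1" in
    foo.bar.com) echo "s1:80" ;;
    bar.foo.com) echo "s2:80" ;;
    *)           echo "default-backend:80" ;;   # no rule matched
  esac
}

route foo.bar.com       # prints: s1:80
route unknown.example   # prints: default-backend:80
```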
### Loadbalancing
@ -223,7 +201,6 @@ It's also worth noting that even though health checks are not exposed directly t
Say you'd like to add a new Host to an existing Ingress; you can update it by editing the resource:
```shell
$ kubectl get ing
NAME RULE BACKEND ADDRESS
test - 178.91.123.132
@ -232,11 +209,9 @@ test - 178.91.123.132
$ kubectl edit ing test
```
This should pop up an editor with the existing yaml. Modify it to include the new Host:
```yaml
spec:
rules:
- host: foo.bar.com
@ -256,11 +231,9 @@ spec:
..
```
Saving it will update the resource in the API server, which should tell the Ingress controller to reconfigure the load balancer.
```shell
$ kubectl get ing
NAME RULE BACKEND ADDRESS
test - 178.91.123.132
@ -270,7 +243,6 @@ test - 178.91.123.132
/foo s2:80
```
You can achieve the same by invoking `kubectl replace -f` on a modified Ingress yaml file.
## Future Work

View File

@ -12,7 +12,6 @@ your pods. But there are a number of ways to get even more information about you
For this example we'll use a ReplicationController to create two pods, similar to the earlier example.
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
@ -35,27 +34,21 @@ spec:
- containerPort: 80
```
```shell
$ kubectl create -f ./my-nginx-rc.yaml
replicationcontrollers/my-nginx
```
```shell
$ kubectl get pods
NAME READY REASON RESTARTS AGE
my-nginx-gy1ij 1/1 Running 0 1m
my-nginx-yv5cn 1/1 Running 0 1m
```
We can retrieve a lot more information about each of these pods using `kubectl describe pod`. For example:
```shell
$ kubectl describe pod my-nginx-gy1ij
Name: my-nginx-gy1ij
Image(s): nginx
@ -90,7 +83,6 @@ Events:
Thu, 09 Jul 2015 15:33:07 -0700 Thu, 09 Jul 2015 15:33:07 -0700 1 {kubelet kubernetes-minion-y3vk} spec.containers{nginx} started Started with docker id 56d7a7b14dac
```
Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart count, events, etc.)
The container state is one of Waiting, Running, or Terminated. Depending on the state, additional information will be provided -- here you can see that for a container in Running state, the system tells you when the container started.
@ -108,7 +100,6 @@ Lastly, you see a log of recent events related to your Pod. The system compresse
A common scenario that you can detect using events is when you've created a Pod that won't fit on any node. For example, the Pod might request more resources than are free on any node, or it might specify a label selector that doesn't match any nodes. Let's say we created the previous Replication Controller with 5 replicas (instead of 2) and requesting 600 millicores instead of 500, on a four-node cluster where each (virtual) machine has 1 CPU. In that case one of the Pods will not be able to schedule. (Note that because of the cluster addon pods such as fluentd, skydns, etc., that run on each node, if we requested 1000 millicores then none of the Pods would be able to schedule.)
```shell
$ kubectl get pods
NAME READY REASON RESTARTS AGE
my-nginx-9unp9 0/1 Pending 0 8s
@ -118,11 +109,9 @@ my-nginx-iichp 0/1 Running 0 8s
my-nginx-tc2j9 0/1 Running 0 8s
```
To find out why the my-nginx-9unp9 pod is not running, we can use `kubectl describe pod` on the pending Pod and look at its events:
```shell
$ kubectl describe pod my-nginx-9unp9
Name: my-nginx-9unp9
Image(s): nginx
@ -147,7 +136,6 @@ Events:
Thu, 09 Jul 2015 23:56:21 -0700 Fri, 10 Jul 2015 00:01:30 -0700 21 {scheduler } failedScheduling Failed for reason PodFitsResources and possibly others
```
Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason `PodFitsResources` (and possibly others). `PodFitsResources` means there were not enough resources for the Pod on any of the nodes. Due to the way the event is generated, there may be other reasons as well, hence "and possibly others."
To correct this situation, you can use `kubectl scale` to update your Replication Controller to specify four or fewer replicas. (Or you could just leave the one Pod pending, which is harmless.)
@ -155,25 +143,20 @@ To correct this situation, you can use `kubectl scale` to update your Replicatio
Events such as the ones you saw at the end of `kubectl describe pod` are persisted in etcd and provide high-level information on what is happening in the cluster. To list all events you can use
```
kubectl get events
```
but you have to remember that events are namespaced. This means that if you're interested in events for some namespaced object (e.g. what happened with Pods in namespace `my-namespace`) you need to explicitly provide a namespace to the command:
```
kubectl get events --namespace=my-namespace
```
To see events from all namespaces, you can use the `--all-namespaces` argument.
In addition to `kubectl describe pod`, another way to get extra information about a pod (beyond what is provided by `kubectl get pod`) is to pass the `-o yaml` output format flag to `kubectl get pod`. This will give you, in YAML format, even more information than `kubectl describe pod`--essentially all of the information the system has about the Pod. Here you will see things like annotations (key-value metadata without the label restrictions, used internally by Kubernetes system components), restart policy, ports, and volumes.
```yaml
$ kubectl get pod my-nginx-i595c -o yaml
apiVersion: v1
kind: Pod
@ -235,13 +218,11 @@ status:
startTime: 2015-07-10T06:56:21Z
```
## Example: debugging a down/unreachable node
Sometimes when debugging it can be useful to look at the status of a node -- for example, because you've noticed strange behavior of a Pod that's running on the node, or to find out why a Pod won't schedule onto the node. As with Pods, you can use `kubectl describe node` and `kubectl get node -o yaml` to retrieve detailed information about nodes. For example, here's what you'll see if a node is down (disconnected from the network, or kubelet dies and won't restart, etc.). Notice the events that show the node is NotReady, and also notice that the pods are no longer running (they are evicted after five minutes of NotReady status).
```shell
$ kubectl get nodes
NAME LABELS STATUS
kubernetes-minion-861h kubernetes.io/hostname=kubernetes-minion-861h NotReady
@ -322,7 +303,6 @@ status:
systemUUID: ABE5F6B4-D44B-108B-C46A-24CCE16C8B6E
```
## What's next?
Learn about additional debugging tools, including:

View File

@ -20,7 +20,6 @@ It takes around 10s to complete.
<!-- BEGIN MUNGE: EXAMPLE job.yaml -->
```yaml
apiVersion: extensions/v1beta1
kind: Job
metadata:
@ -42,23 +41,19 @@ spec:
restartPolicy: Never
```
[Download example](job.yaml)
<!-- END MUNGE: EXAMPLE job.yaml -->
Run the example job by downloading the example file and then running this command:
```shell
$ kubectl create -f ./job.yaml
jobs/pi
```
Check on the status of the job using this command:
```shell
$ kubectl describe jobs/pi
Name: pi
Namespace: default
@ -75,31 +70,26 @@ Events:
```
To view completed pods of a job, use `kubectl get pods --show-all` (the `--show-all` flag includes completed pods).
To list all the pods that belong to a job in a machine-readable form, you can use a command like this:
```shell
$ pods=$(kubectl get pods --selector=app=pi --output=jsonpath={.items..metadata.name})
echo $pods
pi-aiw0a
```
Here, the selector is the same as the selector for the job. The `--output=jsonpath` option specifies an expression
that just gets the name from each pod in the returned list.
View the standard output of one of the pods:
```shell
$ kubectl logs pi-aiw0a
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587006606315588174881520920962829254091715364367892590360011330530548820466521384146951941511609433057270365759591953092186117381932611793105118548074462379962749567351885752724891227938183011949129833673362440656643086021394946395224737190702179860943702770539217176293176752384674818467669405132000568127145263560827785771342757789609173637178721468440901224953430146549585371050792279689258923542019956112129021960864034418159813629774771309960518707211349999998372978049951059731732816096318595024459455346908302642522308253344685035261931188171010003137838752886587533208381420617177669147303598253490428755468731159562863882353787593751957781857780532171226806613001927876611195909216420198938095257201065485863278865936153381827968230301952035301852968995773622599413891249721775283479131515574857242454150695950829533116861727855889075098381754637464939319255060400927701671139009848824012858361603563707660104710181942955596198946767837449448255379774726847104047534646208046684259069491293313677028989152104752162056966024058038150193511253382430035587640247496473263914199272604269922796782354781636009341721641219924586315030286182974555706749838505494588586926995690927210797509302955321165344987202755960236480665499119881834797753566369807426542527862551818417574672890977772793800081647060016145249192173217214772350141441973568548161361157352552133475741849468438523323907394143334547762416862518983569485562099219222184272550254256887671790494601653466804988627232791786085784383827967976681454100953883786360950680064225125205117392984896084128488626945604241965285022210661186306744278622039194945047123713786960956364371917287467764657573962413890865832645995813390478027590
1
```
## Writing a Job Spec
As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields. For

View File

@ -14,7 +14,6 @@ The result object is printed as its String() function.
Given the input:
```json
{
"kind": "List",
"items":[
@ -51,7 +50,6 @@ Given the input:
}
```
Function | Description | Example | Result
---------|--------------------|--------------------|------------------
text | the plain text | kind is {.kind} | kind is List

View File

@ -22,7 +22,7 @@ http://issue.k8s.io/1755
The file below contains a `current-context`, which will be used by default by clients that use the file to connect to a cluster. Thus, this kubeconfig file has more information in it than we will necessarily use in a given session. You can see that it defines many clusters, and the users associated with those clusters. The context itself is associated with both a cluster AND a user.
```yaml
current-context: federal-context
apiVersion: v1
clusters:
@ -60,8 +60,7 @@ users:
user:
client-certificate: path/to/my/client/cert
client-key: path/to/my/client/key
```
### Building your own kubeconfig file
NOTE: if you are deploying Kubernetes via kube-up.sh, you do not need to create your own kubeconfig files; the script will do it for you.
@ -72,11 +71,10 @@ So, lets do a quick walk through the basics of the above file so you can easily
The above file would likely correspond to an api-server which was launched using the `--token-auth-file=tokens.csv` option, where the tokens.csv file looked something like this:
```
blue-user,blue-user,1
mister-red,mister-red,2
```
Also, since we have other users who validate using **other** mechanisms, the api-server would probably have been launched with other authentication options (there are many such options; make sure you understand which ones YOU care about before crafting a kubeconfig file, as nobody needs to implement all the different permutations of possible authentication schemes).
- Since the user for the current context is "green-user", any client of the api-server using this kubeconfig file would naturally be able to log in successfully, because we are providing the green-user's client credentials.
@ -126,18 +124,17 @@ See [kubectl/kubectl_config.md](kubectl/kubectl_config) for help.
### Example
```shell
$ kubectl config set-credentials myself --username=admin --password=secret
$ kubectl config set-cluster local-server --server=http://localhost:8080
$ kubectl config set-context default-context --cluster=local-server --user=myself
$ kubectl config use-context default-context
$ kubectl config set contexts.default-context.namespace the-right-prefix
$ kubectl config view
```
produces this output:
```yaml
apiVersion: v1
clusters:
- cluster:
@ -157,11 +154,10 @@ users:
user:
password: secret
username: admin
```
and a kubeconfig file that looks like this:
```yaml
apiVersion: v1
clusters:
- cluster:
@ -181,11 +177,10 @@ users:
user:
password: secret
username: admin
```
#### Commands for the example file
```shell
$ kubectl config set preferences.colors true
$ kubectl config set-cluster cow-cluster --server=http://cow.org:8080 --api-version=v1
$ kubectl config set-cluster horse-cluster --server=https://horse.org:4443 --certificate-authority=path/to/my/cafile
@ -195,8 +190,7 @@ $ kubectl config set-credentials green-user --client-certificate=path/to/my/clie
$ kubectl config set-context queen-anne-context --cluster=pig-cluster --user=black-user --namespace=saw-ns
$ kubectl config set-context federal-context --cluster=horse-cluster --user=green-user --namespace=chisel-ns
$ kubectl config use-context federal-context
```
### Final notes for tying it all together
So, tying this all together, a quick start to creating your own kubeconfig file:

View File

@ -10,24 +10,20 @@ TODO: Auto-generate this file to ensure it's always in sync with any `kubectl` c
Use the following syntax to run `kubectl` commands from your terminal window:
```
kubectl [command] [TYPE] [NAME] [flags]
```
where `command`, `TYPE`, `NAME`, and `flags` are:
* `command`: Specifies the operation that you want to perform on one or more resources, for example `create`, `get`, `describe`, `delete`.
* `TYPE`: Specifies the [resource type](#resource-types). Resource types are case-sensitive and you can specify the singular, plural, or abbreviated forms. For example, the following commands produce the same output:
```
$ kubectl get pod pod1
$ kubectl get pods pod1
$ kubectl get po pod1
```
* `NAME`: Specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, for example `$ kubectl get pods`.
When performing an operation on multiple resources, you can specify each resource by type and name or specify one or more files:
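As a sketch with hypothetical resource and file names, the two styles look like this:

```shell
# Specify each resource by type and name:
$ kubectl get pod/example-pod replicationcontroller/example-rc

# Or specify one or more files:
$ kubectl get -f ./pod.yaml -f ./replicationcontroller.yaml
```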
@ -112,11 +108,9 @@ The default output format for all `kubectl` commands is the human readable plain
#### Syntax
```
kubectl [command] [TYPE] [NAME] -o=<output_format>
```
Depending on the `kubectl` operation, the following output formats are supported:
Output format | Description
@ -147,37 +141,29 @@ To define custom columns and output only the details that you want into a table,
* Inline:
```shell
$ kubectl get pods <pod-name> -o=custom-columns=NAME:.metadata.name,RSRC:.metadata.resourceVersion
```
* Template file:
```shell
$ kubectl get pods <pod-name> -o=custom-columns-file=template.txt
```
where the `template.txt` file contains:
```
NAME RSRC
metadata.name metadata.resourceVersion
```
The result of running either command is:
```shell
NAME RSRC
submit-queue 610995
```
### Sorting list objects
To output objects to a sorted list in your terminal window, you can add the `--sort-by` flag to a supported `kubectl` command. Sort your objects by specifying any numeric or string field with the `--sort-by` flag. To specify a field, use a [jsonpath](jsonpath) expression.
@ -185,11 +171,9 @@ To output objects to a sorted list in your terminal window, you can add the `--s
#### Syntax
```
kubectl [command] [TYPE] [NAME] --sort-by=<jsonpath_exp>
```
##### Example
To print a list of pods sorted by name, you run:
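A minimal sketch of such a command, assuming you sort on the standard `.metadata.name` field of each pod:

```shell
$ kubectl get pods --sort-by=.metadata.name
```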

View File

@ -7,14 +7,12 @@ Labels can be used to organize and to select subsets of objects. Labels can be
Each object can have a set of key/value labels defined. Each key must be unique for a given object.
```json
"labels": {
"key1" : "value1",
"key2" : "value2"
}
```
We'll eventually index and reverse-index labels for efficient queries and watches, use them to sort and group in UIs and CLIs, etc. We don't want to pollute labels with non-identifying, especially large and/or structured, data. Non-identifying information should be recorded using [annotations](annotations).
{% include pagetoc.html %}
@ -61,12 +59,10 @@ _Equality-_ or _inequality-based_ requirements allow filtering by label keys and
Three kinds of operators are admitted: `=`, `==`, and `!=`. The first two represent _equality_ (and are simply synonyms), while the latter represents _inequality_. For example:
```
environment = production
tier != frontend
```
The former selects all resources with key equal to `environment` and value equal to `production`.
The latter selects all resources with key equal to `tier` and value distinct from `frontend`, and all resources with no labels with the `tier` key.
One could filter for resources in `production` excluding `frontend` using the comma operator: `environment=production,tier!=frontend`
@ -77,14 +73,12 @@ One could filter for resources in `production` excluding `frontend` using the co
_Set-based_ label requirements allow filtering keys according to a set of values. Three kinds of operators are supported: `in`, `notin`, and _exists_ (only the key identifier). For example:
```
environment in (production, qa)
tier notin (frontend, backend)
partition
!partition
```
The first example selects all resources with key equal to `environment` and value equal to `production` or `qa`.
The second example selects all resources with key equal to `tier` and values other than `frontend` and `backend`, and all resources with no labels with the `tier` key.
The third example selects all resources including a label with key `partition`; no values are checked.
@ -107,35 +101,27 @@ LIST and WATCH operations may specify label selectors to filter the sets of obje
Both label selector styles can be used to list or watch resources via a REST client. For example, targeting the `apiserver` with `kubectl` and using _equality-based_ requirements, one may write:
```shell
$ kubectl get pods -l environment=production,tier=frontend
```
or using _set-based_ requirements:
```shell
$ kubectl get pods -l 'environment in (production),tier in (frontend)'
```
As already mentioned, _set-based_ requirements are more expressive. For instance, they can implement the _OR_ operator on values:
```shell
$ kubectl get pods -l 'environment in (production, qa)'
```
or restricting negative matching via the _exists_ operator:
```shell
$ kubectl get pods -l 'environment,environment notin (frontend)'
```
### Set references in API objects
Some Kubernetes objects, such as [`service`s](services) and [`replicationcontroller`s](replication-controller), also use label selectors to specify sets of other resources, such as [pods](pods).
@ -147,22 +133,18 @@ The set of pods that a `service` targets is defined with a label selector. Simil
Label selectors for both objects are defined in `json` or `yaml` files using maps, and only _equality-based_ requirement selectors are supported:
```json
"selector": {
"component" : "redis"
}
```
or
```yaml
selector:
component: redis
```
This selector (in `json` or `yaml` format, respectively) is equivalent to `component=redis` or `component in (redis)`.
#### Job and other new resources
@ -170,7 +152,6 @@ this selector (respectively in `json` or `yaml` format) is equivalent to `compon
Newer resources, such as [job](jobs), support _set-based_ requirements as well.
```yaml
selector:
matchLabels:
component: redis
@ -179,7 +160,6 @@ selector:
- {key: environment, operator: NotIn, values: [dev]}
```
`matchLabels` is a map of `{key,value}` pairs. A single `{key,value}` in the `matchLabels` map is equivalent to an element of `matchExpressions`, whose `key` field is "key", the `operator` is "In", and the `values` array contains only "value". `matchExpressions` is a list of pod selector requirements. Valid operators include In, NotIn, Exists, and DoesNotExist. The values set must be non-empty in the case of In and NotIn. All of the requirements, from both `matchLabels` and `matchExpressions` are ANDed together -- they must all be satisfied in order to match.
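As an illustration of the equivalence described above (using a hypothetical `component: redis` label), the following two selectors match the same set of resources:

```yaml
# matchLabels form:
selector:
  matchLabels:
    component: redis

# equivalent matchExpressions form:
selector:
  matchExpressions:
    - {key: component, operator: In, values: [redis]}
```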

Some files were not shown because too many files have changed in this diff