* 'master' of https://github.com/kubernetes/kubernetes.github.io:
  getting-started-guides: add CoreOS Tectonic
  add DefaultTolerationSeconds admission controller
  Move Guide topic: Connect with Proxies. (#2663)
  Move Guide topic: Bootstrapping Pet Sets. (#2662)
  Move Guide topic: Using Port Forwarding. (#2661)
  fix typo (#2656)
  Move Guide topic: Using Environment Variables. (#2645)
  missing word
  Prototype for deprecating User Guide topic.
  Update static-pods.md
  Update static-pods.md
reviewable/pr2686/r2
Andrew Chen 2017-03-02 09:49:33 -08:00
commit 06b5791afa
15 changed files with 71 additions and 203 deletions


@ -20,6 +20,7 @@ toc:
  - title: Controllers
    section:
    - docs/concepts/abstractions/controllers/statefulsets.md
+   - docs/concepts/abstractions/controllers/petsets.md
    - docs/concepts/abstractions/controllers/garbage-collection.md
  - title: Object Metadata


@ -56,7 +56,6 @@ toc:
  - title: Containers and Pods
    section:
-   - docs/user-guide/simple-nginx.md
    - docs/user-guide/pods/single-container.md
    - docs/user-guide/pods/multi-container.md
    - docs/user-guide/pods/init-container.md


@ -252,6 +252,11 @@ This plugin ignores any `PersistentVolumeClaim` updates; it acts only on creation
See [persistent volume](/docs/user-guide/persistent-volumes) documentation about persistent volume claims and
storage classes and how to mark a storage class as default.
+### DefaultTolerationSeconds
+This plug-in sets a default forgiveness toleration on pods that have no forgiveness tolerations of their own, making them tolerate
+the taints `notready:NoExecute` and `unreachable:NoExecute` for 5 minutes.
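As an illustration of the effect described above (a sketch only: the exact taint key names injected by a given release are an assumption here, following the `node.alpha.kubernetes.io/*` convention), a pod admitted with no tolerations of its own would come out carrying roughly:

```yaml
# Illustrative tolerations a pod might carry after admission,
# assuming the 5-minute (300s) defaults described above.
# The key names are an assumption, not normative API output.
tolerations:
- key: "node.alpha.kubernetes.io/notReady"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300
- key: "node.alpha.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300
```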
## Is there a recommended set of plug-ins to use?
Yes.


@ -44,9 +44,9 @@ You can list the current namespaces in a cluster using:
```shell
$ kubectl get namespaces
-NAME          LABELS    STATUS
-default       <none>    Active
-kube-system   <none>    Active
+NAME          STATUS    AGE
+default       Active    11d
+kube-system   Active    11d
```
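A namespace is itself just a small API object; for reference, a minimal manifest (the name `my-namespace` is illustrative) that could be created with `kubectl create -f` is simply:

```yaml
# Minimal Namespace object; the name is an example.
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
```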
Kubernetes starts with two initial namespaces:


@ -22,45 +22,46 @@ For example, this is how to start a simple web server as a static pod:
1. Choose a node where we want to run the static pod. In this example, it's `my-node1`.
-    ```shell
-    [joe@host ~] $ ssh my-node1
-    ```
+    ```
+    [joe@host ~] $ ssh my-node1
+    ```
2. Choose a directory, say `/etc/kubelet.d` and place a web server pod definition there, e.g. `/etc/kubelet.d/static-web.yaml`:
-    ```shell
-    [root@my-node1 ~] $ mkdir /etc/kubelet.d/
-    [root@my-node1 ~] $ cat <<EOF >/etc/kubelet.d/static-web.yaml
-    apiVersion: v1
-    kind: Pod
-    metadata:
-      name: static-web
-      labels:
-        role: myrole
-    spec:
-      containers:
-        - name: web
-          image: nginx
-          ports:
-            - containerPort: 80
-              protocol: TCP
-    EOF
-    ```
+    ```
+    [root@my-node1 ~] $ mkdir /etc/kubernetes.d/
+    [root@my-node1 ~] $ cat <<EOF >/etc/kubernetes.d/static-web.yaml
+    apiVersion: v1
+    kind: Pod
+    metadata:
+      name: static-web
+      labels:
+        role: myrole
+    spec:
+      containers:
+        - name: web
+          image: nginx
+          ports:
+            - name: web
+              containerPort: 80
+              protocol: TCP
+    EOF
+    ```
-2. Configure your kubelet daemon on the node to use this directory by running it with `--pod-manifest-path=/etc/kubelet.d/` argument. On Fedora edit `/etc/kubernetes/kubelet` to include this line:
+3. Configure your kubelet daemon on the node to use this directory by running it with the `--pod-manifest-path=/etc/kubelet.d/` argument.
+    On Fedora edit `/etc/kubernetes/kubelet` to include this line:
-    ```conf
-    KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --pod-manifest-path=/etc/kubelet.d/"
-    ```
+    ```
+    KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --pod-manifest-path=/etc/kubelet.d/"
+    ```
     Instructions for other distributions or Kubernetes installations may vary.
-3. Restart kubelet. On Fedora, this is:
+4. Restart kubelet. On Fedora, this is:
-    ```shell
-    [root@my-node1 ~] $ systemctl restart kubelet
-    ```
+    ```
+    [root@my-node1 ~] $ systemctl restart kubelet
+    ```
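The manifest-writing step above can be exercised without a node at hand; this sketch writes the same kind of static pod manifest into a scratch directory (the scratch path stands in for whatever directory the kubelet is told to watch via `--pod-manifest-path`) and sanity-checks it:

```shell
# Sketch: write a static-pod manifest into a scratch directory and
# sanity-check it. A real kubelet would watch the directory given by
# --pod-manifest-path; the mktemp path here is illustrative only.
manifest_dir=$(mktemp -d)
cat <<EOF >"${manifest_dir}/static-web.yaml"
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
EOF
# The kubelet only needs the file to appear in the watched directory;
# here we just confirm the expected kind was written.
grep -q '^kind: Pod$' "${manifest_dir}/static-web.yaml" && echo "manifest ready"
```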
## Pods created via HTTP


@ -0,0 +1,15 @@
---
assignees:
- bprashanth
- enisoc
- erictune
- foxish
- janetkuo
- kow3ns
- smarterclayton
title: PetSets
---
__Warning:__ Starting in Kubernetes version 1.5, PetSet has been renamed to [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets). To use (or continue to use) PetSet in Kubernetes 1.5, you _must_ [migrate](/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set/) your existing PetSets to StatefulSets. For information on working with StatefulSet, see the tutorial on [how to run replicated stateful applications](/docs/tutorials/stateful-application/run-replicated-stateful-application).
__This document has been deprecated__.


@ -8,7 +8,7 @@ The Concepts section helps you learn about the parts of the Kubernetes system an
To work with Kubernetes, you use *Kubernetes API objects* to describe your cluster's *desired state*: what applications or other workloads you want to run, what container images they use, the number of replicas, what network and disk resources you want to make available, and more. You set your desired state by creating objects using the Kubernetes API, typically via the command-line interface, `kubectl`. You can also use the Kubernetes API directly to interact with the cluster and set or modify your desired state.
-Once you've set your desired state, the *Kubernetes Control Plane* works to make the cluster's current state match the desired state. To do so, Kuberentes performs a variety of tasks automatically--such as starting or restarting containers, scaling the number of replicas of a given application, and more. The Kubernetes Control Plane consists of a collection processes running on your cluster:
+Once you've set your desired state, the *Kubernetes Control Plane* works to make the cluster's current state match the desired state. To do so, Kubernetes performs a variety of tasks automatically--such as starting or restarting containers, scaling the number of replicas of a given application, and more. The Kubernetes Control Plane consists of a collection of processes running on your cluster:
* The **Kubernetes Master** is a collection of four processes that run on a single node in your cluster, which is designated as the master node.
* Each individual non-master node in your cluster runs two processes:


@ -61,6 +61,7 @@ few commands, and have active community support.
- [GCE](/docs/getting-started-guides/gce)
- [AWS](/docs/getting-started-guides/aws)
- [Azure](/docs/getting-started-guides/azure)
+- [Tectonic by CoreOS](https://coreos.com/tectonic)
- [CenturyLink Cloud](/docs/getting-started-guides/clc)
- [IBM SoftLayer](https://github.com/patrocinio/kubernetes-softlayer)
- [Stackpoint.io](/docs/getting-started-guides/stackpoint/)


@ -1,5 +1,8 @@
---
title: Running a Stateless Application Using a Deployment
+redirect_from:
+- "/docs/user-guide/simple-nginx/"
+- "/docs/user-guide/simple-nginx.html"
---
{% capture overview %}


@ -5,46 +5,6 @@ assignees:
title: Connect with Port Forwarding
---
-kubectl port-forward forwards connections to a local port to a port on a pod. Its man page is available [here](/docs/user-guide/kubectl/kubectl_port-forward). Compared to [kubectl proxy](/docs/user-guide/accessing-the-cluster/#using-kubectl-proxy), `kubectl port-forward` is more generic as it can forward TCP traffic while `kubectl proxy` can only forward HTTP traffic. This guide demonstrates how to use `kubectl port-forward` to connect to a Redis database, which may be useful for database debugging.
+{% include user-guide-content-moved.md %}
-## Creating a Redis master
-```shell
-$ kubectl create -f examples/redis/redis-master.yaml
-pods/redis-master
-```
-wait until the Redis master pod is Running and Ready,
-```shell
-$ kubectl get pods
-NAME           READY   STATUS    RESTARTS   AGE
-redis-master   2/2     Running   0          41s
-```
-## Connecting to the Redis master
-The Redis master is listening on port 6379, to verify this,
-```shell{% raw %}
-$ kubectl get pods redis-master --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
-6379{% endraw %}
-```
-then we forward the port 6379 on the local workstation to the port 6379 of pod redis-master,
-```shell
-$ kubectl port-forward redis-master 6379:6379
-I0710 14:43:38.274550    3655 portforward.go:225] Forwarding from 127.0.0.1:6379 -> 6379
-I0710 14:43:38.274797    3655 portforward.go:225] Forwarding from [::1]:6379 -> 6379
-```
-To verify the connection is successful, we run a redis-cli on the local workstation,
-```shell
-$ redis-cli
-127.0.0.1:6379> ping
-PONG
-```
-Now one can debug the database from the local workstation.
+[Using Port Forwarding to Access Applications in a Cluster](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)


@ -5,28 +5,6 @@ assignees:
title: Connect with Proxies
---
-You have seen the [basics](/docs/user-guide/accessing-the-cluster) about `kubectl proxy` and `apiserver proxy`. This guide shows how to use them together to access a service ([kube-ui](/docs/user-guide/ui)) running on the Kubernetes cluster from your workstation.
+{% include user-guide-content-moved.md %}
-## Getting the apiserver proxy URL of kube-ui
-kube-ui is deployed as a cluster add-on. To find its apiserver proxy URL,
-```shell
-$ kubectl cluster-info | grep "KubeUI"
-KubeUI is running at https://173.255.119.104/api/v1/proxy/namespaces/kube-system/services/kube-ui
-```
-If this command does not find the URL, try the steps [here](/docs/user-guide/ui/#accessing-the-ui).
-## Connecting to the kube-ui service from your local workstation
-The above proxy URL is an access to the kube-ui service provided by the apiserver. To access it, you still need to authenticate to the apiserver. `kubectl proxy` can handle the authentication.
-```shell
-$ kubectl proxy --port=8001
-Starting to serve on localhost:8001
-```
-Now you can access the kube-ui service on your local workstation at [http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui](http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui)
+[Using an HTTP Proxy to Access the Kubernetes API](/docs/tasks/access-kubernetes-api/http-proxy-access-api/)


@ -4,97 +4,6 @@ assignees:
title: Using Environment Variables
---
-This example demonstrates running pods, replication controllers, and
-services. It shows two types of pods: frontend and backend, with
-services on top of both. Accessing the frontend pod will return
-environment information about itself, and a backend pod that it has
-accessed through the service. The goal is to illuminate the
-environment metadata available to running containers inside the
-Kubernetes cluster. The documentation for the Kubernetes environment
-is [here](/docs/user-guide/container-environment).
+{% include user-guide-content-moved.md %}
-![Diagram](/images/docs/diagram.png)
-## Prerequisites
-This example assumes that you have a Kubernetes cluster installed and
-running, and that you have installed the `kubectl` command line tool
-somewhere in your path. Please see the [getting
-started](/docs/getting-started-guides/) for installation instructions
-for your platform.
-## Optional: Build your own containers
-These are the configuration files for the containers:
-* [backend-rc.yaml](https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master/docs/user-guide/environment-guide/backend-rc.yaml)
-* [backend-srv.yaml](https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master/docs/user-guide/environment-guide/backend-srv.yaml)
-* [show-rc.yaml](https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master/docs/user-guide/environment-guide/show-rc.yaml)
-* [show-srv.yaml](https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master/docs/user-guide/environment-guide/show-srv.yaml)
-## Get everything running
-```shell
-kubectl create -f ./backend-rc.yaml
-kubectl create -f ./backend-srv.yaml
-kubectl create -f ./show-rc.yaml
-kubectl create -f ./show-srv.yaml
-```
-## Query the service
-Use `kubectl describe service show-srv` to determine the public IP of
-your service.
-> Note: If your platform does not support external load balancers,
-> you'll need to open the proper port and direct traffic to the
-> internal IP shown for the frontend service with the above command.
-Run `curl <public ip>:80` to query the service. You should get
-something like this back:
-```shell
-Pod Name: show-rc-xxu6i
-Pod Namespace: default
-USER_VAR: important information
-Kubernetes environment variables
-BACKEND_SRV_SERVICE_HOST = 10.147.252.185
-BACKEND_SRV_SERVICE_PORT = 5000
-KUBERNETES_RO_SERVICE_HOST = 10.147.240.1
-KUBERNETES_RO_SERVICE_PORT = 80
-KUBERNETES_SERVICE_HOST = 10.147.240.2
-KUBERNETES_SERVICE_PORT = 443
-KUBE_DNS_SERVICE_HOST = 10.147.240.10
-KUBE_DNS_SERVICE_PORT = 53
-Found backend ip: 10.147.252.185 port: 5000
-Response from backend
-Backend Container
-Backend Pod Name: backend-rc-6qiya
-Backend Namespace: default
-```
-First the frontend pod's information is printed. The pod name and
-[namespace](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/namespaces.md) are retrieved from the
-[Downward API](/docs/user-guide/downward-api). Next, `USER_VAR` is the name of
-an environment variable set in the [pod
-definition](https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master/docs/user-guide/environment-guide/show-rc.yaml). Then, the dynamic Kubernetes environment
-variables are scanned and printed. These are used to find the backend
-service, named `backend-srv`. Finally, the frontend pod queries the
-backend service and prints the information returned. Again the backend
-pod returns its own pod name and namespace.
-Try running the `curl` command a few times, and notice what
-changes. Ex: `watch -n 1 curl -s <ip>` Firstly, the frontend service
-is directing your request to different frontend pods each time. The
-frontend pods are always contacting the backend through the backend
-service. This results in a different backend pod servicing each
-request as well.
-## Cleanup
-```shell
-kubectl delete rc,service -l type=show-type
-kubectl delete rc,service -l type=backend-type
-```
+[Exposing Pod Information to Containers Through Environment Variables](/docs/tasks/configure-pod-container/environment-variable-expose-pod-information/)


@ -10,7 +10,9 @@ assignees:
title: Bootstrapping Pet Sets
---
__Warning:__ Starting in Kubernetes version 1.5, PetSet has been renamed to [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets). To use (or continue to use) PetSet in Kubernetes 1.5, you _must_ [migrate](/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set/) your existing PetSets to StatefulSets. For information on working with StatefulSet, see the tutorial on [how to run replicated stateful applications](/docs/tutorials/stateful-application/run-replicated-stateful-application).
+{% include user-guide-content-moved.md %}
+[PetSets](/docs/concepts/abstractions/controllers/petsets/)
__This document has been deprecated__.


@ -1,7 +0,0 @@
----
-title: Running Your First Containers
----
-{% include user-guide-content-moved.md %}
-[Running a Stateless Application Using a Deployment](/docs/tutorials/stateless-application/run-stateless-application-deployment/)


@ -5,5 +5,6 @@ Disallow: /v1.0/
Disallow: /v1.1/
Disallow: /404/
Disallow: 404.html
+Disallow: /docs/user-guide/docs/user-guide/simple-nginx/
SITEMAP: http://kubernetes.io/sitemap.xml