diff --git a/css/styles.css b/css/styles.css
index 2eea6c0f9a..eb30b1735a 100755
--- a/css/styles.css
+++ b/css/styles.css
@@ -781,6 +781,7 @@ section {
 #docsContent code {
   font-size: 16px;
   background-color: #f7f7f7;
+  font-weight: bold;
   padding: 2px 4px;
 }
 #docsContent pre pi, #docsContent pre s {
   margin: 0px;
diff --git a/v1.1/docs/README.md b/v1.1/docs/README.md
deleted file mode 100644
index d4df3427d0..0000000000
--- a/v1.1/docs/README.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: "Kubernetes Documentation: releases.k8s.io/release-1.1"
----
-* The [User's guide](user-guide/README) is for anyone who wants to run programs and
-  services on an existing Kubernetes cluster.
-
-* The [Cluster Admin's guide](admin/README) is for anyone setting up
-  a Kubernetes cluster or administering it.
-
-* The [Developer guide](devel/README) is for anyone wanting to write
-  programs that access the Kubernetes API, write plugins or extensions, or
-  modify the core code of Kubernetes.
-
-* The [Kubectl Command Line Interface](user-guide/kubectl/kubectl) is a detailed reference on
-  the `kubectl` CLI.
-
-* The [API object documentation](http://kubernetes.io/third_party/swagger-ui/)
-  is a detailed description of all fields found in core API objects.
-
-* An overview of the [Design of Kubernetes](design/)
-
-* There are example files and walkthroughs in the [examples](https://github.com/kubernetes/kubernetes/tree/master/examples)
-  folder.
-
-* If something went wrong, see the [troubleshooting](troubleshooting) document for how to debug.
-You should also check the [known issues](user-guide/known-issues) for the release you're using.
-
-* To report a security issue, see [Reporting a Security Issue](reporting-security-issues).
-
-
diff --git a/v1.1/docs/admin/README.md b/v1.1/docs/admin/README.md
deleted file mode 100644
index 6d672796c2..0000000000
--- a/v1.1/docs/admin/README.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title: "Kubernetes Cluster Admin Guide"
----
-The cluster admin guide is for anyone creating or administering a Kubernetes cluster.
-It assumes some familiarity with concepts in the [User Guide](../user-guide/README).
-
-## Admin Guide Table of Contents
-
-[Introduction](introduction)
-
-1. [Components of a cluster](cluster-components)
-  1. [Cluster Management](cluster-management)
-  1. Administrating Master Components
-    1. [The kube-apiserver binary](kube-apiserver)
-    1. [Authorization](authorization)
-    1. [Authentication](authentication)
-    1. [Accessing the api](accessing-the-api)
-    1. [Admission Controllers](admission-controllers)
-    1. [Administrating Service Accounts](service-accounts-admin)
-    1. [Resource Quotas](resource-quota)
-    1. [The kube-scheduler binary](kube-scheduler)
-    1. [The kube-controller-manager binary](kube-controller-manager)
-  1. [Administrating Kubernetes Nodes](node)
-    1. [The kubelet binary](kubelet)
-    1. [Garbage Collection](garbage-collection)
-    1. [The kube-proxy binary](kube-proxy)
-  1. Administrating Addons
-    1. [DNS](dns)
-    1. [Networking](networking)
-    1. [OVS Networking](ovs-networking)
-  1. Example Configurations
-    1. [Multiple Clusters](multi-cluster)
-    1. [High Availability Clusters](high-availability)
-    1. [Large Clusters](cluster-large)
-  1. [Getting started from scratch](../getting-started-guides/scratch)
-  1. [Kubernetes's use of salt](salt)
-  1. [Troubleshooting](cluster-troubleshooting)
-
-
-
diff --git a/v1.1/docs/admin/index.md b/v1.1/docs/admin/index.md
index 6d672796c2..04c84c581b 100644
--- a/v1.1/docs/admin/index.md
+++ b/v1.1/docs/admin/index.md
@@ -34,7 +34,4 @@ It assumes some familiarity with concepts in the [User Guide](../user-guide/READ
     1. [Large Clusters](cluster-large)
   1. [Getting started from scratch](../getting-started-guides/scratch)
   1. [Kubernetes's use of salt](salt)
-  1. [Troubleshooting](cluster-troubleshooting)
-
-
-
+  1. [Troubleshooting](cluster-troubleshooting)
\ No newline at end of file
diff --git a/v1.1/docs/admin/limitrange/README.md b/v1.1/docs/admin/limitrange/README.md
deleted file mode 100644
index 5b6688e420..0000000000
--- a/v1.1/docs/admin/limitrange/README.md
+++ /dev/null
@@ -1,193 +0,0 @@
----
-title: "Limit Range"
----
-By default, pods run with unbounded CPU and memory limits. This means that any pod in the
-system will be able to consume as much CPU and memory as is available on the node that executes it.
-
-Users may want to impose restrictions on the amount of resources a single pod in the system may consume
-for a variety of reasons.
-
-For example:
-
-1. Each node in the cluster has 2GB of memory. The cluster operator does not want to accept pods
-that require more than 2GB of memory since no node in the cluster can support the requirement. To prevent a
-pod from remaining permanently unschedulable, the operator instead chooses to reject pods that exceed 2GB
-of memory as part of admission control.
-2. A cluster is shared by two communities in an organization that runs production and development workloads
-respectively. Production workloads may consume up to 8GB of memory, but development workloads may consume up
-to 512MB of memory. The cluster operator creates a separate namespace for each workload, and applies limits to
-each namespace.
-3. Users may create a pod which consumes resources just below the capacity of a machine. The leftover space
-may be too small to be useful, but big enough for the waste to be costly over the entire cluster. As a result,
-the cluster operator may want to require that a pod consume at least 20% of the memory and CPU of the
-average node size in order to provide for more uniform scheduling and to limit waste.
-
-This example demonstrates how limits can be applied to a Kubernetes namespace to control
-min/max resource limits per pod. In addition, this example demonstrates how you can
-apply default resource limits to pods in the absence of an end-user specified value.
-
-See [LimitRange design doc](../../design/admission_control_limit_range) for more information. For a detailed description of the Kubernetes resource model, see [Resources](/{{page.version}}/docs/user-guide/compute-resources).
-
-## Step 0: Prerequisites
-
-This example requires a running Kubernetes cluster. See the [Getting Started guides](/{{page.version}}/docs/getting-started-guides/) for how to get started.
-
-Change to the `` directory if you're not already there.
-
-## Step 1: Create a namespace
-
-This example will work in a custom namespace to demonstrate the concepts involved.
-
-Let's create a new namespace called limit-example:
-
-```shell
-$ kubectl create -f docs/admin/limitrange/namespace.yaml
-namespace "limit-example" created
-$ kubectl get namespaces
-NAME            LABELS    STATUS    AGE
-default                   Active    5m
-limit-example             Active    53s
-```
-
-## Step 2: Apply a limit to the namespace
-
-Let's create a simple limit in our namespace.
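-For reference, this walkthrough does not reproduce `limits.yaml` inline. A minimal sketch of a
-LimitRange manifest consistent with the values described below might look like the following
-(illustrative only, not the shipped file):
-
-```yaml
-# Illustrative sketch only; the walkthrough actually uses docs/admin/limitrange/limits.yaml.
-apiVersion: v1
-kind: LimitRange
-metadata:
-  name: mylimits
-spec:
-  limits:
-  - type: Pod
-    min:
-      cpu: 200m
-      memory: 6Mi
-    max:
-      cpu: "2"
-      memory: 1Gi
-  - type: Container
-    min:
-      cpu: 100m
-      memory: 3Mi
-    max:
-      cpu: "2"
-      memory: 1Gi
-    defaultRequest:
-      cpu: 200m
-      memory: 100Mi
-    default:
-      cpu: 300m
-      memory: 200Mi
-```
-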
-
-```shell
-$ kubectl create -f docs/admin/limitrange/limits.yaml --namespace=limit-example
-limitrange "mylimits" created
-```
-
-Let's describe the limits that we have imposed in our namespace.
-
-```shell
-$ kubectl describe limits mylimits --namespace=limit-example
-Name:        mylimits
-Namespace:   limit-example
-Type        Resource   Min    Max    Request   Limit   Limit/Request
-----        --------   ---    ---    -------   -----   -------------
-Pod         cpu        200m   2      -         -       -
-Pod         memory     6Mi    1Gi    -         -       -
-Container   cpu        100m   2      200m      300m    -
-Container   memory     3Mi    1Gi    100Mi     200Mi   -
-```
-
-In this scenario, we have said the following:
-
-1. If a max constraint is specified for a resource (2 CPU and 1Gi memory in this case), then a limit
-must be specified for that resource across all containers. Failure to specify a limit will result in
-a validation error when attempting to create the pod. Note that a default limit value is set by
-*default* in the file `limits.yaml` (300m CPU and 200Mi memory).
-2. If a min constraint is specified for a resource (100m CPU and 3Mi memory in this case), then a
-request must be specified for that resource across all containers. Failure to specify a request will
-result in a validation error when attempting to create the pod. Note that a default request value is
-set by *defaultRequest* in the file `limits.yaml` (200m CPU and 100Mi memory).
-3. For any pod, the sum of all containers' memory requests must be >= 6Mi and the sum of all containers'
-memory limits must be <= 1Gi; the sum of all containers' CPU requests must be >= 200m and the sum of all
-containers' CPU limits must be <= 2.
-
-## Step 3: Enforcing limits at point of creation
-
-The limits enumerated in a namespace are only enforced when a pod is created or updated in
-the cluster. If you change the limits to a different value range, it does not affect pods that
-were previously created in the namespace.
-
-If a resource (cpu or memory) is being restricted by a limit, the user will get an error at time
-of creation explaining why.
-
-Let's first spin up a replication controller that creates a single container pod to demonstrate
-how default values are applied to each pod.
-
-```shell
-$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
-replicationcontroller "nginx" created
-$ kubectl get pods --namespace=limit-example
-NAME          READY     STATUS    RESTARTS   AGE
-nginx-aq0mf   1/1       Running   0          35s
-$ kubectl get pods nginx-aq0mf --namespace=limit-example -o yaml | grep resources -C 8
-```
-
-```yaml
-  resourceVersion: "127"
-  selfLink: /api/v1/namespaces/limit-example/pods/nginx-aq0mf
-  uid: 51be42a7-7156-11e5-9921-286ed488f785
-spec:
-  containers:
-  - image: nginx
-    imagePullPolicy: IfNotPresent
-    name: nginx
-    resources:
-      limits:
-        cpu: 300m
-        memory: 200Mi
-      requests:
-        cpu: 200m
-        memory: 100Mi
-    terminationMessagePath: /dev/termination-log
-    volumeMounts:
-```
-
-Note that our nginx container has picked up the namespace default cpu and memory resource *limits* and *requests*.
-
-Let's create a pod that exceeds our allowed limits by giving it a container that requests 3 CPU cores.
-
-```shell
-$ kubectl create -f docs/admin/limitrange/invalid-pod.yaml --namespace=limit-example
-Error from server: error when creating "docs/admin/limitrange/invalid-pod.yaml": Pod "invalid-pod" is forbidden: [Maximum cpu usage per Pod is 2, but limit is 3., Maximum cpu usage per Container is 2, but limit is 3.]
-```
-
-Let's create a pod that falls within the allowed limit boundaries.
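-The manifest `docs/admin/limitrange/valid-pod.yaml` is likewise not shown inline; a sketch of a pod
-that stays inside the boundaries (and matches the output below) might look like this, as an
-illustration only:
-
-```yaml
-# Illustrative sketch; the walkthrough actually uses docs/admin/limitrange/valid-pod.yaml.
-apiVersion: v1
-kind: Pod
-metadata:
-  name: valid-pod
-  labels:
-    name: valid-pod
-spec:
-  containers:
-  - name: kubernetes-serve-hostname
-    image: gcr.io/google_containers/serve_hostname
-    resources:
-      limits:
-        cpu: "1"
-        memory: 512Mi
-      requests:
-        cpu: "1"
-        memory: 512Mi
-```
-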
-
-```shell
-$ kubectl create -f docs/admin/limitrange/valid-pod.yaml --namespace=limit-example
-pod "valid-pod" created
-$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources
-```
-
-```yaml
-  uid: 162a12aa-7157-11e5-9921-286ed488f785
-spec:
-  containers:
-  - image: gcr.io/google_containers/serve_hostname
-    imagePullPolicy: IfNotPresent
-    name: kubernetes-serve-hostname
-    resources:
-      limits:
-        cpu: "1"
-        memory: 512Mi
-      requests:
-        cpu: "1"
-        memory: 512Mi
-```
-
-Note that this pod specifies explicit resource *limits* and *requests* so it did not pick up the namespace
-default values.
-
-Note: The *limits* for CPU resources are not enforced in the default Kubernetes setup on the physical node
-that runs the container unless the administrator deploys the kubelet with the following flag:
-
-```
-$ kubelet --help
-Usage of kubelet
-....
-  --cpu-cfs-quota[=false]: Enable CPU CFS quota enforcement for containers that specify CPU limits
-$ kubelet --cpu-cfs-quota=true ...
-```
-
-## Step 4: Cleanup
-
-To remove the resources used by this example, you can just delete the limit-example namespace.
-
-```shell
-$ kubectl delete namespace limit-example
-namespace "limit-example" deleted
-$ kubectl get namespaces
-NAME      LABELS    STATUS    AGE
-default             Active    20m
-```
-
-## Summary
-
-Cluster operators who want to restrict the amount of resources a single container or pod may consume
-are able to define allowable ranges per Kubernetes namespace. In the absence of any explicit assignments,
-the Kubernetes system is able to apply default resource *limits* and *requests* if desired in order to
-constrain the amount of resources a pod consumes on a node.
\ No newline at end of file
diff --git a/v1.1/docs/admin/namespaces/README.md b/v1.1/docs/admin/namespaces/README.md
deleted file mode 100644
index a77a1de537..0000000000
--- a/v1.1/docs/admin/namespaces/README.md
+++ /dev/null
@@ -1,251 +0,0 @@
----
-title: "Kubernetes Namespaces"
----
-
-Kubernetes _[namespaces](/{{page.version}}/docs/admin/namespaces)_ help different projects, teams, or customers to share a Kubernetes cluster.
-
-They do this by providing the following:
-
-1. A scope for [Names](../../user-guide/identifiers).
-2. A mechanism to attach authorization and policy to a subsection of the cluster.
-
-Use of multiple namespaces is optional.
-
-This example demonstrates how to use Kubernetes namespaces to subdivide your cluster.
-
-### Step Zero: Prerequisites
-
-This example assumes the following:
-
-1. You have an [existing Kubernetes cluster](../../getting-started-guides/).
-2. You have a basic understanding of Kubernetes _[pods](../../user-guide/pods)_, _[services](../../user-guide/services)_, and _[replication controllers](../../user-guide/replication-controller)_.
-
-### Step One: Understand the default namespace
-
-By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of pods,
-services, and replication controllers used by the cluster.
-
-Assuming you have a fresh cluster, you can list the available namespaces by doing the following:
-
-```shell
-$ kubectl get namespaces
-NAME      LABELS
-default
-```
-
-### Step Two: Create new namespaces
-
-For this exercise, we will create two additional Kubernetes namespaces to hold our content.
-
-Let's imagine a scenario where an organization is using a shared Kubernetes cluster for development and production use cases.
-
-The development team would like to maintain a space in the cluster where they can get a view on the list of pods, services, and replication controllers
-they use to build and run their application. In this space, Kubernetes resources come and go, and the restrictions on who can or cannot modify resources
-are relaxed to enable agile development.
-
-The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of
-pods, services, and replication controllers that run the production site.
-
-One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production.
-
-Let's create two new namespaces to hold our work.
-
-Use the file [`namespace-dev.json`](namespace-dev.json) which describes a development namespace:
-
-
-
-```json
-{
-  "kind": "Namespace",
-  "apiVersion": "v1",
-  "metadata": {
-    "name": "development",
-    "labels": {
-      "name": "development"
-    }
-  }
-}
-```
-
-[Download example](namespace-dev.json)
-
-
-Create the development namespace using kubectl.
-
-```shell
-$ kubectl create -f docs/admin/namespaces/namespace-dev.json
-```
-
-And then let's create the production namespace using kubectl.
-
-```shell
-$ kubectl create -f docs/admin/namespaces/namespace-prod.json
-```
-
-To be sure things are right, let's list all of the namespaces in our cluster.
-
-```shell
-$ kubectl get namespaces
-NAME          LABELS             STATUS
-default                          Active
-development   name=development   Active
-production    name=production    Active
-```
-
-### Step Three: Create pods in each namespace
-
-A Kubernetes namespace provides the scope for pods, services, and replication controllers in the cluster.
-
-Users interacting with one namespace do not see the content in another namespace.
-
-To demonstrate this, let's spin up a simple replication controller and pod in the development namespace.
-
-We first check what the current context is:
-
-```yaml
-apiVersion: v1
-clusters:
-- cluster:
-    certificate-authority-data: REDACTED
-    server: https://130.211.122.180
-  name: lithe-cocoa-92103_kubernetes
-contexts:
-- context:
-    cluster: lithe-cocoa-92103_kubernetes
-    user: lithe-cocoa-92103_kubernetes
-  name: lithe-cocoa-92103_kubernetes
-current-context: lithe-cocoa-92103_kubernetes
-kind: Config
-preferences: {}
-users:
-- name: lithe-cocoa-92103_kubernetes
-  user:
-    client-certificate-data: REDACTED
-    client-key-data: REDACTED
-    token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
-- name: lithe-cocoa-92103_kubernetes-basic-auth
-  user:
-    password: h5M0FtUUIflBSdI7
-    username: admin
-```
-
-The next step is to define a context for the kubectl client to work in each namespace. The values of the "cluster" and "user" fields are copied from the current context.
-
-```shell
-$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
-$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
-```
-
-The above commands provided two request contexts you can alternate between, depending on which namespace you wish to work in.
-
-Let's switch to operate in the development namespace.
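-As an aside, switching contexts is optional; any single request can also be scoped explicitly with
-the `--namespace` flag instead. A quick illustration (assuming the namespaces created above):
-
-```shell
-# One-off form: scope a single command to a namespace without changing the current context.
-$ kubectl get pods --namespace=development
-```
-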
-
-```shell
-$ kubectl config use-context dev
-```
-
-You can verify your current context by doing the following:
-
-```shell
-$ kubectl config view
-```
-
-```yaml
-apiVersion: v1
-clusters:
-- cluster:
-    certificate-authority-data: REDACTED
-    server: https://130.211.122.180
-  name: lithe-cocoa-92103_kubernetes
-contexts:
-- context:
-    cluster: lithe-cocoa-92103_kubernetes
-    namespace: development
-    user: lithe-cocoa-92103_kubernetes
-  name: dev
-- context:
-    cluster: lithe-cocoa-92103_kubernetes
-    user: lithe-cocoa-92103_kubernetes
-  name: lithe-cocoa-92103_kubernetes
-- context:
-    cluster: lithe-cocoa-92103_kubernetes
-    namespace: production
-    user: lithe-cocoa-92103_kubernetes
-  name: prod
-current-context: dev
-kind: Config
-preferences: {}
-users:
-- name: lithe-cocoa-92103_kubernetes
-  user:
-    client-certificate-data: REDACTED
-    client-key-data: REDACTED
-    token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
-- name: lithe-cocoa-92103_kubernetes-basic-auth
-  user:
-    password: h5M0FtUUIflBSdI7
-    username: admin
-```
-
-At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.
-
-Let's create some content.
-
-```shell
-$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
-```
-
-We have just created a replication controller with a replica count of 2 that runs a pod called snowflake, with a basic container that just serves the hostname.
-
-```shell
-$ kubectl get rc
-CONTROLLER   CONTAINER(S)   IMAGE(S)                    SELECTOR        REPLICAS
-snowflake    snowflake      kubernetes/serve_hostname   run=snowflake   2
-
-$ kubectl get pods
-NAME              READY     STATUS    RESTARTS   AGE
-snowflake-8w0qn   1/1       Running   0          22s
-snowflake-jrpzb   1/1       Running   0          22s
-```
-
-This is great: developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.
-
-Let's switch to the production namespace and show how resources in one namespace are hidden from the other.
-
-```shell
-$ kubectl config use-context prod
-```
-
-The production namespace should be empty.
-
-```shell
-$ kubectl get rc
-CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR   REPLICAS
-
-$ kubectl get pods
-NAME      READY     STATUS    RESTARTS   AGE
-```
-
-Production likes to run cattle, so let's create some cattle pods.
-
-```shell
-$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5
-
-$ kubectl get rc
-CONTROLLER   CONTAINER(S)   IMAGE(S)                    SELECTOR     REPLICAS
-cattle       cattle         kubernetes/serve_hostname   run=cattle   5
-
-$ kubectl get pods
-NAME           READY     STATUS    RESTARTS   AGE
-cattle-97rva   1/1       Running   0          12s
-cattle-i9ojn   1/1       Running   0          12s
-cattle-qj3yv   1/1       Running   0          12s
-cattle-yc7vn   1/1       Running   0          12s
-cattle-zz7ea   1/1       Running   0          12s
-```
-
-At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.
-
-As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different
-authorization rules for each namespace.
\ No newline at end of file
diff --git a/v1.1/docs/admin/namespaces/index.md b/v1.1/docs/admin/namespaces/index.md
index 5e96a3fc7f..9709f75e79 100644
--- a/v1.1/docs/admin/namespaces/index.md
+++ b/v1.1/docs/admin/namespaces/index.md
@@ -73,8 +73,8 @@ Create the development namespace using kubectl.
 
 ```shell
 $ kubectl create -f docs/admin/namespaces/namespace-dev.json
-
 ```
+
 And then let's create the production namespace using kubectl.
 
 ```shell
diff --git a/v1.1/docs/devel/README.md b/v1.1/docs/devel/README.md
deleted file mode 100644
index 564ce00a81..0000000000
--- a/v1.1/docs/devel/README.md
+++ /dev/null
@@ -1,78 +0,0 @@
----
-title: "Kubernetes Developer Guide"
----
-The developer guide is for anyone wanting either to write code which directly accesses the
-Kubernetes API, or to contribute directly to the Kubernetes project.
-It assumes some familiarity with concepts in the [User Guide](../user-guide/README) and the [Cluster Admin
-Guide](../admin/README).
-
-
-## The process of developing and contributing code to the Kubernetes project
-
-* **On Collaborative Development** ([collab.md](collab)): Info on pull requests and code reviews.
-
-* **GitHub Issues** ([issues.md](issues)): How incoming issues are reviewed and prioritized.
-
-* **Pull Request Process** ([pull-requests.md](pull-requests)): When and why pull requests are closed.
-
-* **Faster PR reviews** ([faster_reviews.md](faster_reviews)): How to get faster PR reviews.
-
-* **Getting Recent Builds** ([getting-builds.md](getting-builds)): How to get recent builds including the latest builds that pass CI.
-
-* **Automated Tools** ([automation.md](automation)): Descriptions of the automation that is running on our GitHub repository.
-
-
-## Setting up your dev environment, coding, and debugging
-
-* **Development Guide** ([development.md](development)): Setting up your development environment.
-
-* **Hunting flaky tests** ([flaky-tests.md](flaky-tests)): We have a goal of 99.9% flake-free tests.
-  Here's how to run your tests many times.
-
-* **Logging Conventions** ([logging.md](logging)): Glog levels.
-
-* **Profiling Kubernetes** ([profiling.md](profiling)): How to plug the Go pprof profiler into Kubernetes.
-
-* **Instrumenting Kubernetes with a new metric**
-  ([instrumentation.md](instrumentation)): How to add a new metric to the
-  Kubernetes code base.
-
-* **Coding Conventions** ([coding-conventions.md](coding-conventions)):
-  Coding style advice for contributors.
-
-
-## Developing against the Kubernetes API
-
-* API objects are explained at [http://kubernetes.io/third_party/swagger-ui/](http://kubernetes.io/third_party/swagger-ui/).
-
-* **Annotations** ([docs/user-guide/annotations.md](../user-guide/annotations)): For attaching arbitrary non-identifying metadata to objects.
-  Programs that automate Kubernetes objects may use annotations to store small amounts of their state.
-
-* **API Conventions** ([api-conventions.md](api-conventions)):
-  Defining the verbs and resources used in the Kubernetes API.
-
-* **API Client Libraries** ([client-libraries.md](client-libraries)):
-  A list of existing client libraries, both supported and user-contributed.
-
-
-## Writing plugins
-
-* **Authentication Plugins** ([docs/admin/authentication.md](../admin/authentication)):
-  The current and planned states of authentication tokens.
-
-* **Authorization Plugins** ([docs/admin/authorization.md](../admin/authorization)):
-  Authorization applies to all HTTP requests on the main apiserver port.
-  This doc explains the available authorization implementations.
-
-* **Admission Control Plugins** ([admission_control](../design/admission_control))
-
-
-## Building releases
-
-* **Making release notes** ([making-release-notes.md](making-release-notes)): Generating release notes for a new release.
-
-* **Releasing Kubernetes** ([releasing.md](releasing)): How to create a Kubernetes release (as in version)
-  and how the version information gets embedded into the built binaries.
-
-