Merge remote-tracking branch 'origin' into release-1.4
commit f4457ae677

@@ -0,0 +1,22 @@
<!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->

<!--Required Information-->

**This is a...**
<!-- choose one by changing [ ] to [x] -->
- [ ] Feature Request
- [ ] Bug Report

**Problem:**


**Proposed Solution:**


**Page to Update:**
http://kubernetes.io/...

<!--Optional Information (remove the comment tags around information you would like to include)-->
<!--Kubernetes Version:-->

<!--Additional Information:-->

README.md (32 changed lines)

@@ -11,7 +11,7 @@ change the name of the fork to be:

YOUR_GITHUB_USERNAME.github.io

Then make your changes.

When you visit [http://YOUR_GITHUB_USERNAME.github.io](http://YOUR_GITHUB_USERNAME.github.io) you should see a special-to-you version of the site that contains the changes you just made.

@@ -21,11 +21,11 @@ Don't like installing stuff? Download and run a local staging server with a sing

git clone https://github.com/kubernetes/kubernetes.github.io.git
cd kubernetes.github.io
-docker run -ti --rm -v "$PWD":/k8sdocs -p 4000:4000 johndmulhausen/k8sdocs
+docker run -ti --rm -v "$PWD":/k8sdocs -p 4000:4000 gcr.io/google-samples/k8sdocs:1.0

Then visit [http://localhost:4000](http://localhost:4000) to see our site. Any changes you make on your local machine will be automatically staged.

-If you're interested you can view [the Dockerfile for this image](https://gist.github.com/johndmulhausen/f8f0ab8d82d2c755af3a4709729e1859).
+If you're interested you can view [the Dockerfile for this image](https://github.com/kubernetes/kubernetes.github.io/blob/master/staging-container/Dockerfile).

## Staging the site locally (from scratch setup)

@@ -152,6 +152,32 @@ http://kubernetes-v1-3.github.io/

Editing of these branches will kick off a build using Travis CI that auto-updates these URLs; you can monitor the build progress at [https://travis-ci.org/kubernetes/kubernetes.github.io](https://travis-ci.org/kubernetes/kubernetes.github.io).

## Config yaml guidelines

Guidelines for config yamls that are included in the site docs. These
are the yaml or json files that contain Kubernetes object
configuration to be used with `kubectl create -f`. Config yamls should
be:

* Separate deployable files, not embedded in the document, unless very
  small variations of a full config.
* Included in the doc with the include code
  [above](#include-code-from-another-file).
* In the same directory as the doc that they are being used in.
* If you are re-using a yaml from another doc, that is OK; just
  leave it there, don't move it up to a higher-level directory.
* Tested in
  [test/examples_test.go](https://github.com/kubernetes/kubernetes.github.io/blob/master/test/examples_test.go).
* Follows
  [best practices](http://kubernetes.io/docs/user-guide/config-best-practices/).

Don't assume the reader has this repository checked out; use `kubectl
create -f https://github...` in example commands. For Docker images
used in config yamls, try to use an image from an existing Kubernetes
example. If creating an image for a doc, follow the
[example guidelines](https://github.com/kubernetes/kubernetes/blob/master/examples/guidelines.md#throughout)
section on "Docker images" from the Kubernetes repository.
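
As an illustration of that guideline, an example command in a doc points at the raw file that lives next to the doc; the URL below is hypothetical and only shows the pattern:

```shell
# Hypothetical path: substitute the raw GitHub URL of the yaml that sits in the
# same directory as your doc, so readers don't need a local checkout.
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master/docs/user-guide/my-example/pod.yaml
```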

## Partners
"Kubernetes partners" refers to companies that contribute to the Kubernetes core codebase and/or extend their platform to support Kubernetes. Partners can have their logos added to the partner section of the [community page](http://k8s.io/community) by following the steps below and meeting the logo specifications below. Partners also need to have ready a URL specific to their Kubernetes integration; that URL is the destination when the logo is clicked.

@@ -47,6 +47,7 @@
<a href="" onclick="window.open('https://github.com/kubernetes/kubernetes.github.io/issues/new?title=Issue%20with%20' +
window.location.pathname + '&body=Issue%20with%20' +
window.location.pathname)" class="button issue">Create Issue</a>
<a href="/editdocs#{{ page.path }}" class="button issue">Edit This Page</a>
{% endif %}
</div>
</section>

@@ -74,6 +74,8 @@ title: Community
<a href="http://info.crunchydata.com/blog/advanced-crunchy-containers-for-postgresql"><img src="/images/community_logos/crunchy_data_logo.png"></a>
<a href="https://content.mirantis.com/Containerizing-OpenStack-on-Kubernetes-Video-Landing-Page.html"><img src="/images/community_logos/mirantis_logo.png"></a>
<a href="http://blog.aquasec.com/security-best-practices-for-kubernetes-deployment"><img src="/images/community_logos/aqua_logo.png"></a>
<a href="https://jujucharms.com/canonical-kubernetes/"><img src="/images/community_logos/ubuntu_cannonical_logo.png"></a>
<a href="https://github.com/nuagenetworks/nuage-kubernetes"><img src="/images/community_logos/nuage_network_logo.png"></a>
</div>
</div>
</main>

@@ -204,7 +204,7 @@ As of 1.3 RBAC mode is in alpha and considered experimental.

To use RBAC, you must both enable the authorization module with `--authorization-mode=RBAC`,
and [enable the API version](
-cluster-management.md/#Turn-on-or-off-an-API-version-for-your-cluster),
+/docs/admin/cluster-management/#turn-on-or-off-an-api-version-for-your-cluster),
with a `--runtime-config=` that includes `rbac.authorization.k8s.io/v1alpha1`.
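
A minimal sketch of what those two settings can look like on the kube-apiserver command line; how apiserver flags are actually set depends on how your cluster was deployed, and all other flags are omitted here:

```shell
# Sketch only: enable the RBAC authorizer and its alpha API group.
kube-apiserver \
  --authorization-mode=RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1alpha1
```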

### Roles, RoleBindings, ClusterRoles, and ClusterRoleBindings

@@ -41,7 +41,7 @@ Kubernetes installations. This required some minor
(backward-compatible) changes to the way
the Kubernetes cluster DNS server processes DNS queries, to facilitate
the lookup of federated services (which span multiple Kubernetes clusters).
-See the [Cluster Federation Administrators' Guide](/docs/admin/federation/index.md) for more
+See the [Cluster Federation Administrators' Guide](/docs/admin/federation) for more
details on Cluster Federation and multi-site support.

## References

@@ -0,0 +1,3 @@
# *Stop. This guide has been superseded by [Minikube](../minikube/). The link below is present only for historical purposes*

The document has been moved to [here](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/local-cluster/docker.md)

@@ -57,6 +57,7 @@ on how flags are set on various components.

### Network

#### Network Connectivity
Kubernetes has a distinctive [networking model](/docs/admin/networking).

Kubernetes allocates an IP address to each pod. When creating a cluster, you

@@ -66,23 +67,35 @@ the node is added. A process in one pod should be able to communicate with
another pod using the IP of the second pod. This connectivity can be
accomplished in two ways:

- Configure network to route Pod IPs
  - Harder to setup from scratch.
  - Google Compute Engine ([GCE](/docs/getting-started-guides/gce)) and [AWS](/docs/getting-started-guides/aws) guides use this approach.
  - Need to make the Pod IPs routable by programming routers, switches, etc.
  - Can be configured external to Kubernetes, or can implement in the "Routes" interface of a Cloud Provider module.
  - Generally highest performance.
- Create an Overlay network
  - Easier to setup
  - Traffic is encapsulated, so per-pod IPs are routable.
  - Examples:
- **Using an overlay network**
  - An overlay network obscures the underlying network architecture from the
    pod network through traffic encapsulation (e.g. vxlan).
  - Encapsulation reduces performance, though exactly how much depends on your solution.
- **Without an overlay network**
  - Configure the underlying network fabric (switches, routers, etc.) to be aware of pod IP addresses.
  - This does not require the encapsulation provided by an overlay, and so can achieve
    better performance.

Which method you choose depends on your environment and requirements. There are various ways
to implement one of the above options:

- **Use a network plugin which is called by Kubernetes**
  - Kubernetes supports the [CNI](https://github.com/containernetworking/cni) network plugin interface.
  - There are a number of solutions which provide plugins for Kubernetes:
    - [Flannel](https://github.com/coreos/flannel)
    - [Calico](https://github.com/projectcalico/calico-containers)
    - [Weave](http://weave.works/)
    - [Open vSwitch (OVS)](http://openvswitch.org/)
  - Does not require "Routes" portion of Cloud Provider module.
  - Reduced performance (exactly how much depends on your solution).
  - [More found here](/docs/admin/networking#how-to-achieve-this)
  - You can also write your own.
- **Compile support directly into Kubernetes**
  - This can be done by implementing the "Routes" interface of a Cloud Provider module.
  - The Google Compute Engine ([GCE](/docs/getting-started-guides/gce)) and [AWS](/docs/getting-started-guides/aws) guides use this approach.
- **Configure the network external to Kubernetes**
  - This can be done by manually running commands, or through a set of externally maintained scripts.
  - You have to implement this yourself, but it can give you an extra degree of flexibility.

-You need to select an address range for the Pod IPs.
+You will need to select an address range for the Pod IPs. Note that IPv6 is not yet supported for Pod IPs.

- Various approaches:
  - GCE: each project has its own `10.0.0.0/8`. Carve off a `/16` for each

@@ -90,10 +103,8 @@ You need to select an address range for the Pod IPs.
    Each node gets a further subdivision of this space.
  - AWS: use one VPC for whole organization, carve off a chunk for each
    cluster, or use different VPC for different clusters.
- IPv6 is not supported yet.
- Allocate one CIDR subnet for each node's PodIPs, or a single large CIDR
  from which smaller CIDRs are automatically allocated to each node (if nodes
  are dynamically added).
  from which smaller CIDRs are automatically allocated to each node.
- You need max-pods-per-node * max-number-of-nodes IPs in total. A `/24` per
  node supports 254 pods per machine and is a common choice. If IPs are
  scarce, a `/26` (62 pods per machine) or even a `/27` (30 pods) may be sufficient.
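
As a quick worked example of the sizing bullet above (the numbers are illustrative, not a recommendation):

```shell
# 100 nodes, each with a /24 (254 usable pod IPs), needs 100 * 254 pod IPs in
# total, which fits comfortably inside a /16 cluster-wide range (65,534 addresses).
echo $((100 * 254))   # prints 25400
```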

@@ -116,6 +127,17 @@ Also, you need to pick a static IP for master node.
- Open any firewalls to allow access to the apiserver ports 80 and/or 443.
- Enable ipv4 forwarding sysctl, `net.ipv4.ip_forward = 1`

#### Network Policy

Kubernetes enables the definition of fine-grained network policy between Pods
using the [NetworkPolicy](/docs/user-guide/networkpolicy) resource.

Not all networking providers support the Kubernetes NetworkPolicy features.
For clusters which choose to enable NetworkPolicy, the
[Calico policy controller addon](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/calico-policy-controller)
can enforce the NetworkPolicy API on top of native cloud-provider networking,
Flannel, or Calico networking.
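
A hedged sketch of what a minimal NetworkPolicy looks like, using the extensions/v1beta1 API group that was current when this page was written; the names, labels, and port are placeholders, and the policy only has an effect if your networking provider enforces NetworkPolicy:

```shell
# Placeholder example: allow pods labeled role=frontend to reach pods labeled
# role=db on TCP 6379; all values here are illustrative.
cat <<'EOF' | kubectl create -f -
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
EOF
```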

### Cluster Naming

You should pick a name for your cluster. Pick a short name for each cluster

@@ -20,7 +20,7 @@ In the reference section, you can find reference documentation for Kubernetes AP

## Glossary

-Explore the glossary of essential Kubernetes concepts. Some good starting points are the entries for [Pods](/docs/user-guide/pods/), [Nodes](/docs/admin/node/), [Services](/docs/user-guide/services/), and [Replication Controllers](/docs/user-guide/replication-controller/).
+Explore the glossary of essential Kubernetes concepts. Some good starting points are the entries for [Pods](/docs/user-guide/pods/), [Nodes](/docs/admin/node/), [Services](/docs/user-guide/services/), and [ReplicaSets](/docs/user-guide/replicasets/).

## Design Docs

@@ -37,14 +37,14 @@ git fetch upstream
git reset --hard upstream/docsv2
```

### Step 3: Make sure you can serve rendered docs

One option is to simply rename your fork's repo on GitHub.com to `yourusername.github.io`, which will auto-stage your commits at that URL.

Or, just use Docker! Run this from within your local `kubernetes.github.io` directory and you should be good:

```shell
-docker run -ti --rm -v "$PWD":/k8sdocs -p 4000:4000 johndmulhausen/k8sdocs
+docker run -ti --rm -v "$PWD":/k8sdocs -p 4000:4000 gcr.io/google-samples/k8sdocs:1.0
```

The site will then be viewable at [http://localhost:4000/](http://localhost:4000/).

@@ -246,4 +246,4 @@ You probably shouldn't be using this, but we also have templates which consume Y

### Adding page to navigation

Once your page is saved somewhere in the `/docs/` directory, add a reference to it in the `reference.yml` file under `/_data/` so that it appears in the left-hand navigation of the site. This is also where you add a title to the page.

@@ -100,13 +100,36 @@ with future high-availability support.

### Programmatic access to the API

There are [client libraries](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/client-libraries.md) for accessing the API
from several languages. The Kubernetes project-supported
[Go](http://releases.k8s.io/{{page.githubbranch}}/pkg/client/)
client library can use the same [kubeconfig file](/docs/user-guide/kubeconfig-file)
as the kubectl CLI does to locate and authenticate to the apiserver.

The Kubernetes project-supported Go client library is at [https://github.com/kubernetes/client-go](https://github.com/kubernetes/client-go).

See documentation for other libraries for how they authenticate.

To use it:

* To get the library, run the following command: `go get k8s.io/client-go/<version number>/kubernetes`. See [https://github.com/kubernetes/client-go](https://github.com/kubernetes/client-go) to see which versions are supported.
* Write an application atop the client-go clients. Note that client-go defines its own API objects, so if needed, please import API definitions from client-go rather than from the main repository; e.g., `import "k8s.io/client-go/1.4/pkg/api/v1"` is correct.

The Go client can use the same [kubeconfig file](/docs/user-guide/kubeconfig-file)
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes/client-go/examples/out-of-cluster.go):

```golang
import (
    "fmt"
    "k8s.io/client-go/1.4/kubernetes"
    "k8s.io/client-go/1.4/pkg/api/v1"
    "k8s.io/client-go/1.4/tools/clientcmd"
)
...
// uses the current context in kubeconfig
config, _ := clientcmd.BuildConfigFromFlags("", "path to kubeconfig")
// creates the clientset
clientset, _ := kubernetes.NewForConfig(config)
// access the API to list pods
pods, _ := clientset.Core().Pods("").List(v1.ListOptions{})
fmt.Printf("There are %d pods in the cluster\n", len(pods.Items))
...
```

If the application is deployed as a Pod in the cluster, please refer to the [next section](#accessing-the-api-from-a-pod).

There are [client libraries](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/client-libraries.md) for accessing the API from other languages. See documentation for other libraries for how they authenticate.

### Accessing the API from a Pod

@@ -138,7 +161,7 @@ From within a pod the recommended ways to connect to API are:
  in any container of the pod can access it. See this [example of using kubectl proxy
  in a pod](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/kubectl-container/).
- use the Go client library, and create a client using the `client.NewInCluster()` factory.
-  This handles locating and authenticating to the apiserver.
+  This handles locating and authenticating to the apiserver. [example](https://github.com/kubernetes/client-go/examples/in-cluster.go)

In each case, the credentials of the pod are used to communicate securely with the apiserver.
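
As an illustrative sketch of the first option above (a `kubectl proxy` sidecar); the port and the API path are arbitrary choices, not requirements:

```shell
# In the sidecar container: expose the apiserver on a local port of the pod.
kubectl proxy --port=8001 &

# In any other container of the same pod: talk to the apiserver via the proxy,
# which handles locating and authenticating to it.
curl http://localhost:8001/api/v1/namespaces/default/pods
```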

@@ -307,8 +307,8 @@ $ kubectl config use-context federal-context

So, tying this all together, a quick start to creating your own kubeconfig file:

- Take a good look and understand how you're api-server is being launched: You need to know YOUR security requirements and policies before you can design a kubeconfig file for convenient authentication.
- Take a good look and understand how your api-server is being launched: You need to know YOUR security requirements and policies before you can design a kubeconfig file for convenient authentication.

- Replace the snippet above with information for your cluster's api-server endpoint.

- Make sure your api-server is launched in such a way that at least one user (i.e. green-user) credentials are provided to it. You will of course have to look at api-server documentation in order to determine the current state-of-the-art in terms of providing authentication details.
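
A minimal sketch of that quick start using `kubectl config`; every name, address, and file path below is a placeholder that you would replace with your own api-server endpoint and credentials:

```shell
# Placeholders throughout: substitute your cluster's real endpoint, CA, and user keys.
kubectl config set-cluster my-cluster --server=https://1.2.3.4:6443 --certificate-authority=/path/to/ca.crt
kubectl config set-credentials green-user --client-certificate=/path/to/green-user.crt --client-key=/path/to/green-user.key
kubectl config set-context my-context --cluster=my-cluster --user=green-user
kubectl config use-context my-context
```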

@@ -12,13 +12,13 @@ Drain node in preparation for maintenance

Drain node in preparation for maintenance.

The given node will be marked unschedulable to prevent new pods from arriving.
-Then drain deletes all pods except mirror pods (which cannot be deleted through
+The `drain` deletes all pods except mirror pods (which cannot be deleted through
the API server). If there are DaemonSet-managed pods, drain will not proceed
without --ignore-daemonsets, and regardless it will not delete any
DaemonSet-managed pods, because those pods would be immediately replaced by the
DaemonSet controller, which ignores unschedulable markings. If there are any
-pods that are neither mirror pods nor managed--by ReplicationController,
-ReplicaSet, DaemonSet or Job--, then drain will not delete any pods unless you
+pods that are neither mirror pods nor managed by ReplicationController,
+ReplicaSet, DaemonSet or Job, then drain will not delete any pods unless you
use --force.
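
A hedged example of the behavior described above (the node name is a placeholder):

```shell
# Mark the node unschedulable and delete its pods; --ignore-daemonsets lets drain
# proceed when DaemonSet-managed pods are present (those pods are left running).
kubectl drain my-node --ignore-daemonsets
# Add --force only if you accept deleting pods that are not managed by a controller.
```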

When you are ready to put the node back into service, use kubectl uncordon, which

@@ -49,7 +49,7 @@ kubectl version

### SEE ALSO

-* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
+* [kubectl](../kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra on 2-Sep-2016

@@ -90,7 +90,7 @@ across namespaces, you need to use the fully qualified domain name (FQDN).

## Not All Objects are in a Namespace

Most kubernetes resources (e.g. pods, services, replication controllers, and others) are
-in a some namespace. However namespace resources are not themselves in a namespace.
-And, low-level resources, such as [nodes](/docs/admin/node) and
+in some namespace. However namespace resources are not themselves in a namespace.
+And low-level resources, such as [nodes](/docs/admin/node) and
persistentVolumes, are not in any namespace. Events are an exception: they may or may not
have a namespace, depending on the object the event is about.

@@ -480,7 +480,7 @@ start until all the pod's volumes are mounted.

Create a secret containing some ssh keys:

```shell
-$ kubectl create secret generic my-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub
+$ kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub
```

**Security Note:** think carefully before sending your own ssh keys: other users of the cluster may have access to the secret. Use a service account which you want to have accessible to all the users with whom you share the kubernetes cluster, and can revoke if they are compromised.
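
As an optional follow-up (not part of the original example), you can confirm what was stored; `kubectl describe` shows only the key names and sizes, not the key material:

```shell
kubectl describe secret ssh-key-secret
```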

@@ -372,7 +372,7 @@ writers simultaneously.
__Important: You must have your own Ceph server running with the share exported
before you can use it__

-See the [CephFS example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/cephfs/) for more details.
+See the [CephFS example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/cephfs/) for more details.

### gitRepo

(Two binary image files added, 13 KiB and 17 KiB; contents not shown.)

@@ -0,0 +1,13 @@
FROM starefossen/ruby-node:2-4

RUN gem install github-pages

VOLUME /k8sdocs

EXPOSE 4000

WORKDIR /k8sdocs

CMD bundle && jekyll clean && jekyll serve -H 0.0.0.0 -P 4000

# For instructions, see http://kubernetes.io/editdocs/
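
A hedged usage sketch for this image: build it locally from the `staging-container/` directory referenced in the README (the tag below is arbitrary) and run it against your checkout, or simply use the published `gcr.io/google-samples/k8sdocs:1.0` image as shown earlier.

```shell
# Build the staging image from the Dockerfile above, then serve the docs on port 4000.
docker build -t k8sdocs-staging staging-container/
docker run -ti --rm -v "$PWD":/k8sdocs -p 4000:4000 k8sdocs-staging
```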

@@ -33,7 +33,7 @@ import (
    "k8s.io/kubernetes/pkg/apis/extensions"
    expvalidation "k8s.io/kubernetes/pkg/apis/extensions/validation"
    "k8s.io/kubernetes/pkg/capabilities"
-   "k8s.io/kubernetes/pkg/registry/job"
+   "k8s.io/kubernetes/pkg/registry/batch/job"
    "k8s.io/kubernetes/pkg/runtime"
    "k8s.io/kubernetes/pkg/types"
    "k8s.io/kubernetes/pkg/util/validation/field"