diff --git a/404.md b/404.md index bf053c1e3b..3d32e81bcf 100644 --- a/404.md +++ b/404.md @@ -2,67 +2,9 @@ layout: docwithnav title: 404 Error! permalink: /404.html +no_canonical: true --- - + + Sorry, this page was not found. :( diff --git a/README.md b/README.md index a468c1f947..ad6c85778b 100644 --- a/README.md +++ b/README.md @@ -4,16 +4,22 @@ Welcome! We are very pleased you want to contribute to the documentation and/or You can click the "Fork" button in the upper-right area of the screen to create a copy of our site on your GitHub account called a "fork." Make any changes you want in your fork, and when you are ready to send those changes to us, go to the index page for your fork and click "New Pull Request" to let us know about it. -## Staging the site on GitHub Pages +## Automatic Staging for Pull Requests -If you want to see your changes staged without having to install anything locally, remove the CNAME file in this directory and -change the name of the fork to be: +When you create a pull request (either against master or the upcoming release), your changes are staged in a custom subdomain on Netlify so that you can see your changes in rendered form before the PR is merged. You can use this to verify that everything is correct before the PR gets merged. To view your changes: - YOUR_GITHUB_USERNAME.github.io +- Scroll down to the PR's list of Automated Checks +- Click "Show All Checks" +- Look for "deploy/netlify"; you'll see "Deploy Preview Ready!" if staging was successful +- Click "Details" to bring up the staged site and navigate to your changes -Then make your changes. +## Release Branch Staging -When you visit [http://YOUR_GITHUB_USERNAME.github.io](http://YOUR_GITHUB_USERNAME.github.io) you should see a special-to-you version of the site that contains the changes you just made. +The Kubernetes site maintains staged versions at a subdomain provided by Netlify. Every PR for the Kubernetes site, either against the master branch or the upcoming release branch, is staged automatically. + +The staging site for the next upcoming Kubernetes release is here: [http://kubernetes-io-vnext-staging.netlify.com/](http://kubernetes-io-vnext-staging.netlify.com/) + +The staging site reflects the current state of what's been merged in the release branch, or in other words, what the docs will look like for the next upcoming release. It's automatically updated as new PRs get merged. ## Staging the site locally (using Docker) @@ -64,7 +70,6 @@ Make any changes you want. Then, to see your changes locally: Your copy of the site will then be viewable at: [http://localhost:4000](http://localhost:4000) (or wherever Jekyll tells you). - ## GitHub help If you're a bit rusty with git/GitHub, you might want to read @@ -137,20 +142,13 @@ That, of course, will send users to: ## Branch structure -The current version of the website is served out of the `master` branch. +The current version of the website is served out of the `master` branch. To make changes to the live docs, such as bug fixes, broken links, typos, etc, **target your pull request to the master branch**. -All versions of the site that relate to past and future versions will be named after their Kubernetes release number. For example, [the old branch for the 1.1 docs is called `release-1.1`](https://github.com/kubernetes/kubernetes.github.io/tree/release-1.1). +The `release-1.x` branches store changes for **upcoming releases of Kubernetes**. For example, the `release-1.5` branch has changes for the upcoming 1.5 release. 
These changes target branches (and *not* master) to avoid publishing documentation updates prior to the release for which they're relevant. If you have a change for an upcoming release of Kubernetes, **target your pull request to the appropriate release branch**. Changes in the "docsv2" branch (where we are testing a revamp of the docs) are automatically staged here: http://k8sdocs.github.io/docs/tutorials/ -Changes in the "release-1.1" branch (for k8s v1.1 docs) are automatically staged here: -http://kubernetes-v1-1.github.io/ - -Changes in the "release-1.3" branch (for k8s v1.3 docs) are automatically staged here: -http://kubernetes-v1-3.github.io/ - -Editing of these branches will kick off a build using Travis CI that auto-updates these URLs; you can monitor the build progress at [https://travis-ci.org/kubernetes/kubernetes.github.io](https://travis-ci.org/kubernetes/kubernetes.github.io). ## Config yaml guidelines diff --git a/_config.yml b/_config.yml index 5094499bfe..7ace374fca 100644 --- a/_config.yml +++ b/_config.yml @@ -18,7 +18,7 @@ defaults: values: version: "v1.3" githubbranch: "master" - docsbranch: "release-1.3" + docsbranch: "master" - scope: path: "docs" @@ -27,3 +27,7 @@ defaults: showedit: true permalink: pretty + +gems: + - jekyll-redirect-from + diff --git a/_data/globals.yml b/_data/globals.yml index c83dae26dd..73978ea750 100644 --- a/_data/globals.yml +++ b/_data/globals.yml @@ -4,5 +4,6 @@ tocs: - tasks - concepts - reference +- tools - samples - support diff --git a/_data/guides.yml b/_data/guides.yml index 7c41285d0e..40d47b08d6 100644 --- a/_data/guides.yml +++ b/_data/guides.yml @@ -163,10 +163,10 @@ toc: path: /docs/getting-started-guides/gce/ - title: Running Kubernetes on AWS EC2 path: /docs/getting-started-guides/aws/ + - title: Running Kubernetes on Azure + path: /docs/getting-started-guides/azure/ - title: Running Kubernetes on Azure (Weave-based) path: /docs/getting-started-guides/coreos/azure/ - - title: Running Kubernetes on Azure (Flannel-based) - path: /docs/getting-started-guides/azure/ - title: Running Kubernetes on CenturyLink Cloud path: /docs/getting-started-guides/clc/ - title: Running Kubernetes on IBM SoftLayer @@ -252,6 +252,8 @@ toc: path: /docs/admin/ - title: Cluster Management Guide path: /docs/admin/cluster-management/ + - title: kubeadm reference + path: /docs/admin/kubeadm/ - title: Installing Addons path: /docs/admin/addons/ - title: Sharing a Cluster with Namespaces diff --git a/_data/reference.yml b/_data/reference.yml index 3cdd91de6e..5d4fe17f7b 100644 --- a/_data/reference.yml +++ b/_data/reference.yml @@ -63,7 +63,7 @@ toc: - title: kubectl Commands section: - title: kubectl - path: /docs/user-guide/kubectl/kubectl/ + path: /docs/user-guide/kubectl/ - title: kubectl annotate path: /docs/user-guide/kubectl/kubectl_annotate/ - title: kubectl api-versions @@ -230,6 +230,8 @@ toc: path: /docs/user-guide/services/ - title: Service Accounts path: /docs/user-guide/service-accounts/ + - title: Third Party Resources + path: /docs/user-guide/thirdpartyresources/ - title: Volumes path: /docs/user-guide/volumes/ diff --git a/_data/tools.yml b/_data/tools.yml new file mode 100644 index 0000000000..8993e091bb --- /dev/null +++ b/_data/tools.yml @@ -0,0 +1,4 @@ +bigheader: "Tools" +toc: +- title: Tools + path: /docs/tools/ diff --git a/_data/tutorials.yml b/_data/tutorials.yml index 9e3b79c8a2..01440b09d7 100644 --- a/_data/tutorials.yml +++ b/_data/tutorials.yml @@ -2,59 +2,51 @@ bigheader: "Tutorials" toc: - title: Tutorials path: 
/docs/tutorials/ -- title: Getting Started +- title: Kubernetes Basics section: + - title: Overview + path: /docs/tutorials/kubernetes-basics/ - title: 1. Create a Cluster section: - - title: Creating a Cluster - path: /docs/tutorials/getting-started/create-cluster/ - title: Using Minikube to Create a Cluster - path: /docs/tutorials/getting-started/cluster-intro/ + path: /docs/tutorials/kubernetes-basics/cluster-intro/ - title: Interactive Tutorial - Creating a Cluster - path: /docs/tutorials/getting-started/cluster-interactive/ + path: /docs/tutorials/kubernetes-basics/cluster-interactive/ - title: 2. Deploy an App section: - - title: Deploying an App - path: /docs/tutorials/getting-started/deploy-app/ - title: Using kubectl to Create a Deployment - path: /docs/tutorials/getting-started/deploy-intro/ + path: /docs/tutorials/kubernetes-basics/deploy-intro/ - title: Interactive Tutorial - Deploying an App - path: /docs/tutorials/getting-started/deploy-interactive/ + path: /docs/tutorials/kubernetes-basics/deploy-interactive/ - title: 3. Explore Your App section: - - title: Exploring Your App - path: /docs/tutorials/getting-started/explore-app/ - title: Viewing Pods and Nodes - path: /docs/tutorials/getting-started/explore-intro/ + path: /docs/tutorials/kubernetes-basics/explore-intro/ - title: Interactive Tutorial - Exploring Your App - path: /docs/tutorials/getting-started/explore-interactive/ + path: /docs/tutorials/kubernetes-basics/explore-interactive/ - title: 4. Expose Your App Publicly section: - - title: Exposing Your App Publicly - path: /docs/tutorials/getting-started/expose-app/ - title: Using a Service to Expose Your App - path: /docs/tutorials/getting-started/expose-intro/ + path: /docs/tutorials/kubernetes-basics/expose-intro/ - title: Interactive Tutorial - Exposing Your App - path: /docs/tutorials/getting-started/expose-interactive/ + path: /docs/tutorials/kubernetes-basics/expose-interactive/ - title: 5. Scale Your App section: - - title: Scaling Your App - path: /docs/tutorials/getting-started/scale-app/ - title: Running Multiple Instances of Your App - path: /docs/tutorials/getting-started/scale-intro/ + path: /docs/tutorials/kubernetes-basics/scale-intro/ - title: Interactive Tutorial - Scaling Your App - path: /docs/tutorials/getting-started/scale-interactive/ + path: /docs/tutorials/kubernetes-basics/scale-interactive/ - title: 6. 
Update Your App section: - - title: Updating Your App - path: /docs/tutorials/getting-started/update-app/ - title: Performing a Rolling Update - path: /docs/tutorials/getting-started/update-intro/ + path: /docs/tutorials/kubernetes-basics/update-intro/ - title: Interactive Tutorial - Updating Your App - path: /docs/tutorials/getting-started/update-interactive/ + path: /docs/tutorials/kubernetes-basics/update-interactive/ - title: Stateless Applications section: - title: Running a Stateless Application Using a Deployment path: /docs/tutorials/stateless-application/run-stateless-application-deployment/ - title: Using a Service to Access an Application in a Cluster path: /docs/tutorials/stateless-application/expose-external-ip-address-service/ + - title: Exposing an External IP Address to Access an Application in a Cluster + path: /docs/tutorials/stateless-application/expose-external-ip-address/ diff --git a/_includes/head-header.html b/_includes/head-header.html index 12de81d975..0405f3699c 100644 --- a/_includes/head-header.html +++ b/_includes/head-header.html @@ -2,7 +2,7 @@ - + {% if !page.no_canonical %}{% endif %} diff --git a/_layouts/docwithnav.html b/_layouts/docwithnav.html index 16b1235bf5..c2f74c1bb4 100755 --- a/_layouts/docwithnav.html +++ b/_layouts/docwithnav.html @@ -16,6 +16,7 @@
   TASKS
   CONCEPTS
   REFERENCE
+  TOOLS
   SAMPLES
   SUPPORT
  • @@ -48,7 +49,6 @@ (function(d,c,j){if(!document.getElementById(j)){var pd=d.createElement(c),s;pd.id=j;pd.src=('https:'==document.location.protocol)?'https://polldaddy.com/js/rating/rating.js':'http://i0.poll.fm/js/rating/rating.js';s=document.getElementsByTagName(c)[0];s.parentNode.insertBefore(pd,s);}}(document,'script','pd-rating-js')); Create Issue Edit This Page {% endif %} diff --git a/docs/admin/accessing-the-api.md b/docs/admin/accessing-the-api.md index 6569ac93f8..cb3f3d4ce4 100644 --- a/docs/admin/accessing-the-api.md +++ b/docs/admin/accessing-the-api.md @@ -52,8 +52,8 @@ On GCE, Client Certificates, Password, Plain Tokens, and JWT Tokens are all enab If the request cannot be authenticated, it is rejected with HTTP status code 401. Otherwise, the user is authenticated as a specific `username`, and the user name is available to subsequent steps to use in their decisions. Some authenticators -may also provide the group memberships of the user, while other authenticators -do not (and expect the authorizer to determine these). +also provide the group memberships of the user, while other authenticators +do not. While Kubernetes uses "usernames" for access control decisions and in request logging, it does not have a `user` object nor does it store usernames or other information about diff --git a/docs/admin/apparmor/index.md b/docs/admin/apparmor/index.md index 395aba1989..9730c07953 100644 --- a/docs/admin/apparmor/index.md +++ b/docs/admin/apparmor/index.md @@ -349,8 +349,8 @@ logs or through `journalctl`. More information is provided in Additional resources: -- http://wiki.apparmor.net/index.php/QuickProfileLanguage -- http://wiki.apparmor.net/index.php/ProfileLanguage +- [Quick guide to the AppArmor profile language](http://wiki.apparmor.net/index.php/QuickProfileLanguage) +- [AppArmor core policy reference](http://wiki.apparmor.net/index.php/ProfileLanguage) ## API Reference diff --git a/docs/admin/authentication.md b/docs/admin/authentication.md index c3c5e52c77..5664e8b266 100644 --- a/docs/admin/authentication.md +++ b/docs/admin/authentication.md @@ -25,10 +25,11 @@ manually through API calls. Service accounts are tied to a set of credentials stored as `Secrets`, which are mounted into pods allowing in cluster processes to talk to the Kubernetes API. -All API requests are tied to either a normal user or a service account. This -means every process inside or outside the cluster, from a human user typing -`kubectl` on a workstation, to `kubelets` on nodes, to members of the control -plane, must authenticate when making requests to the the API server. +API requests are tied to either a normal user or a service account, or are treated +as anonymous requests. This means every process inside or outside the cluster, from +a human user typing `kubectl` on a workstation, to `kubelets` on nodes, to members +of the control plane, must authenticate when making requests to the the API server, +or be treated as an anonymous user. ## Authentication strategies @@ -54,20 +55,31 @@ When multiple are enabled, the first authenticator module to successfully authenticate the request short-circuits evaluation. The API server does not guarantee the order authenticators run in. +The `system:authenticated` group is included in the list of groups for all authenticated users. + ### X509 Client Certs Client certificate authentication is enabled by passing the `--client-ca-file=SOMEFILE` option to API server. 
The referenced file must contain one or more certificates authorities to use to validate client certificates presented to the API server. If a client certificate is presented and verified, the common name of the subject is used as the user name for the -request. +request. As of Kubernetes 1.4, client certificates can also indicate a user's group memberships +using the certificate's organization fields. To include multiple group memberships for a user, +include multiple organization fields in the certificate. + +For example, using the `openssl` command line tool to generate a certificate signing request: + +``` bash +openssl req -new -key jbeda.pem -out jbeda-csr.pem -subj "/CN=jbeda/O=app1/O=app2" +``` + +This would create a CSR for the username "jbeda", belonging to two groups, "app1" and "app2". See [APPENDIX](#appendix) for how to generate a client cert. ### Static Token File -Token file is enabled by passing the `--token-auth-file=SOMEFILE` option to the -API server. Currently, tokens last indefinitely, and the token list cannot be +The API server reads bearer tokens from a file when given the `--token-auth-file=SOMEFILE` option on the command line. Currently, tokens last indefinitely, and the token list cannot be changed without restarting API server. The token file format is implemented in `plugin/pkg/auth/authenticator/token/tokenfile/...` @@ -78,8 +90,19 @@ optional group names. Note, if you have more than one group the column must be d token,user,uid,"group1,group2,group3" ``` -When using token authentication from an http client the API server expects an `Authorization` -header with a value of `Bearer SOMETOKEN`. +#### Putting a Bearer Token in a Request + +When using bearer token authentication from an http client, the API +server expects an `Authorization` header with a value of `Bearer +THETOKEN`. The bearer token must be a character sequence that can be +put in an HTTP header value using no more than the encoding and +quoting facilities of HTTP. For example: if the bearer token is +`31ada4fd-adec-460c-809a-9e56ceb75269` then it would appear in an HTTP +header as shown below. + +```http +Authentication: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269 +``` ### Static Password File @@ -171,7 +194,8 @@ type: kubernetes.io/service-account-token Note: values are base64 encoded because secrets are always base64 encoded. The signed JWT can be used as a bearer token to authenticate as the given service -account. Normally these secrets are mounted into pods for in-cluster access to +account. See [above](#putting-a-bearer-token-in-a-request) for how the token is included +in a request. Normally these secrets are mounted into pods for in-cluster access to the API server, but can be used from outside the cluster as well. Service accounts authenticate with the username `system:serviceaccount:(NAMESPACE):(SERVICEACCOUNT)`, @@ -192,11 +216,8 @@ email, signed by the server. To identify the user, the authenticator uses the `id_token` (not the `access_token`) from the OAuth2 [token response](https://openid.net/specs/openid-connect-core-1_0.html#TokenResponse) -as a bearer token. - -``` -Authentication: Bearer (id_token) -``` +as a bearer token. See [above](#putting-a-bearer-token-in-a-request) for how the token +is included in a request. 
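As an illustration (a sketch only; the server address, CA file, and token value are placeholders, not values taken from this document), an `id_token` is presented to the API server the same way as any other bearer token:

```bash
# Hypothetical example: present the OIDC id_token as a standard bearer token.
curl --cacert ca.pem https://<apiserver>:6443/api \
  --header "Authorization: Bearer <id_token>"
```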
To enable the plugin, pass the following required flags: @@ -272,10 +293,11 @@ contexts: name: webhook ``` -When a client attempts to authenticate with the API server using a bearer token, -using the `Authorization: Bearer (TOKEN)` HTTP header the authentication webhook +When a client attempts to authenticate with the API server using a bearer token +as discussed [above](#putting-a-bearer-token-in-a-request), +the authentication webhook queries the remote service with a review object containing the token. Kubernetes -will not challenge request that lack such a header. +will not challenge a request that lacks such a header. Note that webhook API objects are subject to the same [versioning compatibility rules](/docs/api/) as other Kubernetes API objects. Implementers should be aware of looser @@ -354,6 +376,22 @@ Please refer to the [discussion](https://github.com/kubernetes/kubernetes/pull/1 [blueprint](https://github.com/kubernetes/kubernetes/issues/11626) and [proposed changes](https://github.com/kubernetes/kubernetes/pull/25536) for more details. +## Anonymous requests + +Anonymous access is enabled by default, and can be disabled by passing `--anonymous-auth=false` +option to the API server during startup. + +When enabled, requests that are not rejected by other configured authentication methods are +treated as anonymous requests, and given a username of `system:anonymous` and a group of +`system:unauthenticated`. + +For example, on a server with token authentication configured, and anonymous access enabled, +a request providing an invalid bearer token would receive a `401 Unauthorized` error. +A request providing no bearer token would be treated as an anonymous request. + +If you rely on authentication alone to authorize access, either change to use an +authorization mode other than `AlwaysAllow`, or set `--anonymous-auth=false`. + ## Plugin Development We plan for the Kubernetes API server to issue tokens after the user has been diff --git a/docs/admin/authorization.md b/docs/admin/authorization.md index 1e7b180773..a72a855cb2 100644 --- a/docs/admin/authorization.md +++ b/docs/admin/authorization.md @@ -53,7 +53,7 @@ A request has the following attributes that can be considered for authorization: - what resource is being accessed (for resource requests only) - what subresource is being accessed (for resource requests only) - the namespace of the object being accessed (for namespaced resource requests only) - - the API group being accessed (for resource requests only) + - the API group being accessed (for resource requests only); an empty string designates the [core API group](../api.md#api-groups) The request verb for a resource API endpoint can be determined by the HTTP verb used and whether or not the request acts on an individual resource or a collection of resources: @@ -231,7 +231,7 @@ metadata: namespace: default name: pod-reader rules: - - apiGroups: [""] # The API group "" indicates the default API Group. + - apiGroups: [""] # The API group "" indicates the core API Group. resources: ["pods"] verbs: ["get", "watch", "list"] nonResourceURLs: [] @@ -323,6 +323,32 @@ roleRef: apiVersion: rbac.authorization.k8s.io/v1alpha1 ``` +### Referring to Resources + +Most resources are represented by a string representation of their name, such as "pods", just as it +appears in the URL for the relevant API endpoint. However, some Kubernetes APIs involve a +"subresource" such as the logs for a pod. 
The URL for the pods logs endpoint is: + +``` +GET /api/v1/namespaces/{namespace}/pods/{name}/log +``` + +In this case, "pods" is the namespaced resource, and "log" is a subresource of pods. To represent +this in an RBAC role, use a slash to delimit the resource and subresource names. To allow a subject +to read both pods and pod logs, you would write: + +```yaml +kind: Role +apiVersion: rbac.authorization.k8s.io/v1alpha1 +metadata: + namespace: default + name: pod-and-pod-logs-reader +rules: + - apiGroups: [""] + resources: ["pods", "pods/log"] + verbs: ["get", "list"] +``` + ### Referring to Subjects RoleBindings and ClusterRoleBindings bind "subjects" to "roles". @@ -351,6 +377,7 @@ to groups with the `system:` prefix. Only the `subjects` section of a RoleBinding object shown in the following examples. For a user called `alice@example.com`, specify + ```yaml subjects: - kind: User @@ -358,6 +385,7 @@ subjects: ``` For a group called `frontend-admins`, specify: + ```yaml subjects: - kind: Group @@ -365,6 +393,7 @@ subjects: ``` For the default service account in the kube-system namespace: + ```yaml subjects: - kind: ServiceAccount @@ -373,6 +402,7 @@ subjects: ``` For all service accounts in the `qa` namespace: + ```yaml subjects: - kind: Group @@ -380,6 +410,7 @@ subjects: ``` For all service accounts everywhere: + ```yaml subjects: - kind: Group @@ -601,4 +632,4 @@ subjectaccessreview "" created ``` This is useful for debugging access problems, in that you can use this resource -to determine what access an authorizer is granting. \ No newline at end of file +to determine what access an authorizer is granting. diff --git a/docs/admin/dns.md b/docs/admin/dns.md index a85f2338ce..cc132201aa 100644 --- a/docs/admin/dns.md +++ b/docs/admin/dns.md @@ -9,10 +9,14 @@ assignees: ## Introduction As of Kubernetes 1.3, DNS is a built-in service launched automatically using the addon manager [cluster add-on](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/README.md). -A DNS Pod and Service will be scheduled on the cluster, and the kubelets will be -configured to tell individual containers to use the DNS Service's IP to resolve DNS names. -Every Service defined in the cluster (including the DNS server itself) will be +Kubernetes DNS schedules a DNS Pod and Service on the cluster, and configures +the kubelets to tell individual containers to use the DNS Service's IP to +resolve DNS names. + +## What things get DNS names? + +Every Service defined in the cluster (including the DNS server itself) is assigned a DNS name. By default, a client Pod's DNS search list will include the Pod's own namespace and the cluster's default domain. This is best illustrated by example: @@ -22,17 +26,164 @@ in namespace `bar` can look up this service by simply doing a DNS query for `foo`. A Pod running in namespace `quux` can look up this service by doing a DNS query for `foo.bar`. -The Kubernetes cluster DNS server (based off the [SkyDNS](https://github.com/skynetservices/skydns) library) -supports forward lookups (A records), service lookups (SRV records) and reverse IP address lookups (PTR records). +## Supported DNS schema +The following sections detail the supported record types and layout that is +supported. Any other layout or names or queries that happen to work are +considered implementation details and are subject to change without warning. -## How it Works +### Services -The running Kubernetes DNS pod holds 3 containers - kubedns, dnsmasq and a health check called healthz. 
-The kubedns process watches the Kubernetes master for changes in Services and Endpoints, and maintains -in-memory lookup structures to service DNS requests. The dnsmasq container adds DNS caching to improve -performance. The healthz container provides a single health check endpoint while performing dual healthchecks -(for dnsmasq and kubedns). +#### A records + +"Normal" (not headless) Services are assigned a DNS A record for a name of the +form `my-svc.my-namespace.svc.cluster.local`. This resolves to the cluster IP +of the Service. + +"Headless" (without a cluster IP) Services are also assigned a DNS A record for +a name of the form `my-svc.my-namespace.svc.cluster.local`. Unlike normal +Services, this resolves to the set of IPs of the pods selected by the Service. +Clients are expected to consume the set or else use standard round-robin +selection from the set. + +### SRV records + +SRV Records are created for named ports that are part of normal or Headless +Services. +For each named port, the SRV record would have the form +`_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster.local`. +For a regular service, this resolves to the port number and the CNAME: +`my-svc.my-namespace.svc.cluster.local`. +For a headless service, this resolves to multiple answers, one for each pod +that is backing the service, and contains the port number and a CNAME of the pod +of the form `auto-generated-name.my-svc.my-namespace.svc.cluster.local`. + +### Backwards compatibility + +Previous versions of kube-dns made names of the for +`my-svc.my-namespace.cluster.local` (the 'svc' level was added later). This +is no longer supported. + +### Pods + +#### A Records + +When enabled, pods are assigned a DNS A record in the form of `pod-ip-address.my-namespace.pod.cluster.local`. + +For example, a pod with ip `1.2.3.4` in the namespace `default` with a dns name of `cluster.local` would have an entry: `1-2-3-4.default.pod.cluster.local`. + +#### A Records and hostname based on Pod's hostname and subdomain fields + +Currently when a pod is created, its hostname is the Pod's `metadata.name` value. + +With v1.2, users can specify a Pod annotation, `pod.beta.kubernetes.io/hostname`, to specify what the Pod's hostname should be. +The Pod annotation, if specified, takes precendence over the Pod's name, to be the hostname of the pod. +For example, given a Pod with annotation `pod.beta.kubernetes.io/hostname: my-pod-name`, the Pod will have its hostname set to "my-pod-name". + +With v1.3, the PodSpec has a `hostname` field, which can be used to specify the Pod's hostname. This field value takes precedence over the +`pod.beta.kubernetes.io/hostname` annotation value. + +v1.2 introduces a beta feature where the user can specify a Pod annotation, `pod.beta.kubernetes.io/subdomain`, to specify the Pod's subdomain. +The final domain will be "...svc.". +For example, a Pod with the hostname annotation set to "foo", and the subdomain annotation set to "bar", in namespace "my-namespace", will have the FQDN "foo.bar.my-namespace.svc.cluster.local" + +With v1.3, the PodSpec has a `subdomain` field, which can be used to specify the Pod's subdomain. This field value takes precedence over the +`pod.beta.kubernetes.io/subdomain` annotation value. 
+ +Example: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: busybox + namespace: default +spec: + hostname: busybox-1 + subdomain: default + containers: + - image: busybox + command: + - sleep + - "3600" + name: busybox +``` + +If there exists a headless service in the same namespace as the pod and with the same name as the subdomain, the cluster's KubeDNS Server also returns an A record for the Pod's fully qualified hostname. +Given a Pod with the hostname set to "foo" and the subdomain set to "bar", and a headless Service named "bar" in the same namespace, the pod will see it's own FQDN as "foo.bar.my-namespace.svc.cluster.local". DNS serves an A record at that name, pointing to the Pod's IP. + +With v1.2, the Endpoints object also has a new annotation `endpoints.beta.kubernetes.io/hostnames-map`. Its value is the json representation of map[string(IP)][endpoints.HostRecord], for example: '{"10.245.1.6":{HostName: "my-webserver"}}'. +If the Endpoints are for a headless service, an A record is created with the format ...svc. +For the example json, if endpoints are for a headless service named "bar", and one of the endpoints has IP "10.245.1.6", an A is created with the name "my-webserver.bar.my-namespace.svc.cluster.local" and the A record lookup would return "10.245.1.6". +This endpoints annotation generally does not need to be specified by end-users, but can used by the internal service controller to deliver the aforementioned feature. + +With v1.3, The Endpoints object can specify the `hostname` for any endpoint, along with its IP. The hostname field takes precedence over the hostname value +that might have been specified via the `endpoints.beta.kubernetes.io/hostnames-map` annotation. + +With v1.3, the following annotations are deprecated: `pod.beta.kubernetes.io/hostname`, `pod.beta.kubernetes.io/subdomain`, `endpoints.beta.kubernetes.io/hostnames-map` + +## How do I test if it is working? + +### Create a simple Pod to use as a test environment. + +Create a file named busybox.yaml with the +following contents: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: busybox + namespace: default +spec: + containers: + - image: busybox + command: + - sleep + - "3600" + imagePullPolicy: IfNotPresent + name: busybox + restartPolicy: Always +``` + +Then create a pod using this file: + +``` +kubectl create -f busybox.yaml +``` + +### Wait for this pod to go into the running state. + +You can get its status with: +``` +kubectl get pods busybox +``` + +You should see: +``` +NAME READY STATUS RESTARTS AGE +busybox 1/1 Running 0 +``` + +### Validate DNS works + +Once that pod is running, you can exec nslookup in that environment: + +``` +kubectl exec busybox -- nslookup kubernetes.default +``` + +You should see something like: + +``` +Server: 10.0.0.10 +Address 1: 10.0.0.10 + +Name: kubernetes.default +Address 1: 10.0.0.1 +``` + +If you see that, DNS is working correctly. ## Kubernetes Federation (Multiple Zone support) @@ -44,6 +195,25 @@ the lookup of federated services (which span multiple Kubernetes clusters). See the [Cluster Federation Administrators' Guide](/docs/admin/federation) for more details on Cluster Federation and multi-site support. +## How it Works + +The running Kubernetes DNS pod holds 3 containers - kubedns, dnsmasq and a health check called healthz. +The kubedns process watches the Kubernetes master for changes in Services and Endpoints, and maintains +in-memory lookup structures to service DNS requests. 
The dnsmasq container adds DNS caching to improve +performance. The healthz container provides a single health check endpoint while performing dual healthchecks +(for dnsmasq and kubedns). + +The DNS pod is exposed as a Kubernetes Service with a static IP. Once assigned the +kubelet passes DNS configured using the `--cluster-dns=10.0.0.10` flag to each +container. + +DNS names also need domains. The local domain is configurable, in the kubelet using +the flag `--cluster-domain=` + +The Kubernetes cluster DNS server (based off the [SkyDNS](https://github.com/skynetservices/skydns) library) +supports forward lookups (A records), service lookups (SRV records) and reverse IP address lookups (PTR records). + + ## References - [Docs for the DNS cluster addon](http://releases.k8s.io/{{page.githubbranch}}/build/kube-dns/README.md) diff --git a/docs/admin/kubeadm.md b/docs/admin/kubeadm.md new file mode 100644 index 0000000000..57a21528c2 --- /dev/null +++ b/docs/admin/kubeadm.md @@ -0,0 +1,150 @@ +--- +assignees: +- mikedanese +- luxas +- errordeveloper + +--- + + +This document provides information on how to use kubeadm's advanced options. + +Running kubeadm init bootstraps a Kubernetes cluster. This consists of the +following steps: + +1. kubeadm generates a token that additional nodes can use to register themselves +with the master in future. + +1. kubeadm generates a self-signed CA using openssl to provision identities +for each node in the cluster, and for the API server to secure communication +with clients. + +1. Outputting a kubeconfig file for the kubelet to use to connect to the API server, +as well as an additional kubeconfig file for administration. + +1. kubeadm generates Kubernetes resource manifests for the API server, controller manager +and scheduler, and placing them in `/etc/kubernetes/manifests`. The kubelet watches +this directory for static resources to create on startup. These are the core +components of Kubernetes, and once they are up and running we can use `kubectl` +to set up/manage any additional components. + +1. kubeadm installs any add-on components, such as DNS or discovery, via the API server. + +## Usage + +Fields that support multiple values do so either with comma separation, or by specifying +the flag multiple times. + +### `kubeadm init` + +It is usually sufficient to run `kubeadm init` without any flags, +but in some cases you might like to override the default behaviour. +Here we specify all the flags that can be used to customise the Kubernetes +installation. + +- `--api-advertise-addresses` (multiple values are allowed) +- `--api-external-dns-names` (multiple values are allowed) + +By default, `kubeadm init` automatically detects IP addresses and uses +these to generate certificates for the API server. This uses the IP address +of the default network interface. If you would like to access the API server +through a different IP address, or through a hostname, you can override these +defaults with `--api-advertise-addresses` and `--api-external-dns-names`. +For example, to generate certificates that verify the API server at addresses +`10.100.245.1` and `100.123.121.1`, you could use +`--api-advertise-addresses=10.100.245.1,100.123.121.1`. To allow it to be accessed +with a hostname, `--api-external-dns-names=kubernetes.example.com,kube.example.com` +Specifying `--api-advertise-addresses` disables auto detection of IP addresses. + +- `--cloud-provider` + +Currently, `kubeadm init` does not provide autodetection of cloud provider. 
+This means that load balancing and persistent volumes are not supported out +of the box. You can specify a cloud provider using `--cloud-provider`. +Valid values are the ones supported by `controller-manager`, namely `"aws"`, +`"azure"`, `"cloudstack"`, `"gce"`, `"mesos"`, `"openstack"`, `"ovirt"`, +`"rackspace"`, `"vsphere"`. In order to provide additional configuration for +the cloud provider, you should create a `/etc/kubernetes/cloud-config.json` +file manually, before running `kubeadm init`. `kubeadm` automatically +picks those settings up and ensures other nodes are configured correctly. +You must also set the `--cloud-provider` and `--cloud-config` parameters +yourself by editing the `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` +file appropriately. + +- `--external-etcd-cafile` etcd certificate authority file +- `--external-etcd-endpoints` (multiple values are allowed) +- `--external-etcd-certfile` etcd client certificate file +- `--external-etcd-keyfile` etcd client key file + +By default, `kubeadm` deploys a single node etcd cluster on the master +to store Kubernetes state. This means that any failure on the master node +requires you to rebuild your cluster from scratch. Currently `kubeadm init` +does not support automatic deployment of a highly available etcd cluster. +If you would like to use your own etcd cluster, you can override this +behaviour with `--external-etcd-endpoints`. `kubeadm` supports etcd client +authentication using the `--external-etcd-cafile`, `--external-etcd-certfile` +and `--external-etcd-keyfile` flags. + +- `--pod-network-cidr` + +By default, `kubeadm init` does not set node CIDR's for pods and allows you to +bring your own networking configuration through a CNI compatible network +controller addon such as [Weave Net](https://github.com/weaveworks/weave-kube), +[Calico](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes/manifests/kubeadm) +or [Canal](https://github.com/tigera/canal/tree/master/k8s-install/kubeadm). +If you are using a compatible cloud provider or flannel, you can specify a +subnet to use for each pod on the cluster with the `--pod-network-cidr` flag. +This should be a minimum of a /16 so that kubeadm is able to assign /24 subnets +to each node in the cluster. + +- `--service-cidr` (default '10.12.0.0/12') + +You can use the `--service-cidr` flag to override the subnet Kubernetes uses to +assign pods IP addresses. If you do, you will also need to update the +`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` file to reflect this change +else DNS will not function correctly. + +- `--service-dns-domain` (default 'cluster.local') + +By default, `kubeadm init` deploys a cluster that assigns services with DNS names +`..svc.cluster.local`. You can use the `--service-dns-domain` +to change the DNS name suffix. Again, you will need to update the +`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` file accordingly else DNS will +not function correctly. + +- `--token` + +By default, `kubeadm init` automatically generates the token used to initialise +each new node. If you would like to manually specify this token, you can use the +`--token` flag. The token must be of the format '<6 character string>.<16 character string>'. + +- `--use-kubernetes-version` (default 'v1.4.1') the kubernetes version to initialise + +`kubeadm` was originally built for Kubernetes version **v1.4.0**, older versions are not +supported. With this flag you can try any future version, e.g. 
**v1.5.0-beta.1** +whenever it comes out (check [releases page](https://github.com/kubernetes/kubernetes/releases) +for a full list of available versions). + +### `kubeadm join` + +`kubeadm join` has one mandatory flag, the token used to secure cluster bootstrap, +and one mandatory argument, the master IP address. + +Here's an example on how to use it: + +`kubeadm join --token=the_secret_token 192.168.1.1` + +- `--token=` + +By default, when `kubeadm init` runs, a token is generated and revealed in the output. +That's the token you should use here. + +## Troubleshooting + +* Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure `net.bridge.bridge-nf-call-iptables` is set to 1 in your sysctl config, eg. + +``` +# cat /etc/sysctl.d/k8s.conf +net.bridge.bridge-nf-call-ip6tables = 1 +net.bridge.bridge-nf-call-iptables = 1 +``` diff --git a/docs/admin/limitrange/index.md b/docs/admin/limitrange/index.md index f38737981d..0336264bc3 100644 --- a/docs/admin/limitrange/index.md +++ b/docs/admin/limitrange/index.md @@ -1,214 +1,214 @@ ---- -assignees: -- derekwaynecarr -- janetkuo - ---- - -By default, pods run with unbounded CPU and memory limits. This means that any pod in the -system will be able to consume as much CPU and memory on the node that executes the pod. - -Users may want to impose restrictions on the amount of resource a single pod in the system may consume -for a variety of reasons. - -For example: - -1. Each node in the cluster has 2GB of memory. The cluster operator does not want to accept pods -that require more than 2GB of memory since no node in the cluster can support the requirement. To prevent a -pod from being permanently unscheduled to a node, the operator instead chooses to reject pods that exceed 2GB -of memory as part of admission control. -2. A cluster is shared by two communities in an organization that runs production and development workloads -respectively. Production workloads may consume up to 8GB of memory, but development workloads may consume up -to 512MB of memory. The cluster operator creates a separate namespace for each workload, and applies limits to -each namespace. -3. Users may create a pod which consumes resources just below the capacity of a machine. The left over space -may be too small to be useful, but big enough for the waste to be costly over the entire cluster. As a result, -the cluster operator may want to set limits that a pod must consume at least 20% of the memory and cpu of their -average node size in order to provide for more uniform scheduling and to limit waste. - -This example demonstrates how limits can be applied to a Kubernetes [namespace](/docs/admin/namespaces/walkthrough/) to control -min/max resource limits per pod. In addition, this example demonstrates how you can -apply default resource limits to pods in the absence of an end-user specified value. - -See [LimitRange design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_limit_range.md) for more information. For a detailed description of the Kubernetes resource model, see [Resources](/docs/user-guide/compute-resources/) - -## Step 0: Prerequisites - -This example requires a running Kubernetes cluster. See the [Getting Started guides](/docs/getting-started-guides/) for how to get started. - -Change to the `` directory if you're not already there. 
- -## Step 1: Create a namespace - -This example will work in a custom namespace to demonstrate the concepts involved. - -Let's create a new namespace called limit-example: - -```shell -$ kubectl create namespace limit-example -namespace "limit-example" created -``` - -Note that `kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands: - -```shell -$ kubectl get namespaces -NAME STATUS AGE -default Active 51s -limit-example Active 45s -``` - -## Step 2: Apply a limit to the namespace - -Let's create a simple limit in our namespace. - -```shell -$ kubectl create -f docs/admin/limitrange/limits.yaml --namespace=limit-example -limitrange "mylimits" created -``` - -Let's describe the limits that we have imposed in our namespace. - -```shell -$ kubectl describe limits mylimits --namespace=limit-example -Name: mylimits -Namespace: limit-example -Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ----- -------- --- --- --------------- ------------- ----------------------- -Pod cpu 200m 2 - - - -Pod memory 6Mi 1Gi - - - -Container cpu 100m 2 200m 300m - -Container memory 3Mi 1Gi 100Mi 200Mi - -``` - -In this scenario, we have said the following: - -1. If a max constraint is specified for a resource (2 CPU and 1Gi memory in this case), then a limit -must be specified for that resource across all containers. Failure to specify a limit will result in -a validation error when attempting to create the pod. Note that a default value of limit is set by -*default* in file `limits.yaml` (300m CPU and 200Mi memory). -2. If a min constraint is specified for a resource (100m CPU and 3Mi memory in this case), then a -request must be specified for that resource across all containers. Failure to specify a request will -result in a validation error when attempting to create the pod. Note that a default value of request is -set by *defaultRequest* in file `limits.yaml` (200m CPU and 100Mi memory). -3. For any pod, the sum of all containers memory requests must be >= 6Mi and the sum of all containers -memory limits must be <= 1Gi; the sum of all containers CPU requests must be >= 200m and the sum of all -containers CPU limits must be <= 2. - -## Step 3: Enforcing limits at point of creation - -The limits enumerated in a namespace are only enforced when a pod is created or updated in -the cluster. If you change the limits to a different value range, it does not affect pods that -were previously created in a namespace. - -If a resource (cpu or memory) is being restricted by a limit, the user will get an error at time -of creation explaining why. - -Let's first spin up a [Deployment](/docs/user-guide/deployments) that creates a single container Pod to demonstrate -how default values are applied to each pod. - -```shell -$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example -deployment "nginx" created -``` - -Note that `kubectl run` creates a Deployment named "nginx" on Kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead. -If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details. -The Deployment manages 1 replica of single container Pod. Let's take a look at the Pod it manages. 
First, find the name of the Pod: - -```shell -$ kubectl get pods --namespace=limit-example -NAME READY STATUS RESTARTS AGE -nginx-2040093540-s8vzu 1/1 Running 0 11s -``` - -Let's print this Pod with yaml output format (using `-o yaml` flag), and then `grep` the `resources` field. Note that your pod name will be different. - -``` shell -$ kubectl get pods nginx-2040093540-s8vzu --namespace=limit-example -o yaml | grep resources -C 8 - resourceVersion: "57" - selfLink: /api/v1/namespaces/limit-example/pods/nginx-2040093540-ivimu - uid: 67b20741-f53b-11e5-b066-64510658e388 -spec: - containers: - - image: nginx - imagePullPolicy: Always - name: nginx - resources: - limits: - cpu: 300m - memory: 200Mi - requests: - cpu: 200m - memory: 100Mi - terminationMessagePath: /dev/termination-log - volumeMounts: -``` - -Note that our nginx container has picked up the namespace default cpu and memory resource *limits* and *requests*. - -Let's create a pod that exceeds our allowed limits by having it have a container that requests 3 cpu cores. - -```shell -$ kubectl create -f docs/admin/limitrange/invalid-pod.yaml --namespace=limit-example -Error from server: error when creating "docs/admin/limitrange/invalid-pod.yaml": Pod "invalid-pod" is forbidden: [Maximum cpu usage per Pod is 2, but limit is 3., Maximum cpu usage per Container is 2, but limit is 3.] -``` - -Let's create a pod that falls within the allowed limit boundaries. - -```shell -$ kubectl create -f docs/admin/limitrange/valid-pod.yaml --namespace=limit-example -pod "valid-pod" created -``` - -Now look at the Pod's resources field: - -```shell -$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources - uid: 3b1bfd7a-f53c-11e5-b066-64510658e388 -spec: - containers: - - image: gcr.io/google_containers/serve_hostname - imagePullPolicy: Always - name: kubernetes-serve-hostname - resources: - limits: - cpu: "1" - memory: 512Mi - requests: - cpu: "1" - memory: 512Mi -``` - -Note that this pod specifies explicit resource *limits* and *requests* so it did not pick up the namespace -default values. - -Note: The *limits* for CPU resource are enforced in the default Kubernetes setup on the physical node -that runs the container unless the administrator deploys the kubelet with the folllowing flag: - -```shell -$ kubelet --help -Usage of kubelet -.... - --cpu-cfs-quota[=true]: Enable CPU CFS quota enforcement for containers that specify CPU limits -$ kubelet --cpu-cfs-quota=false ... -``` - -## Step 4: Cleanup - -To remove the resources used by this example, you can just delete the limit-example namespace. - -```shell -$ kubectl delete namespace limit-example -namespace "limit-example" deleted -$ kubectl get namespaces -NAME STATUS AGE -default Active 12m -``` - -## Summary - -Cluster operators that want to restrict the amount of resources a single container or pod may consume -are able to define allowable ranges per Kubernetes namespace. In the absence of any explicit assignments, -the Kubernetes system is able to apply default resource *limits* and *requests* if desired in order to -constrain the amount of resource a pod consumes on a node. +--- +assignees: +- derekwaynecarr +- janetkuo + +--- + +By default, pods run with unbounded CPU and memory limits. This means that any pod in the +system will be able to consume as much CPU and memory on the node that executes the pod. + +Users may want to impose restrictions on the amount of resources a single pod in the system may consume +for a variety of reasons. + +For example: + +1. 
Each node in the cluster has 2GB of memory. The cluster operator does not want to accept pods +that require more than 2GB of memory since no node in the cluster can support the requirement. To prevent a +pod from being permanently unscheduled to a node, the operator instead chooses to reject pods that exceed 2GB +of memory as part of admission control. +2. A cluster is shared by two communities in an organization that runs production and development workloads +respectively. Production workloads may consume up to 8GB of memory, but development workloads may consume up +to 512MB of memory. The cluster operator creates a separate namespace for each workload, and applies limits to +each namespace. +3. Users may create a pod which consumes resources just below the capacity of a machine. The left over space +may be too small to be useful, but big enough for the waste to be costly over the entire cluster. As a result, +the cluster operator may want to set limits that a pod must consume at least 20% of the memory and cpu of their +average node size in order to provide for more uniform scheduling and to limit waste. + +This example demonstrates how limits can be applied to a Kubernetes [namespace](/docs/admin/namespaces/walkthrough/) to control +min/max resource limits per pod. In addition, this example demonstrates how you can +apply default resource limits to pods in the absence of an end-user specified value. + +See [LimitRange design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_limit_range.md) for more information. For a detailed description of the Kubernetes resource model, see [Resources](/docs/user-guide/compute-resources/) + +## Step 0: Prerequisites + +This example requires a running Kubernetes cluster. See the [Getting Started guides](/docs/getting-started-guides/) for how to get started. + +Change to the `` directory if you're not already there. + +## Step 1: Create a namespace + +This example will work in a custom namespace to demonstrate the concepts involved. + +Let's create a new namespace called limit-example: + +```shell +$ kubectl create namespace limit-example +namespace "limit-example" created +``` + +Note that `kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands: + +```shell +$ kubectl get namespaces +NAME STATUS AGE +default Active 51s +limit-example Active 45s +``` + +## Step 2: Apply a limit to the namespace + +Let's create a simple limit in our namespace. + +```shell +$ kubectl create -f docs/admin/limitrange/limits.yaml --namespace=limit-example +limitrange "mylimits" created +``` + +Let's describe the limits that we have imposed in our namespace. + +```shell +$ kubectl describe limits mylimits --namespace=limit-example +Name: mylimits +Namespace: limit-example +Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio +---- -------- --- --- --------------- ------------- ----------------------- +Pod cpu 200m 2 - - - +Pod memory 6Mi 1Gi - - - +Container cpu 100m 2 200m 300m - +Container memory 3Mi 1Gi 100Mi 200Mi - +``` + +In this scenario, we have said the following: + +1. If a max constraint is specified for a resource (2 CPU and 1Gi memory in this case), then a limit +must be specified for that resource across all containers. Failure to specify a limit will result in +a validation error when attempting to create the pod. 
Note that a default value of limit is set by +*default* in file `limits.yaml` (300m CPU and 200Mi memory). +2. If a min constraint is specified for a resource (100m CPU and 3Mi memory in this case), then a +request must be specified for that resource across all containers. Failure to specify a request will +result in a validation error when attempting to create the pod. Note that a default value of request is +set by *defaultRequest* in file `limits.yaml` (200m CPU and 100Mi memory). +3. For any pod, the sum of all containers memory requests must be >= 6Mi and the sum of all containers +memory limits must be <= 1Gi; the sum of all containers CPU requests must be >= 200m and the sum of all +containers CPU limits must be <= 2. + +## Step 3: Enforcing limits at point of creation + +The limits enumerated in a namespace are only enforced when a pod is created or updated in +the cluster. If you change the limits to a different value range, it does not affect pods that +were previously created in a namespace. + +If a resource (cpu or memory) is being restricted by a limit, the user will get an error at time +of creation explaining why. + +Let's first spin up a [Deployment](/docs/user-guide/deployments) that creates a single container Pod to demonstrate +how default values are applied to each pod. + +```shell +$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example +deployment "nginx" created +``` + +Note that `kubectl run` creates a Deployment named "nginx" on Kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead. +If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/user-guide/kubectl/kubectl_run/) for more details. +The Deployment manages 1 replica of single container Pod. Let's take a look at the Pod it manages. First, find the name of the Pod: + +```shell +$ kubectl get pods --namespace=limit-example +NAME READY STATUS RESTARTS AGE +nginx-2040093540-s8vzu 1/1 Running 0 11s +``` + +Let's print this Pod with yaml output format (using `-o yaml` flag), and then `grep` the `resources` field. Note that your pod name will be different. + +```shell +$ kubectl get pods nginx-2040093540-s8vzu --namespace=limit-example -o yaml | grep resources -C 8 + resourceVersion: "57" + selfLink: /api/v1/namespaces/limit-example/pods/nginx-2040093540-ivimu + uid: 67b20741-f53b-11e5-b066-64510658e388 +spec: + containers: + - image: nginx + imagePullPolicy: Always + name: nginx + resources: + limits: + cpu: 300m + memory: 200Mi + requests: + cpu: 200m + memory: 100Mi + terminationMessagePath: /dev/termination-log + volumeMounts: +``` + +Note that our nginx container has picked up the namespace default cpu and memory resource *limits* and *requests*. + +Let's create a pod that exceeds our allowed limits by having it have a container that requests 3 cpu cores. + +```shell +$ kubectl create -f docs/admin/limitrange/invalid-pod.yaml --namespace=limit-example +Error from server: error when creating "docs/admin/limitrange/invalid-pod.yaml": Pod "invalid-pod" is forbidden: [Maximum cpu usage per Pod is 2, but limit is 3., Maximum cpu usage per Container is 2, but limit is 3.] +``` + +Let's create a pod that falls within the allowed limit boundaries. 
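For reference, here is a sketch of what `docs/admin/limitrange/valid-pod.yaml` might contain, reconstructed from the resource values shown in the output below (the actual file in the repository may differ):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: valid-pod
spec:
  containers:
  - name: kubernetes-serve-hostname
    image: gcr.io/google_containers/serve_hostname
    resources:
      # Explicit limits and requests, within the namespace LimitRange
      # (container min 100m CPU / 3Mi memory, max 2 CPU / 1Gi memory).
      limits:
        cpu: "1"
        memory: 512Mi
      requests:
        cpu: "1"
        memory: 512Mi
```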
+ +```shell +$ kubectl create -f docs/admin/limitrange/valid-pod.yaml --namespace=limit-example +pod "valid-pod" created +``` + +Now look at the Pod's resources field: + +```shell +$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources + uid: 3b1bfd7a-f53c-11e5-b066-64510658e388 +spec: + containers: + - image: gcr.io/google_containers/serve_hostname + imagePullPolicy: Always + name: kubernetes-serve-hostname + resources: + limits: + cpu: "1" + memory: 512Mi + requests: + cpu: "1" + memory: 512Mi +``` + +Note that this pod specifies explicit resource *limits* and *requests* so it did not pick up the namespace +default values. + +Note: The *limits* for CPU resource are enforced in the default Kubernetes setup on the physical node +that runs the container unless the administrator deploys the kubelet with the folllowing flag: + +```shell +$ kubelet --help +Usage of kubelet +.... + --cpu-cfs-quota[=true]: Enable CPU CFS quota enforcement for containers that specify CPU limits +$ kubelet --cpu-cfs-quota=false ... +``` + +## Step 4: Cleanup + +To remove the resources used by this example, you can just delete the limit-example namespace. + +```shell +$ kubectl delete namespace limit-example +namespace "limit-example" deleted +$ kubectl get namespaces +NAME STATUS AGE +default Active 12m +``` + +## Summary + +Cluster operators that want to restrict the amount of resources a single container or pod may consume +are able to define allowable ranges per Kubernetes namespace. In the absence of any explicit assignments, +the Kubernetes system is able to apply default resource *limits* and *requests* if desired in order to +constrain the amount of resource a pod consumes on a node. diff --git a/docs/admin/out-of-resource.md b/docs/admin/out-of-resource.md index 480e074c42..8af7114ed6 100644 --- a/docs/admin/out-of-resource.md +++ b/docs/admin/out-of-resource.md @@ -29,7 +29,7 @@ table below. The value of each signal is described in the description column ba summary API. | Eviction Signal | Description | -|------------------|---------------------------------------------------------------------------------| +|----------------------------|-----------------------------------------------------------------------| | `memory.available` | `memory.available` := `node.status.capacity[memory]` - `node.stats.memory.workingSet` | | `nodefs.available` | `nodefs.available` := `node.stats.fs.available` | | `nodefs.inodesFree` | `nodefs.inodesFree` := `node.stats.fs.inodesFree` | @@ -128,7 +128,7 @@ reflects the node is under pressure. The following node conditions are defined that correspond to the specified eviction signal. | Node Condition | Eviction Signal | Description | -|----------------|------------------|------------------------------------------------------------------| +|-------------------------|-------------------------------|--------------------------------------------| | `MemoryPressure` | `memory.available` | Available memory on the node has satisfied an eviction threshold | | `DiskPressure` | `nodefs.available`, `nodefs.inodesFree`, `imagefs.available`, or `imagefs.inodesFree` | Available disk space and inodes on either the node's root filesytem or image filesystem has satisfied an eviction threshold | @@ -270,7 +270,7 @@ the node depends on the [oom_killer](https://lwn.net/Articles/391222/) to respon The `kubelet` sets a `oom_score_adj` value for each container based on the quality of service for the pod. 
| Quality of Service | oom_score_adj | -| ----------------- | ------------- | +|----------------------------|-----------------------------------------------------------------------| | `Guaranteed` | -998 | | `BestEffort` | 1000 | | `Burstable` | min(max(2, 1000 - (1000 * memoryRequestBytes) / machineMemoryCapacityBytes), 999) | diff --git a/docs/admin/resourcequota/index.md b/docs/admin/resourcequota/index.md index bda5120121..ff76942702 100644 --- a/docs/admin/resourcequota/index.md +++ b/docs/admin/resourcequota/index.md @@ -58,7 +58,7 @@ that can be requested in a given namespace. The following resource types are supported: | Resource Name | Description | -| ------------ | ----------- | +| --------------------- | ----------------------------------------------------------- | | `cpu` | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value. | | `limits.cpu` | Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. | | `limits.memory` | Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. | @@ -73,7 +73,7 @@ The number of objects of a given type can be restricted. The following types are supported: | Resource Name | Description | -| ------------ | ----------- | +| ------------------------------- | ------------------------------------------------- | | `configmaps` | The total number of config maps that can exist in the namespace. | | `persistentvolumeclaims` | The total number of [persistent volume claims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. | | `pods` | The total number of pods in a non-terminal state that can exist in the namespace. A pod is in a terminal state if `status.phase in (Failed, Succeeded)` is true. | diff --git a/docs/admin/salt.md b/docs/admin/salt.md index 2cb634d7c6..5d82b54d39 100644 --- a/docs/admin/salt.md +++ b/docs/admin/salt.md @@ -1,106 +1,106 @@ ---- -assignees: -- davidopp -- lavalamp - ---- - -The Kubernetes cluster can be configured using Salt. - -The Salt scripts are shared across multiple hosting providers, so it's important to understand some background information prior to making a modification to ensure your changes do not break hosting Kubernetes across multiple environments. Depending on where you host your Kubernetes cluster, you may be using different operating systems and different networking configurations. As a result, it's important to understand some background information before making Salt changes in order to minimize introducing failures for other hosting providers. - -## Salt cluster setup - -The **salt-master** service runs on the kubernetes-master [(except on the default GCE setup)](#standalone-salt-configuration-on-gce). - -The **salt-minion** service runs on the kubernetes-master and each kubernetes-node in the cluster. - -Each salt-minion service is configured to interact with the **salt-master** service hosted on the kubernetes-master via the **master.conf** file [(except on GCE)](#standalone-salt-configuration-on-gce). - -```shell -[root@kubernetes-master] $ cat /etc/salt/minion.d/master.conf -master: kubernetes-master -``` - -The salt-master is contacted by each salt-minion and depending upon the machine information presented, the salt-master will provision the machine as either a kubernetes-master or kubernetes-node with all the required capabilities needed to run Kubernetes. 
- -If you are running the Vagrant based environment, the **salt-api** service is running on the kubernetes-master. It is configured to enable the vagrant user to introspect the salt cluster in order to find out about machines in the Vagrant environment via a REST API. - -## Standalone Salt Configuration on GCE - -On GCE, the master and nodes are all configured as [standalone minions](http://docs.saltstack.com/en/latest/topics/tutorials/standalone_minion.html). The configuration for each VM is derived from the VM's [instance metadata](https://cloud.google.com/compute/docs/metadata) and then stored in Salt grains (`/etc/salt/minion.d/grains.conf`) and pillars (`/srv/salt-overlay/pillar/cluster-params.sls`) that local Salt uses to enforce state. - -All remaining sections that refer to master/minion setups should be ignored for GCE. One fallout of the GCE setup is that the Salt mine doesn't exist - there is no sharing of configuration amongst nodes. - -## Salt security - -*(Not applicable on default GCE setup.)* - -Security is not enabled on the salt-master, and the salt-master is configured to auto-accept incoming requests from minions. It is not recommended to use this security configuration in production environments without deeper study. (In some environments this isn't as bad as it might sound if the salt master port isn't externally accessible and you trust everyone on your network.) - -```shell -[root@kubernetes-master] $ cat /etc/salt/master.d/auto-accept.conf -open_mode: True -auto_accept: True -``` - -## Salt minion configuration - -Each minion in the salt cluster has an associated configuration that instructs the salt-master how to provision the required resources on the machine. - -An example file is presented below using the Vagrant based environment. - -```shell -[root@kubernetes-master] $ cat /etc/salt/minion.d/grains.conf -grains: - etcd_servers: $MASTER_IP - cloud: vagrant - roles: - - kubernetes-master -``` - -Each hosting environment has a slightly different grains.conf file that is used to build conditional logic where required in the Salt files. - -The following enumerates the set of defined key/value pairs that are supported today. If you add new ones, please make sure to update this list. - -Key | Value -------------- | ------------- -`api_servers` | (Optional) The IP address / host name where a kubelet can get read-only access to kube-apiserver -`cbr-cidr` | (Optional) The minion IP address range used for the docker container bridge. -`cloud` | (Optional) Which IaaS platform is used to host Kubernetes, *gce*, *azure*, *aws*, *vagrant* -`etcd_servers` | (Optional) Comma-delimited list of IP addresses the kube-apiserver and kubelet use to reach etcd. Uses the IP of the first machine in the kubernetes_master role, or 127.0.0.1 on GCE. -`hostnamef` | (Optional) The full host name of the machine, i.e. uname -n -`node_ip` | (Optional) The IP address to use to address this node -`hostname_override` | (Optional) Mapped to the kubelet hostname-override -`network_mode` | (Optional) Networking model to use among nodes: *openvswitch* -`networkInterfaceName` | (Optional) Networking interface to use to bind addresses, default value *eth0* -`publicAddressOverride` | (Optional) The IP address the kube-apiserver should use to bind against for external read-only access -`roles` | (Required) 1. `kubernetes-master` means this machine is the master in the Kubernetes cluster. 2. `kubernetes-pool` means this machine is a kubernetes-node. 
Depending on the role, the Salt scripts will provision different resources on the machine. - -These keys may be leveraged by the Salt sls files to branch behavior. - -In addition, a cluster may be running a Debian based operating system or Red Hat based operating system (Centos, Fedora, RHEL, etc.). As a result, it's important to sometimes distinguish behavior based on operating system using if branches like the following. - -```liquid -{% raw %} -{% if grains['os_family'] == 'RedHat' %} -// something specific to a RedHat environment (Centos, Fedora, RHEL) where you may use yum, systemd, etc. -{% else %} -// something specific to Debian environment (apt-get, initd) -{% endif %} -{% endraw %} -``` - -## Best Practices - -1. When configuring default arguments for processes, it's best to avoid the use of EnvironmentFiles (Systemd in Red Hat environments) or init.d files (Debian distributions) to hold default values that should be common across operating system environments. This helps keep our Salt template files easy to understand for editors who may not be familiar with the particulars of each distribution. - -## Future enhancements (Networking) - -Per pod IP configuration is provider-specific, so when making networking changes, it's important to sandbox these as all providers may not use the same mechanisms (iptables, openvswitch, etc.) - -We should define a grains.conf key that captures more specifically what network configuration environment is being used to avoid future confusion across providers. - -## Further reading - -The [cluster/saltbase](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/) tree has more details on the current SaltStack configuration. \ No newline at end of file +--- +assignees: +- davidopp +- lavalamp + +--- + +The Kubernetes cluster can be configured using Salt. + +The Salt scripts are shared across multiple hosting providers, so it's important to understand some background information prior to making a modification to ensure your changes do not break hosting Kubernetes across multiple environments. Depending on where you host your Kubernetes cluster, you may be using different operating systems and different networking configurations. As a result, it's important to understand some background information before making Salt changes in order to minimize introducing failures for other hosting providers. + +## Salt cluster setup + +The **salt-master** service runs on the kubernetes-master [(except on the default GCE setup)](#standalone-salt-configuration-on-gce). + +The **salt-minion** service runs on the kubernetes-master and each kubernetes-node in the cluster. + +Each salt-minion service is configured to interact with the **salt-master** service hosted on the kubernetes-master via the **master.conf** file [(except on GCE)](#standalone-salt-configuration-on-gce). + +```shell +[root@kubernetes-master] $ cat /etc/salt/minion.d/master.conf +master: kubernetes-master +``` + +The salt-master is contacted by each salt-minion and depending upon the machine information presented, the salt-master will provision the machine as either a kubernetes-master or kubernetes-node with all the required capabilities needed to run Kubernetes. + +If you are running the Vagrant based environment, the **salt-api** service is running on the kubernetes-master. It is configured to enable the vagrant user to introspect the salt cluster in order to find out about machines in the Vagrant environment via a REST API. 
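As a quick sanity check (a minimal sketch assuming a standard salt-master/salt-minion setup; it does not apply to the standalone GCE configuration described in the next section), you can confirm from the kubernetes-master that every minion has registered and reports the role it will be provisioned with:

```shell
# Run these on the kubernetes-master, where the salt-master service lives.

# List the minion keys the salt-master has accepted (and any still pending):
salt-key --list-all

# Verify that every accepted minion responds:
salt '*' test.ping

# Show the roles grain for each machine, i.e. whether it will be
# provisioned as kubernetes-master or kubernetes-pool (a node):
salt '*' grains.item roles
```

If a machine is missing from the accepted list or does not answer `test.ping`, check its `/etc/salt/minion.d/master.conf` before going further.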
+ +## Standalone Salt Configuration on GCE + +On GCE, the master and nodes are all configured as [standalone minions](http://docs.saltstack.com/en/latest/topics/tutorials/standalone_minion.html). The configuration for each VM is derived from the VM's [instance metadata](https://cloud.google.com/compute/docs/metadata) and then stored in Salt grains (`/etc/salt/minion.d/grains.conf`) and pillars (`/srv/salt-overlay/pillar/cluster-params.sls`) that local Salt uses to enforce state. + +All remaining sections that refer to master/minion setups should be ignored for GCE. One fallout of the GCE setup is that the Salt mine doesn't exist - there is no sharing of configuration amongst nodes. + +## Salt security + +*(Not applicable on default GCE setup.)* + +Security is not enabled on the salt-master, and the salt-master is configured to auto-accept incoming requests from minions. It is not recommended to use this security configuration in production environments without deeper study. (In some environments this isn't as bad as it might sound if the salt master port isn't externally accessible and you trust everyone on your network.) + +```shell +[root@kubernetes-master] $ cat /etc/salt/master.d/auto-accept.conf +open_mode: True +auto_accept: True +``` + +## Salt minion configuration + +Each minion in the salt cluster has an associated configuration that instructs the salt-master how to provision the required resources on the machine. + +An example file is presented below using the Vagrant based environment. + +```shell +[root@kubernetes-master] $ cat /etc/salt/minion.d/grains.conf +grains: + etcd_servers: $MASTER_IP + cloud: vagrant + roles: + - kubernetes-master +``` + +Each hosting environment has a slightly different grains.conf file that is used to build conditional logic where required in the Salt files. + +The following enumerates the set of defined key/value pairs that are supported today. If you add new ones, please make sure to update this list. + +Key | Value +-----------------------------------|---------------------------------------------------------------- +`api_servers` | (Optional) The IP address / host name where a kubelet can get read-only access to kube-apiserver +`cbr-cidr` | (Optional) The minion IP address range used for the docker container bridge. +`cloud` | (Optional) Which IaaS platform is used to host Kubernetes, *gce*, *azure*, *aws*, *vagrant* +`etcd_servers` | (Optional) Comma-delimited list of IP addresses the kube-apiserver and kubelet use to reach etcd. Uses the IP of the first machine in the kubernetes_master role, or 127.0.0.1 on GCE. +`hostnamef` | (Optional) The full host name of the machine, i.e. uname -n +`node_ip` | (Optional) The IP address to use to address this node +`hostname_override` | (Optional) Mapped to the kubelet hostname-override +`network_mode` | (Optional) Networking model to use among nodes: *openvswitch* +`networkInterfaceName` | (Optional) Networking interface to use to bind addresses, default value *eth0* +`publicAddressOverride` | (Optional) The IP address the kube-apiserver should use to bind against for external read-only access +`roles` | (Required) 1. `kubernetes-master` means this machine is the master in the Kubernetes cluster. 2. `kubernetes-pool` means this machine is a kubernetes-node. Depending on the role, the Salt scripts will provision different resources on the machine. + +These keys may be leveraged by the Salt sls files to branch behavior. 
+ +In addition, a cluster may be running a Debian based operating system or Red Hat based operating system (Centos, Fedora, RHEL, etc.). As a result, it's important to sometimes distinguish behavior based on operating system using if branches like the following. + +```liquid +{% raw %} +{% if grains['os_family'] == 'RedHat' %} +// something specific to a RedHat environment (Centos, Fedora, RHEL) where you may use yum, systemd, etc. +{% else %} +// something specific to Debian environment (apt-get, initd) +{% endif %} +{% endraw %} +``` + +## Best Practices + +1. When configuring default arguments for processes, it's best to avoid the use of EnvironmentFiles (Systemd in Red Hat environments) or init.d files (Debian distributions) to hold default values that should be common across operating system environments. This helps keep our Salt template files easy to understand for editors who may not be familiar with the particulars of each distribution. + +## Future enhancements (Networking) + +Per pod IP configuration is provider-specific, so when making networking changes, it's important to sandbox these as all providers may not use the same mechanisms (iptables, openvswitch, etc.) + +We should define a grains.conf key that captures more specifically what network configuration environment is being used to avoid future confusion across providers. + +## Further reading + +The [cluster/saltbase](http://releases.k8s.io/{{page.githubbranch}}/cluster/saltbase/) tree has more details on the current SaltStack configuration. diff --git a/docs/admin/static-pods.md b/docs/admin/static-pods.md index d1ad849b3a..531494fb04 100644 --- a/docs/admin/static-pods.md +++ b/docs/admin/static-pods.md @@ -88,7 +88,7 @@ static-web-my-node1 172.17.0.3 my-node1/192.168 Labels from the static pod are propagated into the mirror-pod and can be used as usual for filtering. -Notice we cannot delete the pod with the API server (e.g. via [`kubectl`](/docs/user-guide/kubectl/kubectl/) command), kubelet simply won't remove it. +Notice we cannot delete the pod with the API server (e.g. via [`kubectl`](/docs/user-guide/kubectl/) command), kubelet simply won't remove it. ```shell [joe@my-master ~] $ kubectl delete pod static-web-my-node1 diff --git a/docs/contribute/page-templates.md b/docs/contribute/page-templates.md index da77bfc38e..d70077e246 100644 --- a/docs/contribute/page-templates.md +++ b/docs/contribute/page-templates.md @@ -12,7 +12,7 @@
  • Concept

-    The page templates are in the _includes/templates directory of the kubernetes.github.io repository.
+    The page templates are in the _includes/templates directory of the kubernetes.github.io repository.

    Task template

    diff --git a/docs/getting-started-guides/azure.md b/docs/getting-started-guides/azure.md index a6f29ef030..40652e3172 100644 --- a/docs/getting-started-guides/azure.md +++ b/docs/getting-started-guides/azure.md @@ -5,12 +5,8 @@ assignees: --- -* TOC -{:toc} - - -## Overview - The recommended approach for deploying a Kubernetes 1.4 cluster on Azure is the -[`kubernetes-anywhere`](https://github.com/kubernetes/kubernetes-anywhere) project. You will want to take a look at the -[Azure Getting Started Guide](https://github.com/kubernetes/kubernetes-anywhere/blob/master/phase1/azure/README.md). \ No newline at end of file +[`kubernetes-anywhere`](https://github.com/kubernetes/kubernetes-anywhere) project. + +You will want to take a look at the +[Azure Getting Started Guide](https://github.com/kubernetes/kubernetes-anywhere/blob/master/phase1/azure/README.md). diff --git a/docs/getting-started-guides/centos/centos_manual_config.md b/docs/getting-started-guides/centos/centos_manual_config.md index d2d12e9e15..f794f485a4 100644 --- a/docs/getting-started-guides/centos/centos_manual_config.md +++ b/docs/getting-started-guides/centos/centos_manual_config.md @@ -1,182 +1,182 @@ ---- -assignees: -- lavalamp -- thockin - ---- - -* TOC -{:toc} - -## Prerequisites - -You need two machines with CentOS installed on them. - -## Starting a cluster - -This is a getting started guide for CentOS. It is a manual configuration so you understand all the underlying packages / services / ports, etc... - -This guide will only get ONE node working. Multiple nodes requires a functional [networking configuration](/docs/admin/networking) done outside of kubernetes. Although the additional Kubernetes configuration requirements should be obvious. - -The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion will be the node and run kubelet, proxy, cadvisor and docker. - -**System Information:** - -Hosts: - -Please replace host IP with your environment. - -```conf -centos-master = 192.168.121.9 -centos-minion = 192.168.121.65 -``` - -**Prepare the hosts:** - -* Create a /etc/yum.repos.d/virt7-docker-common-release.repo on all hosts - centos-{master,minion} with following information. - -```conf -[virt7-docker-common-release] -name=virt7-docker-common-release -baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/ -gpgcheck=0 -``` - -* Install Kubernetes and etcd on all hosts - centos-{master,minion}. This will also pull in docker and cadvisor. 
- -```shell -yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd -``` - -* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS) - -```shell -echo "192.168.121.9 centos-master -192.168.121.65 centos-minion" >> /etc/hosts -``` - -* Edit /etc/kubernetes/config which will be the same on all hosts to contain: - -```shell -# Comma separated list of nodes in the etcd cluster -KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379" - -# logging to stderr means we get it in the systemd journal -KUBE_LOGTOSTDERR="--logtostderr=true" - -# journal message level, 0 is debug -KUBE_LOG_LEVEL="--v=0" - -# Should this cluster be allowed to run privileged docker containers -KUBE_ALLOW_PRIV="--allow-privileged=false" - -# How the replication controller and scheduler find the kube-apiserver -KUBE_MASTER="--master=http://centos-master:8080" -``` - -* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers - -```shell -systemctl disable iptables-services firewalld -systemctl stop iptables-services firewalld -``` - -**Configure the Kubernetes services on the master.** - -* Edit /etc/etcd/etcd.conf to appear as such: - -```shell -# [member] -ETCD_NAME=default -ETCD_DATA_DIR="/var/lib/etcd/default.etcd" -ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379" - -#[cluster] -ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379" -``` - -* Edit /etc/kubernetes/apiserver to appear as such: - -```shell -# The address on the local server to listen to. -KUBE_API_ADDRESS="--address=0.0.0.0" - -# The port on the local server to listen on. -KUBE_API_PORT="--port=8080" - -# Port kubelets listen on -KUBELET_PORT="--kubelet-port=10250" - -# Address range to use for services -KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16" - -# Add your own! -KUBE_API_ARGS="" -``` - -* Start the appropriate services on master: - -```shell -for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do - systemctl restart $SERVICES - systemctl enable $SERVICES - systemctl status $SERVICES -done -``` - -**Configure the Kubernetes services on the node.** - -***We need to configure the kubelet and start the kubelet and proxy*** - -* Edit /etc/kubernetes/kubelet to appear as such: - -```shell -# The address for the info server to serve on -KUBELET_ADDRESS="--address=0.0.0.0" - -# The port for the info server to serve on -KUBELET_PORT="--port=10250" - -# You may leave this blank to use the actual hostname -KUBELET_HOSTNAME="--hostname-override=centos-minion" - -# Location of the api-server -KUBELET_API_SERVER="--api-servers=http://centos-master:8080" - -# Add your own! -KUBELET_ARGS="" -``` - -* Start the appropriate services on node (centos-minion). - -```shell -for SERVICES in kube-proxy kubelet docker; do - systemctl restart $SERVICES - systemctl enable $SERVICES - systemctl status $SERVICES -done -``` - -*You should be finished!* - -* Check to make sure the cluster can see the node (on centos-master) - -```shell -$ kubectl get nodes -NAME LABELS STATUS -centos-minion Ready -``` - -**The cluster should be running! Launch a test pod.** - -You should have a functional cluster, check out [101](/docs/user-guide/walkthrough/)! - -## Support Level - - -IaaS Provider | Config. 
Mgmt | OS | Networking | Docs | Conforms | Support Level --------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Bare-metal | custom | CentOS | _none_ | [docs](/docs/getting-started-guides/centos/centos_manual_config) | | Community ([@coolsvap](https://github.com/coolsvap)) - -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. - +--- +assignees: +- lavalamp +- thockin + +--- + +* TOC +{:toc} + +## Prerequisites + +You need two machines with CentOS installed on them. + +## Starting a cluster + +This is a getting started guide for CentOS. It is a manual configuration so you understand all the underlying packages / services / ports, etc... + +This guide will only get ONE node working. Multiple nodes requires a functional [networking configuration](/docs/admin/networking) done outside of kubernetes. Although the additional Kubernetes configuration requirements should be obvious. + +The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion will be the node and run kubelet, proxy, cadvisor and docker. + +**System Information:** + +Hosts: + +Please replace host IP with your environment. + +```conf +centos-master = 192.168.121.9 +centos-minion = 192.168.121.65 +``` + +**Prepare the hosts:** + +* Create a /etc/yum.repos.d/virt7-docker-common-release.repo on all hosts - centos-{master,minion} with following information. + +```conf +[virt7-docker-common-release] +name=virt7-docker-common-release +baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/ +gpgcheck=0 +``` + +* Install Kubernetes and etcd on all hosts - centos-{master,minion}. This will also pull in docker and cadvisor. 
+ +```shell +yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd +``` + +* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS) + +```shell +echo "192.168.121.9 centos-master +192.168.121.65 centos-minion" >> /etc/hosts +``` + +* Edit /etc/kubernetes/config which will be the same on all hosts to contain: + +```shell +# Comma separated list of nodes in the etcd cluster +KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379" + +# logging to stderr means we get it in the systemd journal +KUBE_LOGTOSTDERR="--logtostderr=true" + +# journal message level, 0 is debug +KUBE_LOG_LEVEL="--v=0" + +# Should this cluster be allowed to run privileged docker containers +KUBE_ALLOW_PRIV="--allow-privileged=false" + +# How the replication controller and scheduler find the kube-apiserver +KUBE_MASTER="--master=http://centos-master:8080" +``` + +* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers + +```shell +systemctl disable iptables-services firewalld +systemctl stop iptables-services firewalld +``` + +**Configure the Kubernetes services on the master.** + +* Edit /etc/etcd/etcd.conf to appear as such: + +```shell +# [member] +ETCD_NAME=default +ETCD_DATA_DIR="/var/lib/etcd/default.etcd" +ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379" + +#[cluster] +ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379" +``` + +* Edit /etc/kubernetes/apiserver to appear as such: + +```shell +# The address on the local server to listen to. +KUBE_API_ADDRESS="--address=0.0.0.0" + +# The port on the local server to listen on. +KUBE_API_PORT="--port=8080" + +# Port kubelets listen on +KUBELET_PORT="--kubelet-port=10250" + +# Address range to use for services +KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16" + +# Add your own! +KUBE_API_ARGS="" +``` + +* Start the appropriate services on master: + +```shell +for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do + systemctl restart $SERVICES + systemctl enable $SERVICES + systemctl status $SERVICES +done +``` + +**Configure the Kubernetes services on the node.** + +***We need to configure the kubelet and start the kubelet and proxy*** + +* Edit /etc/kubernetes/kubelet to appear as such: + +```shell +# The address for the info server to serve on +KUBELET_ADDRESS="--address=0.0.0.0" + +# The port for the info server to serve on +KUBELET_PORT="--port=10250" + +# You may leave this blank to use the actual hostname +KUBELET_HOSTNAME="--hostname-override=centos-minion" + +# Location of the api-server +KUBELET_API_SERVER="--api-servers=http://centos-master:8080" + +# Add your own! +KUBELET_ARGS="" +``` + +* Start the appropriate services on node (centos-minion). + +```shell +for SERVICES in kube-proxy kubelet docker; do + systemctl restart $SERVICES + systemctl enable $SERVICES + systemctl status $SERVICES +done +``` + +*You should be finished!* + +* Check to make sure the cluster can see the node (on centos-master) + +```shell +$ kubectl get nodes +NAME LABELS STATUS +centos-minion Ready +``` + +**The cluster should be running! Launch a test pod.** + +You should have a functional cluster, check out [101](/docs/user-guide/walkthrough/)! + +## Support Level + + +IaaS Provider | Config. 
Mgmt | OS | Networking | Docs | Conforms | Support Level +-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- +Bare-metal | custom | CentOS | _none_ | [docs](/docs/getting-started-guides/centos/centos_manual_config) | | Community ([@coolsvap](https://github.com/coolsvap)) + +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. + diff --git a/docs/getting-started-guides/coreos/bare_metal_calico.md b/docs/getting-started-guides/coreos/bare_metal_calico.md index 9cbdd89b17..7c3f7ccca0 100644 --- a/docs/getting-started-guides/coreos/bare_metal_calico.md +++ b/docs/getting-started-guides/coreos/bare_metal_calico.md @@ -1,209 +1,209 @@ ---- - ---- - -This document describes how to deploy Kubernetes with Calico networking on _bare metal_ CoreOS. For more information on Project Calico, visit [projectcalico.org](http://projectcalico.org) and the [calico-containers repository](https://github.com/projectcalico/calico-containers). - -To install Calico on an existing Kubernetes cluster, or for more information on deploying Calico with Kubernetes in a number of other environments take a look at our supported [deployment guides](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes). - -Specifically, this guide will have you do the following: - -- Deploy a Kubernetes master node on CoreOS using cloud-config. -- Deploy two Kubernetes compute nodes with Calico Networking using cloud-config. -- Configure `kubectl` to access your cluster. - -The resulting cluster will use SSL between Kubernetes components. It will run the SkyDNS service and kube-ui, and be fully conformant with the Kubernetes v1.1 conformance tests. - -## Prerequisites and Assumptions - -- At least three bare-metal machines (or VMs) to work with. This guide will configure them as follows: - - 1 Kubernetes Master - - 2 Kubernetes Nodes -- Your nodes should have IP connectivity to each other and the internet. -- This guide assumes a DHCP server on your network to assign server IPs. -- This guide uses `192.168.0.0/16` as the subnet from which pod IP addresses are assigned. If this overlaps with your host subnet, you will need to configure Calico to use a different [IP pool](https://github.com/projectcalico/calico-containers/blob/master/docs/calicoctl/pool.md#calicoctl-pool-commands). - -## Cloud-config - -This guide will use [cloud-config](https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/) to configure each of the nodes in our Kubernetes cluster. - -We'll use two cloud-config files: -- `master-config.yaml`: cloud-config for the Kubernetes master -- `node-config.yaml`: cloud-config for each Kubernetes node - -## Download CoreOS - -Download the stable CoreOS bootable ISO from the [CoreOS website](https://coreos.com/docs/running-coreos/platforms/iso/). - -## Configure the Kubernetes Master - -1. Once you've downloaded the ISO image, burn the ISO to a CD/DVD/USB key and boot from it (if using a virtual machine you can boot directly from the ISO). Once booted, you should be automatically logged in as the `core` user at the terminal. At this point CoreOS is running from the ISO and it hasn't been installed yet. - -2. 
*On another machine*, download the [master cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/master-config-template.yaml) and save it as `master-config.yaml`. - -3. Replace the following variables in the `master-config.yaml` file. - - - ``: The public key you will use for SSH access to this server. See [generating ssh keys](https://help.github.com/articles/generating-ssh-keys/) - -4. Copy the edited `master-config.yaml` to your Kubernetes master machine (using a USB stick, for example). - -5. The CoreOS bootable ISO comes with a tool called `coreos-install` which will allow us to install CoreOS and configure the machine using a cloud-config file. The following command will download and install stable CoreOS using the `master-config.yaml` file we just created for configuration. Run this on the Kubernetes master. - - > **Warning:** this is a destructive operation that erases disk `sda` on your server. - - ```shell - sudo coreos-install -d /dev/sda -C stable -c master-config.yaml - ``` - -6. Once complete, restart the server and boot from `/dev/sda` (you may need to remove the ISO image). When it comes back up, you should have SSH access as the `core` user using the public key provided in the `master-config.yaml` file. - -### Configure TLS - -The master requires the CA certificate, `ca.pem`; its own certificate, `apiserver.pem` and its private key, `apiserver-key.pem`. This [CoreOS guide](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to generate these. - -1. Generate the necessary certificates for the master. This [guide for generating Kubernetes TLS Assets](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to use OpenSSL to generate the required assets. - -2. Send the three files to your master host (using `scp` for example). - -3. Move them to the `/etc/kubernetes/ssl` folder and ensure that only the root user can read the key: - - ```shell - # Move keys - sudo mkdir -p /etc/kubernetes/ssl/ - sudo mv -t /etc/kubernetes/ssl/ ca.pem apiserver.pem apiserver-key.pem - - # Set Permissions - sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem - sudo chown root:root /etc/kubernetes/ssl/apiserver-key.pem - ``` - -4. Restart the kubelet to pick up the changes: - - ```shell - sudo systemctl restart kubelet - ``` - -## Configure the compute nodes - -The following steps will set up a single Kubernetes node for use as a compute host. Run these steps to deploy each Kubernetes node in your cluster. - -1. Boot up the node machine using the bootable ISO we downloaded earlier. You should be automatically logged in as the `core` user. - -2. Make a copy of the [node cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/node-config-template.yaml) for this machine. - -3. Replace the following placeholders in the `node-config.yaml` file to match your deployment. - - - ``: Hostname for this node (e.g. kube-node1, kube-node2) - - ``: The public key you will use for SSH access to this server. - - ``: The IPv4 address of the Kubernetes master. - -4. Replace the following placeholders with the contents of their respective files. - - - ``: Complete contents of `ca.pem` - - ``: Complete contents of `ca-key.pem` - - > **Important:** in a production deployment, embedding the secret key in cloud-config is a bad idea! In production you should use an appropriate secret manager. 
- - > **Important:** Make sure you indent the entire file to match the indentation of the placeholder. For example: - > - > ```shell - > - path: /etc/kubernetes/ssl/ca.pem - > owner: core - > permissions: 0644 - > content: | - > - > ``` - > - > should look like this once the certificate is in place: - > - > ```shell - > - path: /etc/kubernetes/ssl/ca.pem - > owner: core - > permissions: 0644 - > content: | - > -----BEGIN CERTIFICATE----- - > MIIC9zCCAd+gAwIBAgIJAJMnVnhVhy5pMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV - > ...... - > QHwi1rNc8eBLNrd4BM/A1ZeDVh/Q9KxN+ZG/hHIXhmWKgN5wQx6/81FIFg== - > -----END CERTIFICATE----- - > ``` - -5. Move the modified `node-config.yaml` to your Kubernetes node machine and install and configure CoreOS on the node using the following command. - - > **Warning:** this is a destructive operation that erases disk `sda` on your server. - - ```shell - sudo coreos-install -d /dev/sda -C stable -c node-config.yaml - ``` - -6. Once complete, restart the server and boot into `/dev/sda`. When it comes back up, you should have SSH access as the `core` user using the public key provided in the `node-config.yaml` file. It will take some time for the node to be fully configured. - -## Configure Kubeconfig - -To administer your cluster from a separate host, you will need the client and admin certificates generated earlier (`ca.pem`, `admin.pem`, `admin-key.pem`). With certificates in place, run the following commands with the appropriate filepaths. - -```shell -kubectl config set-cluster calico-cluster --server=https:// --certificate-authority= -kubectl config set-credentials calico-admin --certificate-authority= --client-key= --client-certificate= -kubectl config set-context calico --cluster=calico-cluster --user=calico-admin -kubectl config use-context calico -``` - -Check your work with `kubectl get nodes`. - -## Install the DNS Addon - -Most Kubernetes deployments will require the DNS addon for service discovery. To install DNS, create the skydns service and replication controller provided. - -```shell -kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/dns/skydns.yaml -``` - -## Install the Kubernetes UI Addon (Optional) - -The Kubernetes UI can be installed using `kubectl` to run the following manifest file. - -```shell -kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kube-ui/kube-ui.yaml -``` - -## Launch other Services With Calico-Kubernetes - -At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.version}}.0/examples/) to set up other services on your cluster. - -## Connectivity to outside the cluster - -Because containers in this guide have private `192.168.0.0/16` IPs, you will need NAT to allow connectivity between containers and the internet. However, in a production data center deployment, NAT is not always necessary, since Calico can peer with the data center's border routers over BGP. - -### NAT on the nodes - -The simplest method for enabling connectivity from containers to the internet is to use outgoing NAT on your Kubernetes nodes. - -Calico can provide outgoing NAT for containers. To enable it, use the following `calicoctl` command: - -```shell -ETCD_AUTHORITY= calicoctl pool add --nat-outgoing -``` - -By default, `` will be `192.168.0.0/16`. 
You can find out which pools have been configured with the following command: - -```shell -ETCD_AUTHORITY= calicoctl pool show -``` - -### NAT at the border router - -In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity). - -The Calico documentation contains more information on how to configure Calico to [peer with existing infrastructure](https://github.com/projectcalico/calico-containers/blob/master/docs/ExternalConnectivity.md). - -## Support Level - - -IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level --------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Bare-metal | CoreOS | CoreOS | Calico | [docs](/docs/getting-started-guides/coreos/bare_metal_calico) | | Community ([@caseydavenport](https://github.com/caseydavenport)) - - -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. - +--- + +--- + +This document describes how to deploy Kubernetes with Calico networking on _bare metal_ CoreOS. For more information on Project Calico, visit [projectcalico.org](http://projectcalico.org) and the [calico-containers repository](https://github.com/projectcalico/calico-containers). + +To install Calico on an existing Kubernetes cluster, or for more information on deploying Calico with Kubernetes in a number of other environments take a look at our supported [deployment guides](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes). + +Specifically, this guide will have you do the following: + +- Deploy a Kubernetes master node on CoreOS using cloud-config. +- Deploy two Kubernetes compute nodes with Calico Networking using cloud-config. +- Configure `kubectl` to access your cluster. + +The resulting cluster will use SSL between Kubernetes components. It will run the SkyDNS service and kube-ui, and be fully conformant with the Kubernetes v1.1 conformance tests. + +## Prerequisites and Assumptions + +- At least three bare-metal machines (or VMs) to work with. This guide will configure them as follows: + - 1 Kubernetes Master + - 2 Kubernetes Nodes +- Your nodes should have IP connectivity to each other and the internet. +- This guide assumes a DHCP server on your network to assign server IPs. +- This guide uses `192.168.0.0/16` as the subnet from which pod IP addresses are assigned. If this overlaps with your host subnet, you will need to configure Calico to use a different [IP pool](https://github.com/projectcalico/calico-containers/blob/master/docs/calicoctl/pool.md#calicoctl-pool-commands). + +## Cloud-config + +This guide will use [cloud-config](https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/) to configure each of the nodes in our Kubernetes cluster. + +We'll use two cloud-config files: +- `master-config.yaml`: cloud-config for the Kubernetes master +- `node-config.yaml`: cloud-config for each Kubernetes node + +## Download CoreOS + +Download the stable CoreOS bootable ISO from the [CoreOS website](https://coreos.com/docs/running-coreos/platforms/iso/). + +## Configure the Kubernetes Master + +1. 
Once you've downloaded the ISO image, burn the ISO to a CD/DVD/USB key and boot from it (if using a virtual machine you can boot directly from the ISO). Once booted, you should be automatically logged in as the `core` user at the terminal. At this point CoreOS is running from the ISO and it hasn't been installed yet. + +2. *On another machine*, download the [master cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/master-config-template.yaml) and save it as `master-config.yaml`. + +3. Replace the following variables in the `master-config.yaml` file. + + - ``: The public key you will use for SSH access to this server. See [generating ssh keys](https://help.github.com/articles/generating-ssh-keys/) + +4. Copy the edited `master-config.yaml` to your Kubernetes master machine (using a USB stick, for example). + +5. The CoreOS bootable ISO comes with a tool called `coreos-install` which will allow us to install CoreOS and configure the machine using a cloud-config file. The following command will download and install stable CoreOS using the `master-config.yaml` file we just created for configuration. Run this on the Kubernetes master. + + > **Warning:** this is a destructive operation that erases disk `sda` on your server. + + ```shell + sudo coreos-install -d /dev/sda -C stable -c master-config.yaml + ``` + +6. Once complete, restart the server and boot from `/dev/sda` (you may need to remove the ISO image). When it comes back up, you should have SSH access as the `core` user using the public key provided in the `master-config.yaml` file. + +### Configure TLS + +The master requires the CA certificate, `ca.pem`; its own certificate, `apiserver.pem` and its private key, `apiserver-key.pem`. This [CoreOS guide](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to generate these. + +1. Generate the necessary certificates for the master. This [guide for generating Kubernetes TLS Assets](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to use OpenSSL to generate the required assets. + +2. Send the three files to your master host (using `scp` for example). + +3. Move them to the `/etc/kubernetes/ssl` folder and ensure that only the root user can read the key: + + ```shell + # Move keys + sudo mkdir -p /etc/kubernetes/ssl/ + sudo mv -t /etc/kubernetes/ssl/ ca.pem apiserver.pem apiserver-key.pem + + # Set Permissions + sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem + sudo chown root:root /etc/kubernetes/ssl/apiserver-key.pem + ``` + +4. Restart the kubelet to pick up the changes: + + ```shell + sudo systemctl restart kubelet + ``` + +## Configure the compute nodes + +The following steps will set up a single Kubernetes node for use as a compute host. Run these steps to deploy each Kubernetes node in your cluster. + +1. Boot up the node machine using the bootable ISO we downloaded earlier. You should be automatically logged in as the `core` user. + +2. Make a copy of the [node cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/node-config-template.yaml) for this machine. + +3. Replace the following placeholders in the `node-config.yaml` file to match your deployment. + + - ``: Hostname for this node (e.g. kube-node1, kube-node2) + - ``: The public key you will use for SSH access to this server. + - ``: The IPv4 address of the Kubernetes master. + +4. 
Replace the following placeholders with the contents of their respective files. + + - ``: Complete contents of `ca.pem` + - ``: Complete contents of `ca-key.pem` + + > **Important:** in a production deployment, embedding the secret key in cloud-config is a bad idea! In production you should use an appropriate secret manager. + + > **Important:** Make sure you indent the entire file to match the indentation of the placeholder. For example: + > + > ```shell + > - path: /etc/kubernetes/ssl/ca.pem + > owner: core + > permissions: 0644 + > content: | + > + > ``` + > + > should look like this once the certificate is in place: + > + > ```shell + > - path: /etc/kubernetes/ssl/ca.pem + > owner: core + > permissions: 0644 + > content: | + > -----BEGIN CERTIFICATE----- + > MIIC9zCCAd+gAwIBAgIJAJMnVnhVhy5pMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV + > ...... + > QHwi1rNc8eBLNrd4BM/A1ZeDVh/Q9KxN+ZG/hHIXhmWKgN5wQx6/81FIFg== + > -----END CERTIFICATE----- + > ``` + +5. Move the modified `node-config.yaml` to your Kubernetes node machine and install and configure CoreOS on the node using the following command. + + > **Warning:** this is a destructive operation that erases disk `sda` on your server. + + ```shell + sudo coreos-install -d /dev/sda -C stable -c node-config.yaml + ``` + +6. Once complete, restart the server and boot into `/dev/sda`. When it comes back up, you should have SSH access as the `core` user using the public key provided in the `node-config.yaml` file. It will take some time for the node to be fully configured. + +## Configure Kubeconfig + +To administer your cluster from a separate host, you will need the client and admin certificates generated earlier (`ca.pem`, `admin.pem`, `admin-key.pem`). With certificates in place, run the following commands with the appropriate filepaths. + +```shell +kubectl config set-cluster calico-cluster --server=https:// --certificate-authority= +kubectl config set-credentials calico-admin --certificate-authority= --client-key= --client-certificate= +kubectl config set-context calico --cluster=calico-cluster --user=calico-admin +kubectl config use-context calico +``` + +Check your work with `kubectl get nodes`. + +## Install the DNS Addon + +Most Kubernetes deployments will require the DNS addon for service discovery. To install DNS, create the skydns service and replication controller provided. + +```shell +kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/dns/skydns.yaml +``` + +## Install the Kubernetes UI Addon (Optional) + +The Kubernetes UI can be installed using `kubectl` to run the following manifest file. + +```shell +kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kube-ui/kube-ui.yaml +``` + +## Launch other Services With Calico-Kubernetes + +At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](https://github.com/kubernetes/kubernetes/tree/{{page.version}}.0/examples/) to set up other services on your cluster. + +## Connectivity to outside the cluster + +Because containers in this guide have private `192.168.0.0/16` IPs, you will need NAT to allow connectivity between containers and the internet. However, in a production data center deployment, NAT is not always necessary, since Calico can peer with the data center's border routers over BGP. 
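Whichever of the two approaches below you choose, it is worth checking afterwards that pods can actually reach addresses outside the cluster. The following is a rough sketch, assuming `kubectl` is configured as described in "Configure Kubeconfig" above and that the `busybox` image can be pulled; the pod name `nat-test` and the target URL are only illustrative:

```shell
# Start a throwaway pod that just sleeps:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nat-test
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
EOF

# Once the pod is Running, try an outbound request from inside it:
kubectl exec nat-test -- wget -q -O - http://kubernetes.io > /dev/null && echo "outbound connectivity OK"

# Clean up the test pod when finished:
kubectl delete pod nat-test
```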
+ +### NAT on the nodes + +The simplest method for enabling connectivity from containers to the internet is to use outgoing NAT on your Kubernetes nodes. + +Calico can provide outgoing NAT for containers. To enable it, use the following `calicoctl` command: + +```shell +ETCD_AUTHORITY= calicoctl pool add --nat-outgoing +``` + +By default, `` will be `192.168.0.0/16`. You can find out which pools have been configured with the following command: + +```shell +ETCD_AUTHORITY= calicoctl pool show +``` + +### NAT at the border router + +In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity). + +The Calico documentation contains more information on how to configure Calico to [peer with existing infrastructure](https://github.com/projectcalico/calico-containers/blob/master/docs/ExternalConnectivity.md). + +## Support Level + + +IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level +-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- +Bare-metal | CoreOS | CoreOS | Calico | [docs](/docs/getting-started-guides/coreos/bare_metal_calico) | | Community ([@caseydavenport](https://github.com/caseydavenport)) + + +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. + diff --git a/docs/getting-started-guides/docker-multinode.md b/docs/getting-started-guides/docker-multinode.md index 14e475ee10..5d6d4ee295 100644 --- a/docs/getting-started-guides/docker-multinode.md +++ b/docs/getting-started-guides/docker-multinode.md @@ -16,7 +16,7 @@ and a _worker_ node which receives work from the master. You can repeat the proc times to create larger clusters. Here's a diagram of what the final result will look like: -![Kubernetes on Docker](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/k8s-docker.png) +![Kubernetes on Docker](/images/docs/k8s-docker.png) ### Bootstrap Docker @@ -86,7 +86,7 @@ Clone the `kube-deploy` repo, and run `worker.sh` on the worker machine _with ro ```shell $ git clone https://github.com/kubernetes/kube-deploy -$ cd docker-multinode +$ cd kube-deploy/docker-multinode $ export MASTER_IP=${SOME_IP} $ ./worker.sh ``` diff --git a/docs/getting-started-guides/fedora/fedora_ansible_config.md b/docs/getting-started-guides/fedora/fedora_ansible_config.md index aa439f07b0..b5fe3802e0 100644 --- a/docs/getting-started-guides/fedora/fedora_ansible_config.md +++ b/docs/getting-started-guides/fedora/fedora_ansible_config.md @@ -1,241 +1,241 @@ ---- -assignees: -- aveshagarwal -- erictune - ---- - -Configuring Kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort. - -* TOC -{:toc} - -## Prerequisites - -1. Host able to run ansible and able to clone the following repo: [kubernetes](https://github.com/kubernetes/kubernetes.git) -2. A Fedora 21+ host to act as cluster master -3. As many Fedora 21+ hosts as you would like, that act as cluster nodes - -The hosts can be virtual or bare metal. Ansible will take care of the rest of the configuration for you - configuring networking, installing packages, handling the firewall, etc. 
This example will use one master and two nodes. - -## Architecture of the cluster - -A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a cluster with three hosts, for example: - -```shell -master,etcd = kube-master.example.com - node1 = kube-node-01.example.com - node2 = kube-node-02.example.com -``` - -**Make sure your local machine has** - - - ansible (must be 1.9.0+) - - git - - python-netaddr - -If not - -```shell -yum install -y ansible git python-netaddr -``` - -**Now clone down the Kubernetes repository** - -```shell -git clone https://github.com/kubernetes/contrib.git -cd contrib/ansible -``` - -**Tell ansible about each machine and its role in your cluster** - -Get the IP addresses from the master and nodes. Add those to the `~/contrib/ansible/inventory` file on the host running Ansible. - -```shell -[masters] -kube-master.example.com - -[etcd] -kube-master.example.com - -[nodes] -kube-node-01.example.com -kube-node-02.example.com -``` - -## Setting up ansible access to your nodes - -If you already are running on a machine which has passwordless ssh access to the kube-master and kube-node-{01,02} nodes, and 'sudo' privileges, simply set the value of `ansible_ssh_user` in `~/contrib/ansible/group_vars/all.yml` to the username which you use to ssh to the nodes (i.e. `fedora`), and proceed to the next step... - -*Otherwise* setup ssh on the machines like so (you will need to know the root password to all machines in the cluster). - -edit: ~/contrib/ansible/group_vars/all.yml - -```yaml -ansible_ssh_user: root -``` - -**Configuring ssh access to the cluster** - -If you already have ssh access to every machine using ssh public keys you may skip to [setting up the cluster](#setting-up-the-cluster) - -Make sure your local machine (root) has an ssh key pair if not - -```shell -ssh-keygen -``` - -Copy the ssh public key to **all** nodes in the cluster - -```shell -for node in kube-master.example.com kube-node-01.example.com kube-node-02.example.com; do - ssh-copy-id ${node} -done -``` - -## Setting up the cluster - -Although the default value of variables in `~/contrib/ansible/group_vars/all.yml` should be good enough, if not, change them as needed. - -```conf -edit: ~/contrib/ansible/group_vars/all.yml -``` - -**Configure access to kubernetes packages** - -Modify `source_type` as below to access kubernetes packages through the package manager. - -```yaml -source_type: packageManager -``` - -**Configure the IP addresses used for services** - -Each Kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment. - -```yaml -kube_service_addresses: 10.254.0.0/16 -``` - -**Managing flannel** - -Modify `flannel_subnet`, `flannel_prefix` and `flannel_host_prefix` only if defaults are not appropriate for your cluster. - - -**Managing add on services in your cluster** - -Set `cluster_logging` to false or true (default) to disable or enable logging with elasticsearch. - -```yaml -cluster_logging: true -``` - -Turn `cluster_monitoring` to true (default) or false to enable or disable cluster monitoring with heapster and influxdb. - -```yaml -cluster_monitoring: true -``` - -Turn `dns_setup` to true (recommended) or false to enable or disable whole DNS configuration. - -```yaml -dns_setup: true -``` - -**Tell ansible to get to work!** - -This will finally setup your whole Kubernetes cluster for you. 
- -```shell -cd ~/contrib/ansible/ - -./setup.sh -``` - -## Testing and using your new cluster - -That's all there is to it. It's really that easy. At this point you should have a functioning Kubernetes cluster. - -**Show kubernetes nodes** - -Run the following on the kube-master: - -```shell -kubectl get nodes -``` - -**Show services running on masters and nodes** - -```shell -systemctl | grep -i kube -``` - -**Show firewall rules on the masters and nodes** - -```shell -iptables -nvL - -``` - -**Create /tmp/apache.json on the master with the following contents and deploy pod** - -```json -{ - "kind": "Pod", - "apiVersion": "v1", - "metadata": { - "name": "fedoraapache", - "labels": { - "name": "fedoraapache" - } - }, - "spec": { - "containers": [ - { - "name": "fedoraapache", - "image": "fedora/apache", - "ports": [ - { - "hostPort": 80, - "containerPort": 80 - } - ] - } - ] - } -} -``` - -```shell -kubectl create -f /tmp/apache.json -``` - -**Check where the pod was created** - -```shell -kubectl get pods -``` - -**Check Docker status on nodes** - -```shell -docker ps -docker images -``` - -**After the pod is 'Running' Check web server access on the node** - -```shell -curl http://localhost -``` - -That's it ! - -## Support Level - - -IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level --------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Bare-metal | Ansible | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/fedora_ansible_config) | | Project - -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. +--- +assignees: +- aveshagarwal +- erictune + +--- + +Configuring Kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort. + +* TOC +{:toc} + +## Prerequisites + +1. Host able to run ansible and able to clone the following repo: [kubernetes](https://github.com/kubernetes/kubernetes.git) +2. A Fedora 21+ host to act as cluster master +3. As many Fedora 21+ hosts as you would like, that act as cluster nodes + +The hosts can be virtual or bare metal. Ansible will take care of the rest of the configuration for you - configuring networking, installing packages, handling the firewall, etc. This example will use one master and two nodes. + +## Architecture of the cluster + +A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a cluster with three hosts, for example: + +```shell +master,etcd = kube-master.example.com + node1 = kube-node-01.example.com + node2 = kube-node-02.example.com +``` + +**Make sure your local machine has** + + - ansible (must be 1.9.0+) + - git + - python-netaddr + +If not + +```shell +yum install -y ansible git python-netaddr +``` + +**Now clone down the Kubernetes repository** + +```shell +git clone https://github.com/kubernetes/contrib.git +cd contrib/ansible +``` + +**Tell ansible about each machine and its role in your cluster** + +Get the IP addresses from the master and nodes. Add those to the `~/contrib/ansible/inventory` file on the host running Ansible. 
+ +```shell +[masters] +kube-master.example.com + +[etcd] +kube-master.example.com + +[nodes] +kube-node-01.example.com +kube-node-02.example.com +``` + +## Setting up ansible access to your nodes + +If you already are running on a machine which has passwordless ssh access to the kube-master and kube-node-{01,02} nodes, and 'sudo' privileges, simply set the value of `ansible_ssh_user` in `~/contrib/ansible/group_vars/all.yml` to the username which you use to ssh to the nodes (i.e. `fedora`), and proceed to the next step... + +*Otherwise* setup ssh on the machines like so (you will need to know the root password to all machines in the cluster). + +edit: ~/contrib/ansible/group_vars/all.yml + +```yaml +ansible_ssh_user: root +``` + +**Configuring ssh access to the cluster** + +If you already have ssh access to every machine using ssh public keys you may skip to [setting up the cluster](#setting-up-the-cluster) + +Make sure your local machine (root) has an ssh key pair if not + +```shell +ssh-keygen +``` + +Copy the ssh public key to **all** nodes in the cluster + +```shell +for node in kube-master.example.com kube-node-01.example.com kube-node-02.example.com; do + ssh-copy-id ${node} +done +``` + +## Setting up the cluster + +Although the default value of variables in `~/contrib/ansible/group_vars/all.yml` should be good enough, if not, change them as needed. + +```conf +edit: ~/contrib/ansible/group_vars/all.yml +``` + +**Configure access to kubernetes packages** + +Modify `source_type` as below to access kubernetes packages through the package manager. + +```yaml +source_type: packageManager +``` + +**Configure the IP addresses used for services** + +Each Kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment. + +```yaml +kube_service_addresses: 10.254.0.0/16 +``` + +**Managing flannel** + +Modify `flannel_subnet`, `flannel_prefix` and `flannel_host_prefix` only if defaults are not appropriate for your cluster. + + +**Managing add on services in your cluster** + +Set `cluster_logging` to false or true (default) to disable or enable logging with elasticsearch. + +```yaml +cluster_logging: true +``` + +Turn `cluster_monitoring` to true (default) or false to enable or disable cluster monitoring with heapster and influxdb. + +```yaml +cluster_monitoring: true +``` + +Turn `dns_setup` to true (recommended) or false to enable or disable whole DNS configuration. + +```yaml +dns_setup: true +``` + +**Tell ansible to get to work!** + +This will finally setup your whole Kubernetes cluster for you. + +```shell +cd ~/contrib/ansible/ + +./setup.sh +``` + +## Testing and using your new cluster + +That's all there is to it. It's really that easy. At this point you should have a functioning Kubernetes cluster. 
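As an optional sanity check before walking through the steps below (this check is not part of the original playbook output, and it assumes `kubectl` is already configured on kube-master by the Ansible run), you can confirm that the control-plane components report healthy:

```shell
# Run on kube-master; both commands ship with kubectl of this era
kubectl cluster-info
kubectl get componentstatuses
```

If the scheduler, controller-manager, and etcd all show `Healthy`, the commands in the following steps should behave as described.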
+ +**Show kubernetes nodes** + +Run the following on the kube-master: + +```shell +kubectl get nodes +``` + +**Show services running on masters and nodes** + +```shell +systemctl | grep -i kube +``` + +**Show firewall rules on the masters and nodes** + +```shell +iptables -nvL + +``` + +**Create /tmp/apache.json on the master with the following contents and deploy pod** + +```json +{ + "kind": "Pod", + "apiVersion": "v1", + "metadata": { + "name": "fedoraapache", + "labels": { + "name": "fedoraapache" + } + }, + "spec": { + "containers": [ + { + "name": "fedoraapache", + "image": "fedora/apache", + "ports": [ + { + "hostPort": 80, + "containerPort": 80 + } + ] + } + ] + } +} +``` + +```shell +kubectl create -f /tmp/apache.json +``` + +**Check where the pod was created** + +```shell +kubectl get pods +``` + +**Check Docker status on nodes** + +```shell +docker ps +docker images +``` + +**After the pod is 'Running' Check web server access on the node** + +```shell +curl http://localhost +``` + +That's it ! + +## Support Level + + +IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level +-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- +Bare-metal | Ansible | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/fedora_ansible_config) | | Project + +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/fedora/fedora_manual_config.md b/docs/getting-started-guides/fedora/fedora_manual_config.md index 238498d18c..d1948530a8 100644 --- a/docs/getting-started-guides/fedora/fedora_manual_config.md +++ b/docs/getting-started-guides/fedora/fedora_manual_config.md @@ -1,219 +1,219 @@ ---- -assignees: -- aveshagarwal -- eparis -- thockin - ---- - -* TOC -{:toc} - -## Prerequisites - -1. You need 2 or more machines with Fedora installed. - -## Instructions - -This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc... - -This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](/docs/admin/networking/) done outside of Kubernetes. Although the additional Kubernetes configuration requirements should be obvious. - -The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and Kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker. - -**System Information:** - -Hosts: - -```conf -fed-master = 192.168.121.9 -fed-node = 192.168.121.65 -``` - -**Prepare the hosts:** - -* Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond. 
-* The [--enablerepo=updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive. -* If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below. -* Running on AWS EC2 with RHEL 7.2, you need to enable "extras" repository for yum by editing `/etc/yum.repos.d/redhat-rhui.repo` and changing the changing the `enable=0` to `enable=1` for extras. - -```shell -yum -y install --enablerepo=updates-testing kubernetes -``` - -* Install etcd and iptables - -```shell -yum -y install etcd iptables -``` - -* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping. - -```shell -echo "192.168.121.9 fed-master -192.168.121.65 fed-node" >> /etc/hosts -``` - -* Edit /etc/kubernetes/config which will be the same on all hosts (master and node) to contain: - -```shell -# Comma separated list of nodes in the etcd cluster -KUBE_MASTER="--master=http://fed-master:8080" - -# logging to stderr means we get it in the systemd journal -KUBE_LOGTOSTDERR="--logtostderr=true" - -# journal message level, 0 is debug -KUBE_LOG_LEVEL="--v=0" - -# Should this cluster be allowed to run privileged docker containers -KUBE_ALLOW_PRIV="--allow-privileged=false" -``` - -* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers. Please note that iptables-services does not exist on default fedora server install. - -```shell -systemctl disable iptables-services firewalld -systemctl stop iptables-services firewalld -``` - -**Configure the Kubernetes services on the master.** - -* Edit /etc/kubernetes/apiserver to appear as such. The service-cluster-ip-range IP addresses must be an unused block of addresses, not used anywhere else. They do not need to be routed or assigned to anything. - -```shell -# The address on the local server to listen to. -KUBE_API_ADDRESS="--address=0.0.0.0" - -# Comma separated list of nodes in the etcd cluster -KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:4001" - -# Address range to use for services -KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16" - -# Add your own! -KUBE_API_ARGS="" -``` - -* Edit /etc/etcd/etcd.conf,let the etcd to listen all the ip instead of 127.0.0.1, if not, you will get the error like "connection refused". Note that Fedora 22 uses etcd 2.0, One of the changes in etcd 2.0 is that now uses port 2379 and 2380 (as opposed to etcd 0.46 which userd 4001 and 7001). 
- -```shell -ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001" -``` - -* Create /var/run/kubernetes on master: - -```shell -mkdir /var/run/kubernetes -chown kube:kube /var/run/kubernetes -chmod 750 /var/run/kubernetes -``` - -* Start the appropriate services on master: - -```shell -for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do - systemctl restart $SERVICES - systemctl enable $SERVICES - systemctl status $SERVICES -done -``` - -* Addition of nodes: - -* Create following node.json file on Kubernetes master node: - -```json -{ - "apiVersion": "v1", - "kind": "Node", - "metadata": { - "name": "fed-node", - "labels":{ "name": "fed-node-label"} - }, - "spec": { - "externalID": "fed-node" - } -} -``` - -Now create a node object internally in your Kubernetes cluster by running: - -```shell -$ kubectl create -f ./node.json - -$ kubectl get nodes -NAME LABELS STATUS -fed-node name=fed-node-label Unknown -``` - -Please note that in the above, it only creates a representation for the node -_fed-node_ internally. It does not provision the actual _fed-node_. Also, it -is assumed that _fed-node_ (as specified in `name`) can be resolved and is -reachable from Kubernetes master node. This guide will discuss how to provision -a Kubernetes node (fed-node) below. - -**Configure the Kubernetes services on the node.** - -***We need to configure the kubelet on the node.*** - -* Edit /etc/kubernetes/kubelet to appear as such: - -```shell -### -# Kubernetes kubelet (node) config - -# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces) -KUBELET_ADDRESS="--address=0.0.0.0" - -# You may leave this blank to use the actual hostname -KUBELET_HOSTNAME="--hostname-override=fed-node" - -# location of the api-server -KUBELET_API_SERVER="--api-servers=http://fed-master:8080" - -# Add your own! -#KUBELET_ARGS="" -``` - -* Start the appropriate services on the node (fed-node). - -```shell -for SERVICES in kube-proxy kubelet docker; do - systemctl restart $SERVICES - systemctl enable $SERVICES - systemctl status $SERVICES -done -``` - -* Check to make sure now the cluster can see the fed-node on fed-master, and its status changes to _Ready_. - -```shell -kubectl get nodes -NAME LABELS STATUS -fed-node name=fed-node-label Ready -``` - -* Deletion of nodes: - -To delete _fed-node_ from your Kubernetes cluster, one should run the following on fed-master (Please do not do it, it is just for information): - -```shell -kubectl delete -f ./node.json -``` - -*You should be finished!* - -**The cluster should be running! Launch a test pod.** - -You should have a functional cluster, check out [101](/docs/user-guide/walkthrough/)! - -## Support Level - - -IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level --------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Bare-metal | custom | Fedora | _none_ | [docs](/docs/getting-started-guides/fedora/fedora_manual_config) | | Project - -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. - +--- +assignees: +- aveshagarwal +- eparis +- thockin + +--- + +* TOC +{:toc} + +## Prerequisites + +1. You need 2 or more machines with Fedora installed. + +## Instructions + +This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc... 
+ +This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](/docs/admin/networking/) done outside of Kubernetes. Although the additional Kubernetes configuration requirements should be obvious. + +The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and Kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker. + +**System Information:** + +Hosts: + +```conf +fed-master = 192.168.121.9 +fed-node = 192.168.121.65 +``` + +**Prepare the hosts:** + +* Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond. +* The [--enablerepo=updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive. +* If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below. +* Running on AWS EC2 with RHEL 7.2, you need to enable "extras" repository for yum by editing `/etc/yum.repos.d/redhat-rhui.repo` and changing the changing the `enable=0` to `enable=1` for extras. + +```shell +yum -y install --enablerepo=updates-testing kubernetes +``` + +* Install etcd and iptables + +```shell +yum -y install etcd iptables +``` + +* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping. + +```shell +echo "192.168.121.9 fed-master +192.168.121.65 fed-node" >> /etc/hosts +``` + +* Edit /etc/kubernetes/config which will be the same on all hosts (master and node) to contain: + +```shell +# Comma separated list of nodes in the etcd cluster +KUBE_MASTER="--master=http://fed-master:8080" + +# logging to stderr means we get it in the systemd journal +KUBE_LOGTOSTDERR="--logtostderr=true" + +# journal message level, 0 is debug +KUBE_LOG_LEVEL="--v=0" + +# Should this cluster be allowed to run privileged docker containers +KUBE_ALLOW_PRIV="--allow-privileged=false" +``` + +* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers. Please note that iptables-services does not exist on default fedora server install. + +```shell +systemctl disable iptables-services firewalld +systemctl stop iptables-services firewalld +``` + +**Configure the Kubernetes services on the master.** + +* Edit /etc/kubernetes/apiserver to appear as such. The service-cluster-ip-range IP addresses must be an unused block of addresses, not used anywhere else. 
They do not need to be routed or assigned to anything. + +```shell +# The address on the local server to listen to. +KUBE_API_ADDRESS="--address=0.0.0.0" + +# Comma separated list of nodes in the etcd cluster +KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:4001" + +# Address range to use for services +KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16" + +# Add your own! +KUBE_API_ARGS="" +``` + +* Edit /etc/etcd/etcd.conf,let the etcd to listen all the ip instead of 127.0.0.1, if not, you will get the error like "connection refused". Note that Fedora 22 uses etcd 2.0, One of the changes in etcd 2.0 is that now uses port 2379 and 2380 (as opposed to etcd 0.46 which userd 4001 and 7001). + +```shell +ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001" +``` + +* Create /var/run/kubernetes on master: + +```shell +mkdir /var/run/kubernetes +chown kube:kube /var/run/kubernetes +chmod 750 /var/run/kubernetes +``` + +* Start the appropriate services on master: + +```shell +for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do + systemctl restart $SERVICES + systemctl enable $SERVICES + systemctl status $SERVICES +done +``` + +* Addition of nodes: + +* Create following node.json file on Kubernetes master node: + +```json +{ + "apiVersion": "v1", + "kind": "Node", + "metadata": { + "name": "fed-node", + "labels":{ "name": "fed-node-label"} + }, + "spec": { + "externalID": "fed-node" + } +} +``` + +Now create a node object internally in your Kubernetes cluster by running: + +```shell +$ kubectl create -f ./node.json + +$ kubectl get nodes +NAME LABELS STATUS +fed-node name=fed-node-label Unknown +``` + +Please note that in the above, it only creates a representation for the node +_fed-node_ internally. It does not provision the actual _fed-node_. Also, it +is assumed that _fed-node_ (as specified in `name`) can be resolved and is +reachable from Kubernetes master node. This guide will discuss how to provision +a Kubernetes node (fed-node) below. + +**Configure the Kubernetes services on the node.** + +***We need to configure the kubelet on the node.*** + +* Edit /etc/kubernetes/kubelet to appear as such: + +```shell +### +# Kubernetes kubelet (node) config + +# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces) +KUBELET_ADDRESS="--address=0.0.0.0" + +# You may leave this blank to use the actual hostname +KUBELET_HOSTNAME="--hostname-override=fed-node" + +# location of the api-server +KUBELET_API_SERVER="--api-servers=http://fed-master:8080" + +# Add your own! +#KUBELET_ARGS="" +``` + +* Start the appropriate services on the node (fed-node). + +```shell +for SERVICES in kube-proxy kubelet docker; do + systemctl restart $SERVICES + systemctl enable $SERVICES + systemctl status $SERVICES +done +``` + +* Check to make sure now the cluster can see the fed-node on fed-master, and its status changes to _Ready_. + +```shell +kubectl get nodes +NAME LABELS STATUS +fed-node name=fed-node-label Ready +``` + +* Deletion of nodes: + +To delete _fed-node_ from your Kubernetes cluster, one should run the following on fed-master (Please do not do it, it is just for information): + +```shell +kubectl delete -f ./node.json +``` + +*You should be finished!* + +**The cluster should be running! Launch a test pod.** + +You should have a functional cluster, check out [101](/docs/user-guide/walkthrough/)! + +## Support Level + + +IaaS Provider | Config. 
Mgmt | OS | Networking | Docs | Conforms | Support Level +-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- +Bare-metal | custom | Fedora | _none_ | [docs](/docs/getting-started-guides/fedora/fedora_manual_config) | | Project + +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. + diff --git a/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md b/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md index 7f4504f2c9..bcd10f57b9 100644 --- a/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md +++ b/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md @@ -1,191 +1,191 @@ ---- -assignees: -- dchen1107 -- erictune -- thockin - ---- -* TOC -{:toc} - -This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](/docs/getting-started-guides/fedora/fedora_manual_config/) to setup 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on Kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network. - -## Prerequisites - -You need 2 or more machines with Fedora installed. - -## Master Setup - -**Perform following commands on the Kubernetes master** - -* Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. Flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose kernel based vxlan backend. The contents of the json are: - -```json -{ - "Network": "18.16.0.0/16", - "SubnetLen": 24, - "Backend": { - "Type": "vxlan", - "VNI": 1 - } -} -``` - -**NOTE:** Choose an IP range that is *NOT* part of the public IP address range. - -Add the configuration to the etcd server on fed-master. - -```shell -etcdctl set /coreos.com/network/config < flannel-config.json -``` - -* Verify the key exists in the etcd server on fed-master. - -```shell -etcdctl get /coreos.com/network/config -``` - -## Node Setup - -**Perform following commands on all Kubernetes nodes** - -Edit the flannel configuration file /etc/sysconfig/flanneld as follows: - -```shell -# Flanneld configuration options - -# etcd url location. Point this to the server where etcd runs -FLANNEL_ETCD="http://fed-master:4001" - -# etcd config key. This is the configuration key that flannel queries -# For address range assignment -FLANNEL_ETCD_KEY="/coreos.com/network" - -# Any additional options that you want to pass -FLANNEL_OPTIONS="" -``` - -**Note:** By default, flannel uses the interface for the default route. If you have multiple interfaces and would like to use an interface other than the default route one, you could add "-iface=" to FLANNEL_OPTIONS. For additional options, run `flanneld --help` on command line. - -Enable the flannel service. - -```shell -systemctl enable flanneld -``` - -If docker is not running, then starting flannel service is enough and skip the next step. 
- -```shell -systemctl start flanneld -``` - -If docker is already running, then stop docker, delete docker bridge (docker0), start flanneld and restart docker as follows. Another alternative is to just reboot the system (`systemctl reboot`). - -```shell -systemctl stop docker -ip link delete docker0 -systemctl start flanneld -systemctl start docker -``` - - -## **Test the cluster and flannel configuration** - -Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the ip addresses of docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each Kubernetes node out of the IP range configured above. A working output should look like this: - -```shell -# ip -4 a|grep inet - inet 127.0.0.1/8 scope host lo - inet 192.168.122.77/24 brd 192.168.122.255 scope global dynamic eth0 - inet 18.16.29.0/16 scope global flannel.1 - inet 18.16.29.1/24 scope global docker0 -``` - -From any node in the cluster, check the cluster members by issuing a query to etcd server via curl (only partial output is shown using `grep -E "\{|\}|key|value"`). If you set up a 1 master and 3 nodes cluster, you should see one block for each node showing the subnets they have been assigned. You can associate those subnets to each node by the MAC address (VtepMAC) and IP address (Public IP) that is listed in the output. - -```shell -curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool -``` - -```json -{ - "node": { - "key": "/coreos.com/network/subnets", - { - "key": "/coreos.com/network/subnets/18.16.29.0-24", - "value": "{\"PublicIP\":\"192.168.122.77\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"46:f1:d0:18:d0:65\"}}" - }, - { - "key": "/coreos.com/network/subnets/18.16.83.0-24", - "value": "{\"PublicIP\":\"192.168.122.36\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"ca:38:78:fc:72:29\"}}" - }, - { - "key": "/coreos.com/network/subnets/18.16.90.0-24", - "value": "{\"PublicIP\":\"192.168.122.127\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"92:e2:80:ba:2d:4d\"}}" - } - } -} -``` - -From all nodes, review the `/run/flannel/subnet.env` file. This file was generated automatically by flannel. - -```shell -# cat /run/flannel/subnet.env -FLANNEL_SUBNET=18.16.29.1/24 -FLANNEL_MTU=1450 -FLANNEL_IPMASQ=false -``` - -At this point, we have etcd running on the Kubernetes master, and flannel / docker running on Kubernetes nodes. Next steps are for testing cross-host container communication which will confirm that docker and flannel are configured properly. - -Issue the following commands on any 2 nodes: - -```shell -# docker run -it fedora:latest bash -bash-4.3# -``` - -This will place you inside the container. Install iproute and iputils packages to install ip and ping utilities. Due to a [bug](https://bugzilla.redhat.com/show_bug.cgi?id=1142311), it is required to modify capabilities of ping binary to work around "Operation not permitted" error. 
- -```shell -bash-4.3# yum -y install iproute iputils -bash-4.3# setcap cap_net_raw-ep /usr/bin/ping -``` - -Now note the IP address on the first node: - -```shell -bash-4.3# ip -4 a l eth0 | grep inet - inet 18.16.29.4/24 scope global eth0 -``` - -And also note the IP address on the other node: - -```shell -bash-4.3# ip a l eth0 | grep inet - inet 18.16.90.4/24 scope global eth0 -``` -Now ping from the first node to the other node: - -```shell -bash-4.3# ping 18.16.90.4 -PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data. -64 bytes from 18.16.90.4: icmp_seq=1 ttl=62 time=0.275 ms -64 bytes from 18.16.90.4: icmp_seq=2 ttl=62 time=0.372 ms -``` - -Now Kubernetes multi-node cluster is set up with overlay networking set up by flannel. - -## Support Level - - -IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level --------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Bare-metal | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) -libvirt | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) -KVM | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) - - - -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. - +--- +assignees: +- dchen1107 +- erictune +- thockin + +--- +* TOC +{:toc} + +This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](/docs/getting-started-guides/fedora/fedora_manual_config/) to setup 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on Kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network. + +## Prerequisites + +You need 2 or more machines with Fedora installed. + +## Master Setup + +**Perform following commands on the Kubernetes master** + +* Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. Flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose kernel based vxlan backend. The contents of the json are: + +```json +{ + "Network": "18.16.0.0/16", + "SubnetLen": 24, + "Backend": { + "Type": "vxlan", + "VNI": 1 + } +} +``` + +**NOTE:** Choose an IP range that is *NOT* part of the public IP address range. + +Add the configuration to the etcd server on fed-master. + +```shell +etcdctl set /coreos.com/network/config < flannel-config.json +``` + +* Verify the key exists in the etcd server on fed-master. 
+ +```shell +etcdctl get /coreos.com/network/config +``` + +## Node Setup + +**Perform following commands on all Kubernetes nodes** + +Edit the flannel configuration file /etc/sysconfig/flanneld as follows: + +```shell +# Flanneld configuration options + +# etcd url location. Point this to the server where etcd runs +FLANNEL_ETCD="http://fed-master:4001" + +# etcd config key. This is the configuration key that flannel queries +# For address range assignment +FLANNEL_ETCD_KEY="/coreos.com/network" + +# Any additional options that you want to pass +FLANNEL_OPTIONS="" +``` + +**Note:** By default, flannel uses the interface for the default route. If you have multiple interfaces and would like to use an interface other than the default route one, you could add "-iface=" to FLANNEL_OPTIONS. For additional options, run `flanneld --help` on command line. + +Enable the flannel service. + +```shell +systemctl enable flanneld +``` + +If docker is not running, then starting flannel service is enough and skip the next step. + +```shell +systemctl start flanneld +``` + +If docker is already running, then stop docker, delete docker bridge (docker0), start flanneld and restart docker as follows. Another alternative is to just reboot the system (`systemctl reboot`). + +```shell +systemctl stop docker +ip link delete docker0 +systemctl start flanneld +systemctl start docker +``` + + +## **Test the cluster and flannel configuration** + +Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the ip addresses of docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each Kubernetes node out of the IP range configured above. A working output should look like this: + +```shell +# ip -4 a|grep inet + inet 127.0.0.1/8 scope host lo + inet 192.168.122.77/24 brd 192.168.122.255 scope global dynamic eth0 + inet 18.16.29.0/16 scope global flannel.1 + inet 18.16.29.1/24 scope global docker0 +``` + +From any node in the cluster, check the cluster members by issuing a query to etcd server via curl (only partial output is shown using `grep -E "\{|\}|key|value"`). If you set up a 1 master and 3 nodes cluster, you should see one block for each node showing the subnets they have been assigned. You can associate those subnets to each node by the MAC address (VtepMAC) and IP address (Public IP) that is listed in the output. + +```shell +curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool +``` + +```json +{ + "node": { + "key": "/coreos.com/network/subnets", + { + "key": "/coreos.com/network/subnets/18.16.29.0-24", + "value": "{\"PublicIP\":\"192.168.122.77\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"46:f1:d0:18:d0:65\"}}" + }, + { + "key": "/coreos.com/network/subnets/18.16.83.0-24", + "value": "{\"PublicIP\":\"192.168.122.36\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"ca:38:78:fc:72:29\"}}" + }, + { + "key": "/coreos.com/network/subnets/18.16.90.0-24", + "value": "{\"PublicIP\":\"192.168.122.127\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"92:e2:80:ba:2d:4d\"}}" + } + } +} +``` + +From all nodes, review the `/run/flannel/subnet.env` file. This file was generated automatically by flannel. + +```shell +# cat /run/flannel/subnet.env +FLANNEL_SUBNET=18.16.29.1/24 +FLANNEL_MTU=1450 +FLANNEL_IPMASQ=false +``` + +At this point, we have etcd running on the Kubernetes master, and flannel / docker running on Kubernetes nodes. 
Next steps are for testing cross-host container communication which will confirm that docker and flannel are configured properly. + +Issue the following commands on any 2 nodes: + +```shell +# docker run -it fedora:latest bash +bash-4.3# +``` + +This will place you inside the container. Install iproute and iputils packages to install ip and ping utilities. Due to a [bug](https://bugzilla.redhat.com/show_bug.cgi?id=1142311), it is required to modify capabilities of ping binary to work around "Operation not permitted" error. + +```shell +bash-4.3# yum -y install iproute iputils +bash-4.3# setcap cap_net_raw-ep /usr/bin/ping +``` + +Now note the IP address on the first node: + +```shell +bash-4.3# ip -4 a l eth0 | grep inet + inet 18.16.29.4/24 scope global eth0 +``` + +And also note the IP address on the other node: + +```shell +bash-4.3# ip a l eth0 | grep inet + inet 18.16.90.4/24 scope global eth0 +``` +Now ping from the first node to the other node: + +```shell +bash-4.3# ping 18.16.90.4 +PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data. +64 bytes from 18.16.90.4: icmp_seq=1 ttl=62 time=0.275 ms +64 bytes from 18.16.90.4: icmp_seq=2 ttl=62 time=0.372 ms +``` + +Now Kubernetes multi-node cluster is set up with overlay networking set up by flannel. + +## Support Level + + +IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level +-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- +Bare-metal | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) +libvirt | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) +KVM | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) + + + +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. + diff --git a/docs/getting-started-guides/gce.md b/docs/getting-started-guides/gce.md index 22b1679a50..778795c3db 100644 --- a/docs/getting-started-guides/gce.md +++ b/docs/getting-started-guides/gce.md @@ -25,7 +25,8 @@ If you want to use custom binaries or pure open source Kubernetes, please contin 1. Install `gcloud` as necessary. `gcloud` can be installed as a part of the [Google Cloud SDK](https://cloud.google.com/sdk/). 1. Enable the [Compute Engine Instance Group Manager API](https://developers.google.com/console/help/new/#activatingapis) in the [Google Cloud developers console](https://console.developers.google.com). 1. Make sure that gcloud is set to use the Google Cloud Platform project you want. You can check the current project using `gcloud config list project` and change it via `gcloud config set project `. -1. Make sure you have credentials for GCloud by running ` gcloud auth login`. +1. Make sure you have credentials for GCloud by running `gcloud auth login`. +1. (Optional) In order to make API calls against GCE, you must also run `gcloud auth application-default login`. 1. Make sure you can start up a GCE VM from the command line. At least make sure you can do the [Create an instance](https://cloud.google.com/compute/docs/instances/#startinstancegcloud) part of the GCE Quickstart. 1. 
Make sure you can ssh into the VM without interactive prompts. See the [Log in to the instance](https://cloud.google.com/compute/docs/instances/#sshing) part of the GCE Quickstart. @@ -245,5 +246,3 @@ For support level information on all solutions, see the [Table of solutions](/do Please see the [Kubernetes docs](/docs/) for more details on administering and using a Kubernetes cluster. - - diff --git a/docs/getting-started-guides/index.md b/docs/getting-started-guides/index.md index ec5eb98759..ac02bd726a 100644 --- a/docs/getting-started-guides/index.md +++ b/docs/getting-started-guides/index.md @@ -48,8 +48,8 @@ few commands, and have active community support. - [GCE](/docs/getting-started-guides/gce) - [AWS](/docs/getting-started-guides/aws) +- [Azure](/docs/getting-started-guides/azure/) - [Azure](/docs/getting-started-guides/coreos/azure/) (Weave-based, contributed by WeaveWorks employees) -- [Azure](/docs/getting-started-guides/azure/) (Flannel-based, contributed by Microsoft employee) - [CenturyLink Cloud](/docs/getting-started-guides/clc) - [IBM SoftLayer](https://github.com/patrocinio/kubernetes-softlayer) @@ -70,7 +70,7 @@ writing a new solution](https://github.com/kubernetes/kubernetes/tree/{{page.git These solutions are combinations of cloud provider and OS not covered by the above solutions. -- [AWS + coreos](/docs/getting-started-guides/coreos) +- [AWS + CoreOS](/docs/getting-started-guides/coreos) - [GCE + CoreOS](/docs/getting-started-guides/coreos) - [AWS + Ubuntu](/docs/getting-started-guides/juju) - [Joyent + Ubuntu](/docs/getting-started-guides/juju) @@ -122,7 +122,7 @@ Stackpoint.io | | multi-support | multi-support | [d AppsCode.com | Saltstack | Debian | multi-support | [docs](https://appscode.com/products/cloud-deployment/) | | Commercial GCE | Saltstack | Debian | GCE | [docs](/docs/getting-started-guides/gce) | ['œ“][1] | Project Azure | CoreOS | CoreOS | Weave | [docs](/docs/getting-started-guides/coreos/azure/) | | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin)) -Azure | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/azure) | | Community ([@colemickens](https://github.com/colemickens)) +Azure | Ignition | Ubuntu | Azure | [docs](/docs/getting-started-guides/azure) | | Community (Microsoft: [@brendandburns](https://github.com/brendandburns), [@colemickens](https://github.com/colemickens)) Docker Single Node | custom | N/A | local | [docs](/docs/getting-started-guides/docker) | | Project ([@brendandburns](https://github.com/brendandburns)) Docker Multi Node | custom | N/A | flannel | [docs](/docs/getting-started-guides/docker-multinode) | | Project ([@brendandburns](https://github.com/brendandburns)) Bare-metal | Ansible | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/fedora_ansible_config) | | Project diff --git a/docs/getting-started-guides/kubeadm.md b/docs/getting-started-guides/kubeadm.md index 6852438004..f4f4c15211 100644 --- a/docs/getting-started-guides/kubeadm.md +++ b/docs/getting-started-guides/kubeadm.md @@ -45,7 +45,7 @@ For each host in turn: * SSH into the machine and become `root` if you are not already (for example, run `sudo su -`). 
* If the machine is running Ubuntu 16.04, run: - # curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - + # curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - # cat < /etc/apt/sources.list.d/kubernetes.list deb http://apt.kubernetes.io/ kubernetes-xenial main EOF @@ -178,13 +178,13 @@ As an example, install a sample microservices application, a socks shop, to put To learn more about the sample microservices app, see the [GitHub README](https://github.com/microservices-demo/microservices-demo). # git clone https://github.com/microservices-demo/microservices-demo - # kubectl apply -f microservices-demo/deploy/kubernetes/manifests + # kubectl apply -f microservices-demo/deploy/kubernetes/manifests/sock-shop-ns.yml -f microservices-demo/deploy/kubernetes/manifests You can then find out the port that the [NodePort feature of services](/docs/user-guide/services/) allocated for the front-end service by running: - # kubectl describe svc front-end + # kubectl describe svc front-end -n sock-shop Name: front-end - Namespace: default + Namespace: sock-shop Labels: name=front-end Selector: name=front-end Type: NodePort @@ -194,7 +194,7 @@ You can then find out the port that the [NodePort feature of services](/docs/use Endpoints: Session Affinity: None -It takes several minutes to download and start all the containers, watch the output of `kubectl get pods` to see when they're all up and running. +It takes several minutes to download and start all the containers, watch the output of `kubectl get pods -n sock-shop` to see when they're all up and running. Then go to the IP address of your cluster's master node in your browser, and specify the given port. So for example, `http://:`. @@ -211,21 +211,24 @@ See the [list of add-ons](/docs/admin/addons/) to explore other add-ons, includi * Learn more about [Kubernetes concepts and kubectl in Kubernetes 101](/docs/user-guide/walkthrough/). * Install Kubernetes with [a cloud provider configurations](/docs/getting-started-guides/) to add Load Balancer and Persistent Volume support. +* Learn about `kubeadm`'s advanced usage on the [advanced reference doc](/docs/admin/kubeadm/) ## Cleanup * To uninstall the socks shop, run `kubectl delete -f microservices-demo/deploy/kubernetes/manifests` on the master. -* To undo what `kubeadm` did, simply delete the machines you created for this tutorial, or run the script below and then uninstall the packages. -
    -
    systemctl stop kubelet;
    -  docker rm -f $(docker ps -q); mount | grep "/var/lib/kubelet/*" | awk '{print $3}' | xargs umount 1>/dev/null 2>/dev/null;
    -  rm -rf /var/lib/kubelet /etc/kubernetes /var/lib/etcd /etc/cni;
    -  ip link set cbr0 down; ip link del cbr0;
    -  ip link set cni0 down; ip link del cni0;
    -  systemctl start kubelet
    -
    +* To undo what `kubeadm` did, simply delete the machines you created for this tutorial, or run the script below and then start over or uninstall the packages. + +
    + Reset local state: +
    systemctl stop kubelet;
    +  docker rm -f -v $(docker ps -q);
    +  find /var/lib/kubelet | xargs -n 1 findmnt -n -t tmpfs -o TARGET -T | uniq | xargs -r umount -v;
    +  rm -r -f /etc/kubernetes /var/lib/kubelet /var/lib/etcd;
    +  
    + If you wish to start over, run `systemctl start kubelet` followed by `kubeadm init` or `kubeadm join`. + ## Feedback @@ -253,3 +256,9 @@ Please note: `kubeadm` is a work in progress and these limitations will be addre 1. There is not yet an easy way to generate a `kubeconfig` file which can be used to authenticate to the cluster remotely with `kubectl` on, for example, your workstation. Workaround: copy the kubelet's `kubeconfig` from the master: use `scp root@:/etc/kubernetes/admin.conf .` and then e.g. `kubectl --kubeconfig ./admin.conf get nodes` from your workstation. + +1. If you are using VirtualBox (directly or via Vagrant), you will need to ensure that `hostname -i` returns a routable IP address (i.e. one on the second network interface, not the first one). + By default, it doesn't do this and kubelet ends-up using first non-loopback network interface, which is usually NATed. + Workaround: Modify `/etc/hosts`, take a look at this [`Vagrantfile`][ubuntu-vagrantfile] for how you this can be achieved. + +[ubuntu-vagrantfile]: https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11), diff --git a/docs/getting-started-guides/libvirt-coreos.md b/docs/getting-started-guides/libvirt-coreos.md index f752a13aee..86b0707092 100644 --- a/docs/getting-started-guides/libvirt-coreos.md +++ b/docs/getting-started-guides/libvirt-coreos.md @@ -134,7 +134,7 @@ export KUBERNETES_PROVIDER=libvirt-coreos; wget -q -O - https://get.k8s.io | bas Here is the curl version of this command: ```shell -export KUBERNETES_PROVIDER=libvirt-coreos; curl -sS https://get.k8s.io | bash` +export KUBERNETES_PROVIDER=libvirt-coreos; curl -sS https://get.k8s.io | bash ``` This script downloads and unpacks the tarball, then spawns a Kubernetes cluster on CoreOS instances with the following characteristics: diff --git a/docs/getting-started-guides/ubuntu.md b/docs/getting-started-guides/ubuntu.md index 3018bd026e..56ef9d9515 100644 --- a/docs/getting-started-guides/ubuntu.md +++ b/docs/getting-started-guides/ubuntu.md @@ -116,7 +116,13 @@ that conflicts with your own private network range. The `FLANNEL_NET` variable defines the IP range used for flannel overlay network, should not conflict with above `SERVICE_CLUSTER_IP_RANGE`. You can optionally provide additional Flannel network configuration -through `FLANNEL_OTHER_NET_CONFIG`, as explained in `cluster/ubuntu/config-default.sh`. +through `FLANNEL_BACKEND` and `FLANNEL_OTHER_NET_CONFIG`, as explained in `cluster/ubuntu/config-default.sh`. + +The default setting for `ADMISSION_CONTROL` is right for the latest +release of Kubernetes, but if you choose an earlier release then you +might want a different setting. See +[the admisson control doc](http://kubernetes.io/docs/admin/admission-controllers/#is-there-a-recommended-set-of-plug-ins-to-use) +for the recommended settings for various releases. **Note:** When deploying, master needs to be connected to the Internet to download the necessary files. If your machines are located in a private network that need proxy setting to connect the Internet, diff --git a/docs/index.md b/docs/index.md index 5e29c42dcb..38f3400167 100644 --- a/docs/index.md +++ b/docs/index.md @@ -77,9 +77,9 @@ h2, h3, h4 { Read the Overview
    -

    Hello World on Google Container Engine

    -

    In this quickstart, we’ll be creating a Kubernetes instance that stands up a simple “Hello World” app using Node.js. In just a few minutes you'll go from zero to deployed Kubernetes app on Google Container Engine (GKE), a hosted service from Google.

    - Get Started on GKE +

    Kubernetes Basics Interactive Tutorial

    +

    The Kubernetes Basics interactive tutorials let you try out Kubernetes features using Minikube right out of your web browser in a virtual terminal. Learn about the Kubernetes system and deploy, expose, scale, and upgrade a containerized application in just a few minutes.

    + Try the Interactive Tutorials

    Installing Kubernetes on Linux with kubeadm

    diff --git a/docs/tools/index.md b/docs/tools/index.md new file mode 100644 index 0000000000..37cc5bc54b --- /dev/null +++ b/docs/tools/index.md @@ -0,0 +1,40 @@ +--- +assignees: +- janetkuo + +--- + +* TOC +{:toc} + +## Native Tools + +### Kubectl + +[`kubectl`](/docs/user-guide/kubectl/) is the command line tool for Kubernetes. It controls the Kubernetes cluster manager. + +### Dashboard + +[Dashboard](/docs/user-guide/ui/), the web-based user interface of Kubernetes, allows you to deploy containerized applications +to a Kubernetes cluster, troubleshoot them, and manage the cluster and its resources itself. + +## Third-Party Tools + +### Helm + +[Kubernetes Helm](https://github.com/kubernetes/helm) is a tool for managing packages of pre-configured +Kubernetes resources, aka Kubernetes charts. + +Use Helm to: + +* Find and use popular software packaged as Kubernetes charts +* Share your own applications as Kubernetes charts +* Create reproducible builds of your Kubernetes applications +* Intelligently manage your Kubernetes manifest files +* Manage releases of Helm packages + +### Kompose + +[`kompose`](https://github.com/skippbox/kompose) is a tool to help users familiar with `docker-compose` +move to Kubernetes. It takes a Docker Compose file and translates it into Kubernetes objects. `kompose` +is a convenient tool to go from local Docker development to managing your application with Kubernetes. diff --git a/docs/tutorials/getting-started/create-cluster.html b/docs/tutorials/getting-started/create-cluster.html deleted file mode 100644 index 42c22f90e8..0000000000 --- a/docs/tutorials/getting-started/create-cluster.html +++ /dev/null @@ -1,47 +0,0 @@ ---- ---- - - - - - - - - - -
    - -
    - -
    -
    -

    Module overview

    -
      -
    • learn what a Kubernetes cluster is
    • -
    • learn what minikube is
    • -
    • start a Kubernetes cluster using an online terminal
    • -
    -

    -
    -
    -
    -

    What you need to know first

    -

    - Before you do this tutorial, you should be familiar with Linux containers. -

    -
    -
    -
    - - - -
    - -
    - - - diff --git a/docs/tutorials/getting-started/deploy-app.html b/docs/tutorials/getting-started/deploy-app.html deleted file mode 100644 index e60081b601..0000000000 --- a/docs/tutorials/getting-started/deploy-app.html +++ /dev/null @@ -1,52 +0,0 @@ ---- ---- - - - - - - - - - -
    - -
    - -
    -
    - Back -
    -
    - -
    -
    -

    Module overview

    -
      -
    • Learn about application Deployments
    • -
    • Deploy your first app on Kubernetes with Kubectl
    • -
    -

    -
    -
    -
    -

    What you need to know first

    -

    - How to start a Kubernetes cluster with minikube
    -

    -
    -
    -
    - - - -
    - -
    - - - diff --git a/docs/tutorials/getting-started/explore-app.html b/docs/tutorials/getting-started/explore-app.html deleted file mode 100644 index bc18696e81..0000000000 --- a/docs/tutorials/getting-started/explore-app.html +++ /dev/null @@ -1,53 +0,0 @@ ---- ---- - - - - - - - - - -
    - -
    - -
    -
    - Back -
    -
    - -
    -
    -

    Module overview

    -
      -
    • Learn about Kubernetes Pods
    • -
    • Learn about Kubernetes Nodes
    • -
    • Troubleshoot deployed applications
    • -
    -

    -
    -
    -
    -

    What you need to know first

    -

    - What are Deployments
    - How to deploy applications on Kubernetes -

    -
    -
    -
    - - -
    - -
    - - - diff --git a/docs/tutorials/getting-started/expose-app.html b/docs/tutorials/getting-started/expose-app.html deleted file mode 100644 index 04e8300b91..0000000000 --- a/docs/tutorials/getting-started/expose-app.html +++ /dev/null @@ -1,54 +0,0 @@ ---- ---- - - - - - - - - - -
    - -
    - -
    -
    - Back -
    -
    - -
    -
    -

    Module overview

    -
      -
    • Services
    • -
    • Learn about Kubernetes Labels
    • -
    • Exposing applications outside Kubernetes
    • -
    -

    -
    -
    -
    -

    What you need to know first

    -

    - How to deploy apps on Kubernetes
    - How to troubleshoot applications with Kubectl -

    -
    -
    -
    - - - -
    - -
    - - - diff --git a/docs/tutorials/getting-started/index.html b/docs/tutorials/getting-started/index.html deleted file mode 100644 index 932bd79363..0000000000 --- a/docs/tutorials/getting-started/index.html +++ /dev/null @@ -1,97 +0,0 @@ ---- ---- - - - - - - - - - -
    - -
    - -
    -
    -

    Getting Started with Kubernetes

    -

    By the end of this tutorial you will understand what Kubernetes does. You will also learn how to deploy, scale, update and debug containerized applications on a Kubernetes cluster using an interactive online terminal.

    -
    -
    - -
    - -
    -
    -

    Why Kubernetes?

    -

    Today users expect applications to be available 24/7, while developers expect to deploy new versions of those applications several times a day. The way we build software is moving in this direction, enabling applications to be released and updated in an easy and fast way without downtime. We also need to be able to scale application in line with the user demand and we expect them to make intelligent use of the available resources. Kubernetes is a platform designed to meet those requirements, using the experience accumulated by Google in this area, combined with best-of-breed ideas from the community.

    -
    -
    - -
    -

    Getting Started Modules

    -
    - -
    -
    - - -
    -
    -
    - -
    - -
    - -
    -
    -
    - - -
    -
    -
    -
    - - - -
    - -
    - - - diff --git a/docs/tutorials/getting-started/scale-app.html b/docs/tutorials/getting-started/scale-app.html deleted file mode 100644 index 4996dd3cd4..0000000000 --- a/docs/tutorials/getting-started/scale-app.html +++ /dev/null @@ -1,52 +0,0 @@ ---- ---- - - - - - - - - - -
    - -
    - -
    -
    - Back -
    -
    - -
    -
    -

    Module overview

    -
      -
    • Scaling an app with Kubectl
    • -
    -

    -
    -
    -
    -

    What you need to know first

    -

    - What are Deployments
    - What are Services -

    -
    -
    -
    - - - -
    - -
    - - - diff --git a/docs/tutorials/getting-started/update-app.html b/docs/tutorials/getting-started/update-app.html deleted file mode 100644 index 6ce6600d0e..0000000000 --- a/docs/tutorials/getting-started/update-app.html +++ /dev/null @@ -1,54 +0,0 @@ ---- ---- - - - - - - - - - -
    - -
    - -
    -
    - Back -
    -
    - -
    -
    -

    Module overview

    -
      -
    • Performing Rolling Updates with Kubectl
    • -
    -

    -
    -
    -
    -

    What you need to know first

    -

    - What are Deployments
    - What is Scaling -

    -
    -
    -
    - - - -
    - - - -
    - - - diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md index 23400521e7..14530ca25e 100644 --- a/docs/tutorials/index.md +++ b/docs/tutorials/index.md @@ -3,12 +3,18 @@ The Tutorials section of the Kubernetes documentation is a work in progress. +#### Kubernetes Basics + +* [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) is an in-depth interactive tutorial that helps you understand the Kubernetes system and try out some basic Kubernetes features. + #### Stateless Applications * [Running a Stateless Application Using a Deployment](/docs/tutorials/stateless-application/run-stateless-application-deployment/) * [Using a Service to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address-service/) +* [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/) + ### What's next If you would like to write a tutorial, see diff --git a/docs/tutorials/getting-started/cluster-interactive.html b/docs/tutorials/kubernetes-basics/cluster-interactive.html similarity index 60% rename from docs/tutorials/getting-started/cluster-interactive.html rename to docs/tutorials/kubernetes-basics/cluster-interactive.html index 8f74bab8b5..8b07b331c7 100644 --- a/docs/tutorials/getting-started/cluster-interactive.html +++ b/docs/tutorials/kubernetes-basics/cluster-interactive.html @@ -7,19 +7,13 @@ - +
    -
    -
    - Back -
    -
    -
    To interact with the Terminal, please use the desktop/tablet version @@ -28,7 +22,7 @@
    diff --git a/docs/tutorials/getting-started/cluster-intro.html b/docs/tutorials/kubernetes-basics/cluster-intro.html similarity index 75% rename from docs/tutorials/getting-started/cluster-intro.html rename to docs/tutorials/kubernetes-basics/cluster-intro.html index a3dafff437..009a8e3947 100644 --- a/docs/tutorials/getting-started/cluster-intro.html +++ b/docs/tutorials/kubernetes-basics/cluster-intro.html @@ -1,4 +1,7 @@ --- +redirect_from: + - /docs/tutorials/getting-started/create-cluster/ + - /docs/tutorials/getting-started/create-cluster.html --- @@ -7,23 +10,25 @@ - +
    -
    - Back + +
    +

    Objectives

    +
      +
    • Learn what a Kubernetes cluster is.
    • +
    • Learn what Minikube is.
    • +
    • Start a Kubernetes cluster using an online terminal.
    • +
    -
    -
    -
    - -
    +

    Kubernetes Clusters

    Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit. The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them specifically to individual machines. To make use of this new model of deployment, applications need to be packaged in a way that decouples them from individual hosts: they need to be containerized. Containerized applications are more flexible and available than in past deployment models, where applications were installed directly onto specific machines as packages deeply integrated into the host. Kubernetes automates the distribution and scheduling of application containers across a cluster in a more efficient way. Kubernetes is an open-source platform and is production-ready.

    @@ -60,7 +65,7 @@
    -

    +


    @@ -82,7 +87,7 @@

    When you deploy applications on Kubernetes, you tell the master to start the application containers. The master schedules the containers to run on the cluster's nodes. The nodes communicate with the master using the Kubernetes API, which the master exposes. End users can also use the Kubernetes API directly to interact with the cluster.
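As a rough, non-authoritative sketch of that last point, one common way for an end user to reach the API the master exposes is through `kubectl proxy`, which handles authentication and serves the API on a local port:

```shell
# Start a local, authenticated proxy to the Kubernetes API server.
kubectl proxy --port=8001 &

# Query standard API endpoints directly over the proxy.
curl http://localhost:8001/version
curl http://localhost:8001/api/v1/nodes
```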

    -

    A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use minikube. Minikube is a is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, Mac OS and Windows systems. The minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete. For this bootcamp, however, you'll use a provided online terminal with minikube pre-installed.

    +

    A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, Mac OS and Windows systems. The minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete. For this bootcamp, however, you'll use a provided online terminal with minikube pre-installed.
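The minikube bootstrapping operations named above map directly onto CLI commands; as a minimal sketch on a workstation with minikube installed locally (rather than the provided online terminal):

```shell
# Create and start a single-node cluster inside a local VM.
minikube start

# Report whether the local cluster and its VM are running.
minikube status

# Stop the VM, keeping the cluster state for later.
minikube stop

# Delete the local cluster entirely.
minikube delete
```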

    Now that you know what Kubernetes is, let’s go to the online tutorial and start our first cluster!

    @@ -92,7 +97,7 @@ diff --git a/docs/tutorials/getting-started/deploy-interactive.html b/docs/tutorials/kubernetes-basics/deploy-interactive.html similarity index 61% rename from docs/tutorials/getting-started/deploy-interactive.html rename to docs/tutorials/kubernetes-basics/deploy-interactive.html index 6525b21c4c..73d7e9dfc3 100644 --- a/docs/tutorials/getting-started/deploy-interactive.html +++ b/docs/tutorials/kubernetes-basics/deploy-interactive.html @@ -7,18 +7,13 @@ - +
    -
    -
    - Back -
    -

    @@ -31,7 +26,7 @@
    diff --git a/docs/tutorials/getting-started/deploy-intro.html b/docs/tutorials/kubernetes-basics/deploy-intro.html similarity index 85% rename from docs/tutorials/getting-started/deploy-intro.html rename to docs/tutorials/kubernetes-basics/deploy-intro.html index cd72f424b8..9fafe6012e 100644 --- a/docs/tutorials/getting-started/deploy-intro.html +++ b/docs/tutorials/kubernetes-basics/deploy-intro.html @@ -7,23 +7,24 @@ - +
    -
    - Back + +
    +

    Objectives

    +
      +
    • Learn about application Deployments.
    • +
    • Deploy your first app on Kubernetes with kubectl.
    • +
    -
    -
    -
    - -
    +

    Kubernetes Deployments

    Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it. To do so, you create a Kubernetes Deployment. The Deployment is responsible for creating and updating instances of your application. Once you've created a Deployment, the Kubernetes master schedules the application instances that the Deployment creates onto individual Nodes in the cluster.
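A minimal sketch of creating and inspecting a Deployment from the command line; the application name and image below are illustrative placeholders, not the ones used in the bootcamp:

```shell
# In a v1.4 cluster, kubectl run with --restart=Always generates a Deployment.
kubectl run my-app --image=nginx --port=80 --restart=Always

# List the Deployment and the Pods it scheduled onto cluster nodes.
kubectl get deployments
kubectl get pods
```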

    @@ -59,7 +60,7 @@
    -

    +


    @@ -94,7 +95,7 @@ diff --git a/docs/tutorials/getting-started/explore-interactive.html b/docs/tutorials/kubernetes-basics/explore-interactive.html similarity index 61% rename from docs/tutorials/getting-started/explore-interactive.html rename to docs/tutorials/kubernetes-basics/explore-interactive.html index 5b82635398..9b16d4bca4 100644 --- a/docs/tutorials/getting-started/explore-interactive.html +++ b/docs/tutorials/kubernetes-basics/explore-interactive.html @@ -7,18 +7,13 @@ - +
    -
    -
    - Back -
    -

    @@ -31,7 +26,7 @@
    diff --git a/docs/tutorials/getting-started/explore-intro.html b/docs/tutorials/kubernetes-basics/explore-intro.html similarity index 82% rename from docs/tutorials/getting-started/explore-intro.html rename to docs/tutorials/kubernetes-basics/explore-intro.html index df9cd7c305..4789838827 100644 --- a/docs/tutorials/getting-started/explore-intro.html +++ b/docs/tutorials/kubernetes-basics/explore-intro.html @@ -7,7 +7,7 @@ - +
    @@ -15,18 +15,19 @@
    -
    - Back + +
    +

    Objectives

    +
      +
    • Learn about Kubernetes Pods.
    • +
    • Learn about Kubernetes Nodes.
    • +
    • Troubleshoot deployed applications.
    • +
    -
    -
    -
    - -
    -

    Pods

    -

    When you created a Deployment in Module 2, Kubernetes created a Pod to host your application instance. A Pod is Kubernetes abstraction that represents a group of one or more application containers (such as Docker or rkt), and some shared resources for those containers. Those resources include:

    +

    Kubernetes Pods

    +

    When you created a Deployment in Module 2, Kubernetes created a Pod to host your application instance. A Pod is a Kubernetes abstraction that represents a group of one or more application containers (such as Docker or rkt), and some shared resources for those containers. Those resources include:

    • Shared storage, as Volumes
    • Networking, as a unique cluster IP address
    • @@ -63,7 +64,7 @@
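As an illustrative, hedged sketch of inspecting and troubleshooting Pods with kubectl (the pod name is a placeholder):

```shell
# List Pods together with their IPs and the Nodes they run on.
kubectl get pods -o wide

# Show a Pod's detailed state, containers and recent events.
kubectl describe pod <pod-name>

# Fetch container logs and run a command inside the container.
kubectl logs <pod-name>
kubectl exec <pod-name> -- env
```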
      -

      +


      @@ -97,7 +98,7 @@
      -

      +


      @@ -128,7 +129,7 @@ diff --git a/docs/tutorials/getting-started/expose-interactive.html b/docs/tutorials/kubernetes-basics/expose-interactive.html similarity index 61% rename from docs/tutorials/getting-started/expose-interactive.html rename to docs/tutorials/kubernetes-basics/expose-interactive.html index 04d3bc3f53..3288807588 100644 --- a/docs/tutorials/getting-started/expose-interactive.html +++ b/docs/tutorials/kubernetes-basics/expose-interactive.html @@ -7,19 +7,13 @@ - +
      -
      -
      - Back -
      -
      -
      To interact with the Terminal, please use the desktop/tablet version @@ -29,7 +23,7 @@
      diff --git a/docs/tutorials/getting-started/expose-intro.html b/docs/tutorials/kubernetes-basics/expose-intro.html similarity index 86% rename from docs/tutorials/getting-started/expose-intro.html rename to docs/tutorials/kubernetes-basics/expose-intro.html index 2178a7dd25..81c1981bf4 100644 --- a/docs/tutorials/getting-started/expose-intro.html +++ b/docs/tutorials/kubernetes-basics/expose-intro.html @@ -7,23 +7,26 @@ - +
      -
      - Back + +
      +

      Objectives

      +
        +
      • Learn about Kubernetes Services.
      • +
      • Learn about Kubernetes Labels.
      • +
      • Expose an application outside Kubernetes.
      • +
      -
      -
      -
      - -
      +

      Kubernetes Services

      +

      While Pods do have their own unique IP across the cluster, those IPs are not exposed outside Kubernetes. Taking into account that over time Pods may be terminated, deleted or replaced by other Pods, we need a way to let other Pods and applications automatically discover each other. Kubernetes addresses this by grouping Pods in Services. A Kubernetes Service is an abstraction layer which defines a logical set of Pods and enables external traffic exposure, load balancing and service discovery for those Pods.

      This abstraction will allow us to expose Pods to traffic originating from outside the cluster. Services have their own unique cluster-private IP address and expose a port to receive traffic. If you choose to expose the service outside the cluster, the options are:

      @@ -58,7 +61,7 @@
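Whichever exposure option is chosen, a minimal sketch of creating and inspecting a Service with kubectl looks roughly like this (the deployment name and port are placeholders):

```shell
# Expose an existing Deployment outside the cluster through a NodePort Service.
kubectl expose deployment <deployment-name> --type=NodePort --port=8080

# Inspect the Service, its cluster-private IP and the port it exposes.
kubectl get services
kubectl describe service <deployment-name>
```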
      -

      +


      @@ -106,7 +109,7 @@
      -

      +


      @@ -122,7 +125,7 @@
      diff --git a/docs/tutorials/kubernetes-basics/index.html b/docs/tutorials/kubernetes-basics/index.html new file mode 100644 index 0000000000..d678461e41 --- /dev/null +++ b/docs/tutorials/kubernetes-basics/index.html @@ -0,0 +1,105 @@ +--- +--- + + + + + + + + + +
      + +
      + +
      +
      +

      Kubernetes Basics

      +

      This tutorial provides a walkthrough of the basics of the Kubernetes cluster orchestration system. Each module contains some background information on major Kubernetes features and concepts, and includes an interactive online tutorial. These interactive tutorials let you manage a simple cluster and its containerized applications for yourself.

      +

      Using the interactive tutorials, you can learn to:

      +
        +
      • Deploy a containerized application on a cluster
      • +
      • Scale the deployment
      • +
      • Update the containerized application with a new software version
      • +
      • Debug the containerized application
      • +
      +

      The tutorials use Katacoda to run a virtual terminal in your web browser that runs Minikube, a small-scale local deployment of Kubernetes that can run anywhere. There's no need to install any software or configure anything; each interactive tutorial runs directly out of your web browser itself.

      +
      +
      + +
      + +
      +
      +

      What can Kubernetes do for you?

      +

      With modern web services, users expect applications to be available 24/7, and developers expect to deploy new versions of those applications several times a day. Containerization helps package software to serve these goals, enabling applications to be released and updated in an easy and fast way without downtime. Kubernetes helps you make sure those containerized applications run where and when you want, and helps them find the resources and tools they need to work. Kubernetes is a production-ready, open source platform designed with Google's accumulated experience in container orchestration, combined with best-of-breed ideas from the community.

      +
      +
      + +
      +

      Kubernetes Basics Modules

      +
      + +
      +
      + + +
      +
      +
      + +
      + +
      + +
      +
      +
      + + +
      +
      +
      +
      + + + +
      + +
      + + + diff --git a/docs/tutorials/getting-started/public/css/styles.css b/docs/tutorials/kubernetes-basics/public/css/styles.css similarity index 100% rename from docs/tutorials/getting-started/public/css/styles.css rename to docs/tutorials/kubernetes-basics/public/css/styles.css diff --git a/docs/tutorials/getting-started/public/images/badge-01.svg b/docs/tutorials/kubernetes-basics/public/images/badge-01.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/badge-01.svg rename to docs/tutorials/kubernetes-basics/public/images/badge-01.svg diff --git a/docs/tutorials/getting-started/public/images/badge-02.svg b/docs/tutorials/kubernetes-basics/public/images/badge-02.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/badge-02.svg rename to docs/tutorials/kubernetes-basics/public/images/badge-02.svg diff --git a/docs/tutorials/getting-started/public/images/badge-03.svg b/docs/tutorials/kubernetes-basics/public/images/badge-03.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/badge-03.svg rename to docs/tutorials/kubernetes-basics/public/images/badge-03.svg diff --git a/docs/tutorials/getting-started/public/images/badge-04.svg b/docs/tutorials/kubernetes-basics/public/images/badge-04.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/badge-04.svg rename to docs/tutorials/kubernetes-basics/public/images/badge-04.svg diff --git a/docs/tutorials/getting-started/public/images/badge-05.svg b/docs/tutorials/kubernetes-basics/public/images/badge-05.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/badge-05.svg rename to docs/tutorials/kubernetes-basics/public/images/badge-05.svg diff --git a/docs/tutorials/getting-started/public/images/badge-06.svg b/docs/tutorials/kubernetes-basics/public/images/badge-06.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/badge-06.svg rename to docs/tutorials/kubernetes-basics/public/images/badge-06.svg diff --git a/docs/tutorials/getting-started/public/images/badge-07.svg b/docs/tutorials/kubernetes-basics/public/images/badge-07.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/badge-07.svg rename to docs/tutorials/kubernetes-basics/public/images/badge-07.svg diff --git a/docs/tutorials/getting-started/public/images/badge-08.svg b/docs/tutorials/kubernetes-basics/public/images/badge-08.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/badge-08.svg rename to docs/tutorials/kubernetes-basics/public/images/badge-08.svg diff --git a/docs/tutorials/getting-started/public/images/badge-09.svg b/docs/tutorials/kubernetes-basics/public/images/badge-09.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/badge-09.svg rename to docs/tutorials/kubernetes-basics/public/images/badge-09.svg diff --git a/docs/tutorials/getting-started/public/images/badge-1.png b/docs/tutorials/kubernetes-basics/public/images/badge-1.png similarity index 100% rename from docs/tutorials/getting-started/public/images/badge-1.png rename to docs/tutorials/kubernetes-basics/public/images/badge-1.png diff --git a/docs/tutorials/getting-started/public/images/dislike.svg b/docs/tutorials/kubernetes-basics/public/images/dislike.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/dislike.svg rename to docs/tutorials/kubernetes-basics/public/images/dislike.svg diff --git 
a/docs/tutorials/getting-started/public/images/like.svg b/docs/tutorials/kubernetes-basics/public/images/like.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/like.svg rename to docs/tutorials/kubernetes-basics/public/images/like.svg diff --git a/docs/tutorials/getting-started/public/images/logo.png b/docs/tutorials/kubernetes-basics/public/images/logo.png similarity index 100% rename from docs/tutorials/getting-started/public/images/logo.png rename to docs/tutorials/kubernetes-basics/public/images/logo.png diff --git a/docs/tutorials/getting-started/public/images/logo.svg b/docs/tutorials/kubernetes-basics/public/images/logo.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/logo.svg rename to docs/tutorials/kubernetes-basics/public/images/logo.svg diff --git a/docs/tutorials/getting-started/public/images/logo_2.svg b/docs/tutorials/kubernetes-basics/public/images/logo_2.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/logo_2.svg rename to docs/tutorials/kubernetes-basics/public/images/logo_2.svg diff --git a/docs/tutorials/getting-started/public/images/logo_mobile.png b/docs/tutorials/kubernetes-basics/public/images/logo_mobile.png similarity index 100% rename from docs/tutorials/getting-started/public/images/logo_mobile.png rename to docs/tutorials/kubernetes-basics/public/images/logo_mobile.png diff --git a/docs/tutorials/getting-started/public/images/module_01.svg b/docs/tutorials/kubernetes-basics/public/images/module_01.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/module_01.svg rename to docs/tutorials/kubernetes-basics/public/images/module_01.svg diff --git a/docs/tutorials/getting-started/public/images/module_01_cluster.svg b/docs/tutorials/kubernetes-basics/public/images/module_01_cluster.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/module_01_cluster.svg rename to docs/tutorials/kubernetes-basics/public/images/module_01_cluster.svg diff --git a/docs/tutorials/getting-started/public/images/module_02.svg b/docs/tutorials/kubernetes-basics/public/images/module_02.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/module_02.svg rename to docs/tutorials/kubernetes-basics/public/images/module_02.svg diff --git a/docs/tutorials/getting-started/public/images/module_02_first_app.svg b/docs/tutorials/kubernetes-basics/public/images/module_02_first_app.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/module_02_first_app.svg rename to docs/tutorials/kubernetes-basics/public/images/module_02_first_app.svg diff --git a/docs/tutorials/getting-started/public/images/module_03.svg b/docs/tutorials/kubernetes-basics/public/images/module_03.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/module_03.svg rename to docs/tutorials/kubernetes-basics/public/images/module_03.svg diff --git a/docs/tutorials/getting-started/public/images/module_03_nodes.svg b/docs/tutorials/kubernetes-basics/public/images/module_03_nodes.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/module_03_nodes.svg rename to docs/tutorials/kubernetes-basics/public/images/module_03_nodes.svg diff --git a/docs/tutorials/getting-started/public/images/module_03_pods.svg b/docs/tutorials/kubernetes-basics/public/images/module_03_pods.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/module_03_pods.svg 
rename to docs/tutorials/kubernetes-basics/public/images/module_03_pods.svg diff --git a/docs/tutorials/getting-started/public/images/module_04.svg b/docs/tutorials/kubernetes-basics/public/images/module_04.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/module_04.svg rename to docs/tutorials/kubernetes-basics/public/images/module_04.svg diff --git a/docs/tutorials/getting-started/public/images/module_04_labels.svg b/docs/tutorials/kubernetes-basics/public/images/module_04_labels.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/module_04_labels.svg rename to docs/tutorials/kubernetes-basics/public/images/module_04_labels.svg diff --git a/docs/tutorials/getting-started/public/images/module_04_services.svg b/docs/tutorials/kubernetes-basics/public/images/module_04_services.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/module_04_services.svg rename to docs/tutorials/kubernetes-basics/public/images/module_04_services.svg diff --git a/docs/tutorials/getting-started/public/images/module_05.svg b/docs/tutorials/kubernetes-basics/public/images/module_05.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/module_05.svg rename to docs/tutorials/kubernetes-basics/public/images/module_05.svg diff --git a/docs/tutorials/getting-started/public/images/module_05_scaling1.svg b/docs/tutorials/kubernetes-basics/public/images/module_05_scaling1.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/module_05_scaling1.svg rename to docs/tutorials/kubernetes-basics/public/images/module_05_scaling1.svg diff --git a/docs/tutorials/getting-started/public/images/module_05_scaling2.svg b/docs/tutorials/kubernetes-basics/public/images/module_05_scaling2.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/module_05_scaling2.svg rename to docs/tutorials/kubernetes-basics/public/images/module_05_scaling2.svg diff --git a/docs/tutorials/getting-started/public/images/module_06.svg b/docs/tutorials/kubernetes-basics/public/images/module_06.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/module_06.svg rename to docs/tutorials/kubernetes-basics/public/images/module_06.svg diff --git a/docs/tutorials/getting-started/public/images/module_06_rollingupdates1.svg b/docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates1.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/module_06_rollingupdates1.svg rename to docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates1.svg diff --git a/docs/tutorials/getting-started/public/images/module_06_rollingupdates2.svg b/docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates2.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/module_06_rollingupdates2.svg rename to docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates2.svg diff --git a/docs/tutorials/getting-started/public/images/module_06_rollingupdates3.svg b/docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates3.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/module_06_rollingupdates3.svg rename to docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates3.svg diff --git a/docs/tutorials/getting-started/public/images/module_06_rollingupdates4.svg 
b/docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates4.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/module_06_rollingupdates4.svg rename to docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates4.svg diff --git a/docs/tutorials/getting-started/public/images/nav_point.png b/docs/tutorials/kubernetes-basics/public/images/nav_point.png similarity index 100% rename from docs/tutorials/getting-started/public/images/nav_point.png rename to docs/tutorials/kubernetes-basics/public/images/nav_point.png diff --git a/docs/tutorials/getting-started/public/images/nav_point.svg b/docs/tutorials/kubernetes-basics/public/images/nav_point.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/nav_point.svg rename to docs/tutorials/kubernetes-basics/public/images/nav_point.svg diff --git a/docs/tutorials/getting-started/public/images/nav_point_active.png b/docs/tutorials/kubernetes-basics/public/images/nav_point_active.png similarity index 100% rename from docs/tutorials/getting-started/public/images/nav_point_active.png rename to docs/tutorials/kubernetes-basics/public/images/nav_point_active.png diff --git a/docs/tutorials/getting-started/public/images/nav_point_active.svg b/docs/tutorials/kubernetes-basics/public/images/nav_point_active.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/nav_point_active.svg rename to docs/tutorials/kubernetes-basics/public/images/nav_point_active.svg diff --git a/docs/tutorials/getting-started/public/images/nav_point_sub.svg b/docs/tutorials/kubernetes-basics/public/images/nav_point_sub.svg similarity index 100% rename from docs/tutorials/getting-started/public/images/nav_point_sub.svg rename to docs/tutorials/kubernetes-basics/public/images/nav_point_sub.svg diff --git a/docs/tutorials/getting-started/public/images/quiz_false.png b/docs/tutorials/kubernetes-basics/public/images/quiz_false.png similarity index 100% rename from docs/tutorials/getting-started/public/images/quiz_false.png rename to docs/tutorials/kubernetes-basics/public/images/quiz_false.png diff --git a/docs/tutorials/getting-started/public/images/quiz_true.png b/docs/tutorials/kubernetes-basics/public/images/quiz_true.png similarity index 100% rename from docs/tutorials/getting-started/public/images/quiz_true.png rename to docs/tutorials/kubernetes-basics/public/images/quiz_true.png diff --git a/docs/tutorials/getting-started/public/images/twitter.png b/docs/tutorials/kubernetes-basics/public/images/twitter.png similarity index 100% rename from docs/tutorials/getting-started/public/images/twitter.png rename to docs/tutorials/kubernetes-basics/public/images/twitter.png diff --git a/docs/tutorials/getting-started/scale-interactive.html b/docs/tutorials/kubernetes-basics/scale-interactive.html similarity index 62% rename from docs/tutorials/getting-started/scale-interactive.html rename to docs/tutorials/kubernetes-basics/scale-interactive.html index 9ab0d10557..d41b6cb36b 100644 --- a/docs/tutorials/getting-started/scale-interactive.html +++ b/docs/tutorials/kubernetes-basics/scale-interactive.html @@ -7,19 +7,13 @@ - +
      -
      -
      - Back -
      -
      -
      To interact with the Terminal, please use the desktop/tablet version @@ -29,7 +23,7 @@
      diff --git a/docs/tutorials/getting-started/scale-intro.html b/docs/tutorials/kubernetes-basics/scale-intro.html similarity index 83% rename from docs/tutorials/getting-started/scale-intro.html rename to docs/tutorials/kubernetes-basics/scale-intro.html index b06e574c7e..b4b5d47c91 100644 --- a/docs/tutorials/getting-started/scale-intro.html +++ b/docs/tutorials/kubernetes-basics/scale-intro.html @@ -7,23 +7,24 @@ - +
      -
      - Back + +
      +

      Objectives

      +
        +
      • Scale an app using kubectl.
      • +
      -
      -
      -
      - -
      +

      Scaling an application

      +

      In the previous modules we created a Deployment, and then exposed it publicly via a Service. The Deployment created only one Pod for running our application. When traffic increases, we will need to scale the application to keep up with user demand.

      Scaling is accomplished by changing the number of replicas in a Deployment
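A brief, non-authoritative sketch of scaling from the command line (the deployment name and replica count are placeholders):

```shell
# Change the Deployment's replica count from one to four.
kubectl scale deployment <deployment-name> --replicas=4

# Confirm the new desired and current replica counts, and the extra Pods.
kubectl get deployments
kubectl get pods
```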

      @@ -59,11 +60,11 @@ @@ -106,7 +107,7 @@ diff --git a/docs/tutorials/getting-started/update-interactive.html b/docs/tutorials/kubernetes-basics/update-interactive.html similarity index 68% rename from docs/tutorials/getting-started/update-interactive.html rename to docs/tutorials/kubernetes-basics/update-interactive.html index 5b399f06dc..a1399aeadb 100644 --- a/docs/tutorials/getting-started/update-interactive.html +++ b/docs/tutorials/kubernetes-basics/update-interactive.html @@ -7,19 +7,13 @@ - +
      -
      -
      - Back -
      -
      -
      To interact with the Terminal, please use the desktop/tablet version diff --git a/docs/tutorials/getting-started/update-intro.html b/docs/tutorials/kubernetes-basics/update-intro.html similarity index 81% rename from docs/tutorials/getting-started/update-intro.html rename to docs/tutorials/kubernetes-basics/update-intro.html index c6f65cfa58..b7867f3b1f 100644 --- a/docs/tutorials/getting-started/update-intro.html +++ b/docs/tutorials/kubernetes-basics/update-intro.html @@ -7,23 +7,24 @@ - +
      -
      - Back + +
      +

      Objectives

      +
        +
      • Perform a rolling update using kubectl.
      • +
      -
      -
      -
      - -
      +

      Updating an application

      +

      Users expect applications to be available all the time, and developers are expected to deploy new versions of them several times a day. In Kubernetes this is done with rolling updates. Rolling updates allow Deployment updates to take place with zero downtime by incrementally replacing Pod instances with new ones. The new Pods will be scheduled on Nodes with available resources.

      In the previous module we scaled our application to run multiple instances. This is a requirement for performing updates without affecting application availability. By default, the maximum number of Pods that can be unavailable during the update and the maximum number of new Pods that can be created is one. Both options can be configured as either numbers or percentages (of Pods). @@ -61,19 +62,19 @@
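A hedged sketch of driving a rolling update with kubectl (deployment, container and image names are placeholders):

```shell
# Update the container image; Pods are replaced incrementally, one at a time by default.
kubectl set image deployment/<deployment-name> <container-name>=<image>:<new-tag>

# Follow the rollout and confirm it finished successfully.
kubectl rollout status deployment/<deployment-name>
```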

      @@ -121,7 +122,7 @@ diff --git a/docs/tutorials/stateless-application/deployment-scale.yaml b/docs/tutorials/stateless-application/deployment-scale.yaml new file mode 100644 index 0000000000..2968b88360 --- /dev/null +++ b/docs/tutorials/stateless-application/deployment-scale.yaml @@ -0,0 +1,16 @@ +apiVersion: extensions/v1beta1 +kind: Deployment +metadata: + name: nginx-deployment +spec: + replicas: 4 + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.8 # Update the version of nginx from 1.7.9 to 1.8 + ports: + - containerPort: 80 diff --git a/docs/tutorials/stateless-application/expose-external-ip-address.md b/docs/tutorials/stateless-application/expose-external-ip-address.md new file mode 100644 index 0000000000..63aabb813d --- /dev/null +++ b/docs/tutorials/stateless-application/expose-external-ip-address.md @@ -0,0 +1,153 @@ +--- +--- + +{% capture overview %} + +This page shows how to create a Kubernetes Service object that exposees an +external IP address. + +{% endcapture %} + + +{% capture prerequisites %} + +* Install [kubectl](http://kubernetes.io/docs/user-guide/prereqs). + +* Use a cloud provider like Google Container Engine or Amazon Web Services to + create a Kubernetes cluster. This tutorial creates an + [external load balancer](/docs/user-guide/load-balancer/), + which requires a cloud provider. + +* Configure `kubectl` to communicate with your Kubernetes API server. For + instructions, see the documentation for your cloud provider. + +{% endcapture %} + + +{% capture objectives %} + +* Run five instances of a Hello World application. +* Create a Service object that exposes an external IP address. +* Use the Service object to access the running application. + +{% endcapture %} + + +{% capture lessoncontent %} + +### Creating a service for an application running in five pods + +1. Run a Hello World application in your cluster: + + kubectl run hello-world --replicas=5 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080 + + The preceding command creates a + [Deployment](/docs/user-guide/deployments/) + object and an associated + [ReplicaSet](/docs/user-guide/replicasets/) + object. The ReplicaSet has five + [Pods](/docs/user-guide/pods/), + each of which runs the Hello World application. + +1. Display information about the Deployment: + + kubectl get deployments hello-world + kubectl describe deployments hello-world + +1. Display information about your ReplicaSet objects: + + kubectl get replicasets + kubectl describe replicasets + +1. Create a Service object that exposes the deployment: + + kubectl expose deployment hello-world --type=LoadBalancer --name=my-service + +1. Display information about the Service: + + kubectl get services my-service + + The output is similar to this: + + NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE + my-service 10.3.245.137 104.198.205.71 8080/TCP 54s + + Note: If the external IP address is shown as , wait for a minute + and enter the same command again. + +1. Display detailed information about the Service: + + kubectl describe services my-service + + The output is similar to this: + + Name: my-service + Namespace: default + Labels: run=load-balancer-example + Selector: run=load-balancer-example + Type: LoadBalancer + IP: 10.3.245.137 + LoadBalancer Ingress: 104.198.205.71 + Port: 8080/TCP + NodePort: 32377/TCP + Endpoints: 10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more... 
+ Session Affinity: None + Events: + + Make a note of the external IP address exposed by your service. In this + example, the external IP address is 104.198.205.71. Also note + the value of Port. In this example, the port is 8080. + +1. In the preceding output, you can see that the service has several endpoints: + 10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more. These are internal + addresses of the pods that are running the Hello World application. To + verify these are pod addresses, enter this command: + + kubectl get pods --output=wide + + The output is similar to this: + + NAME ... IP NODE + hello-world-2895499144-1jaz9 ... 10.0.1.6 gke-cluster-1-default-pool-e0b8d269-1afc + hello-world-2895499144-2e5uh ... 0.0.1.8 gke-cluster-1-default-pool-e0b8d269-1afc + hello-world-2895499144-9m4h1 ... 10.0.0.6 gke-cluster-1-default-pool-e0b8d269-5v7a + hello-world-2895499144-o4z13 ... 10.0.1.7 gke-cluster-1-default-pool-e0b8d269-1afc + hello-world-2895499144-segjf ... 10.0.2.5 gke-cluster-1-default-pool-e0b8d269-cpuc + +1. Use the external IP address to access the Hello World application: + + curl http://: + + where `` us the external IP address of your Service, + and `` is the value of `Port` in your Service description. + + The response to a successful request is a hello message: + + Hello Kubernetes! + +{% endcapture %} + + +{% capture cleanup %} + +To delete the Service, enter this command: + + kubectl delete services my-service + +To delete the Deployment, the ReplicaSet, and the Pods that are running +the Hello World application, enter this command: + + kubectl delete deployment hello-world + +{% endcapture %} + + +{% capture whatsnext %} + +Learn more about +[connecting applications with services](/docs/user-guide/connecting-applications/). +{% endcapture %} + +{% include templates/tutorial.md %} + + diff --git a/docs/tutorials/stateless-application/run-stateless-application-deployment.md b/docs/tutorials/stateless-application/run-stateless-application-deployment.md index 70aeb925c2..20a7aff243 100644 --- a/docs/tutorials/stateless-application/run-stateless-application-deployment.md +++ b/docs/tutorials/stateless-application/run-stateless-application-deployment.md @@ -94,6 +94,30 @@ specifies that the deployment should be updated to use nginx 1.8. kubectl get pods -l app=nginx +### Scaling the application by increasing the replica count + +You can increase the number of pods in your Deployment by applying a new YAML +file. This YAML file sets `replicas` to 4, which specifies that the Deployment +should have four pods: + +{% include code.html language="yaml" file="deployment-scale.yaml" ghlink="/docs/tutorials/stateless-application/deployment-scale.yaml" %} + +1. Apply the new YAML file: + + kubectl apply -f $REPO/docs/tutorials/stateless-application/deployment-scale.yaml + +1. 
Verify that the Deployment has four pods: + + kubectl get pods + + The output is similar to this: + + NAME READY STATUS RESTARTS AGE + nginx-deployment-148880595-4zdqq 1/1 Running 0 25s + nginx-deployment-148880595-6zgi1 1/1 Running 0 25s + nginx-deployment-148880595-fxcez 1/1 Running 0 2m + nginx-deployment-148880595-rwovn 1/1 Running 0 2m + ### Deleting a deployment Delete the deployment by name: diff --git a/docs/user-guide/accessing-the-cluster.md b/docs/user-guide/accessing-the-cluster.md index c42adb4c6e..6f78ab5293 100644 --- a/docs/user-guide/accessing-the-cluster.md +++ b/docs/user-guide/accessing-the-cluster.md @@ -27,7 +27,7 @@ $ kubectl config view ``` Many of the [examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) provide an introduction to using -kubectl and complete documentation is found in the [kubectl manual](/docs/user-guide/kubectl/kubectl). +kubectl and complete documentation is found in the [kubectl manual](/docs/user-guide/kubectl/index). ### Directly accessing the REST API diff --git a/docs/user-guide/connecting-applications.md b/docs/user-guide/connecting-applications.md index 4f0b804a4e..f75187d6c6 100644 --- a/docs/user-guide/connecting-applications.md +++ b/docs/user-guide/connecting-applications.md @@ -9,7 +9,7 @@ assignees: * TOC {:toc} -# The Kubernetes model for connecting containers +## The Kubernetes model for connecting containers Now that you have a continuously running, replicated application you can expose it on a network. Before discussing the Kubernetes approach to networking, it is worthwhile to contrast it with the "normal" way networking works with Docker. diff --git a/docs/user-guide/connecting-to-applications-port-forward.md b/docs/user-guide/connecting-to-applications-port-forward.md index 742730229f..5876d2ab48 100644 --- a/docs/user-guide/connecting-to-applications-port-forward.md +++ b/docs/user-guide/connecting-to-applications-port-forward.md @@ -1,50 +1,50 @@ ---- -assignees: -- caesarxuchao -- mikedanese - ---- - -kubectl port-forward forwards connections to a local port to a port on a pod. Its man page is available [here](/docs/user-guide/kubectl/kubectl_port-forward). Compared to [kubectl proxy](/docs/user-guide/accessing-the-cluster/#using-kubectl-proxy), `kubectl port-forward` is more generic as it can forward TCP traffic while `kubectl proxy` can only forward HTTP traffic. This guide demonstrates how to use `kubectl port-forward` to connect to a Redis database, which may be useful for database debugging. 
- -## Creating a Redis master - -```shell -$ kubectl create -f examples/redis/redis-master.yaml -pods/redis-master -``` - -wait until the Redis master pod is Running and Ready, - -```shell -$ kubectl get pods -NAME READY STATUS RESTARTS AGE -redis-master 2/2 Running 0 41s -``` - -## Connecting to the Redis master[a] - -The Redis master is listening on port 6379, to verify this, - -```shell{% raw %} -$ kubectl get pods redis-master --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}' -6379{% endraw %} -``` - -then we forward the port 6379 on the local workstation to the port 6379 of pod redis-master, - -```shell -$ kubectl port-forward redis-master 6379:6379 -I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:6379 -> 6379 -I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:6379 -> 6379 -``` - -To verify the connection is successful, we run a redis-cli on the local workstation, - -```shell -$ redis-cli -127.0.0.1:6379> ping -PONG -``` - -Now one can debug the database from the local workstation. +--- +assignees: +- caesarxuchao +- mikedanese + +--- + +kubectl port-forward forwards connections to a local port to a port on a pod. Its man page is available [here](/docs/user-guide/kubectl/kubectl_port-forward). Compared to [kubectl proxy](/docs/user-guide/accessing-the-cluster/#using-kubectl-proxy), `kubectl port-forward` is more generic as it can forward TCP traffic while `kubectl proxy` can only forward HTTP traffic. This guide demonstrates how to use `kubectl port-forward` to connect to a Redis database, which may be useful for database debugging. + +## Creating a Redis master + +```shell +$ kubectl create -f examples/redis/redis-master.yaml +pods/redis-master +``` + +wait until the Redis master pod is Running and Ready, + +```shell +$ kubectl get pods +NAME READY STATUS RESTARTS AGE +redis-master 2/2 Running 0 41s +``` + +## Connecting to the Redis master[a] + +The Redis master is listening on port 6379, to verify this, + +```shell{% raw %} +$ kubectl get pods redis-master --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}' +6379{% endraw %} +``` + +then we forward the port 6379 on the local workstation to the port 6379 of pod redis-master, + +```shell +$ kubectl port-forward redis-master 6379:6379 +I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:6379 -> 6379 +I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:6379 -> 6379 +``` + +To verify the connection is successful, we run a redis-cli on the local workstation, + +```shell +$ redis-cli +127.0.0.1:6379> ping +PONG +``` + +Now one can debug the database from the local workstation. diff --git a/docs/user-guide/connecting-to-applications-proxy.md b/docs/user-guide/connecting-to-applications-proxy.md index 4e9867a339..5404d2e769 100644 --- a/docs/user-guide/connecting-to-applications-proxy.md +++ b/docs/user-guide/connecting-to-applications-proxy.md @@ -1,32 +1,32 @@ ---- -assignees: -- caesarxuchao -- lavalamp - ---- - -You have seen the [basics](/docs/user-guide/accessing-the-cluster) about `kubectl proxy` and `apiserver proxy`. This guide shows how to use them together to access a service([kube-ui](/docs/user-guide/ui)) running on the Kubernetes cluster from your workstation. - - -## Getting the apiserver proxy URL of kube-ui - -kube-ui is deployed as a cluster add-on. 
To find its apiserver proxy URL, - -```shell -$ kubectl cluster-info | grep "KubeUI" -KubeUI is running at https://173.255.119.104/api/v1/proxy/namespaces/kube-system/services/kube-ui -``` - -if this command does not find the URL, try the steps [here](/docs/user-guide/ui/#accessing-the-ui). - - -## Connecting to the kube-ui service from your local workstation - -The above proxy URL is an access to the kube-ui service provided by the apiserver. To access it, you still need to authenticate to the apiserver. `kubectl proxy` can handle the authentication. - -```shell -$ kubectl proxy --port=8001 -Starting to serve on localhost:8001 -``` - +--- +assignees: +- caesarxuchao +- lavalamp + +--- + +You have seen the [basics](/docs/user-guide/accessing-the-cluster) about `kubectl proxy` and `apiserver proxy`. This guide shows how to use them together to access a service([kube-ui](/docs/user-guide/ui)) running on the Kubernetes cluster from your workstation. + + +## Getting the apiserver proxy URL of kube-ui + +kube-ui is deployed as a cluster add-on. To find its apiserver proxy URL, + +```shell +$ kubectl cluster-info | grep "KubeUI" +KubeUI is running at https://173.255.119.104/api/v1/proxy/namespaces/kube-system/services/kube-ui +``` + +if this command does not find the URL, try the steps [here](/docs/user-guide/ui/#accessing-the-ui). + + +## Connecting to the kube-ui service from your local workstation + +The above proxy URL is an access to the kube-ui service provided by the apiserver. To access it, you still need to authenticate to the apiserver. `kubectl proxy` can handle the authentication. + +```shell +$ kubectl proxy --port=8001 +Starting to serve on localhost:8001 +``` + Now you can access the kube-ui service on your local workstation at [http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui](http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui) \ No newline at end of file diff --git a/docs/user-guide/federation/federated-ingress.md b/docs/user-guide/federation/federated-ingress.md index 42e5ad536d..87965a3fc7 100644 --- a/docs/user-guide/federation/federated-ingress.md +++ b/docs/user-guide/federation/federated-ingress.md @@ -18,7 +18,7 @@ automatically checks the health of the pods comprising the service, and avoids sending requests to unresponsive or slow pods (or entire unresponsive clusters). -Federated Ingress is released as a beta feature, and supports Google Cloud (GKE, +Federated Ingress is released as an alpha feature, and supports Google Cloud Platform (GKE, GCE and hybrid scenarios involving both) in Kubernetes v1.4. Work is under way to support other cloud providers such as AWS, and other hybrid cloud scenarios (e.g. services spanning private on-premise as well as public cloud Kubernetes diff --git a/docs/user-guide/federation/replicasets.md b/docs/user-guide/federation/replicasets.md index 805da57782..d0ceaa8bde 100644 --- a/docs/user-guide/federation/replicasets.md +++ b/docs/user-guide/federation/replicasets.md @@ -35,7 +35,7 @@ The API for Federated Replica Set is 100% compatible with the API for traditional Kubernetes Replica Set. You can create a replica set by sending a request to the federation apiserver. 
-You can do that using [kubectl](/docs/user-guide/kubectl/kubectl/) by running: +You can do that using [kubectl](/docs/user-guide/kubectl/) by running: ``` shell kubectl --context=federation-cluster create -f myrs.yaml diff --git a/docs/user-guide/federation/secrets.md b/docs/user-guide/federation/secrets.md index 7e7a27fc7a..763b53e98e 100644 --- a/docs/user-guide/federation/secrets.md +++ b/docs/user-guide/federation/secrets.md @@ -35,7 +35,7 @@ The API for Federated Secret is 100% compatible with the API for traditional Kubernetes Secret. You can create a secret by sending a request to the federation apiserver. -You can do that using [kubectl](/docs/user-guide/kubectl/kubectl/) by running: +You can do that using [kubectl](/docs/user-guide/kubectl/) by running: ``` shell kubectl --context=federation-cluster create -f mysecret.yaml diff --git a/docs/user-guide/garbage-collection.md b/docs/user-guide/garbage-collection.md index bae6ed442f..2dc8e0c36a 100644 --- a/docs/user-guide/garbage-collection.md +++ b/docs/user-guide/garbage-collection.md @@ -32,5 +32,7 @@ Synchronous garbage collection will be supported in 1.5 (tracking [issue](https: If you specify `deleteOptions.orphanDependents=true`, or leave it blank, then the GC will first reset the `ownerReferences` in the dependents, then delete the owner. Note that the deletion of the owner object is asynchronous, that is, a 200 OK response will be sent by the API server before the owner object gets deleted. ### Other references + [Design Doc](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/garbage-collection.md) + [Known issues](https://github.com/kubernetes/kubernetes/issues/26120) diff --git a/docs/user-guide/getting-into-containers.md b/docs/user-guide/getting-into-containers.md index 25f0f5a3e4..f45da7b0eb 100644 --- a/docs/user-guide/getting-into-containers.md +++ b/docs/user-guide/getting-into-containers.md @@ -1,74 +1,74 @@ ---- -assignees: -- caesarxuchao -- mikedanese - ---- - -Developers can use `kubectl exec` to run commands in a container. This guide demonstrates two use cases. - -## Using kubectl exec to check the environment variables of a container - -Kubernetes exposes [services](/docs/user-guide/services/#environment-variables) through environment variables. It is convenient to check these environment variables using `kubectl exec`. - -We first create a pod and a service, - -```shell -$ kubectl create -f examples/guestbook/redis-master-controller.yaml -$ kubectl create -f examples/guestbook/redis-master-service.yaml -``` -wait until the pod is Running and Ready, - -```shell -$ kubectl get pod -NAME READY REASON RESTARTS AGE -redis-master-ft9ex 1/1 Running 0 12s -``` - -then we can check the environment variables of the pod, - -```shell -$ kubectl exec redis-master-ft9ex env -... -REDIS_MASTER_SERVICE_PORT=6379 -REDIS_MASTER_SERVICE_HOST=10.0.0.219 -... -``` - -We can use these environment variables in applications to find the service. - - -## Using kubectl exec to check the mounted volumes - -It is convenient to use `kubectl exec` to check if the volumes are mounted as expected. 
-We first create a Pod with a volume mounted at /data/redis, - -```shell -kubectl create -f docs/user-guide/walkthrough/pod-redis.yaml -``` - -wait until the pod is Running and Ready, - -```shell -$ kubectl get pods -NAME READY REASON RESTARTS AGE -storage 1/1 Running 0 1m -``` - -we then use `kubectl exec` to verify that the volume is mounted at /data/redis, - -```shell -$ kubectl exec storage ls /data -redis -``` - -## Using kubectl exec to open a bash terminal in a pod - -After all, open a terminal in a pod is the most direct way to introspect the pod. Assuming the pod/storage is still running, run - -```shell -$ kubectl exec -ti storage -- bash -root@storage:/data# -``` - +--- +assignees: +- caesarxuchao +- mikedanese + +--- + +Developers can use `kubectl exec` to run commands in a container. This guide demonstrates two use cases. + +## Using kubectl exec to check the environment variables of a container + +Kubernetes exposes [services](/docs/user-guide/services/#environment-variables) through environment variables. It is convenient to check these environment variables using `kubectl exec`. + +We first create a pod and a service, + +```shell +$ kubectl create -f examples/guestbook/redis-master-controller.yaml +$ kubectl create -f examples/guestbook/redis-master-service.yaml +``` +wait until the pod is Running and Ready, + +```shell +$ kubectl get pod +NAME READY REASON RESTARTS AGE +redis-master-ft9ex 1/1 Running 0 12s +``` + +then we can check the environment variables of the pod, + +```shell +$ kubectl exec redis-master-ft9ex env +... +REDIS_MASTER_SERVICE_PORT=6379 +REDIS_MASTER_SERVICE_HOST=10.0.0.219 +... +``` + +We can use these environment variables in applications to find the service. + + +## Using kubectl exec to check the mounted volumes + +It is convenient to use `kubectl exec` to check if the volumes are mounted as expected. +We first create a Pod with a volume mounted at /data/redis, + +```shell +kubectl create -f docs/user-guide/walkthrough/pod-redis.yaml +``` + +wait until the pod is Running and Ready, + +```shell +$ kubectl get pods +NAME READY REASON RESTARTS AGE +storage 1/1 Running 0 1m +``` + +we then use `kubectl exec` to verify that the volume is mounted at /data/redis, + +```shell +$ kubectl exec storage ls /data +redis +``` + +## Using kubectl exec to open a bash terminal in a pod + +After all, open a terminal in a pod is the most direct way to introspect the pod. Assuming the pod/storage is still running, run + +```shell +$ kubectl exec -ti storage -- bash +root@storage:/data# +``` + This gets you a terminal. 
\ No newline at end of file diff --git a/docs/user-guide/index.md b/docs/user-guide/index.md index 44dd71f41e..70bcb5be6d 100644 --- a/docs/user-guide/index.md +++ b/docs/user-guide/index.md @@ -89,5 +89,5 @@ Pods and containers * [Images and registries](/docs/user-guide/images/) * [Migrating from docker-cli to kubectl](/docs/user-guide/docker-cli-to-kubectl/) * [Configuration Best Practices and Tips](/docs/user-guide/config-best-practices/) - * [Assign pods to selected nodes](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/user-guide/node-selection/) - * [Perform a rolling update on a running group of pods](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/user-guide/update-demo/) + * [Assign pods to selected nodes](/docs/user-guide/node-selection/) + * [Perform a rolling update on a running group of pods](/docs/user-guide/update-demo/) diff --git a/docs/user-guide/kubectl-conventions.md b/docs/user-guide/kubectl-conventions.md index f4398362da..a22973f16f 100644 --- a/docs/user-guide/kubectl-conventions.md +++ b/docs/user-guide/kubectl-conventions.md @@ -8,11 +8,11 @@ assignees: * TOC {:toc} -## Using `kubectl` in Reusable Scripts +## Using `kubectl` in Reusable Scripts If you need stable output in a script, you should: -* Request one of the machine-oriented output forms, such as `-o name`, `-o json`, `-o yaml`, `-o go-template`, or `-o jsonpath` +* Request one of the machine-oriented output forms, such as `-o name`, `-o json`, `-o yaml`, `-o go-template`, or `-o jsonpath` * Specify `--output-version`, since those output forms (other than `-o name`) output the resource using a particular API version * Specify `--generator` to pin to a specific behavior forever, if using generator-based commands (such as `kubectl run` or `kubectl expose`) * Don't rely on context, preferences, or other implicit state @@ -27,8 +27,46 @@ In order for `kubectl run` to satisfy infrastructure as code: * If the image is lightly parameterized, capture the parameters in a checked-in script, or at least use `--record`, to annotate the created objects with the command line. * If the image is heavily parameterized, definitely check in the script. * If features are needed that are not expressible via `kubectl run` flags, switch to configuration files checked into source control. -* Pin to a specific generator version, such as `kubectl run --generator=deployment/v1beta1` +* Pin to a specific [generator](#generators) version, such as `kubectl run --generator=deployment/v1beta1` + +#### Generators + +`kubectl run` allows you to generate the following resources (using `--generator` flag): + +* Pod - use `run-pod/v1`. +* Replication controller - use `run/v1`. +* Deployment - use `deployment/v1beta1`. +* Job (using `extension/v1beta1` endpoint) - use `job/v1beta1`. +* Job - use `job/v1`. +* ScheduledJob - use `scheduledjob/v2alpha1`. + +Additionally, if you didn't specify a generator flag, other flags will suggest using +a specific generator. 
Below table shows which flags force using specific generators, +depending on your cluster version: + +| Generated Resource | Cluster v1.4 | Cluster v1.3 | Cluster v1.2 | Cluster v1.1 and eariler | +|:----------------------:|-----------------------|-----------------------|--------------------------------------------|--------------------------------------------| +| Pod | `--restart=Never` | `--restart=Never` | `--generator=run-pod/v1` | `--restart=OnFailure` OR `--restart=Never` | +| Replication Controller | `--generator=run/v1` | `--generator=run/v1` | `--generator=run/v1` | `--restart=Always` | +| Deployment | `--restart=Always` | `--restart=Always` | `--restart=Always` | N/A | +| Job | `--restart=OnFailure` | `--restart=OnFailure` | `--restart=OnFailure` OR `--restart=Never` | N/A | +| Scheduled Job | `--schedule=` | N/A | N/A | N/A | + +Note that these flags will use a default generator only when you have not specified +any flag. This also means that combining `--generator` with other flags won't +change the generator you specified. For example, in a 1.4 cluster, if you specify +`--restart=Always`, a Deployment will be created; if you specify `--restart=Always` +and `--generator=run/v1`, a Replication Controller will be created instead. +This becomes handy if you want to pin to a specific behavior with the generator, +even when the defaulted generator is changed in the future. + +Finally, the order in which flags set the generator is: schedule flag has the highest +priority, then restart policy and finally the generator itself. + +If in doubt about the final resource being created, you can always use `--dry-run` +flag, which will provide the object to be submitted to the cluster. + ### `kubectl apply` -* To use `kubectl apply` to update resources, always create resources initially with `kubectl apply` or with `--save-config`. See [managing resources with kubectl apply](/docs/user-guide/managing-deployments/#kubectl-apply) for the reason behind it. +* To use `kubectl apply` to update resources, always create resources initially with `kubectl apply` or with `--save-config`. See [managing resources with kubectl apply](/docs/user-guide/managing-deployments/#kubectl-apply) for the reason behind it. diff --git a/docs/user-guide/kubectl-overview.md b/docs/user-guide/kubectl-overview.md index b0a6c5cc5a..e805d48b35 100644 --- a/docs/user-guide/kubectl-overview.md +++ b/docs/user-guide/kubectl-overview.md @@ -5,7 +5,7 @@ assignees: --- -Use this overview of the `kubectl` command line interface to help you start running commands against Kubernetes clusters. This overview quickly covers `kubectl` syntax, describes the command operations, and provides common examples. For details about each command, including all the supported flags and subcommands, see the [kubectl](/docs/user-guide/kubectl/kubectl) reference documentation. +Use this overview of the `kubectl` command line interface to help you start running commands against Kubernetes clusters. This overview quickly covers `kubectl` syntax, describes the command operations, and provides common examples. For details about each command, including all the supported flags and subcommands, see the [kubectl](/docs/user-guide/kubectl) reference documentation. TODO: Auto-generate this file to ensure it's always in sync with any `kubectl` changes, see [#14177](http://pr.k8s.io/14177). @@ -55,14 +55,15 @@ Operation | Syntax | Description `api-versions` | `kubectl api-versions [flags]` | List the API versions that are available. 
`apply` | `kubectl apply -f FILENAME [flags]`| Apply a configuration change to a resource from a file or stdin. `attach` | `kubectl attach POD -c CONTAINER [-i] [-t] [flags]` | Attach to a running container either to view the output stream or interact with the container (stdin). -`autoscale` | `autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [flags]` | Automatically scale the set of pods that are managed by a replication controller. +`autoscale` | `kubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [flags]` | Automatically scale the set of pods that are managed by a replication controller. `cluster-info` | `kubectl cluster-info [flags]` | Display endpoint information about the master and services in the cluster. `config` | `kubectl config SUBCOMMAND [flags]` | Modifies kubeconfig files. See the individual subcommands for details. `create` | `kubectl create -f FILENAME [flags]` | Create one or more resources from a file or stdin. `delete` | `kubectl delete (-f FILENAME | TYPE [NAME | /NAME | -l label | --all]) [flags]` | Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources. `describe` | `kubectl describe (-f FILENAME | TYPE [NAME_PREFIX | /NAME | -l label]) [flags]` | Display the detailed state of one or more resources. `edit` | `kubectl edit (-f FILENAME | TYPE NAME | TYPE/NAME) [flags]` | Edit and update the definition of one or more resources on the server by using the default editor. -`exec` | `kubectl exec POD [-c CONTAINER] [-i] [-t] [flags] [-- COMMAND [args...]]` | Execute a command against a container in a pod. +`exec` | `kubectl exec POD [-c CONTAINER] [-i] [-t] [flags] [-- COMMAND [args...]]` | Execute a command against a container in a pod, +`explain` | `kubectl explain [--include-extended-apis=true] [--recursive=false] [flags]` | Get documentation of various resources. For instance pods, nodes, services, etc. `expose` | `kubectl expose (-f FILENAME | TYPE NAME | TYPE/NAME) [--port=port] [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [----external-ip=external-ip-of-service] [--type=type] [flags]` | Expose a replication controller, service, or pod as a new Kubernetes service. `get` | `kubectl get (-f FILENAME | TYPE [NAME | /NAME | -l label]) [--watch] [--sort-by=FIELD] [[-o | --output]=OUTPUT_FORMAT] [flags]` | List one or more resources. `label` | `kubectl label (-f FILENAME | TYPE NAME | TYPE/NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags]` | Add or update the labels of one or more resources. @@ -77,7 +78,7 @@ Operation | Syntax | Description `stop` | `kubectl stop` | Deprecated: Instead, see `kubectl delete`. `version` | `kubectl version [--client] [flags]` | Display the Kubernetes version running on the client and server. -Remember: For more about command operations, see the [kubectl](/docs/user-guide/kubectl/kubectl) reference documentation. +Remember: For more about command operations, see the [kubectl](/docs/user-guide/kubectl) reference documentation. 
## Resource types @@ -85,29 +86,37 @@ The following table includes a list of all the supported resource types and thei Resource type | Abbreviated alias -------------------- | -------------------- -`componentstatuses` | `cs` -`daemonsets` | `ds` -`deployments` | -`events` | `ev` -`endpoints` | `ep` -`horizontalpodautoscalers` | `hpa` -`ingresses` | `ing` +`clusters` | +`componentstatuses` |`cs` +`configmaps` |`cm` +`daemonsets` |`ds` +`deployments` |`deploy` +`endpoints` |`ep` +`events` |`ev` +`horizontalpodautoscalers` |`hpa` +`ingresses` |`ing` `jobs` | -`limitranges` | `limits` -`nodes` | `no` -`namespaces` | `ns` -`pods` | `po` -`persistentvolumes` | `pv` -`persistentvolumeclaims` | `pvc` -`resourcequotas` | `quota` -`replicationcontrollers` | `rc` +`limitranges` |`limits` +`namespaces` |`ns` +`networkpolicies` | +`nodes` |`no` +`persistentvolumeclaims` |`pvc` +`persistentvolumes` |`pv` +`pods` |`po` +`podsecuritypolicies` |`psp` +`podtemplates` | +`replicasets` |`rs` +`replicationcontrollers` |`rc` +`resourcequotas` |`quota` `secrets` | -`serviceaccounts` | -`services` | `svc` +`serviceaccounts` |`sa` +`services` |`svc` +`storageclasses` | +`thirdpartyresources` | ## Output options -Use the following sections for information about how you can format or sort the output of certain commands. For details about which commands support the various output options, see the [kubectl](/docs/user-guide/kubectl/kubectl) reference documentation. +Use the following sections for information about how you can format or sort the output of certain commands. For details about which commands support the various output options, see the [kubectl](/docs/user-guide/kubectl) reference documentation. ### Formatting output @@ -138,7 +147,7 @@ In this example, the following command outputs the details for a single pod as a `$ kubectl get pod web-pod-13je7 -o=yaml` -Remember: See the [kubectl](/docs/user-guide/kubectl/kubectl) reference documentation for details about which output format is supported by each command. +Remember: See the [kubectl](/docs/user-guide/kubectl) reference documentation for details about which output format is supported by each command. #### Custom columns @@ -273,4 +282,4 @@ $ kubectl logs -f ## Next steps -Start using the [kubectl](/docs/user-guide/kubectl/kubectl) commands. +Start using the [kubectl](/docs/user-guide/kubectl) commands. 
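To make the resource-type aliases and output options described in this overview concrete, here is a minimal sketch; the custom-column expressions are illustrative and the results depend on what is running in your cluster:

```shell
# "po" is the abbreviated alias for "pods"; -o wide adds extra columns such as the node name.
$ kubectl get po -o wide

# Sort pods by name and print only selected fields using custom columns.
$ kubectl get pods --sort-by=.metadata.name -o custom-columns=NAME:.metadata.name,STATUS:.status.phase
```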
diff --git a/docs/user-guide/kubectl/kubectl.md b/docs/user-guide/kubectl/index.md similarity index 100% rename from docs/user-guide/kubectl/kubectl.md rename to docs/user-guide/kubectl/index.md diff --git a/docs/user-guide/kubectl/kubectl_annotate.md b/docs/user-guide/kubectl/kubectl_annotate.md index 2111379a6f..0ac1443b61 100644 --- a/docs/user-guide/kubectl/kubectl_annotate.md +++ b/docs/user-guide/kubectl/kubectl_annotate.md @@ -98,9 +98,7 @@ kubectl annotate pods foo description- --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_api-versions.md b/docs/user-guide/kubectl/kubectl_api-versions.md index c5c9e1fe59..ddde2267d0 100644 --- a/docs/user-guide/kubectl/kubectl_api-versions.md +++ b/docs/user-guide/kubectl/kubectl_api-versions.md @@ -41,9 +41,7 @@ kubectl api-versions --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_apply.md b/docs/user-guide/kubectl/kubectl_apply.md index 899383c0d5..eac7846e6e 100644 --- a/docs/user-guide/kubectl/kubectl_apply.md +++ b/docs/user-guide/kubectl/kubectl_apply.md @@ -70,9 +70,7 @@ cat pod.json | kubectl apply -f - --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_attach.md b/docs/user-guide/kubectl/kubectl_attach.md index 860f7ece8d..1d1cbcb7f4 100644 --- a/docs/user-guide/kubectl/kubectl_attach.md +++ b/docs/user-guide/kubectl/kubectl_attach.md @@ -64,9 +64,7 @@ kubectl attach 123456-7890 -c ruby-container -i -t --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_autoscale.md b/docs/user-guide/kubectl/kubectl_autoscale.md index 120356c5e4..05d5bf143d 100644 --- a/docs/user-guide/kubectl/kubectl_autoscale.md +++ b/docs/user-guide/kubectl/kubectl_autoscale.md @@ -78,9 +78,7 @@ kubectl autoscale rc foo --max=5 --cpu-percent=80 --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_cluster-info.md b/docs/user-guide/kubectl/kubectl_cluster-info.md index 2789f751d4..b3a1480dab 100644 --- a/docs/user-guide/kubectl/kubectl_cluster-info.md +++ b/docs/user-guide/kubectl/kubectl_cluster-info.md @@ -48,10 +48,7 @@ kubectl cluster-info --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager -* [kubectl cluster-info dump](kubectl_cluster-info_dump.md) - Dump lots of relevant info for debugging and diagnosis ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_cluster-info_dump.md 
b/docs/user-guide/kubectl/kubectl_cluster-info_dump.md index 0116d85bab..c13257c390 100644 --- a/docs/user-guide/kubectl/kubectl_cluster-info_dump.md +++ b/docs/user-guide/kubectl/kubectl_cluster-info_dump.md @@ -73,9 +73,7 @@ kubectl cluster-info dump --namespaces default,kube-system --output-directory=/p --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl cluster-info](kubectl_cluster-info.md) - Display cluster info ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_completion.md b/docs/user-guide/kubectl/kubectl_completion.md index b73b02f368..f69aacb619 100644 --- a/docs/user-guide/kubectl/kubectl_completion.md +++ b/docs/user-guide/kubectl/kubectl_completion.md @@ -66,9 +66,7 @@ $ source <(kubectl completion zsh) --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_config.md b/docs/user-guide/kubectl/kubectl_config.md index 542820f12d..354464e7d4 100644 --- a/docs/user-guide/kubectl/kubectl_config.md +++ b/docs/user-guide/kubectl/kubectl_config.md @@ -52,21 +52,7 @@ kubectl config SUBCOMMAND --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager -* [kubectl config current-context](kubectl_config_current-context.md) - Displays the current-context -* [kubectl config delete-cluster](kubectl_config_delete-cluster.md) - Delete the specified cluster from the kubeconfig -* [kubectl config delete-context](kubectl_config_delete-context.md) - Delete the specified context from the kubeconfig -* [kubectl config get-clusters](kubectl_config_get-clusters.md) - Display clusters defined in the kubeconfig -* [kubectl config get-contexts](kubectl_config_get-contexts.md) - Describe one or many contexts -* [kubectl config set](kubectl_config_set.md) - Sets an individual value in a kubeconfig file -* [kubectl config set-cluster](kubectl_config_set-cluster.md) - Sets a cluster entry in kubeconfig -* [kubectl config set-context](kubectl_config_set-context.md) - Sets a context entry in kubeconfig -* [kubectl config set-credentials](kubectl_config_set-credentials.md) - Sets a user entry in kubeconfig -* [kubectl config unset](kubectl_config_unset.md) - Unsets an individual value in a kubeconfig file -* [kubectl config use-context](kubectl_config_use-context.md) - Sets the current-context in a kubeconfig file -* [kubectl config view](kubectl_config_view.md) - Display merged kubeconfig settings or a specified kubeconfig file ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_config_current-context.md b/docs/user-guide/kubectl/kubectl_config_current-context.md index 0ef7c0023d..73c7cb5366 100644 --- a/docs/user-guide/kubectl/kubectl_config_current-context.md +++ b/docs/user-guide/kubectl/kubectl_config_current-context.md @@ -50,9 +50,7 @@ kubectl config current-context --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl config](kubectl_config.md) - Modify kubeconfig files ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_config_delete-cluster.md b/docs/user-guide/kubectl/kubectl_config_delete-cluster.md index 
0ac6b97716..698c72c1a2 100644 --- a/docs/user-guide/kubectl/kubectl_config_delete-cluster.md +++ b/docs/user-guide/kubectl/kubectl_config_delete-cluster.md @@ -41,9 +41,7 @@ kubectl config delete-cluster NAME --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl config](kubectl_config.md) - Modify kubeconfig files ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_config_delete-context.md b/docs/user-guide/kubectl/kubectl_config_delete-context.md index 1249ff0ba3..e58ac663c3 100644 --- a/docs/user-guide/kubectl/kubectl_config_delete-context.md +++ b/docs/user-guide/kubectl/kubectl_config_delete-context.md @@ -41,9 +41,7 @@ kubectl config delete-context NAME --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl config](kubectl_config.md) - Modify kubeconfig files ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_config_get-clusters.md b/docs/user-guide/kubectl/kubectl_config_get-clusters.md index e9e963bbd2..6e5494aac2 100644 --- a/docs/user-guide/kubectl/kubectl_config_get-clusters.md +++ b/docs/user-guide/kubectl/kubectl_config_get-clusters.md @@ -41,9 +41,7 @@ kubectl config get-clusters --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl config](kubectl_config.md) - Modify kubeconfig files ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_config_get-contexts.md b/docs/user-guide/kubectl/kubectl_config_get-contexts.md index ade72ec623..b1ad750ddc 100644 --- a/docs/user-guide/kubectl/kubectl_config_get-contexts.md +++ b/docs/user-guide/kubectl/kubectl_config_get-contexts.md @@ -58,9 +58,7 @@ kubectl config get-contexts my-context --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl config](kubectl_config.md) - Modify kubeconfig files ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_config_set-cluster.md b/docs/user-guide/kubectl/kubectl_config_set-cluster.md index e880ea0db6..165dae4878 100644 --- a/docs/user-guide/kubectl/kubectl_config_set-cluster.md +++ b/docs/user-guide/kubectl/kubectl_config_set-cluster.md @@ -64,9 +64,7 @@ kubectl config set-cluster e2e --insecure-skip-tls-verify=true --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl config](kubectl_config.md) - Modify kubeconfig files ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_config_set-context.md b/docs/user-guide/kubectl/kubectl_config_set-context.md index b6ba8a3437..325516d461 100644 --- a/docs/user-guide/kubectl/kubectl_config_set-context.md +++ b/docs/user-guide/kubectl/kubectl_config_set-context.md @@ -56,9 +56,7 @@ kubectl config set-context gce --user=cluster-admin --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl config](kubectl_config.md) - Modify kubeconfig files ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_config_set-credentials.md b/docs/user-guide/kubectl/kubectl_config_set-credentials.md index 8a462134e8..6c81890be8 100644 --- a/docs/user-guide/kubectl/kubectl_config_set-credentials.md +++ b/docs/user-guide/kubectl/kubectl_config_set-credentials.md @@ -87,9 +87,7 
@@ kubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provide --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl config](kubectl_config.md) - Modify kubeconfig files ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_config_set.md b/docs/user-guide/kubectl/kubectl_config_set.md index 0ab8e35c1b..ace136ef20 100644 --- a/docs/user-guide/kubectl/kubectl_config_set.md +++ b/docs/user-guide/kubectl/kubectl_config_set.md @@ -50,9 +50,7 @@ kubectl config set PROPERTY_NAME PROPERTY_VALUE --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl config](kubectl_config.md) - Modify kubeconfig files ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_config_unset.md b/docs/user-guide/kubectl/kubectl_config_unset.md index 7004076385..8b2cd98164 100644 --- a/docs/user-guide/kubectl/kubectl_config_unset.md +++ b/docs/user-guide/kubectl/kubectl_config_unset.md @@ -43,9 +43,7 @@ kubectl config unset PROPERTY_NAME --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl config](kubectl_config.md) - Modify kubeconfig files ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_config_use-context.md b/docs/user-guide/kubectl/kubectl_config_use-context.md index b04ea867bb..a551a6640b 100644 --- a/docs/user-guide/kubectl/kubectl_config_use-context.md +++ b/docs/user-guide/kubectl/kubectl_config_use-context.md @@ -41,9 +41,7 @@ kubectl config use-context CONTEXT_NAME --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl config](kubectl_config.md) - Modify kubeconfig files ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_config_view.md b/docs/user-guide/kubectl/kubectl_config_view.md index 0cc9854458..529c4871da 100644 --- a/docs/user-guide/kubectl/kubectl_config_view.md +++ b/docs/user-guide/kubectl/kubectl_config_view.md @@ -71,9 +71,7 @@ kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}' --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl config](kubectl_config.md) - Modify kubeconfig files ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_convert.md b/docs/user-guide/kubectl/kubectl_convert.md index 9809914863..354e681297 100644 --- a/docs/user-guide/kubectl/kubectl_convert.md +++ b/docs/user-guide/kubectl/kubectl_convert.md @@ -85,9 +85,7 @@ kubectl convert -f . 
| kubectl create -f - --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_cordon.md b/docs/user-guide/kubectl/kubectl_cordon.md index 102e8726bf..bc945d5d79 100644 --- a/docs/user-guide/kubectl/kubectl_cordon.md +++ b/docs/user-guide/kubectl/kubectl_cordon.md @@ -52,9 +52,7 @@ kubectl cordon foo --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_create.md b/docs/user-guide/kubectl/kubectl_create.md index 1294ddcaa0..c47c3fc72a 100644 --- a/docs/user-guide/kubectl/kubectl_create.md +++ b/docs/user-guide/kubectl/kubectl_create.md @@ -68,16 +68,7 @@ cat pod.json | kubectl create -f - --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager -* [kubectl create configmap](kubectl_create_configmap.md) - Create a configmap from a local file, directory or literal value -* [kubectl create deployment](kubectl_create_deployment.md) - Create a deployment with the specified name. -* [kubectl create namespace](kubectl_create_namespace.md) - Create a namespace with the specified name -* [kubectl create quota](kubectl_create_quota.md) - Create a quota with the specified name. -* [kubectl create secret](kubectl_create_secret.md) - Create a secret using specified subcommand -* [kubectl create service](kubectl_create_service.md) - Create a service using specified subcommand. 
-* [kubectl create serviceaccount](kubectl_create_serviceaccount.md) - Create a service account with the specified name ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_create_configmap.md b/docs/user-guide/kubectl/kubectl_create_configmap.md index 5641512ef0..190db2f567 100644 --- a/docs/user-guide/kubectl/kubectl_create_configmap.md +++ b/docs/user-guide/kubectl/kubectl_create_configmap.md @@ -85,9 +85,7 @@ kubectl create configmap my-config --from-literal=key1=config1 --from-literal=ke --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl create](kubectl_create.md) - Create a resource by filename or stdin ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_create_deployment.md b/docs/user-guide/kubectl/kubectl_create_deployment.md index bfca422af9..88627ff03b 100644 --- a/docs/user-guide/kubectl/kubectl_create_deployment.md +++ b/docs/user-guide/kubectl/kubectl_create_deployment.md @@ -68,9 +68,7 @@ kubectl create deployment my-dep --image=busybox --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl create](kubectl_create.md) - Create a resource by filename or stdin ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_create_namespace.md b/docs/user-guide/kubectl/kubectl_create_namespace.md index ea8dec22a2..8c4181f50c 100644 --- a/docs/user-guide/kubectl/kubectl_create_namespace.md +++ b/docs/user-guide/kubectl/kubectl_create_namespace.md @@ -67,9 +67,7 @@ kubectl create namespace my-namespace --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl create](kubectl_create.md) - Create a resource by filename or stdin ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_create_quota.md b/docs/user-guide/kubectl/kubectl_create_quota.md index afbd66e572..843ed6b194 100644 --- a/docs/user-guide/kubectl/kubectl_create_quota.md +++ b/docs/user-guide/kubectl/kubectl_create_quota.md @@ -71,9 +71,7 @@ kubectl create quota NAME [--hard=key1=value1,key2=value2] [--scopes=Scope1,Scop --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl create](kubectl_create.md) - Create a resource by filename or stdin ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_create_secret.md b/docs/user-guide/kubectl/kubectl_create_secret.md index 10dcd20d29..ce60df1bb4 100644 --- a/docs/user-guide/kubectl/kubectl_create_secret.md +++ b/docs/user-guide/kubectl/kubectl_create_secret.md @@ -41,12 +41,7 @@ kubectl create secret --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl create](kubectl_create.md) - Create a resource by filename or stdin -* [kubectl create secret docker-registry](kubectl_create_secret_docker-registry.md) - Create a secret for use with a Docker registry -* [kubectl create secret generic](kubectl_create_secret_generic.md) - Create a secret from a local file, directory or literal value -* [kubectl create secret tls](kubectl_create_secret_tls.md) - Create a TLS secret ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_create_secret_docker-registry.md b/docs/user-guide/kubectl/kubectl_create_secret_docker-registry.md index 
c357816dea..3be76dbfa5 100644 --- a/docs/user-guide/kubectl/kubectl_create_secret_docker-registry.md +++ b/docs/user-guide/kubectl/kubectl_create_secret_docker-registry.md @@ -83,9 +83,7 @@ kubectl create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_ --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl create secret](kubectl_create_secret.md) - Create a secret using specified subcommand ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_create_secret_generic.md b/docs/user-guide/kubectl/kubectl_create_secret_generic.md index 7fb7660038..5d9bccfdbb 100644 --- a/docs/user-guide/kubectl/kubectl_create_secret_generic.md +++ b/docs/user-guide/kubectl/kubectl_create_secret_generic.md @@ -86,9 +86,7 @@ kubectl create secret generic my-secret --from-literal=key1=supersecret --from-l --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl create secret](kubectl_create_secret.md) - Create a secret using specified subcommand ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_create_secret_tls.md b/docs/user-guide/kubectl/kubectl_create_secret_tls.md index 4d64f3f38d..9fb1050d58 100644 --- a/docs/user-guide/kubectl/kubectl_create_secret_tls.md +++ b/docs/user-guide/kubectl/kubectl_create_secret_tls.md @@ -71,9 +71,7 @@ kubectl create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.k --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl create secret](kubectl_create_secret.md) - Create a secret using specified subcommand ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_create_service.md b/docs/user-guide/kubectl/kubectl_create_service.md index ab115fba40..deece74de9 100644 --- a/docs/user-guide/kubectl/kubectl_create_service.md +++ b/docs/user-guide/kubectl/kubectl_create_service.md @@ -41,12 +41,7 @@ kubectl create service --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl create](kubectl_create.md) - Create a resource by filename or stdin -* [kubectl create service clusterip](kubectl_create_service_clusterip.md) - Create a clusterIP service. -* [kubectl create service loadbalancer](kubectl_create_service_loadbalancer.md) - Create a LoadBalancer service. -* [kubectl create service nodeport](kubectl_create_service_nodeport.md) - Create a NodePort service. ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_create_service_clusterip.md b/docs/user-guide/kubectl/kubectl_create_service_clusterip.md index 81837bba97..05258fc991 100644 --- a/docs/user-guide/kubectl/kubectl_create_service_clusterip.md +++ b/docs/user-guide/kubectl/kubectl_create_service_clusterip.md @@ -72,9 +72,7 @@ kubectl create service clusterip my-cs --clusterip="None" --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl create service](kubectl_create_service.md) - Create a service using specified subcommand. 
###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_create_service_loadbalancer.md b/docs/user-guide/kubectl/kubectl_create_service_loadbalancer.md index a12a8a68e1..8d3d22098a 100644 --- a/docs/user-guide/kubectl/kubectl_create_service_loadbalancer.md +++ b/docs/user-guide/kubectl/kubectl_create_service_loadbalancer.md @@ -68,9 +68,7 @@ kubectl create service loadbalancer my-lbs --tcp=5678:8080 --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl create service](kubectl_create_service.md) - Create a service using specified subcommand. ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_create_service_nodeport.md b/docs/user-guide/kubectl/kubectl_create_service_nodeport.md index 3cc3927812..e95c16b207 100644 --- a/docs/user-guide/kubectl/kubectl_create_service_nodeport.md +++ b/docs/user-guide/kubectl/kubectl_create_service_nodeport.md @@ -68,9 +68,7 @@ kubectl create service nodeport my-ns --tcp=5678:8080 --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl create service](kubectl_create_service.md) - Create a service using specified subcommand. ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_create_serviceaccount.md b/docs/user-guide/kubectl/kubectl_create_serviceaccount.md index 3005fefe2d..96092c3cb4 100644 --- a/docs/user-guide/kubectl/kubectl_create_serviceaccount.md +++ b/docs/user-guide/kubectl/kubectl_create_serviceaccount.md @@ -68,9 +68,7 @@ $ kubectl create serviceaccount my-service-account --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl create](kubectl_create.md) - Create a resource by filename or stdin ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_delete.md b/docs/user-guide/kubectl/kubectl_delete.md index af5d21dac0..5e8c303714 100644 --- a/docs/user-guide/kubectl/kubectl_delete.md +++ b/docs/user-guide/kubectl/kubectl_delete.md @@ -92,9 +92,7 @@ kubectl delete pods --all --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_describe.md b/docs/user-guide/kubectl/kubectl_describe.md index 550018f9ea..ff3c804c95 100644 --- a/docs/user-guide/kubectl/kubectl_describe.md +++ b/docs/user-guide/kubectl/kubectl_describe.md @@ -111,9 +111,7 @@ kubectl describe pods frontend --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_drain.md b/docs/user-guide/kubectl/kubectl_drain.md index ec6ab838a7..248ad68285 100644 --- a/docs/user-guide/kubectl/kubectl_drain.md +++ b/docs/user-guide/kubectl/kubectl_drain.md @@ -79,9 +79,7 @@ $ kubectl drain foo --grace-period=900 --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_edit.md b/docs/user-guide/kubectl/kubectl_edit.md 
index 3e0b3f53df..61d18ca825 100644 --- a/docs/user-guide/kubectl/kubectl_edit.md +++ b/docs/user-guide/kubectl/kubectl_edit.md @@ -89,9 +89,7 @@ kubectl edit svc/docker-registry --output-version=v1 -o json --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_exec.md b/docs/user-guide/kubectl/kubectl_exec.md index f8140491b8..188ffd9e62 100644 --- a/docs/user-guide/kubectl/kubectl_exec.md +++ b/docs/user-guide/kubectl/kubectl_exec.md @@ -65,9 +65,7 @@ kubectl exec 123456-7890 -c ruby-container -i -t -- bash -il --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_explain.md b/docs/user-guide/kubectl/kubectl_explain.md index f4d65ee7cd..c919415944 100644 --- a/docs/user-guide/kubectl/kubectl_explain.md +++ b/docs/user-guide/kubectl/kubectl_explain.md @@ -87,9 +87,7 @@ kubectl explain pods.spec.containers --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_expose.md b/docs/user-guide/kubectl/kubectl_expose.md index d94504cfa2..d699ea2a63 100644 --- a/docs/user-guide/kubectl/kubectl_expose.md +++ b/docs/user-guide/kubectl/kubectl_expose.md @@ -110,9 +110,7 @@ kubectl expose deployment nginx --port=80 --target-port=8000 --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_get.md b/docs/user-guide/kubectl/kubectl_get.md index beebab3c5d..99f2c25802 100644 --- a/docs/user-guide/kubectl/kubectl_get.md +++ b/docs/user-guide/kubectl/kubectl_get.md @@ -127,9 +127,7 @@ kubectl get rc/web service/frontend pods/web-pod-13je7 --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_label.md b/docs/user-guide/kubectl/kubectl_label.md index 589dce3bac..519111b41a 100644 --- a/docs/user-guide/kubectl/kubectl_label.md +++ b/docs/user-guide/kubectl/kubectl_label.md @@ -91,9 +91,7 @@ kubectl label pods foo bar- --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_logs.md b/docs/user-guide/kubectl/kubectl_logs.md index 8e1acc293d..5436ab7f08 100644 --- a/docs/user-guide/kubectl/kubectl_logs.md +++ b/docs/user-guide/kubectl/kubectl_logs.md @@ -75,9 +75,7 @@ kubectl logs --since=1h nginx --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 
diff --git a/docs/user-guide/kubectl/kubectl_namespace.md b/docs/user-guide/kubectl/kubectl_namespace.md index 608a0581f8..3af680938c 100644 --- a/docs/user-guide/kubectl/kubectl_namespace.md +++ b/docs/user-guide/kubectl/kubectl_namespace.md @@ -41,9 +41,7 @@ kubectl namespace [namespace] --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_options.md b/docs/user-guide/kubectl/kubectl_options.md index 014faf6010..060fc472f8 100644 --- a/docs/user-guide/kubectl/kubectl_options.md +++ b/docs/user-guide/kubectl/kubectl_options.md @@ -41,9 +41,7 @@ kubectl options --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_patch.md b/docs/user-guide/kubectl/kubectl_patch.md index dcccf49be5..c695889060 100644 --- a/docs/user-guide/kubectl/kubectl_patch.md +++ b/docs/user-guide/kubectl/kubectl_patch.md @@ -83,9 +83,7 @@ kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/ --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_port-forward.md b/docs/user-guide/kubectl/kubectl_port-forward.md index c8783efaa0..c23318f172 100644 --- a/docs/user-guide/kubectl/kubectl_port-forward.md +++ b/docs/user-guide/kubectl/kubectl_port-forward.md @@ -64,9 +64,7 @@ kubectl port-forward mypod 0:5000 --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_proxy.md b/docs/user-guide/kubectl/kubectl_proxy.md index 7ebae29472..0e0abf0ee6 100644 --- a/docs/user-guide/kubectl/kubectl_proxy.md +++ b/docs/user-guide/kubectl/kubectl_proxy.md @@ -89,9 +89,7 @@ kubectl proxy --api-prefix=/k8s-api --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_replace.md b/docs/user-guide/kubectl/kubectl_replace.md index 04abefd976..61942a00fd 100644 --- a/docs/user-guide/kubectl/kubectl_replace.md +++ b/docs/user-guide/kubectl/kubectl_replace.md @@ -82,9 +82,7 @@ kubectl replace --force -f ./pod.json --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_rolling-update.md b/docs/user-guide/kubectl/kubectl_rolling-update.md index d6fa42992e..f4c35789a2 100644 --- a/docs/user-guide/kubectl/kubectl_rolling-update.md +++ b/docs/user-guide/kubectl/kubectl_rolling-update.md @@ -93,9 +93,7 @@ kubectl rolling-update frontend-v1 frontend-v2 --rollback --vmodule value comma-separated list of pattern=N settings for file-filtered 
logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_rollout.md b/docs/user-guide/kubectl/kubectl_rollout.md index 1117ae844b..49d13e8f8f 100644 --- a/docs/user-guide/kubectl/kubectl_rollout.md +++ b/docs/user-guide/kubectl/kubectl_rollout.md @@ -50,14 +50,7 @@ kubectl rollout undo deployment/abc --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager -* [kubectl rollout history](kubectl_rollout_history.md) - View rollout history -* [kubectl rollout pause](kubectl_rollout_pause.md) - Mark the provided resource as paused -* [kubectl rollout resume](kubectl_rollout_resume.md) - Resume a paused resource -* [kubectl rollout status](kubectl_rollout_status.md) - Watch rollout status until it's done -* [kubectl rollout undo](kubectl_rollout_undo.md) - Undo a previous rollout ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_rollout_history.md b/docs/user-guide/kubectl/kubectl_rollout_history.md index 2884cafd13..db82c4a24d 100644 --- a/docs/user-guide/kubectl/kubectl_rollout_history.md +++ b/docs/user-guide/kubectl/kubectl_rollout_history.md @@ -61,9 +61,7 @@ kubectl rollout history deployment/abc --revision=3 --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl rollout](kubectl_rollout.md) - Manage a deployment rollout ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_rollout_pause.md b/docs/user-guide/kubectl/kubectl_rollout_pause.md index 3387e0a06b..0636160072 100644 --- a/docs/user-guide/kubectl/kubectl_rollout_pause.md +++ b/docs/user-guide/kubectl/kubectl_rollout_pause.md @@ -63,9 +63,7 @@ kubectl rollout pause deployment/nginx --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl rollout](kubectl_rollout.md) - Manage a deployment rollout ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_rollout_resume.md b/docs/user-guide/kubectl/kubectl_rollout_resume.md index 02ac6476b7..3954ec033c 100644 --- a/docs/user-guide/kubectl/kubectl_rollout_resume.md +++ b/docs/user-guide/kubectl/kubectl_rollout_resume.md @@ -61,9 +61,7 @@ kubectl rollout resume deployment/nginx --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl rollout](kubectl_rollout.md) - Manage a deployment rollout ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_rollout_status.md b/docs/user-guide/kubectl/kubectl_rollout_status.md index 2d63fc960f..5730266e30 100644 --- a/docs/user-guide/kubectl/kubectl_rollout_status.md +++ b/docs/user-guide/kubectl/kubectl_rollout_status.md @@ -57,9 +57,7 @@ kubectl rollout status deployment/nginx --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl rollout](kubectl_rollout.md) - Manage a deployment rollout ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_rollout_undo.md b/docs/user-guide/kubectl/kubectl_rollout_undo.md index 886162f8ab..1f758bd68d 100644 --- a/docs/user-guide/kubectl/kubectl_rollout_undo.md +++ b/docs/user-guide/kubectl/kubectl_rollout_undo.md @@ -61,9 
+61,7 @@ kubectl rollout undo deployment/abc --to-revision=3 --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl rollout](kubectl_rollout.md) - Manage a deployment rollout ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_run.md b/docs/user-guide/kubectl/kubectl_run.md index eea1dcfb80..a62adb32a1 100644 --- a/docs/user-guide/kubectl/kubectl_run.md +++ b/docs/user-guide/kubectl/kubectl_run.md @@ -120,9 +120,7 @@ kubectl run pi --schedule="0/5 * * * ?" --image=perl --restart=OnFailure -- perl --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_scale.md b/docs/user-guide/kubectl/kubectl_scale.md index 5f6be05bef..bf781d2ee6 100644 --- a/docs/user-guide/kubectl/kubectl_scale.md +++ b/docs/user-guide/kubectl/kubectl_scale.md @@ -81,9 +81,7 @@ kubectl scale --replicas=3 job/cron --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_set.md b/docs/user-guide/kubectl/kubectl_set.md index 25091604c3..7968396c09 100644 --- a/docs/user-guide/kubectl/kubectl_set.md +++ b/docs/user-guide/kubectl/kubectl_set.md @@ -44,10 +44,7 @@ kubectl set SUBCOMMAND --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager -* [kubectl set image](kubectl_set_image.md) - Update image of a pod template ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_set_image.md b/docs/user-guide/kubectl/kubectl_set_image.md index 58e50e8ecd..2797c4b83b 100644 --- a/docs/user-guide/kubectl/kubectl_set_image.md +++ b/docs/user-guide/kubectl/kubectl_set_image.md @@ -80,9 +80,7 @@ kubectl set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl set](kubectl_set.md) - Set specific features on objects ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_stop.md b/docs/user-guide/kubectl/kubectl_stop.md index f01281ca18..f46e281c7c 100644 --- a/docs/user-guide/kubectl/kubectl_stop.md +++ b/docs/user-guide/kubectl/kubectl_stop.md @@ -76,9 +76,7 @@ $ kubectl stop -f path/to/resources --vmodule=: comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 24-Nov-2015 diff --git a/docs/user-guide/kubectl/kubectl_taint.md b/docs/user-guide/kubectl/kubectl_taint.md index 075c5ff252..0f0abbf69f 100644 --- a/docs/user-guide/kubectl/kubectl_taint.md +++ b/docs/user-guide/kubectl/kubectl_taint.md @@ -81,9 +81,7 @@ kubectl taint nodes foo dedicated- --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_top.md 
b/docs/user-guide/kubectl/kubectl_top.md index 9864b15425..392240410b 100644 --- a/docs/user-guide/kubectl/kubectl_top.md +++ b/docs/user-guide/kubectl/kubectl_top.md @@ -44,11 +44,7 @@ kubectl top --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager -* [kubectl top node](kubectl_top_node.md) - Display Resource (CPU/Memory/Storage) usage of nodes -* [kubectl top pod](kubectl_top_pod.md) - Display Resource (CPU/Memory/Storage) usage of pods ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_top_node.md b/docs/user-guide/kubectl/kubectl_top_node.md index 2639e4c0a3..724c2ab3f5 100644 --- a/docs/user-guide/kubectl/kubectl_top_node.md +++ b/docs/user-guide/kubectl/kubectl_top_node.md @@ -61,9 +61,7 @@ kubectl top node NODE_NAME --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl top](kubectl_top.md) - Display Resource (CPU/Memory/Storage) usage ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_top_pod.md b/docs/user-guide/kubectl/kubectl_top_pod.md index 9adcad5c33..2c45fbe07c 100644 --- a/docs/user-guide/kubectl/kubectl_top_pod.md +++ b/docs/user-guide/kubectl/kubectl_top_pod.md @@ -72,9 +72,7 @@ kubectl top pod -l name=myLabel --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl top](kubectl_top.md) - Display Resource (CPU/Memory/Storage) usage ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_uncordon.md b/docs/user-guide/kubectl/kubectl_uncordon.md index 107666f946..ef142c72c2 100644 --- a/docs/user-guide/kubectl/kubectl_uncordon.md +++ b/docs/user-guide/kubectl/kubectl_uncordon.md @@ -52,9 +52,7 @@ $ kubectl uncordon foo --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/kubectl/kubectl_version.md b/docs/user-guide/kubectl/kubectl_version.md index 10065476a7..9abc0c2856 100644 --- a/docs/user-guide/kubectl/kubectl_version.md +++ b/docs/user-guide/kubectl/kubectl_version.md @@ -47,9 +47,7 @@ kubectl version --vmodule value comma-separated list of pattern=N settings for file-filtered logging ``` -### SEE ALSO -* [kubectl](../kubectl.md) - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 2-Sep-2016 diff --git a/docs/user-guide/logging.md b/docs/user-guide/logging.md index 54f9e77e61..d329016c7b 100644 --- a/docs/user-guide/logging.md +++ b/docs/user-guide/logging.md @@ -1,80 +1,80 @@ ---- -assignees: -- mikedanese - ---- - -This page is designed to help you use logs to troubleshoot issues with your Kubernetes solution. - -## Logging by Kubernetes Components - -Kubernetes components, such as kubelet and apiserver, use the [glog](https://godoc.org/github.com/golang/glog) logging library. Developer conventions for logging severity are described in [docs/devel/logging.md](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/logging.md). - -## Examining the logs of running containers - -The logs of a running container may be fetched using the command `kubectl logs`. 
For example, given -this pod specification [counter-pod.yaml](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/blog-logging/counter-pod.yaml), which has a container which writes out some text to standard -output every second. (You can find different pod specifications [here](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/user-guide/logging-demo/).) - -{% include code.html language="yaml" file="counter-pod.yaml" k8slink="/examples/blog-logging/counter-pod.yaml" %} - -we can run the pod: - -```shell -$ kubectl create -f ./counter-pod.yaml -pods/counter -``` - -and then fetch the logs: - -```shell -$ kubectl logs counter -0: Tue Jun 2 21:37:31 UTC 2015 -1: Tue Jun 2 21:37:32 UTC 2015 -2: Tue Jun 2 21:37:33 UTC 2015 -3: Tue Jun 2 21:37:34 UTC 2015 -4: Tue Jun 2 21:37:35 UTC 2015 -5: Tue Jun 2 21:37:36 UTC 2015 -... -``` - -If a pod has more than one container then you need to specify which container's log files should -be fetched e.g. - -```shell -$ kubectl logs kube-dns-v3-7r1l9 etcd -2015/06/23 00:43:10 etcdserver: start to snapshot (applied: 30003, lastsnap: 20002) -2015/06/23 00:43:10 etcdserver: compacted log at index 30003 -2015/06/23 00:43:10 etcdserver: saved snapshot at index 30003 -2015/06/23 02:05:42 etcdserver: start to snapshot (applied: 40004, lastsnap: 30003) -2015/06/23 02:05:42 etcdserver: compacted log at index 40004 -2015/06/23 02:05:42 etcdserver: saved snapshot at index 40004 -2015/06/23 03:28:31 etcdserver: start to snapshot (applied: 50005, lastsnap: 40004) -2015/06/23 03:28:31 etcdserver: compacted log at index 50005 -2015/06/23 03:28:31 etcdserver: saved snapshot at index 50005 -2015/06/23 03:28:56 filePurge: successfully removed file default.etcd/member/wal/0000000000000000-0000000000000000.wal -2015/06/23 04:51:03 etcdserver: start to snapshot (applied: 60006, lastsnap: 50005) -2015/06/23 04:51:03 etcdserver: compacted log at index 60006 -2015/06/23 04:51:03 etcdserver: saved snapshot at index 60006 -... -``` - -## Cluster level logging to Google Cloud Logging - -The getting started guide [Cluster Level Logging to Google Cloud Logging](/docs/getting-started-guides/logging) -explains how container logs are ingested into [Google Cloud Logging](https://cloud.google.com/logging/docs/) -and shows how to query the ingested logs. - -## Cluster level logging with Elasticsearch and Kibana - -The getting started guide [Cluster Level Logging with Elasticsearch and Kibana](/docs/getting-started-guides/logging-elasticsearch) -describes how to ingest cluster level logs into Elasticsearch and view them using Kibana. - -## Ingesting Application Log Files - -Cluster level logging only collects the standard output and standard error output of the applications -running in containers. The guide [Collecting log files from within containers with Fluentd and sending them to the Google Cloud Logging service](https://github.com/kubernetes/contrib/blob/master/logging/fluentd-sidecar-gcp/README.md) explains how the log files of applications can also be ingested into Google Cloud logging. - -## Known issues - +--- +assignees: +- mikedanese + +--- + +This page is designed to help you use logs to troubleshoot issues with your Kubernetes solution. + +## Logging by Kubernetes Components + +Kubernetes components, such as kubelet and apiserver, use the [glog](https://godoc.org/github.com/golang/glog) logging library. 
Developer conventions for logging severity are described in [docs/devel/logging.md](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/logging.md). + +## Examining the logs of running containers + +The logs of a running container may be fetched using the command `kubectl logs`. For example, given +this pod specification [counter-pod.yaml](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/blog-logging/counter-pod.yaml), which has a container which writes out some text to standard +output every second. (You can find different pod specifications [here](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/user-guide/logging-demo/).) + +{% include code.html language="yaml" file="counter-pod.yaml" k8slink="/examples/blog-logging/counter-pod.yaml" %} + +we can run the pod: + +```shell +$ kubectl create -f ./counter-pod.yaml +pods/counter +``` + +and then fetch the logs: + +```shell +$ kubectl logs counter +0: Tue Jun 2 21:37:31 UTC 2015 +1: Tue Jun 2 21:37:32 UTC 2015 +2: Tue Jun 2 21:37:33 UTC 2015 +3: Tue Jun 2 21:37:34 UTC 2015 +4: Tue Jun 2 21:37:35 UTC 2015 +5: Tue Jun 2 21:37:36 UTC 2015 +... +``` + +If a pod has more than one container then you need to specify which container's log files should +be fetched e.g. + +```shell +$ kubectl logs kube-dns-v3-7r1l9 etcd +2015/06/23 00:43:10 etcdserver: start to snapshot (applied: 30003, lastsnap: 20002) +2015/06/23 00:43:10 etcdserver: compacted log at index 30003 +2015/06/23 00:43:10 etcdserver: saved snapshot at index 30003 +2015/06/23 02:05:42 etcdserver: start to snapshot (applied: 40004, lastsnap: 30003) +2015/06/23 02:05:42 etcdserver: compacted log at index 40004 +2015/06/23 02:05:42 etcdserver: saved snapshot at index 40004 +2015/06/23 03:28:31 etcdserver: start to snapshot (applied: 50005, lastsnap: 40004) +2015/06/23 03:28:31 etcdserver: compacted log at index 50005 +2015/06/23 03:28:31 etcdserver: saved snapshot at index 50005 +2015/06/23 03:28:56 filePurge: successfully removed file default.etcd/member/wal/0000000000000000-0000000000000000.wal +2015/06/23 04:51:03 etcdserver: start to snapshot (applied: 60006, lastsnap: 50005) +2015/06/23 04:51:03 etcdserver: compacted log at index 60006 +2015/06/23 04:51:03 etcdserver: saved snapshot at index 60006 +... +``` + +## Cluster level logging to Google Cloud Logging + +The getting started guide [Cluster Level Logging to Google Cloud Logging](/docs/getting-started-guides/logging) +explains how container logs are ingested into [Google Cloud Logging](https://cloud.google.com/logging/docs/) +and shows how to query the ingested logs. + +## Cluster level logging with Elasticsearch and Kibana + +The getting started guide [Cluster Level Logging with Elasticsearch and Kibana](/docs/getting-started-guides/logging-elasticsearch) +describes how to ingest cluster level logs into Elasticsearch and view them using Kibana. + +## Ingesting Application Log Files + +Cluster level logging only collects the standard output and standard error output of the applications +running in containers. The guide [Collecting log files from within containers with Fluentd and sending them to the Google Cloud Logging service](https://github.com/kubernetes/contrib/blob/master/logging/fluentd-sidecar-gcp/README.md) explains how the log files of applications can also be ingested into Google Cloud logging. + +## Known issues + Kubernetes does log rotation for Kubernetes components and docker containers. 
The command `kubectl logs` currently only reads the latest logs, not all historical ones. \ No newline at end of file diff --git a/docs/user-guide/monitoring.md b/docs/user-guide/monitoring.md index 6125291421..0c5f673708 100644 --- a/docs/user-guide/monitoring.md +++ b/docs/user-guide/monitoring.md @@ -6,7 +6,7 @@ assignees: Understanding how an application behaves when deployed is crucial to scaling the application and providing a reliable service. In a Kubernetes cluster, application performance can be examined at many different levels: containers, [pods](/docs/user-guide/pods), [services](/docs/user-guide/services), and whole clusters. As part of Kubernetes we want to provide users with detailed resource usage information about their running applications at all these levels. This will give users deep insights into how their applications are performing and where possible application bottlenecks may be found. In comes [Heapster](https://github.com/kubernetes/heapster), a project meant to provide a base monitoring platform on Kubernetes. -### Overview +## Overview Heapster is a cluster-wide aggregator of monitoring and event data. It currently supports Kubernetes natively and works on all Kubernetes setups. Heapster runs as a pod in the cluster, similar to how any Kubernetes application would run. The Heapster pod discovers all nodes in the cluster and queries usage information from the nodes' [Kubelet](https://releases.k8s.io/{{page.githubbranch}}/DESIGN.md#kubelet)s, the on-machine Kubernetes agent. The Kubelet itself fetches the data from [cAdvisor](https://github.com/google/cadvisor). Heapster groups the information by pod along with the relevant labels. This data is then pushed to a configurable backend for storage and visualization. Currently supported backends include [InfluxDB](http://influxdb.com/) (with [Grafana](http://grafana.org/) for visualization), [Google Cloud Monitoring](https://cloud.google.com/monitoring/) and many others described in more details [here](https://github.com/kubernetes/heapster/blob/master/docs/sink-configuration.md). The overall architecture of the service can be seen below: diff --git a/docs/user-guide/persistent-volumes/index.md b/docs/user-guide/persistent-volumes/index.md index 5232f068cd..8c69a75379 100644 --- a/docs/user-guide/persistent-volumes/index.md +++ b/docs/user-guide/persistent-volumes/index.md @@ -150,7 +150,7 @@ In the CLI, the access modes are abbreviated to: | HostPath | x | - | - | | iSCSI | x | x | - | | NFS | x | x | x | -| RDB | x | x | - | +| RBD | x | x | - | | VsphereVolume | x | - | - | ### Class diff --git a/docs/user-guide/petset/bootstrapping/index.md b/docs/user-guide/petset/bootstrapping/index.md index e9b04fc135..03ba721edc 100644 --- a/docs/user-guide/petset/bootstrapping/index.md +++ b/docs/user-guide/petset/bootstrapping/index.md @@ -8,7 +8,7 @@ The purpose of this guide is to help you become familiar with the runtime initialization of [Pet Sets](/docs/user-guide/petset). This guide assumes the same prerequisites, and uses the same terminology as the [Pet Set user document](/docs/user-guide/petset). -The most common way to initialize the runtime in a containerized environment, is through a custom [entrypoint](https://docs.docker.com/engine/reference/builder/#entrypoint). While this is not necessarily bad, making your application pid 1, and treating containers as processes in general is good for a few reasons outside the scope of this document.
Doing so allows you to run docker images from third-party vendors without modification. We will not be writing custom entrypoints for this example, but using a feature called [init containers](http://releases.k8s.io/{{page.githubbranch}}/docs/proposals/container-init.md), to explain 2 common patterns that come up deploying Pet Sets. +The most common way to initialize the runtime in a containerized environment is through a custom [entrypoint](https://docs.docker.com/engine/reference/builder/#entrypoint). While this is not necessarily bad, making your application pid 1 and treating containers as processes in general is good for a few reasons outside the scope of this document. Doing so allows you to run docker images from third-party vendors without modification. We will not be writing custom entrypoints for this example, but will use a feature called [init containers](http://kubernetes.io/docs/user-guide/production-pods/#handling-initialization) to explain two common patterns that come up when deploying Pet Sets. 1. Transferring state across Pet restart, so that a future Pet is initialized with the computations of its past incarnation 2. Initializing the runtime environment of a Pet based on existing conditions, like a list of currently healthy peers diff --git a/docs/user-guide/pods/index.md b/docs/user-guide/pods/index.md index 4c16b66074..b5de192d83 100644 --- a/docs/user-guide/pods/index.md +++ b/docs/user-guide/pods/index.md @@ -150,7 +150,9 @@ Pod is exposed as a primitive in order to facilitate: * clean composition of Kubelet-level functionality with cluster-level functionality — Kubelet is effectively the "pod controller" * high-availability applications, which will expect pods to be replaced in advance of their termination and certainly in advance of deletion, such as in the case of planned evictions, image prefetching, or live pod migration [#3949](http://issue.k8s.io/3949) -The current best practice for pets is to create a replication controller with `replicas` equal to `1` and a corresponding service. If you find this cumbersome, please comment on [issue #260](http://issue.k8s.io/260). +There is new first-class support for pet-like pods with the [PetSet](/docs/user-guide/petset/) feature (currently in alpha). +For prior versions of Kubernetes, the best practice for pets is to create a replication controller with `replicas` equal to `1` and a corresponding service. + ## Termination of Pods diff --git a/docs/user-guide/prereqs.md b/docs/user-guide/prereqs.md index dfba1542af..e6b94baa62 100644 --- a/docs/user-guide/prereqs.md +++ b/docs/user-guide/prereqs.md @@ -5,7 +5,7 @@ assignees: --- -To deploy and manage applications on Kubernetes, you’ll use the Kubernetes command-line tool, [kubectl](/docs/user-guide/kubectl/kubectl/). It lets you inspect your cluster resources, create, delete, and update components, and much more. You will use it to look at your new cluster and bring up example apps. +To deploy and manage applications on Kubernetes, you’ll use the Kubernetes command-line tool, [kubectl](/docs/user-guide/kubectl/). It lets you inspect your cluster resources, create, delete, and update components, and much more. You will use it to look at your new cluster and bring up example apps.
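For instance, once kubectl is installed and configured for a cluster, a first look might be the following sketch (output depends entirely on your cluster):

```shell
# Confirm the client and server versions, then inspect the cluster and its nodes.
$ kubectl version
$ kubectl cluster-info
$ kubectl get nodes
```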
## Installing kubectl diff --git a/docs/user-guide/production-pods.md b/docs/user-guide/production-pods.md index efd1c43e43..c345ada200 100644 --- a/docs/user-guide/production-pods.md +++ b/docs/user-guide/production-pods.md @@ -169,7 +169,7 @@ If no resource requirements are specified, a nominal amount of resources is assu {% include code.html language="yaml" file="redis-resource-deployment.yaml" ghlink="/docs/user-guide/redis-resource-deployment.yaml" %} -The container will die due to OOM (out of memory) if it exceeds its specified limit, so specifying a value a little higher than expected generally improves reliability. By specifying request, pod is guaranteed to be able to use that much of resource when needed. See [Resource QoS](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/resource-qos.md) for the difference between resource limits and requests. +The container will die due to OOM (out of memory) if it exceeds its specified limit, so specifying a value a little higher than expected generally improves reliability. By specifying a request, the pod is guaranteed to be able to use that much of the resource when needed. See [Resource QoS](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/resource-qos.md) for the difference between resource limits and requests. If you're not sure how much resources to request, you can first launch the application without specifying resources, and use [resource usage monitoring](/docs/user-guide/monitoring) to determine appropriate values. @@ -194,13 +194,13 @@ Applications often need a set of initialization steps prior to performing their * Registering the pod into a central database, or fetching remote configuration from that database * Downloading application dependencies, seed data, or preconfiguring disk -Kubernetes now includes an alpha feature known as **init containers**, which are one or more containers in a pod that get a chance to run and initialize shared volumes prior to the other application containers starting. An init container is exactly like a regular container, except that it always runs to completion and each init container must complete successfully before the next one is started. If the init container fails (exits with a non-zero exit code) on a `RestartNever` pod the pod will fail - otherwise it will be restarted until it succeeds or the user deletes the pod. +Kubernetes now includes a beta feature known as **init containers**, which are one or more containers in a pod that get a chance to run and initialize shared volumes prior to the other application containers starting. An init container is exactly like a regular container, except that it always runs to completion and each init container must complete successfully before the next one is started. If the init container fails (exits with a non-zero exit code) on a `RestartNever` pod the pod will fail - otherwise it will be restarted until it succeeds or the user deletes the pod. -Since init containers are an alpha feature, they are specified by setting the `pod.alpha.kubernetes.io/init-containers` annotation on a pod (or replica set, deployment, daemon set, pet set, or job). +Since init containers are a beta feature, they are specified by setting the `pod.beta.kubernetes.io/init-containers` annotation on a pod (or replica set, deployment, daemon set, pet set, or job).
The value of the annotation must be a string containing a JSON array of container definitions: {% include code.html language="yaml" file="nginx-init-containers.yaml" ghlink="/docs/user-guide/nginx-init-containers.yaml" %} -The status of the init containers is returned as another annotation - `pod.alpha.kubernetes.io/init-container-statuses` -- as an array of the container statuses (similar to the `status.containerStatuses` field). +The status of the init containers is returned as another annotation - `pod.beta.kubernetes.io/init-container-statuses` -- as an array of the container statuses (similar to the `status.containerStatuses` field). Init containers support all of the same features as normal containers, including resource limits, volumes, and security settings. The resource requests and limits for an init container are handled slightly different than normal containers since init containers are run one at a time instead of all at once - any limits or quotas will be applied based on the largest init container resource quantity, rather than as the sum of quantities. Init containers do not support readiness probes since they will run to completion before the pod can be ready. diff --git a/docs/user-guide/replicasets.md b/docs/user-guide/replicasets.md index 27e9e3da88..d06be55328 100644 --- a/docs/user-guide/replicasets.md +++ b/docs/user-guide/replicasets.md @@ -18,7 +18,7 @@ the selector support. Replica Set supports the new set-based selector requiremen as described in the [labels user guide](/docs/user-guide/labels/#label-selectors) whereas a Replication Controller only supports equality-based selector requirements. -Most [`kubectl`](/docs/user-guide/kubectl/kubectl/) commands that support +Most [`kubectl`](/docs/user-guide/kubectl/) commands that support Replication Controllers also support Replica Sets. One exception is the [`rolling-update`](/docs/user-guide/kubectl/kubectl_rolling-update/) command. If you want the rolling update functionality please consider using Deployments diff --git a/docs/user-guide/rolling-updates.md b/docs/user-guide/rolling-updates.md index 63d3622c89..c9ba5d3d27 100644 --- a/docs/user-guide/rolling-updates.md +++ b/docs/user-guide/rolling-updates.md @@ -12,7 +12,7 @@ assignees: To update a service without an outage, `kubectl` supports what is called ['rolling update'](/docs/user-guide/kubectl/kubectl_rolling-update), which updates one pod at a time, rather than taking down the entire service at the same time. See the [rolling update design document](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/simple-rolling-update.md) and the [example of rolling update](/docs/user-guide/update-demo/) for more information. Note that `kubectl rolling-update` only supports Replication Controllers. However, if you deploy applications with Replication Controllers, -consider switching them to [Deployments](/docs/user-guide/deployments/). A Deployments is a higher-level controller that automates rolling updates +consider switching them to [Deployments](/docs/user-guide/deployments/). A Deployment is a higher-level controller that automates rolling updates of applications declaratively, and therefore is recommended. 
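Pulling together the resource requests/limits and the beta init-containers annotation described in the production-pods changes above, a pod manifest might look roughly like the following. This is a sketch, not the referenced `redis-resource-deployment.yaml` or `nginx-init-containers.yaml` files; the names, images, commands, and resource values are all assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-init                      # hypothetical name
  annotations:
    # Beta syntax: init containers are a JSON array embedded in an annotation.
    pod.beta.kubernetes.io/init-containers: '[
      {
        "name": "fetch-content",
        "image": "busybox",
        "command": ["sh", "-c", "echo hello > /work-dir/index.html"],
        "volumeMounts": [{"name": "workdir", "mountPath": "/work-dir"}]
      }
    ]'
spec:
  containers:
  - name: web
    image: nginx                           # illustrative image
    resources:
      requests:                            # the pod is guaranteed at least this much
        cpu: 100m
        memory: 128Mi
      limits:                              # exceeding the memory limit gets the container OOM-killed
        cpu: 250m
        memory: 256Mi
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  volumes:
  - name: workdir
    emptyDir: {}
```

In this sketch the init container runs to completion and populates the shared `emptyDir` volume before the `web` container starts, and its status would be reported back through the `pod.beta.kubernetes.io/init-container-statuses` annotation mentioned above.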
If you still want to keep your Replication Controllers and use `kubectl rolling-update`, keep reading: A rolling update applies changes to the configuration of pods being managed by diff --git a/docs/user-guide/secrets/index.md b/docs/user-guide/secrets/index.md index ab8eb9388e..f9931bfbf5 100644 --- a/docs/user-guide/secrets/index.md +++ b/docs/user-guide/secrets/index.md @@ -284,7 +284,7 @@ For example, you can specify a default mode like this: "image": "redis", "volumeMounts": [{ "name": "foo", - "mountPath": "/etc/foo", + "mountPath": "/etc/foo" }] }], "volumes": [{ @@ -322,7 +322,7 @@ permission for different files like this: "image": "redis", "volumeMounts": [{ "name": "foo", - "mountPath": "/etc/foo", + "mountPath": "/etc/foo" }] }], "volumes": [{ @@ -377,7 +377,7 @@ To use a secret in an environment variable in a pod: 1. Modify your Pod definition in each container that you wish to consume the value of a secret key to add an environment variable for each secret key you wish to consume. The environment variable that consumes the secret key should populate the secret's name and key in `env[x].valueFrom.secretKeyRef`. 1. Modify your image and/or command line so that the program looks for values in the specified environment variables -This is an example of a pod that mounts a secret in a volume: +This is an example of a pod that uses secrets from environment variables: ```yaml apiVersion: v1 @@ -543,9 +543,9 @@ credentials. Make the secrets: ```shell -$ kubectl create secret generic prod-db-secret --from-literal=user=produser --from-literal=password=Y4nys7f11 +$ kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11 secret "prod-db-secret" created -$ kubectl create secret generic test-db-secret --from-literal=user=testuser --from-literal=password=iluvtests +$ kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests secret "test-db-secret" created ``` diff --git a/docs/user-guide/thirdpartyresources.md b/docs/user-guide/thirdpartyresources.md new file mode 100644 index 0000000000..d8f2bc5ba9 --- /dev/null +++ b/docs/user-guide/thirdpartyresources.md @@ -0,0 +1,118 @@ +--- +assignees: +- IanLewis + +--- + +* TOC +{:toc} + +## What is ThirdPartyResource? + +Kubernetes comes with many built-in API objects. However, there are often times when you might need to extend Kubernetes with your own API objects in order to do custom automation. + +`ThirdPartyResource` objects are a way to extend the Kubernetes API with a new API object type. The new API object type will be given an API endpoint URL and will support CRUD operations and the watch API. You can then create custom objects using this API endpoint. You can think of `ThirdPartyResources` as being much like the schema for a database table. Once you have created the table, you can then start storing rows in the table. Once created, `ThirdPartyResources` can act as the data model behind custom controllers or automation programs. + +## Structure of a ThirdPartyResource + +Each `ThirdPartyResource` has the following: + + * `metadata` - Standard Kubernetes object metadata. + * `kind` - The kind of the resources described by this third party resource. + * `description` - A free text description of the resource. + * `versions` - A list of the versions of the resource. + +The `kind` for a `ThirdPartyResource` takes the form `<kind name>.<domain>`.
You are expected to provide a unique kind and domain name in order to avoid conflicts with other `ThirdPartyResource` objects. Kind names will be converted to CamelCase when creating instances of the `ThirdPartyResource`. Hyphens in the `kind` are assumed to be word breaks. For instance, the kind `camel-case` would be converted to `CamelCase` but `camelcase` would be converted to `Camelcase`. + +Other fields on the `ThirdPartyResource` are treated as custom data fields. These fields can hold arbitrary JSON data and have any structure. + +You can view the full documentation about `ThirdPartyResources` using the `explain` command in kubectl. + +``` +$ kubectl explain thirdpartyresource +``` + +## Creating a ThirdPartyResource + +When you create a new `ThirdPartyResource`, the Kubernetes API Server reacts by creating a new, namespaced RESTful resource path. For now, non-namespaced objects are not supported. As with existing built-in objects, deleting a namespace deletes all custom objects in that namespace. `ThirdPartyResources` themselves are non-namespaced and are available to all namespaces. + +For example, if you save the following `ThirdPartyResource` to `resource.yaml`: + +```yaml +apiVersion: extensions/v1beta1 +kind: ThirdPartyResource +metadata: + name: cron-tab.stable.example.com +description: "A specification of a Pod to run on a cron style schedule" +versions: +- name: v1 +``` + +And create it: + +```shell +$ kubectl create -f resource.yaml +thirdpartyresource "cron-tab.stable.example.com" created +``` + +Then a new RESTful API endpoint is created at: + +`/apis/stable.example.com/v1/namespaces/<namespace>/crontabs/...` + +This endpoint URL can then be used to create and manage custom objects. + +## Creating Custom Objects + +After the `ThirdPartyResource` object has been created, you can create custom objects. Custom objects can contain custom fields. These fields can contain arbitrary JSON. +In the following example, the `cronSpec` and `image` custom fields are set on a custom `CronTab` object. If you save the following YAML to `my-crontab.yaml`: + +```yaml +apiVersion: "stable.example.com/v1" +kind: CronTab +metadata: + name: my-new-cron-object +cronSpec: "* * * * /5" +image: my-awesome-cron-image +``` + +and create it: + +```shell +$ kubectl create -f my-crontab.yaml +crontab "my-new-cron-object" created +``` + +You can then manage your `CronTab` objects using kubectl. Note that resource names are not case-sensitive when using kubectl: + +```shell +$ kubectl get crontab +NAME LABELS DATA +my-new-cron-object {"apiVersion":"stable.example.com/v1","cronSpec":"... +``` + +You can also view the raw JSON data.
Here you can see that it contains the custom `cronSpec` and `image` fields from the yaml you used to create it: + +```yaml +$ kubectl get crontab -o json +{ + "kind": "List", + "apiVersion": "v1", + "metadata": {}, + "items": [ + { + "apiVersion": "stable.example.com/v1", + "cronSpec": "* * * * /5", + "image": "my-awesome-cron-image", + "kind": "CronTab", + "metadata": { + "creationTimestamp": "2016-09-29T04:59:00Z", + "name": "my-new-cron-object", + "namespace": "default", + "resourceVersion": "12601503", + "selfLink": "/apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object", + "uid": "6f65e7a3-8601-11e6-a23e-42010af0000c" + } + } + ] +} +``` diff --git a/docs/user-guide/ui.md b/docs/user-guide/ui.md index 0efdb3fd1f..84e0adabc6 100644 --- a/docs/user-guide/ui.md +++ b/docs/user-guide/ui.md @@ -163,7 +163,7 @@ Workloads are categorized as follows: * [Daemon Sets](http://kubernetes.io/docs/admin/daemons/) which ensure that all or some of the nodes in your cluster run a copy of a Pod. * [Deployments](http://kubernetes.io/docs/user-guide/deployments/) which provide declarative updates for Pods and Replica Sets (the next-generation [Replication Controller](http://kubernetes.io/docs/user-guide/replication-controller/)) The Details page for a Deployment lists resource details, as well as new and old Replica Sets. The resource details also include information on the [RollingUpdate](http://kubernetes.io/docs/user-guide/rolling-updates/) strategy, if any. -* [Pet Sets](http://kubernetes.io/docs/user-guide/load-balancer/) (nominal Services, also known as load-balanced Services) for legacy application support. +* [Pet Sets](http://kubernetes.io/docs/user-guide/petset/) (nominal Services, also known as load-balanced Services) for legacy application support. * [Replica Sets](http://kubernetes.io/docs/user-guide/replicasets/) for using label selectors. * [Jobs](http://kubernetes.io/docs/user-guide/jobs/) for creating one or more Pods, ensuring that a specified number of them successfully terminate, and tracking the completions. * [Replication Controllers](http://kubernetes.io/docs/user-guide/replication-controller/) diff --git a/docs/user-guide/volumes.md b/docs/user-guide/volumes.md index e33bd3e018..a7e0b3c2ba 100644 --- a/docs/user-guide/volumes.md +++ b/docs/user-guide/volumes.md @@ -126,9 +126,10 @@ Watch out when using this type of volume, because: behave differently on different nodes due to different files on the nodes * when Kubernetes adds resource-aware scheduling, as is planned, it will not be able to account for resources used by a `hostPath` -* the directories created on the underlying hosts are only writable by root, you either need - to run your process as root in a privileged container or modify the file permissions on - the host to be able to write to a `hostPath` volume +* the directories created on the underlying hosts are only writable by root. You + either need to run your process as root in a + [privileged container](/docs/user-guide/security-context) or modify the file + permissions on the host to be able to write to a `hostPath` volume #### Example pod diff --git a/index.html b/index.html index 3c940304ba..cf01ad2e92 100644 --- a/index.html +++ b/index.html @@ -14,7 +14,7 @@ title: Production-Grade Container Orchestration

      Production-Grade Container Orchestration

      Automated container deployment, scaling, and management
      - Try Our Hello World + Try Our Interactive Tutorials
      diff --git a/js/redirects.js b/js/redirects.js new file mode 100644 index 0000000000..dc3cbb56ed --- /dev/null +++ b/js/redirects.js @@ -0,0 +1,60 @@ +$( document ).ready(function() { + var oldURLs=["/README.md","/README.html",".html",".md","/v1.1/","/v1.0/"]; + var fwdDirs=["examples/","cluster/","docs/devel","docs/design"]; + var doRedirect = false; + var notHere = false; + var forwardingURL=window.location.href; + + var redirects = [{ + "from": "third_party/swagger-ui", + "to": "http://kubernetes.io/kubernetes/third_party/swagger-ui/" + }, + { + "from": "resource-quota", + "to": "http://kubernetes.io/docs/admin/resourcequota/" + }, + { + "from": "horizontal-pod-autoscaler", + "to": "http://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/" + }, + { + "from": "docs/roadmap", + "to": "https://github.com/kubernetes/kubernetes/milestones/" + }, + { + "from": "api-ref/", + "to": "https://github.com/kubernetes/kubernetes/milestones/" + }, + { + "from": "docs/user-guide/overview", + "to": "http://kubernetes.io/docs/whatisk8s/" + }]; + + for (i=0;i -1){ + notHere = true; + window.location.replace(redirects[i].to); + } + } + + for (i=0;i -1){ + var urlPieces = forwardingURL.split(fwdDirs[i]); + var newURL = "https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/" + fwdDirs[i] + urlPieces[1]; + notHere = true; + window.location.replace(newURL); + } + } + if (!notHere) { + for (i=0;i -1 && + forwardingURL.indexOf("404.html") < 0){ + doRedirect=true; + forwardingURL=forwardingURL.replace(oldURLs[i],"/"); + } + } + if (doRedirect){ + window.location.replace(forwardingURL); + }; + } +}); diff --git a/robots.txt b/robots.txt index 187d7c94bb..9bb39d8dbd 100644 --- a/robots.txt +++ b/robots.txt @@ -3,5 +3,7 @@ User-agent: * Disallow: /legacy/ Disallow: /v1.0/ Disallow: /v1.1/ +Disallow: /404/ +Disallow: 404.html SITEMAP: http://kubernetes.io/sitemap.xml diff --git a/sitemap.xml b/sitemap.xml index a965f4a570..ff1dd0d398 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -11,8 +11,8 @@ http://kubernetes.io/ {{ site.time | date_to_xmlschema }} -{% for page in site.pages %} +{% for page in site.pages %}{% if page.url != "/404.html" and page.url != "/sitemap.xml" and page.url != "/css/styles.css" %} http://kubernetes.io{{ page.url }} {% if page.date %}{{ page.date | date_to_xmlschema }}{% else %}{{ site.time | date_to_xmlschema }}{% endif %} -{% endfor %} - \ No newline at end of file +{% endif %}{% endfor %} + diff --git a/update-imported-docs.sh b/update-imported-docs.sh index 2b9203e168..5c2674cf6c 100755 --- a/update-imported-docs.sh +++ b/update-imported-docs.sh @@ -102,6 +102,13 @@ cd docs/user-guide/kubectl find . -name '*.md' -type f -exec sed -i -e '//,//d' {} \; find . -name '*.md' -type f -exec sed -i -e '//,//d' {} \; + # Rename kubectl.md to index.md + mv kubectl.md index.md + # Strip the "See Also" links. + # These links in raw .md files are relative to current file location, but the website see them as relative to current url instead, and will return 404. + find . -name 'kubectl*.md' -type f -exec sed -i -e '/### SEE ALSO/d' {} \; + find . -name 'kubectl*.md' -type f -exec sed -i -e '/\* \[kubectl/d' {} \; + # Add the expected headers to md files find . -name '*.md' -type f -exec sed -i -e '1 i\ ---' {} \;