From b716a92b5e40e1c2ecaf67f186c1df61d4710574 Mon Sep 17 00:00:00 2001 From: Jason Murray Date: Tue, 15 Nov 2016 07:37:30 +0100 Subject: [PATCH 01/24] Improve Grammar for Authentication strategies --- docs/admin/authentication.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/admin/authentication.md b/docs/admin/authentication.md index 6819677107..4015610e52 100644 --- a/docs/admin/authentication.md +++ b/docs/admin/authentication.md @@ -35,7 +35,7 @@ or be treated as an anonymous user. Kubernetes uses client certificates, bearer tokens, or HTTP basic auth to authenticate API requests through authentication plugins. As HTTP request are -made to the API server plugins attempts to associate the following attributes +made to the API server, plugins attempt to associate the following attributes with the request: * Username: a string which identifies the end user. Common values might be `kube-admin` or `jane@example.com`. From 5dbabed7a96c61937267a434e0f150c6213faed6 Mon Sep 17 00:00:00 2001 From: Brandon Philips Date: Tue, 29 Nov 2016 12:27:56 -0800 Subject: [PATCH 02/24] docs: create /security endpoint Create an easy to remember and locate URL for security disclosure process. This URL will need to be placed in lots of templates and tools so make it easy. --- docs/reporting-security-issues.md | 22 +--------------------- security.md | 28 ++++++++++++++++++++++++++++ 2 files changed, 29 insertions(+), 21 deletions(-) create mode 100644 security.md diff --git a/docs/reporting-security-issues.md b/docs/reporting-security-issues.md index da4da20b3c..28e40cea9c 100644 --- a/docs/reporting-security-issues.md +++ b/docs/reporting-security-issues.md @@ -5,24 +5,4 @@ assignees: --- -If you believe you have discovered a vulnerability or a have a security incident to report, please follow the steps below. This applies to Kubernetes releases v1.0 or later. - -To watch for security and major API announcements, please join our [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernetes-announce) group. - -## Reporting a security issue - -To report an issue, please: - -- Submit a bug report [here](http://goo.gl/vulnz). - - Select 'I want to report a technical security bug in a Google product (SQLi, XSS, etc.).'? - - Select 'Other'? as the Application Type. -- Under reproduction steps, please additionally include - - the words "Kubernetes Security issue" - - Description of the issue - - Kubernetes release (e.g. output of `kubectl version` command, which includes server version.) - - Environment setup (e.g. which "Getting Started Guide" you followed, if any; what node operating system used; what service or software creates your virtual machines, if any) - -An online submission will have the fastest response; however, if you prefer email, please send mail to security@google.com. If you feel the need, please use the [PGP public key](https://services.google.com/corporate/publickey.txt) to encrypt communications. - - - +This document has moved to [http://kubernetes.io/security](http://kubernetes.io/security). diff --git a/security.md b/security.md new file mode 100644 index 0000000000..da4da20b3c --- /dev/null +++ b/security.md @@ -0,0 +1,28 @@ +--- +assignees: +- eparis +- erictune + +--- + +If you believe you have discovered a vulnerability or a have a security incident to report, please follow the steps below. This applies to Kubernetes releases v1.0 or later. 
+ +To watch for security and major API announcements, please join our [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernetes-announce) group. + +## Reporting a security issue + +To report an issue, please: + +- Submit a bug report [here](http://goo.gl/vulnz). + - Select 'I want to report a technical security bug in a Google product (SQLi, XSS, etc.).'? + - Select 'Other'? as the Application Type. +- Under reproduction steps, please additionally include + - the words "Kubernetes Security issue" + - Description of the issue + - Kubernetes release (e.g. output of `kubectl version` command, which includes server version.) + - Environment setup (e.g. which "Getting Started Guide" you followed, if any; what node operating system used; what service or software creates your virtual machines, if any) + +An online submission will have the fastest response; however, if you prefer email, please send mail to security@google.com. If you feel the need, please use the [PGP public key](https://services.google.com/corporate/publickey.txt) to encrypt communications. + + + From 6fb18ed5ddabe4aa1c9fd00a0096a58d3544cfe4 Mon Sep 17 00:00:00 2001 From: Brandon Philips Date: Tue, 29 Nov 2016 13:35:54 -0800 Subject: [PATCH 03/24] security: add the new disclosure process This is the new disclosure process as discussed here: https://github.com/kubernetes/kubernetes/issues/35462 This relies on a doc to be merged into docs/devel/security-release-process.md but this doc can be reviewed in parallel. You can find a draft of the content of that doc on #35462 --- security.md | 28 ---------------------------- security/index.md | 46 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 46 insertions(+), 28 deletions(-) delete mode 100644 security.md create mode 100644 security/index.md diff --git a/security.md b/security.md deleted file mode 100644 index da4da20b3c..0000000000 --- a/security.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -assignees: -- eparis -- erictune - ---- - -If you believe you have discovered a vulnerability or a have a security incident to report, please follow the steps below. This applies to Kubernetes releases v1.0 or later. - -To watch for security and major API announcements, please join our [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernetes-announce) group. - -## Reporting a security issue - -To report an issue, please: - -- Submit a bug report [here](http://goo.gl/vulnz). - - Select 'I want to report a technical security bug in a Google product (SQLi, XSS, etc.).'? - - Select 'Other'? as the Application Type. -- Under reproduction steps, please additionally include - - the words "Kubernetes Security issue" - - Description of the issue - - Kubernetes release (e.g. output of `kubectl version` command, which includes server version.) - - Environment setup (e.g. which "Getting Started Guide" you followed, if any; what node operating system used; what service or software creates your virtual machines, if any) - -An online submission will have the fastest response; however, if you prefer email, please send mail to security@google.com. If you feel the need, please use the [PGP public key](https://services.google.com/corporate/publickey.txt) to encrypt communications. 
- - - diff --git a/security/index.md b/security/index.md new file mode 100644 index 0000000000..5fe3b5b13c --- /dev/null +++ b/security/index.md @@ -0,0 +1,46 @@ +--- +layout: docwithnav +title: Kubernetes Security and Disclosure Information +permalink: /security/ +assignees: +- eparis +- erictune +- philips +- jessfraz +--- + +## Security Announcements + +Join the [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernetes-announce) group for emails about security and major API announcements. + +## Report a Vulnerability + +We’re extremely grateful for security researchers and users that report vulnerabilities to the Kubernetes Open Source Community. All reports are thoroughly investigated by a set of community volunteers. + +To make a report, please email the private [kubernetes-security@googlegroups.com](mailto:kubernetes-security@googlegroups.com) list with the security details and the details expected for [all Kubernetes bug reports](https://github.com/kubernetes/kubernetes/blob/master/.github/ISSUE_TEMPLATE.md). + +You may encrypt your email to this list using the GPG keys of the [Product Security Team members](https://github.com/kubernetes/community/blob/master/contributors/devel/security-release-process.md#product-security-team-pst). Encryption using GPG is NOT required to make a disclosure. + +### When Should I Report a Vulnerability? + +- You think you discovered a potential security vulnerability in Kubernetes +- You are unsure how a vulnerability affects Kubernetes +- You think you discovered a vulnerability in another project that Kubernetes depends on (e.g. docker, rkt, etcd) + +### When Should I NOT Report a Vulnerability? + +- You need help tuning Kubernetes components for security +- You need help applying security related updates +- Your issue is not security related + +## Security Vulnerability Response + +Each report is acknowledged and analyzed by Product Security Team members within 3 working days. This will set off the [Security Release Process](https://github.com/kubernetes/community/blob/master/contributors/devel/security-release-process.md#product-security-team-pst). + +Any vulnerability information shared with Product Security Team stays within Kubernetes project and will not be disseminated to other projects unless it is necessary to get the issue fixed. + +As the security issue moves from triage, to identified fix, to release planning we will keep the reporter updated. + +## Public Disclosure Timing + +A public disclosure date is negotiated by the Kubernetes product security team and the bug submitter. We prefer to fully disclose the bug as soon as possible once a user mitigation is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for vendor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to a few weeks. As a basic default, we expect report date to disclosure date to be on the order of 7 days. The Kubernetes product security team holds the final say when setting a disclosure date. 
From 53d55320e778c1346c8b7654b21f2e9d0abaa41b Mon Sep 17 00:00:00 2001 From: Junaid Ali Date: Mon, 12 Dec 2016 22:03:46 +0500 Subject: [PATCH 04/24] Improving expose-intro.html - Fixed a typo Signed-off-by: Junaid Ali --- docs/tutorials/kubernetes-basics/expose-intro.html | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tutorials/kubernetes-basics/expose-intro.html b/docs/tutorials/kubernetes-basics/expose-intro.html index 524e051c68..4e65a16988 100644 --- a/docs/tutorials/kubernetes-basics/expose-intro.html +++ b/docs/tutorials/kubernetes-basics/expose-intro.html @@ -31,7 +31,7 @@

 This abstraction will allow us to expose Pods to traffic originating from outside the cluster. Services have their own unique cluster-private IP address and expose a port to receive traffic. If you choose to expose the service outside the cluster, the options are:

-    LoadBalancer - provides a public IP address (what you would typically use when you run Kubernetes on GKE or AWS)
+    LoadBalancer - provides a public IP address (what you would typically use when you run Kubernetes on GCE or AWS)
     NodePort - exposes the Service on the same port on each Node of the cluster using NAT (available on all Kubernetes clusters, and in Minikube)
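For readers trying out the options listed in this hunk, here is a minimal sketch of a `NodePort` Service; the name, `app: my-app` labels, and ports are hypothetical and not taken from the tutorial:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app               # hypothetical name
spec:
  type: NodePort             # use "LoadBalancer" instead on GCE/AWS to get a public IP
  selector:
    app: my-app              # must match the labels on the Pods to expose
  ports:
  - port: 80                 # port exposed on the cluster-private Service IP
    targetPort: 8080         # port the Pod's container listens on
```

Once created, the Service is reachable on the node port allocated for it on every node of the cluster, in addition to its cluster-private IP.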
From 61a60eec847124a5905d045e32f90c9ffe08af67 Mon Sep 17 00:00:00 2001 From: unisisdev Date: Fri, 16 Dec 2016 15:16:36 -0300 Subject: [PATCH 05/24] Described how deploy the Dashboard UI --- docs/user-guide/ui.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/docs/user-guide/ui.md b/docs/user-guide/ui.md index 32e3143b52..9316455d95 100644 --- a/docs/user-guide/ui.md +++ b/docs/user-guide/ui.md @@ -16,6 +16,14 @@ Dashboard also provides information on the state of Kubernetes resources in your * TOC {:toc} +## Deploying the Dashboard UI + +The Dashboard UI is not deployed by default. To deploy it please execute: + +``` +kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml +``` + ## Accessing the Dashboard UI There are multiple ways you can access the Dashboard UI; either by using the kubectl command-line interface, or by accessing the Kubernetes master apiserver using your web browser. From 938bb6843062fa14cd73335c975911ec83f77f50 Mon Sep 17 00:00:00 2001 From: devin-donnelly Date: Wed, 21 Dec 2016 16:34:35 -0800 Subject: [PATCH 06/24] Update ui.md --- docs/user-guide/ui.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/user-guide/ui.md b/docs/user-guide/ui.md index 9316455d95..1158108416 100644 --- a/docs/user-guide/ui.md +++ b/docs/user-guide/ui.md @@ -18,7 +18,7 @@ Dashboard also provides information on the state of Kubernetes resources in your ## Deploying the Dashboard UI -The Dashboard UI is not deployed by default. To deploy it please execute: +The Dashboard UI is not deployed by default. To deploy it, run the following command: ``` kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml From bdf6d77a18aebff26a28da9251665321733c82c4 Mon Sep 17 00:00:00 2001 From: Kyle Ibrahim Date: Wed, 21 Dec 2016 18:50:01 -0800 Subject: [PATCH 07/24] Fix typos in thirdpartyresources.md --- docs/user-guide/thirdpartyresources.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/user-guide/thirdpartyresources.md b/docs/user-guide/thirdpartyresources.md index e76c64e295..b6e608b3b9 100644 --- a/docs/user-guide/thirdpartyresources.md +++ b/docs/user-guide/thirdpartyresources.md @@ -34,9 +34,9 @@ $ kubectl explain thirdpartyresource ## Creating a ThirdPartyResource -When you user create a new `ThirdPartyResource`, the Kubernetes API Server reacts by creating a new, namespaced RESTful resource path. For now, non-namespaced objects are not supported. As with existing built-in objects, deleting a namespace deletes all custom objects in that namespace. `ThirdPartyResources` themselves are non-namespaced and are available to all namespaces. +When you create a new `ThirdPartyResource`, the Kubernetes API Server reacts by creating a new, namespaced RESTful resource path. For now, non-namespaced objects are not supported. As with existing built-in objects, deleting a namespace deletes all custom objects in that namespace. `ThirdPartyResources` themselves are non-namespaced and are available to all namespaces. -For example, if a save the following `ThirdPartyResource` to `resource.yaml`: +For example, if you save the following `ThirdPartyResource` to `resource.yaml`: ```yaml apiVersion: extensions/v1beta1 From 59759f5526837a649416cc4bcacc8c1ee10fd528 Mon Sep 17 00:00:00 2001 From: Xing Zhou Date: Tue, 13 Dec 2016 13:21:58 +0800 Subject: [PATCH 08/24] Update API server --apiserver-count description. 
A validation check is added for option --apiserver-count in kubernetes ticket #38143. As a result, update related description in doc. --- docs/admin/federation-apiserver.md | 2 +- docs/admin/kube-apiserver.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/admin/federation-apiserver.md b/docs/admin/federation-apiserver.md index 72d71547c7..9eb760d087 100644 --- a/docs/admin/federation-apiserver.md +++ b/docs/admin/federation-apiserver.md @@ -26,7 +26,7 @@ federation-apiserver --admission-control-config-file string File with admission control configuration. --advertise-address ip The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used. --anonymous-auth Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated. (default true) - --apiserver-count int The number of apiservers running in the cluster. (default 1) + --apiserver-count int The number of apiservers running in the cluster. Must be a positive number. (default 1) --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename. --audit-log-maxbackup int The maximum number of old audit log files to retain. --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated. Defaults to 100MB. diff --git a/docs/admin/kube-apiserver.md b/docs/admin/kube-apiserver.md index bc08ef1f0a..1e2c8a602e 100644 --- a/docs/admin/kube-apiserver.md +++ b/docs/admin/kube-apiserver.md @@ -27,7 +27,7 @@ kube-apiserver --advertise-address ip The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used. --allow-privileged If true, allow privileged containers. --anonymous-auth Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated. (default true) - --apiserver-count int The number of apiservers running in the cluster. (default 1) + --apiserver-count int The number of apiservers running in the cluster. Must be a positive number. (default 1) --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename. --audit-log-maxbackup int The maximum number of old audit log files to retain. --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated. Defaults to 100MB. From c6d6c1e6f9478f2e891b047933de4fa8ef9ee6cd Mon Sep 17 00:00:00 2001 From: Andrey Date: Thu, 22 Dec 2016 12:15:34 +0100 Subject: [PATCH 09/24] Target _blank was removed Serious guys! It is so much annoying - each link in new tab! If I would need it I would hold Ctrl pressed. 
--- _includes/tree.html | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/_includes/tree.html b/_includes/tree.html index 6387c171e7..4aeaeb0176 100644 --- a/_includes/tree.html +++ b/_includes/tree.html @@ -11,7 +11,6 @@ {% if item.path %} {% assign path = item.path %} {% assign title = item.title %} - {% assign target = " target='_blank'" %} {% else %} {% assign page = site.pages | where: "path", item | first %} {% assign title = page.title %} @@ -20,7 +19,7 @@ {% endcapture %} {% if path %} - + {% endif %} {% endif %} {% endfor %} From bbc441504211624359034cdd4fcc9b42ed02edc9 Mon Sep 17 00:00:00 2001 From: Michail Kargakis Date: Fri, 9 Dec 2016 19:42:15 +0100 Subject: [PATCH 10/24] Link sections that talk about deployment status --- docs/user-guide/deployments.md | 66 +++++++++++++++++++++++++--------- 1 file changed, 49 insertions(+), 17 deletions(-) diff --git a/docs/user-guide/deployments.md b/docs/user-guide/deployments.md index c53c1e19ae..6f222dbc40 100644 --- a/docs/user-guide/deployments.md +++ b/docs/user-guide/deployments.md @@ -86,24 +86,56 @@ After creating or updating a Deployment, you would want to confirm whether it su ```shell $ kubectl rollout status deployment/nginx-deployment -deployment nginx-deployment successfully rolled out +deployment "nginx-deployment" successfully rolled out ``` This verifies the Deployment's `.status.observedGeneration` >= `.metadata.generation`, and its up-to-date replicas -(`.status.updatedReplicas`) matches the desired replicas (`.spec.replicas`) to determine if the rollout succeeded. -If the rollout is still in progress, it watches for Deployment status changes and prints related messages. - -Note that it's impossible to know whether a Deployment will ever succeed, so if the above command doesn't return success, -you'll need to timeout and give up at some point. - -Additionally, if you set `.spec.minReadySeconds`, you would also want to check if the available replicas (`.status.availableReplicas`) matches the desired replicas too. +(`.status.updatedReplicas`) matches the desired replicas (`.spec.replicas`) to determine if the rollout succeeded. +It also expects that the available replicas running (`.spec.availableReplicas`) will be at least the minimum required +based on the Deployment strategy. If the rollout is still in progress, it watches for Deployment status changes and +prints related messages. ```shell -$ kubectl get deployments -NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE -nginx-deployment 3 3 3 3 20s +$ kubectl rollout status deployment/nginx-deployment +Waiting for rollout to finish: 2 out of 10 new replicas have been updated... +Waiting for rollout to finish: 2 out of 10 new replicas have been updated... +Waiting for rollout to finish: 2 out of 10 new replicas have been updated... +Waiting for rollout to finish: 3 out of 10 new replicas have been updated... +Waiting for rollout to finish: 3 out of 10 new replicas have been updated... +Waiting for rollout to finish: 4 out of 10 new replicas have been updated... +Waiting for rollout to finish: 4 out of 10 new replicas have been updated... +Waiting for rollout to finish: 4 out of 10 new replicas have been updated... +Waiting for rollout to finish: 4 out of 10 new replicas have been updated... +Waiting for rollout to finish: 4 out of 10 new replicas have been updated... +Waiting for rollout to finish: 5 out of 10 new replicas have been updated... +Waiting for rollout to finish: 5 out of 10 new replicas have been updated... 
+Waiting for rollout to finish: 5 out of 10 new replicas have been updated... +Waiting for rollout to finish: 5 out of 10 new replicas have been updated... +Waiting for rollout to finish: 6 out of 10 new replicas have been updated... +Waiting for rollout to finish: 6 out of 10 new replicas have been updated... +Waiting for rollout to finish: 6 out of 10 new replicas have been updated... +Waiting for rollout to finish: 6 out of 10 new replicas have been updated... +Waiting for rollout to finish: 6 out of 10 new replicas have been updated... +Waiting for rollout to finish: 7 out of 10 new replicas have been updated... +Waiting for rollout to finish: 7 out of 10 new replicas have been updated... +Waiting for rollout to finish: 7 out of 10 new replicas have been updated... +Waiting for rollout to finish: 7 out of 10 new replicas have been updated... +Waiting for rollout to finish: 8 out of 10 new replicas have been updated... +Waiting for rollout to finish: 8 out of 10 new replicas have been updated... +Waiting for rollout to finish: 8 out of 10 new replicas have been updated... +Waiting for rollout to finish: 9 out of 10 new replicas have been updated... +Waiting for rollout to finish: 9 out of 10 new replicas have been updated... +Waiting for rollout to finish: 9 out of 10 new replicas have been updated... +Waiting for rollout to finish: 1 old replicas are pending termination... +Waiting for rollout to finish: 1 old replicas are pending termination... +Waiting for rollout to finish: 1 old replicas are pending termination... +Waiting for rollout to finish: 9 of 10 updated replicas are available... +deployment "nginx-deployment" successfully rolled out ``` +For more information about the status of a Deployment [read more here](#deployment-status). + + ## Updating a Deployment **Note:** a Deployment's rollout is triggered if and only if the Deployment's pod template (i.e. `.spec.template`) is changed, @@ -129,7 +161,7 @@ To see its rollout status, simply run: ```shell $ kubectl rollout status deployment/nginx-deployment Waiting for rollout to finish: 2 out of 3 new replicas have been updated... -deployment nginx-deployment successfully rolled out +deployment "nginx-deployment" successfully rolled out ``` After the rollout succeeds, you may want to `get` the Deployment: @@ -244,12 +276,12 @@ deployment "nginx-deployment" image updated The rollout will be stuck. -``` +```shell $ kubectl rollout status deployments nginx-deployment Waiting for rollout to finish: 2 out of 3 new replicas have been updated... ``` -Press Ctrl-C to stop the above rollout status watch. +Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts, [read more here](#deployment-status). You will also see that both the number of old replicas (nginx-deployment-1564180365 and nginx-deployment-2035384211) and new replicas (nginx-deployment-3066724191) are 2. @@ -549,7 +581,7 @@ updates you've requested have been completed. You can check if a Deployment has completed by using `kubectl rollout status`. If the rollout completed successfully, `kubectl rollout status` returns a zero exit code. -``` +```shell $ kubectl rollout status deploy/nginx Waiting for rollout to finish: 2 of 3 updated replicas are available... deployment "nginx" successfully rolled out @@ -594,7 +626,7 @@ You may experience transient errors with your Deployments, either due to a low t of error that can be treated as transient. For example, let's suppose you have insufficient quota. 
If you describe the Deployment you will notice the following section: -``` +```shell $ kubectl describe deployment nginx-deployment <...> Conditions: @@ -667,7 +699,7 @@ required new replicas are available (see the Reason of the condition for the par You can check if a Deployment has failed to progress by using `kubectl rollout status`. `kubectl rollout status` returns a non-zero exit code if the Deployment has exceeded the progression deadline. -``` +```shell $ kubectl rollout status deploy/nginx Waiting for rollout to finish: 2 out of 3 new replicas have been updated... error: deployment "nginx" exceeded its progress deadline From 4c4959e63123fc9653529f6d8df21f8ef56606d5 Mon Sep 17 00:00:00 2001 From: Alejandro Escobar Date: Mon, 12 Dec 2016 08:57:29 -0800 Subject: [PATCH 11/24] changes to node.md for clarity since sections and subsections visually are that different in sizes and single line comment was not clear enough and looked incomplete, specially at first read. Added .idea/ directory in gitignore. removed change to .gitignore and pushing to a separate pr. suggested changes made. --- docs/admin/node.md | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/docs/admin/node.md b/docs/admin/node.md index a18aaf5ca7..d9d498ba0d 100644 --- a/docs/admin/node.md +++ b/docs/admin/node.md @@ -20,7 +20,15 @@ architecture design doc for more details. ## Node Status -A node's status is comprised of the following information. +A node's status contains the following information: + +* [Addresses](#Addresses) +* ~~[Phase](#Phase)~~ **deprecated** +* [Condition](#Condition) +* [Capacity](#Capacity) +* [Info](#Info) + +Each section is described in detail below. ### Addresses From 547a6d7b2ad839e8fe833ee5fef2e60fb4184154 Mon Sep 17 00:00:00 2001 From: Taylor Thomas Date: Tue, 29 Nov 2016 14:29:44 -0800 Subject: [PATCH 12/24] Clarifies lifecycle hook documentation --- docs/user-guide/container-environment.md | 25 ++++++++++++++++++++++-- 1 file changed, 23 insertions(+), 2 deletions(-) diff --git a/docs/user-guide/container-environment.md b/docs/user-guide/container-environment.md index f3996b2eb5..cf8cb037f7 100644 --- a/docs/user-guide/container-environment.md +++ b/docs/user-guide/container-environment.md @@ -60,7 +60,9 @@ This hook is called immediately before a container is terminated. No parameters ### Hook Handler Execution -When a management hook occurs, the management system calls into any registered hook handlers in the container for that hook.  These hook handler calls are synchronous in the context of the pod containing the container. Typically we expect that users will make their hook handlers as lightweight as possible, but there are cases where long running commands make sense (e.g. saving state prior to container stop). +When a management hook occurs, the management system calls into any registered hook handlers in the container for that hook.  These hook handler calls are synchronous in the context of the pod containing the container. This means that for a `PostStart` hook, the container entrypoint and hook will fire asynchronously. However, if the hook takes a while to run or hangs, the container will never reach a "running" state. The behavior is similar for a `PreStop` hook. If the hook hangs during execution, the Pod phase will stay in a "running" state and never reach "failed." If a `PostStart` or `PreStop` hook fails, it will kill the container. 
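As a minimal sketch of where these handlers live in a pod spec (the pod name, image, and commands are illustrative, not taken from this page), both hooks below use the `exec` handler type described later in this document:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo                 # hypothetical name
spec:
  containers:
  - name: main
    image: nginx                       # any long-running image
    lifecycle:
      postStart:
        exec:                          # runs inside the container right after it is created
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        exec:                          # runs inside the container just before it is terminated
          command: ["/bin/sh", "-c", "nginx -s quit"]
```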
+ +Typically we expect that users will make their hook handlers as lightweight as possible, but there are cases where long running commands make sense (e.g. saving state prior to container stop). ### Hook delivery guarantees @@ -81,4 +83,23 @@ Hook handlers are the way that hooks are surfaced to containers.  Containers ca * HTTP - Executes an HTTP request against a specific endpoint on the container. -[1]: http://man7.org/linux/man-pages/man2/gethostname.2.html \ No newline at end of file +[1]: http://man7.org/linux/man-pages/man2/gethostname.2.html + +### Debugging Hook Handlers + +Currently, the logs for a hook handler are not exposed in the pod events. If your handler fails for some reason, it will emit an event. For `PostStart`, this is the `FailedPostStartHook` event. For `PreStop` this is the `FailedPreStopHook` event. You can see these events by running `kubectl describe pod `. An example output of events from runing this command is below: + +``` +Events: + FirstSeen LastSeen Count From SubobjectPath Type Reason Message + --------- -------- ----- ---- ------------- -------- ------ ------- + 1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned test-1730497541-cq1d2 to gke-test-cluster-default-pool-a07e5d30-siqd + 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulling pulling image "test:1.0" + 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Created Created container with docker id 5c6a256a2567; Security:[seccomp=unconfined] + 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulled Successfully pulled image "test:1.0" + 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Started Started container with docker id 5c6a256a2567 + 38s 38s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 5c6a256a2567: PostStart handler: Error executing in Docker Container: 1 + 37s 37s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 8df9fdfd7054: PostStart handler: Error executing in Docker Container: 1 + 38s 37s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "main" with RunContainerError: "PostStart handler: Error executing in Docker Container: 1" + 1m 22s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Warning FailedPostStartHook +``` \ No newline at end of file From 0f96b1ed77d27a875b61b026edadfad0bc94ce8a Mon Sep 17 00:00:00 2001 From: Alejandro Escobar Date: Thu, 22 Dec 2016 10:05:00 -0800 Subject: [PATCH 13/24] fixed server.cert text to server.crt which is consistent with the rest of the document. --- docs/admin/authentication.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/admin/authentication.md b/docs/admin/authentication.md index 3ada61a5fd..4bbd0a4fee 100644 --- a/docs/admin/authentication.md +++ b/docs/admin/authentication.md @@ -444,7 +444,7 @@ The script will generate three files: `ca.crt`, `server.crt`, and `server.key`. 
Finally, add the following parameters into API server start parameters: - `--client-ca-file=/srv/kubernetes/ca.crt` -- `--tls-cert-file=/srv/kubernetes/server.cert` +- `--tls-cert-file=/srv/kubernetes/server.crt` - `--tls-private-key-file=/srv/kubernetes/server.key` #### easyrsa @@ -468,7 +468,7 @@ Finally, add the following parameters into API server start parameters: 1. Fill in and add the following parameters into the API server start parameters: --client-ca-file=/yourdirectory/ca.crt - --tls-cert-file=/yourdirectory/server.cert + --tls-cert-file=/yourdirectory/server.crt --tls-private-key-file=/yourdirectory/server.key #### openssl From f9d1cbc8fa00b01fbc35e843d3e5d89e402a96fa Mon Sep 17 00:00:00 2001 From: "Elijah C. Voigt" Date: Fri, 16 Dec 2016 15:51:48 -0800 Subject: [PATCH 14/24] Remove italics, correct CamelCase typos in titles --- docs/admin/daemons.md | 29 +++-- docs/admin/sysctls.md | 2 +- docs/user-guide/cron-jobs.md | 2 +- docs/user-guide/jobs.md | 2 +- docs/user-guide/pod-security-policy/index.md | 2 +- docs/user-guide/pods/index.md | 2 +- docs/user-guide/replicasets.md | 38 +++--- .../replication-controller/index.md | 108 +++++++++--------- 8 files changed, 92 insertions(+), 93 deletions(-) diff --git a/docs/admin/daemons.md b/docs/admin/daemons.md index 90637239b3..819636ba99 100644 --- a/docs/admin/daemons.md +++ b/docs/admin/daemons.md @@ -7,20 +7,20 @@ title: Daemon Sets * TOC {:toc} -## What is a Daemon Set? +## What is a DaemonSet? -A _Daemon Set_ ensures that all (or some) nodes run a copy of a pod. As nodes are added to the +A _DaemonSet_ ensures that all (or some) nodes run a copy of a pod. As nodes are added to the cluster, pods are added to them. As nodes are removed from the cluster, those pods are garbage -collected. Deleting a Daemon Set will clean up the pods it created. +collected. Deleting a DaemonSet will clean up the pods it created. -Some typical uses of a Daemon Set are: +Some typical uses of a DaemonSet are: - running a cluster storage daemon, such as `glusterd`, `ceph`, on each node. - running a logs collection daemon on every node, such as `fluentd` or `logstash`. - running a node monitoring daemon on every node, such as [Prometheus Node Exporter]( https://github.com/prometheus/node_exporter), `collectd`, New Relic agent, or Ganglia `gmond`. -In a simple case, one Daemon Set, covering all nodes, would be used for each type of daemon. +In a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon. A more complex setup might use multiple DaemonSets would be used for a single type of daemon, but with different flags and/or different memory and cpu requests for different hardware types. @@ -74,7 +74,7 @@ a node for testing. If you specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will create pods on nodes which match that [node -selector](/docs/user-guide/node-selection/). +selector](/docs/user-guide/node-selection/). If you specify a `scheduler.alpha.kubernetes.io/affinity` annotation in `.spec.template.metadata.annotations`, then DaemonSet controller will create pods on nodes which match that [node affinity](../../user-guide/node-selection/#alpha-feature-in-kubernetes-v12-node-affinity). @@ -88,18 +88,17 @@ created by the Daemon controller have the machine already selected (`.spec.nodeN when the pod is created, so it is ignored by the scheduler). Therefore: - the [`unschedulable`](/docs/admin/node/#manual-node-administration) field of a node is not respected - by the daemon set controller. 
- - daemon set controller can make pods even when the scheduler has not been started, which can help cluster + by the DaemonSet controller. + - DaemonSet controller can make pods even when the scheduler has not been started, which can help cluster bootstrap. ## Communicating with DaemonSet Pods Some possible patterns for communicating with pods in a DaemonSet are: -- **Push**: Pods in the Daemon Set are configured to send updates to another service, such +- **Push**: Pods in the DaemonSet are configured to send updates to another service, such as a stats database. They do not have clients. -- **NodeIP and Known Port**: Pods in the Daemon Set use a `hostPort`, so that the pods are reachable - via the node IPs. Clients knows the list of nodes ips somehow, and know the port by convention. +- **NodeIP and Known Port**: Pods in the DaemonSet use a `hostPort`, so that the pods are reachable via the node IPs. Clients know the list of nodes ips somehow, and know the port by convention. - **DNS**: Create a [headless service](/docs/user-guide/services/#headless-services) with the same pod selector, and then discover DaemonSets using the `endpoints` resource or retrieve multiple A records from DNS. @@ -126,7 +125,7 @@ You cannot update a DaemonSet. Support for updating DaemonSets and controlled updating of nodes is planned. -## Alternatives to Daemon Set +## Alternatives to DaemonSet ### Init Scripts @@ -145,9 +144,9 @@ running such processes via a DaemonSet: ### Bare Pods It is possible to create pods directly which specify a particular node to run on. However, -a Daemon Set replaces pods that are deleted or terminated for any reason, such as in the case of +a DaemonSet replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, you should -use a Daemon Set rather than creating individual pods. +use a DaemonSet rather than creating individual pods. ### Static Pods @@ -159,7 +158,7 @@ in cluster bootstrapping cases. Also, static pods may be deprecated in the futu ### Replication Controller -Daemon Set are similar to [Replication Controllers](/docs/user-guide/replication-controller) in that +DaemonSet are similar to [Replication Controllers](/docs/user-guide/replication-controller) in that they both create pods, and those pods have processes which are not expected to terminate (e.g. web servers, storage servers). diff --git a/docs/admin/sysctls.md b/docs/admin/sysctls.md index dc62b8c3d1..ff6829850e 100644 --- a/docs/admin/sysctls.md +++ b/docs/admin/sysctls.md @@ -9,7 +9,7 @@ assignees: This document describes how sysctls are used within a Kubernetes cluster. -## What is a _Sysctl_? +## What is a Sysctl? In Linux, the sysctl interface allows an administrator to modify kernel parameters at runtime. Parameters are available via the `/proc/sys/` virtual diff --git a/docs/user-guide/cron-jobs.md b/docs/user-guide/cron-jobs.md index 9124852a80..55b85adf46 100644 --- a/docs/user-guide/cron-jobs.md +++ b/docs/user-guide/cron-jobs.md @@ -9,7 +9,7 @@ title: Cron Jobs * TOC {:toc} -## What is a Cron Job? +## What is a cron job? A _Cron Job_ manages time based [Jobs](/docs/user-guide/jobs/), namely: diff --git a/docs/user-guide/jobs.md b/docs/user-guide/jobs.md index 0d71bc5e56..f5b7362039 100644 --- a/docs/user-guide/jobs.md +++ b/docs/user-guide/jobs.md @@ -8,7 +8,7 @@ title: Jobs * TOC {:toc} -## What is a job? +## What is a Job? 
A _job_ creates one or more pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the _job_ tracks the successful completions. When a specified number diff --git a/docs/user-guide/pod-security-policy/index.md b/docs/user-guide/pod-security-policy/index.md index c2de42162c..46db299311 100644 --- a/docs/user-guide/pod-security-policy/index.md +++ b/docs/user-guide/pod-security-policy/index.md @@ -6,7 +6,7 @@ title: Pod Security Policies Objects of type `podsecuritypolicy` govern the ability to make requests on a pod that affect the `SecurityContext` that will be -applied to a pod and container. +applied to a pod and container. See [PodSecurityPolicy proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/security-context-constraints.md) for more information. diff --git a/docs/user-guide/pods/index.md b/docs/user-guide/pods/index.md index b18ae485e3..6bea334dec 100644 --- a/docs/user-guide/pods/index.md +++ b/docs/user-guide/pods/index.md @@ -10,7 +10,7 @@ title: Pods _pods_ are the smallest deployable units of computing that can be created and managed in Kubernetes. -## What is a pod? +## What is a Pod? A _pod_ (as in a pod of whales or pea pod) is a group of one or more containers (such as Docker containers), the shared storage for those containers, and diff --git a/docs/user-guide/replicasets.md b/docs/user-guide/replicasets.md index f0aa08bf04..ea3e7bde14 100644 --- a/docs/user-guide/replicasets.md +++ b/docs/user-guide/replicasets.md @@ -9,17 +9,17 @@ title: Replica Sets * TOC {:toc} -## What is a Replica Set? +## What is a ReplicaSet? -Replica Set is the next-generation Replication Controller. The only difference -between a _Replica Set_ and a +ReplicaSet is the next-generation Replication Controller. The only difference +between a _ReplicaSet_ and a [_Replication Controller_](/docs/user-guide/replication-controller/) right now is -the selector support. Replica Set supports the new set-based selector requirements +the selector support. ReplicaSet supports the new set-based selector requirements as described in the [labels user guide](/docs/user-guide/labels/#label-selectors) whereas a Replication Controller only supports equality-based selector requirements. Most [`kubectl`](/docs/user-guide/kubectl/) commands that support -Replication Controllers also support Replica Sets. One exception is the +Replication Controllers also support ReplicaSets. One exception is the [`rolling-update`](/docs/user-guide/kubectl/kubectl_rolling-update/) command. If you want the rolling update functionality please consider using Deployments instead. Also, the @@ -27,21 +27,21 @@ instead. Also, the imperative whereas Deployments are declarative, so we recommend using Deployments through the [`rollout`](/docs/user-guide/kubectl/kubectl_rollout/) command. -While Replica Sets can be used independently, today it's mainly used by +While ReplicaSets can be used independently, today it's mainly used by [Deployments](/docs/user-guide/deployments/) as a mechanism to orchestrate pod creation, deletion and updates. When you use Deployments you don't have to worry -about managing the Replica Sets that they create. Deployments own and manage -their Replica Sets. +about managing the ReplicaSets that they create. Deployments own and manage +their ReplicaSets. -## When to use a Replica Set? +## When to use a ReplicaSet? -A Replica Set ensures that a specified number of pod “replicas” are running at any given -time. 
However, a Deployment is a higher-level concept that manages Replica Sets and +A ReplicaSet ensures that a specified number of pod “replicas” are running at any given +time. However, a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to pods along with a lot of other useful features. -Therefore, we recommend using Deployments instead of directly using Replica Sets, unless +Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don't require updates at all. -This actually means that you may never need to manipulate Replica Set objects: +This actually means that you may never need to manipulate ReplicaSet objects: use directly a Deployment and define your application in the spec section. ## Example @@ -49,7 +49,7 @@ use directly a Deployment and define your application in the spec section. {% include code.html language="yaml" file="replicasets/frontend.yaml" ghlink="/docs/user-guide/replicasets/frontend.yaml" %} Saving this config into `frontend.yaml` and submitting it to a Kubernetes cluster should -create the defined Replica Set and the pods that it manages. +create the defined ReplicaSet and the pods that it manages. ```shell $ kubectl create -f frontend.yaml @@ -76,18 +76,18 @@ frontend-dnjpy 1/1 Running 0 1m frontend-qhloh 1/1 Running 0 1m ``` -## Replica Set as an Horizontal Pod Autoscaler target +## ReplicaSet as an Horizontal Pod Autoscaler target -A Replica Set can also be a target for +A ReplicaSet can also be a target for [Horizontal Pod Autoscalers (HPA)](/docs/user-guide/horizontal-pod-autoscaling/), -i.e. a Replica Set can be auto-scaled by an HPA. Here is an example HPA targeting -the Replica Set we created in the previous example. +i.e. a ReplicaSet can be auto-scaled by an HPA. Here is an example HPA targeting +the ReplicaSet we created in the previous example. {% include code.html language="yaml" file="replicasets/hpa-rs.yaml" ghlink="/docs/user-guide/replicasets/hpa-rs.yaml" %} Saving this config into `hpa-rs.yaml` and submitting it to a Kubernetes cluster should -create the defined HPA that autoscales the target Replica Set depending on the CPU usage +create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated pods. ```shell diff --git a/docs/user-guide/replication-controller/index.md b/docs/user-guide/replication-controller/index.md index e69c55231b..3b91828535 100644 --- a/docs/user-guide/replication-controller/index.md +++ b/docs/user-guide/replication-controller/index.md @@ -8,30 +8,30 @@ title: Replication Controller * TOC {:toc} -## What is a replication controller? +## What is a ReplicationController? -A _replication controller_ ensures that a specified number of pod "replicas" are running at any one -time. In other words, a replication controller makes sure that a pod or homogeneous set of pods are +A _ReplicationController_ ensures that a specified number of pod "replicas" are running at any one +time. In other words, a ReplicationController makes sure that a pod or homogeneous set of pods are always up and available. If there are too many pods, it will kill some. If there are too few, the -replication controller will start more. Unlike manually created pods, the pods maintained by a -replication controller are automatically replaced if they fail, get deleted, or are terminated. +ReplicationController will start more. 
Unlike manually created pods, the pods maintained by a +ReplicationController are automatically replaced if they fail, get deleted, or are terminated. For example, your pods get re-created on a node after disruptive maintenance such as a kernel upgrade. -For this reason, we recommend that you use a replication controller even if your application requires -only a single pod. You can think of a replication controller as something similar to a process supervisor, -but rather than individual processes on a single node, the replication controller supervises multiple pods +For this reason, we recommend that you use a ReplicationController even if your application requires +only a single pod. You can think of a ReplicationController as something similar to a process supervisor, +but rather than individual processes on a single node, the ReplicationController supervises multiple pods across multiple nodes. -Replication Controller is often abbreviated to "rc" or "rcs" in discussion, and as a shortcut in +ReplicationController is often abbreviated to "rc" or "rcs" in discussion, and as a shortcut in kubectl commands. -A simple case is to create 1 Replication Controller object in order to reliably run one instance of +A simple case is to create 1 ReplicationController object in order to reliably run one instance of a Pod indefinitely. A more complex use case is to run several identical replicas of a replicated service, such as web servers. -## Running an example Replication Controller +## Running an example ReplicationController -Here is an example Replication Controller config. It runs 3 copies of the nginx web server. +Here is an example ReplicationController config. It runs 3 copies of the nginx web server. {% include code.html language="yaml" file="replication.yaml" ghlink="/docs/user-guide/replication.yaml" %} @@ -42,7 +42,7 @@ $ kubectl create -f ./replication.yaml replicationcontrollers/nginx ``` -Check on the status of the replication controller using this command: +Check on the status of the ReplicationController using this command: ```shell $ kubectl describe replicationcontrollers/nginx @@ -79,18 +79,18 @@ echo $pods nginx-3ntk0 nginx-4ok8v nginx-qrm3m ``` -Here, the selector is the same as the selector for the replication controller (seen in the +Here, the selector is the same as the selector for the ReplicationController (seen in the `kubectl describe` output, and in a different form in `replication.yaml`. The `--output=jsonpath` option specifies an expression that just gets the name from each pod in the returned list. -## Writing a Replication Controller Spec +## Writing a ReplicationController Spec As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see [here](/docs/user-guide/simple-yaml/), [here](/docs/user-guide/configuring-containers/), and [here](/docs/user-guide/working-with-resources/). -A Replication Controller also needs a [`.spec` section](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status). +A ReplicationController also needs a [`.spec` section](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status). ### Pod Template @@ -100,28 +100,28 @@ The `.spec.template` is a [pod template](#pod-template). It has exactly the same schema as a [pod](/docs/user-guide/pods/), except it is nested and does not have an `apiVersion` or `kind`. 
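A minimal sketch of that nesting (the `app: nginx` label is illustrative; compare the `replication.yaml` example included earlier on this page):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:                    # the pod template: no apiVersion or kind of its own
    metadata:
      labels:
        app: nginx             # should match .spec.selector
    spec:
      restartPolicy: Always    # the only restartPolicy allowed here (also the default)
      containers:
      - name: nginx
        image: nginx
```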
-In addition to required fields for a Pod, a pod template in a Replication Controller must specify appropriate +In addition to required fields for a Pod, a pod template in a ReplicationController must specify appropriate labels (i.e. don't overlap with other controllers, see [pod selector](#pod-selector)) and an appropriate restart policy. Only a [`.spec.template.spec.restartPolicy`](/docs/user-guide/pod-states/) equal to `Always` is allowed, which is the default if not specified. -For local container restarts, replication controllers delegate to an agent on the node, +For local container restarts, ReplicationControllers delegate to an agent on the node, for example the [Kubelet](/docs/admin/kubelet/) or Docker. -### Labels on the Replication Controller +### Labels on the ReplicationController -The replication controller can itself have labels (`.metadata.labels`). Typically, you +The ReplicationController can itself have labels (`.metadata.labels`). Typically, you would set these the same as the `.spec.template.metadata.labels`; if `.metadata.labels` is not specified then it is defaulted to `.spec.template.metadata.labels`. However, they are allowed to be -different, and the `.metadata.labels` do not affect the behavior of the replication controller. +different, and the `.metadata.labels` do not affect the behavior of the ReplicationController. ### Pod Selector The `.spec.selector` field is a [label selector](/docs/user-guide/labels/#label-selectors). A replication controller manages all the pods with labels which match the selector. It does not distinguish between pods which it created or deleted versus pods which some other person or process created or -deleted. This allows the replication controller to be replaced without affecting the running pods. +deleted. This allows the ReplicationController to be replaced without affecting the running pods. If specified, the `.spec.template.metadata.labels` must be equal to the `.spec.selector`, or it will be rejected by the API. If `.spec.selector` is unspecified, it will be defaulted to @@ -144,54 +144,54 @@ shutdown, and a replacement starts early. If you do not specify `.spec.replicas`, then it defaults to 1. -## Working with Replication Controllers +## Working with ReplicationControllers -### Deleting a Replication Controller and its Pods +### Deleting a ReplicationController and its Pods -To delete a replication controller and all its pods, use [`kubectl -delete`](/docs/user-guide/kubectl/kubectl_delete/). Kubectl will scale the replication controller to zero and wait -for it to delete each pod before deleting the replication controller itself. If this kubectl +To delete a ReplicationController and all its pods, use [`kubectl +delete`](/docs/user-guide/kubectl/kubectl_delete/). Kubectl will scale the ReplicationController to zero and wait +for it to delete each pod before deleting the ReplicationController itself. If this kubectl command is interrupted, it can be restarted. When using the REST API or go client library, you need to do the steps explicitly (scale replicas to -0, wait for pod deletions, then delete the replication controller). +0, wait for pod deletions, then delete the ReplicationController). -### Deleting just a Replication Controller +### Deleting just a ReplicationController -You can delete a replication controller without affecting any of its pods. +You can delete a ReplicationController without affecting any of its pods. 
Using kubectl, specify the `--cascade=false` option to [`kubectl delete`](/docs/user-guide/kubectl/kubectl_delete/). -When using the REST API or go client library, simply delete the replication controller object. +When using the REST API or go client library, simply delete the ReplicationController object. -Once the original is deleted, you can create a new replication controller to replace it. As long +Once the original is deleted, you can create a new ReplicationController to replace it. As long as the old and new `.spec.selector` are the same, then the new one will adopt the old pods. However, it will not make any effort to make existing pods match a new, different pod template. To update pods to a new spec in a controlled way, use a [rolling update](#rolling-updates). -### Isolating pods from a Replication Controller +### Isolating pods from a ReplicationController -Pods may be removed from a replication controller's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed). +Pods may be removed from a ReplicationController's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed). ## Common usage patterns ### Rescheduling -As mentioned above, whether you have 1 pod you want to keep running, or 1000, a replication controller will ensure that the specified number of pods exists, even in the event of node failure or pod termination (e.g., due to an action by another control agent). +As mentioned above, whether you have 1 pod you want to keep running, or 1000, a ReplicationController will ensure that the specified number of pods exists, even in the event of node failure or pod termination (e.g., due to an action by another control agent). ### Scaling -The replication controller makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the `replicas` field. +The ReplicationController makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the `replicas` field. ### Rolling updates -The replication controller is designed to facilitate rolling updates to a service by replacing pods one-by-one. +The ReplicationController is designed to facilitate rolling updates to a service by replacing pods one-by-one. -As explained in [#1353](http://issue.k8s.io/1353), the recommended approach is to create a new replication controller with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures. +As explained in [#1353](http://issue.k8s.io/1353), the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures. Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time. 
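As a rough sketch of the replacement controller in this pattern (the names, labels, and image tag are hypothetical), note the extra label that keeps its selector from overlapping with the old controller's:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app-v2              # new controller, started with a single replica
spec:
  replicas: 1
  selector:
    app: my-app
    deployment: v2             # differentiating label, absent from the old controller
  template:
    metadata:
      labels:
        app: my-app
        deployment: v2
    spec:
      containers:
      - name: my-app
        image: my-app:2.0      # the image update that motivates the rolling update
```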
-The two replication controllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates. +The two ReplicationControllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates. Rolling update is implemented in the client tool [`kubectl rolling-update`](/docs/user-guide/kubectl/kubectl_rolling-update). Visit [`kubectl rolling-update` tutorial](/docs/user-guide/rolling-updates/) for more concrete examples. @@ -200,26 +200,26 @@ Rolling update is implemented in the client tool In addition to running multiple releases of an application while a rolling update is in progress, it's common to run multiple releases for an extended period of time, or even continuously, using multiple release tracks. The tracks would be differentiated by labels. -For instance, a service might target all pods with `tier in (frontend), environment in (prod)`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a replication controller with `replicas` set to 9 for the bulk of the replicas, with labels `tier=frontend, environment=prod, track=stable`, and another replication controller with `replicas` set to 1 for the canary, with labels `tier=frontend, environment=prod, track=canary`. Now the service is covering both the canary and non-canary pods. But you can mess with the replication controllers separately to test things out, monitor the results, etc. +For instance, a service might target all pods with `tier in (frontend), environment in (prod)`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a ReplicationController with `replicas` set to 9 for the bulk of the replicas, with labels `tier=frontend, environment=prod, track=stable`, and another ReplicationController with `replicas` set to 1 for the canary, with labels `tier=frontend, environment=prod, track=canary`. Now the service is covering both the canary and non-canary pods. But you can mess with the ReplicationControllers separately to test things out, monitor the results, etc. -### Using Replication Controllers with Services +### Using ReplicationControllers with Services -Multiple replication controllers can sit behind a single service, so that, for example, some traffic +Multiple ReplicationControllers can sit behind a single service, so that, for example, some traffic goes to the old version, and some goes to the new version. -A replication controller will never terminate on its own, but it isn't expected to be as long-lived as services. Services may be composed of pods controlled by multiple replication controllers, and it is expected that many replication controllers may be created and destroyed over the lifetime of a service (for instance, to perform an update of pods that run the service). Both services themselves and their clients should remain oblivious to the replication controllers that maintain the pods of the services. +A ReplicationController will never terminate on its own, but it isn't expected to be as long-lived as services. 
Services may be composed of pods controlled by multiple ReplicationControllers, and it is expected that many ReplicationControllers may be created and destroyed over the lifetime of a service (for instance, to perform an update of pods that run the service). Both services themselves and their clients should remain oblivious to the ReplicationControllers that maintain the pods of the services. ## Writing programs for Replication -Pods created by a replication controller are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but replication controllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the [etcd lock module](https://coreos.com/docs/distributed-configuration/etcd-modules/) or [RabbitMQ work queues](https://www.rabbitmq.com/tutorials/tutorial-two-python.html), as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (e.g., cpu or memory), should be performed by another online controller process, not unlike the replication controller itself. +Pods created by a ReplicationController are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but ReplicationControllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the [etcd lock module](https://coreos.com/docs/distributed-configuration/etcd-modules/) or [RabbitMQ work queues](https://www.rabbitmq.com/tutorials/tutorial-two-python.html), as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (e.g., cpu or memory), should be performed by another online controller process, not unlike the ReplicationController itself. -## Responsibilities of the replication controller +## Responsibilities of the ReplicationController -The replication controller simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](http://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies. +The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](http://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies. -The replication controller is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. 
Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](http://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (e.g., [spreading](http://issue.k8s.io/367#issuecomment-48428019)) to the replication controller. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](http://issue.k8s.io/170)). +The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](http://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (e.g., [spreading](http://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](http://issue.k8s.io/170)). -The replication controller is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, stop, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing replication controllers, auto-scalers, services, scheduling policies, canaries, etc. +The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, stop, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc. ## API Object @@ -228,11 +228,11 @@ Replication controller is a top-level resource in the kubernetes REST API. More API object can be found at: [ReplicationController API object](/docs/api-reference/v1/definitions/#_v1_replicationcontroller). -## Alternatives to Replication Controller +## Alternatives to ReplicationController ### ReplicaSet -[`ReplicaSet`](/docs/user-guide/replicasets/) is the next-generation Replication Controller that supports the new [set-based label selector](/docs/user-guide/labels/#set-based-requirement). +[`ReplicaSet`](/docs/user-guide/replicasets/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/user-guide/labels/#set-based-requirement). It’s mainly used by [`Deployment`](/docs/user-guide/deployments/) as a mechanism to orchestrate pod creation, deletion and updates. 
Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don’t require updates at all. @@ -244,20 +244,20 @@ because unlike `kubectl rolling-update`, they are declarative, server-side, and ### Bare Pods -Unlike in the case where a user directly created pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a replication controller even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A replication controller delegates local container restarts to some agent on the node (e.g., Kubelet or Docker). +Unlike in the case where a user directly created pods, a ReplicationController replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicationController even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A ReplicationController delegates local container restarts to some agent on the node (e.g., Kubelet or Docker). ### Job -Use a [`Job`](/docs/user-guide/jobs/) instead of a replication controller for pods that are expected to terminate on their own +Use a [`Job`](/docs/user-guide/jobs/) instead of a ReplicationController for pods that are expected to terminate on their own (i.e. batch jobs). ### DaemonSet -Use a [`DaemonSet`](/docs/admin/daemons/) instead of a replication controller for pods that provide a +Use a [`DaemonSet`](/docs/admin/daemons/) instead of a ReplicationController for pods that provide a machine-level function, such as machine monitoring or machine logging. These pods have a lifetime that is tied to a machine lifetime: the pod needs to be running on the machine before other pods start, and are safe to terminate when the machine is otherwise ready to be rebooted/shutdown. ## For more information -Read [Replication Controller Operations](/docs/user-guide/replication-controller/operations/). +Read [ReplicationController Operations](/docs/user-guide/replication-controller/operations/). 
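For reference, a minimal sketch of the kind of ReplicationController manifest discussed throughout this page — the name, labels, and image below are illustrative assumptions, not values taken from the page:

```shell
# Create a three-replica ReplicationController from an inline manifest.
# All field values here are placeholders chosen for illustration.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
EOF
```

Deleting such a controller with `--cascade=false` would leave its pods behind for a replacement controller with the same selector to adopt, as described above.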
From fbf57b224f9ba36cd9daeb22577aa65894596cbb Mon Sep 17 00:00:00 2001 From: yanan Lee Date: Thu, 22 Dec 2016 17:14:39 +0800 Subject: [PATCH 15/24] spelling error Signed-off-by: yanan Lee Incorrect spelling Signed-off-by: yanan Lee spelling error Signed-off-by: yanan Lee Incorrect spelling Signed-off-by: yanan Lee Revert "Incorrect spelling" fix some typos Signed-off-by: Jie Luo fix a typo Signed-off-by: Jie Luo fix a typo Signed-off-by: Jie Luo --- docs/admin/accessing-the-api.md | 2 +- docs/admin/authorization.md | 2 +- docs/admin/cluster-management.md | 2 +- docs/admin/dns.md | 2 +- docs/admin/kubeadm.md | 2 +- docs/admin/kubelet-tls-bootstrapping.md | 4 ++-- docs/admin/kubelet.md | 4 ++-- docs/admin/limitrange/index.md | 2 +- docs/admin/out-of-resource.md | 2 +- docs/admin/rescheduler.md | 2 +- docs/getting-started-guides/clc.md | 2 +- docs/getting-started-guides/network-policy/calico.md | 2 +- docs/getting-started-guides/network-policy/weave.md | 2 +- docs/getting-started-guides/ubuntu/automated.md | 2 +- docs/getting-started-guides/ubuntu/manual.md | 2 +- .../load-balance-access-application-cluster.md | 2 +- .../stateless-application/expose-external-ip-address.md | 2 +- docs/user-guide/connecting-applications.md | 2 +- docs/user-guide/deployments.md | 4 ++-- docs/user-guide/jobs.md | 2 +- docs/user-guide/kubectl/kubectl_drain.md | 2 +- docs/user-guide/persistent-volumes/index.md | 2 +- 22 files changed, 25 insertions(+), 25 deletions(-) diff --git a/docs/admin/accessing-the-api.md b/docs/admin/accessing-the-api.md index c8f239969f..5a57db23ce 100644 --- a/docs/admin/accessing-the-api.md +++ b/docs/admin/accessing-the-api.md @@ -24,7 +24,7 @@ following diagram: In a typical Kubernetes cluster, the API served on port 443. A TLS connection is established. The API server presents a certificate. This certificate is often self-signed, so `$USER/.kube/config` on the user's machine typically -contains the root certficate for the API server's certificate, which when specified +contains the root certificate for the API server's certificate, which when specified is used in place of the system default root certificates. This certificate is typically automatically written into your `$USER/.kube/config` when you create a cluster yourself using `kube-up.sh`. If the cluster has multiple users, then the creator needs to share diff --git a/docs/admin/authorization.md b/docs/admin/authorization.md index 523dd256d9..caae123f14 100644 --- a/docs/admin/authorization.md +++ b/docs/admin/authorization.md @@ -330,7 +330,7 @@ roleRef: Finally a `ClusterRoleBinding` may be used to grant permissions in all namespaces. The following `ClusterRoleBinding` allows any user in the group -"manager" to read secrets in any namepsace. +"manager" to read secrets in any namespace. ```yaml # This cluster role binding allows anyone in the "manager" group to read secrets in any namespace. diff --git a/docs/admin/cluster-management.md b/docs/admin/cluster-management.md index b1c4c340a3..eea8b3f228 100644 --- a/docs/admin/cluster-management.md +++ b/docs/admin/cluster-management.md @@ -92,7 +92,7 @@ an extended period of time (10min but it may change in the future). Cluster autoscaler is configured per instance group (GCE) or node pool (GKE). If you are using GCE then you can either enable it while creating a cluster with kube-up.sh script. 
-To configure cluser autoscaler you have to set 3 environment variables: +To configure cluster autoscaler you have to set 3 environment variables: * `KUBE_ENABLE_CLUSTER_AUTOSCALER` - it enables cluster autoscaler if set to true. * `KUBE_AUTOSCALER_MIN_NODES` - minimum number of nodes in the cluster. diff --git a/docs/admin/dns.md b/docs/admin/dns.md index f9514a50bf..f7536249f4 100644 --- a/docs/admin/dns.md +++ b/docs/admin/dns.md @@ -77,7 +77,7 @@ For example, a pod with ip `1.2.3.4` in the namespace `default` with a DNS name Currently when a pod is created, its hostname is the Pod's `metadata.name` value. With v1.2, users can specify a Pod annotation, `pod.beta.kubernetes.io/hostname`, to specify what the Pod's hostname should be. -The Pod annotation, if specified, takes precendence over the Pod's name, to be the hostname of the pod. +The Pod annotation, if specified, takes precedence over the Pod's name, to be the hostname of the pod. For example, given a Pod with annotation `pod.beta.kubernetes.io/hostname: my-pod-name`, the Pod will have its hostname set to "my-pod-name". With v1.3, the PodSpec has a `hostname` field, which can be used to specify the Pod's hostname. This field value takes precedence over the diff --git a/docs/admin/kubeadm.md b/docs/admin/kubeadm.md index 9ecabe8b7a..71095d0577 100644 --- a/docs/admin/kubeadm.md +++ b/docs/admin/kubeadm.md @@ -242,7 +242,7 @@ Once the cluster is up, you can grab the admin credentials from the master node ## Environment variables There are some environment variables that modify the way that `kubeadm` works. Most users will have no need to set these. -These enviroment variables are a short-term solution, eventually they will be integrated in the kubeadm configuration file. +These environment variables are a short-term solution, eventually they will be integrated in the kubeadm configuration file. | Variable | Default | Description | | --- | --- | --- | diff --git a/docs/admin/kubelet-tls-bootstrapping.md b/docs/admin/kubelet-tls-bootstrapping.md index f8d56923ee..0dfc4bbf55 100644 --- a/docs/admin/kubelet-tls-bootstrapping.md +++ b/docs/admin/kubelet-tls-bootstrapping.md @@ -9,7 +9,7 @@ title: TLS bootstrapping ## Overview -This document describes how to set up TLS client certificate boostrapping for kubelets. +This document describes how to set up TLS client certificate bootstrapping for kubelets. Kubernetes 1.4 introduces an experimental API for requesting certificates from a cluster-level Certificate Authority (CA). The first supported use of this API is the provisioning of TLS client certificates for kubelets. The proposal can be found [here](https://github.com/kubernetes/kubernetes/pull/20439) @@ -17,7 +17,7 @@ and progress on the feature is being tracked as [feature #43](https://github.com ## apiserver configuration -You must provide a token file which specifies at least one "bootstrap token" assigned to a kubelet boostrap-specific group. +You must provide a token file which specifies at least one "bootstrap token" assigned to a kubelet bootstrap-specific group. This group will later be used in the controller-manager configuration to scope approvals in the default approval controller. As this feature matures, you should ensure tokens are bound to an RBAC policy which limits requests using the bootstrap token to only be able to make requests related to certificate provisioning. 
When RBAC policy diff --git a/docs/admin/kubelet.md b/docs/admin/kubelet.md index 342189ba94..28365fbf97 100644 --- a/docs/admin/kubelet.md +++ b/docs/admin/kubelet.md @@ -78,9 +78,9 @@ kubelet --experimental-allowed-unsafe-sysctls stringSlice Comma-separated whitelist of unsafe sysctls or unsafe sysctl patterns (ending in *). Use these at your own risk. --experimental-bootstrap-kubeconfig string Path to a kubeconfig file that will be used to get client certificate for kubelet. If the file specified by --kubeconfig does not exist, the bootstrap kubeconfig is used to request a client certificate from the API server. On success, a kubeconfig file referencing the generated key and obtained certificate is written to the path specified by --kubeconfig. The certificate and key file will be stored in the directory pointed by --cert-dir. --experimental-cgroups-per-qos Enable creation of QoS cgroup hierarchy, if true top level QoS and pod cgroups are created. - --experimental-check-node-capabilities-before-mount [Experimental] if set true, the kubelet will check the underlying node for required componenets (binaries, etc.) before performing the mount + --experimental-check-node-capabilities-before-mount [Experimental] if set true, the kubelet will check the underlying node for required components (binaries, etc.) before performing the mount --experimental-cri [Experimental] Enable the Container Runtime Interface (CRI) integration. If --container-runtime is set to "remote", Kubelet will communicate with the runtime/image CRI server listening on the endpoint specified by --remote-runtime-endpoint/--remote-image-endpoint. If --container-runtime is set to "docker", Kubelet will launch a in-process CRI server on behalf of docker, and communicate over a default endpoint. - --experimental-fail-swap-on Makes the Kubelet fail to start if swap is enabled on the node. This is a temporary opton to maintain legacy behavior, failing due to swap enabled will happen by default in v1.6. + --experimental-fail-swap-on Makes the Kubelet fail to start if swap is enabled on the node. This is a temporary option to maintain legacy behavior, failing due to swap enabled will happen by default in v1.6. --experimental-kernel-memcg-notification If enabled, the kubelet will integrate with the kernel memcg notification to determine if memory eviction thresholds are crossed rather than polling. --experimental-mounter-path string [Experimental] Path of mounter binary. Leave empty to use the default mount. --experimental-nvidia-gpus int32 Number of NVIDIA GPU devices on this node. Only 0 (default) and 1 are currently supported. diff --git a/docs/admin/limitrange/index.md b/docs/admin/limitrange/index.md index 767513a1a3..2241cbb140 100644 --- a/docs/admin/limitrange/index.md +++ b/docs/admin/limitrange/index.md @@ -184,7 +184,7 @@ Note that this pod specifies explicit resource *limits* and *requests* so it did default values. Note: The *limits* for CPU resource are enforced in the default Kubernetes setup on the physical node -that runs the container unless the administrator deploys the kubelet with the folllowing flag: +that runs the container unless the administrator deploys the kubelet with the following flag: ```shell $ kubelet --help diff --git a/docs/admin/out-of-resource.md b/docs/admin/out-of-resource.md index 0fa6f3942c..30b8744624 100644 --- a/docs/admin/out-of-resource.md +++ b/docs/admin/out-of-resource.md @@ -330,7 +330,7 @@ for eviction. Instead `DaemonSet` should ideally launch `Guaranteed` pods. 
`kubelet` has been freeing up disk space on demand to keep the node stable. As disk based eviction matures, the following `kubelet` flags will be marked for deprecation -in favor of the simpler configuation supported around eviction. +in favor of the simpler configuration supported around eviction. | Existing Flag | New Flag | | ------------- | -------- | diff --git a/docs/admin/rescheduler.md b/docs/admin/rescheduler.md index c9a3bd074c..27c512bff9 100644 --- a/docs/admin/rescheduler.md +++ b/docs/admin/rescheduler.md @@ -50,7 +50,7 @@ It's enabled by default. It can be disabled: ### Marking add-on as critical -To be critical an add-on has to run in `kube-system` namespace (cofigurable via flag) +To be critical an add-on has to run in `kube-system` namespace (configurable via flag) and have the following annotations specified: * `scheduler.alpha.kubernetes.io/critical-pod` set to empty string diff --git a/docs/getting-started-guides/clc.md b/docs/getting-started-guides/clc.md index ab45ffbf8f..121b6c1030 100644 --- a/docs/getting-started-guides/clc.md +++ b/docs/getting-started-guides/clc.md @@ -207,7 +207,7 @@ Create a cluster with name of k8s_3, 1 master node, and 10 worker minions (on VM ## Cluster Features and Architecture -We configue the Kubernetes cluster with the following features: +We configure the Kubernetes cluster with the following features: * KubeDNS: DNS resolution and service discovery * Heapster/InfluxDB: For metric collection. Needed for Grafana and auto-scaling. diff --git a/docs/getting-started-guides/network-policy/calico.md b/docs/getting-started-guides/network-policy/calico.md index a411fba163..47516756e5 100644 --- a/docs/getting-started-guides/network-policy/calico.md +++ b/docs/getting-started-guides/network-policy/calico.md @@ -31,4 +31,4 @@ There are two main components to be aware of: - One `calico-node` Pod runs on each node in your cluster, and enforces network policy on the traffic to/from Pods on that machine by configuring iptables. - The `calico-policy-controller` Pod reads policy and label information from the Kubernetes API and configures Calico appropriately. -Once your cluster is running, you can follow the [NetworkPolicy gettting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy. +Once your cluster is running, you can follow the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy. diff --git a/docs/getting-started-guides/network-policy/weave.md b/docs/getting-started-guides/network-policy/weave.md index 8d5896861d..8fd1a072d7 100644 --- a/docs/getting-started-guides/network-policy/weave.md +++ b/docs/getting-started-guides/network-policy/weave.md @@ -8,4 +8,4 @@ The [Weave Net Addon](https://www.weave.works/docs/net/latest/kube-addon/) for K This component automatically monitors Kubernetes for any NetworkPolicy annotations on all namespaces, and configures `iptables` rules to allow or block traffic as directed by the policies. -Once you have installed the Weave Net Addon you can follow the [NetworkPolicy gettting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy. +Once you have installed the Weave Net Addon you can follow the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy. 
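After installing either of the policy addons above, a quick, hedged sanity check might look like the following; the namespace is an assumption and the exact addon pod names vary by installer:

```shell
# Confirm the addon pods came up and that the NetworkPolicy resource is being served.
kubectl get pods --namespace=kube-system
kubectl get networkpolicies --all-namespaces
```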
diff --git a/docs/getting-started-guides/ubuntu/automated.md b/docs/getting-started-guides/ubuntu/automated.md index 21c25e1c3c..6a1e905bf8 100644 --- a/docs/getting-started-guides/ubuntu/automated.md +++ b/docs/getting-started-guides/ubuntu/automated.md @@ -93,7 +93,7 @@ Note that each controller can host multiple Kubernetes clusters in a given cloud ## Launch a Kubernetes cluster -The following command will deploy the intial 12-node starter cluster. The speed of execution is very dependent of the performance of the cloud you're deploying to, but +The following command will deploy the initial 12-node starter cluster. The speed of execution is very dependent of the performance of the cloud you're deploying to, but ```shell juju deploy canonical-kubernetes diff --git a/docs/getting-started-guides/ubuntu/manual.md b/docs/getting-started-guides/ubuntu/manual.md index e5a849d909..c2566aaf33 100644 --- a/docs/getting-started-guides/ubuntu/manual.md +++ b/docs/getting-started-guides/ubuntu/manual.md @@ -122,7 +122,7 @@ through `FLANNEL_BACKEND` and `FLANNEL_OTHER_NET_CONFIG`, as explained in `clust The default setting for `ADMISSION_CONTROL` is right for the latest release of Kubernetes, but if you choose an earlier release then you might want a different setting. See -[the admisson control doc](http://kubernetes.io/docs/admin/admission-controllers/#is-there-a-recommended-set-of-plug-ins-to-use) +[the admission control doc](http://kubernetes.io/docs/admin/admission-controllers/#is-there-a-recommended-set-of-plug-ins-to-use) for the recommended settings for various releases. **Note:** When deploying, master needs to be connected to the Internet to download the necessary files. diff --git a/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md b/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md index 259988aa15..3fc08562d0 100644 --- a/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md +++ b/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md @@ -52,7 +52,7 @@ load-balanced access to an application running in a cluster. NAME DESIRED CURRENT AGE hello-world-2189936611 2 2 12m -1. Create a Serivice object that exposes the replica set: +1. Create a Service object that exposes the replica set: kubectl expose rs --type="LoadBalancer" --name="example-service" diff --git a/docs/tutorials/stateless-application/expose-external-ip-address.md b/docs/tutorials/stateless-application/expose-external-ip-address.md index 2d2e28d594..f21abf7e61 100644 --- a/docs/tutorials/stateless-application/expose-external-ip-address.md +++ b/docs/tutorials/stateless-application/expose-external-ip-address.md @@ -4,7 +4,7 @@ title: Exposing an External IP Address to Access an Application in a Cluster {% capture overview %} -This page shows how to create a Kubernetes Service object that exposees an +This page shows how to create a Kubernetes Service object that exposes an external IP address. 
{% endcapture %} diff --git a/docs/user-guide/connecting-applications.md b/docs/user-guide/connecting-applications.md index 95d365bdb1..c0cb825a3b 100644 --- a/docs/user-guide/connecting-applications.md +++ b/docs/user-guide/connecting-applications.md @@ -295,7 +295,7 @@ LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.el Kubernetes also supports Federated Services, which can span multiple clusters and cloud providers, to provide increased availability, -bettern fault tolerance and greater scalability for your services. See +better fault tolerance and greater scalability for your services. See the [Federated Services User Guide](/docs/user-guide/federation/federated-services/) for further information. diff --git a/docs/user-guide/deployments.md b/docs/user-guide/deployments.md index c53c1e19ae..1de58c8fda 100644 --- a/docs/user-guide/deployments.md +++ b/docs/user-guide/deployments.md @@ -413,7 +413,7 @@ $ kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent= deployment "nginx-deployment" autoscaled ``` -RollingUpdate Deployments support running multitple versions of an application at the same time. When you +RollingUpdate Deployments support running multiple versions of an application at the same time. When you or an autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (either in progress or paused), then the Deployment controller will balance the additional replicas in the existing active ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called *proportional scaling*. @@ -568,7 +568,7 @@ Your Deployment may get stuck trying to deploy its newest ReplicaSet without eve * Limit ranges * Application runtime misconfiguration -One way you can detect this condition is to specify specify a deadline parameter in your Deployment spec: ([`spec.progressDeadlineSeconds`](#progress-deadline-seconds)). `spec.progressDeadlineSeconds` denotes the number of seconds the Deployment controller waits before indicating (via the Deployment status) that the Deployment progress has stalled. +One way you can detect this condition is to specify a deadline parameter in your Deployment spec: ([`spec.progressDeadlineSeconds`](#progress-deadline-seconds)). `spec.progressDeadlineSeconds` denotes the number of seconds the Deployment controller waits before indicating (via the Deployment status) that the Deployment progress has stalled. The following `kubectl` command sets the spec with `progressDeadlineSeconds` to make the controller report lack of progress for a Deployment after 10 minutes: diff --git a/docs/user-guide/jobs.md b/docs/user-guide/jobs.md index 0d71bc5e56..3d665eaa7f 100644 --- a/docs/user-guide/jobs.md +++ b/docs/user-guide/jobs.md @@ -166,7 +166,7 @@ parallelism, for a variety or reasons: - If the controller failed to create pods for any reason (lack of ResourceQuota, lack of permission, etc.), then there may be fewer pods than requested. - The controller may throttle new pod creation due to excessive previous pod failures in the same Job. -- When a pod is gracefully shutdown, it make take time to stop. +- When a pod is gracefully shutdown, it takes time to stop. 
## Handling Pod and Container Failures diff --git a/docs/user-guide/kubectl/kubectl_drain.md b/docs/user-guide/kubectl/kubectl_drain.md index 712af40af7..b6eba48f59 100644 --- a/docs/user-guide/kubectl/kubectl_drain.md +++ b/docs/user-guide/kubectl/kubectl_drain.md @@ -11,7 +11,7 @@ Drain node in preparation for maintenance Drain node in preparation for maintenance. -The given node will be marked unschedulable to prevent new pods from arriving. 'drain' evicts the pods if the APIServer supports eviciton (http://kubernetes.io/docs/admin/disruptions/). Otherwise, it will use normal DELETE to delete the pods. The 'drain' evicts or deletes all pods except mirror pods (which cannot be deleted through the API server). If there are DaemonSet-managed pods, drain will not proceed without --ignore-daemonsets, and regardless it will not delete any DaemonSet-managed pods, because those pods would be immediately replaced by the DaemonSet controller, which ignores unschedulable markings. If there are any pods that are neither mirror pods nor managed by ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job, then drain will not delete any pods unless you use --force. +The given node will be marked unschedulable to prevent new pods from arriving. 'drain' evicts the pods if the APIServer supports eviction (http://kubernetes.io/docs/admin/disruptions/). Otherwise, it will use normal DELETE to delete the pods. The 'drain' evicts or deletes all pods except mirror pods (which cannot be deleted through the API server). If there are DaemonSet-managed pods, drain will not proceed without --ignore-daemonsets, and regardless it will not delete any DaemonSet-managed pods, because those pods would be immediately replaced by the DaemonSet controller, which ignores unschedulable markings. If there are any pods that are neither mirror pods nor managed by ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job, then drain will not delete any pods unless you use --force. 'drain' waits for graceful termination. You should not operate on the machine until the command completes. diff --git a/docs/user-guide/persistent-volumes/index.md b/docs/user-guide/persistent-volumes/index.md index 8f9703000c..d16cecd208 100644 --- a/docs/user-guide/persistent-volumes/index.md +++ b/docs/user-guide/persistent-volumes/index.md @@ -497,7 +497,7 @@ parameters: ``` * `quobyteAPIServer`: API Server of Quobyte in the format `http(s)://api-server:7860` -* `registry`: Quobyte registry to use to mount the volume. You can specifiy the registry as ``:`` pair or if you want to specify multiple registries you just have to put a comma between them e.q. ``:,:,:``. The host can be an IP address or if you have a working DNS you can also provide the DNS names. +* `registry`: Quobyte registry to use to mount the volume. You can specify the registry as ``:`` pair or if you want to specify multiple registries you just have to put a comma between them e.q. ``:,:,:``. The host can be an IP address or if you have a working DNS you can also provide the DNS names. * `adminSecretNamespace`: The namespace for `adminSecretName`. Default is "default". * `adminSecretName`: secret that holds information about the Quobyte user and the password to authenticate agains the API server. The provided secret must have type "kubernetes.io/quobyte", e.g. 
created in this way: ``` From 5bcc66a16a81078766e1ef94dcd4b1321630a71c Mon Sep 17 00:00:00 2001 From: devin-donnelly Date: Thu, 22 Dec 2016 18:39:52 -0800 Subject: [PATCH 16/24] Update expose-intro.html --- docs/tutorials/kubernetes-basics/expose-intro.html | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tutorials/kubernetes-basics/expose-intro.html b/docs/tutorials/kubernetes-basics/expose-intro.html index 4e65a16988..cd40b22f8b 100644 --- a/docs/tutorials/kubernetes-basics/expose-intro.html +++ b/docs/tutorials/kubernetes-basics/expose-intro.html @@ -31,7 +31,7 @@
This abstraction will allow us to expose Pods to traffic originating from outside the cluster. Services have their own unique cluster-private IP address and expose a port to receive traffic. If you choose to expose the service outside the cluster, the options are:
-  • LoadBalancer - provides a public IP address (what you would typically use when you run Kubernetes on GCE or AWS)
+  • LoadBalancer - provides a public IP address (what you would typically use when you run Kubernetes on GCP or AWS)
   • NodePort - exposes the Service on the same port on each Node of the cluster using NAT (available on all Kubernetes clusters, and in Minikube)
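As a hedged illustration of the two options above (the Deployment name, Service names, and port are assumptions), exposing an existing Deployment either way is a one-liner:

```shell
# NodePort: reachable on every node's IP at an allocated high port (works in Minikube).
kubectl expose deployment hello-node --type=NodePort --port=8080

# LoadBalancer: asks the cloud provider (e.g. GCE or AWS) for an externally reachable IP.
kubectl expose deployment hello-node --type=LoadBalancer --port=8080 --name=hello-node-lb
```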
From 0ecc4d600b8afcdad8832b1cfd7a2f3f9346518c Mon Sep 17 00:00:00 2001 From: devin-donnelly Date: Thu, 22 Dec 2016 18:54:20 -0800 Subject: [PATCH 17/24] Update authentication.md --- docs/admin/authentication.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/admin/authentication.md b/docs/admin/authentication.md index 064b7cd50f..a3726cbf2d 100644 --- a/docs/admin/authentication.md +++ b/docs/admin/authentication.md @@ -35,7 +35,7 @@ or be treated as an anonymous user. ## Authentication strategies Kubernetes uses client certificates, bearer tokens, an authenticating proxy, or HTTP basic auth to -authenticate API requests through authentication plugins. As HTTP request are +authenticate API requests through authentication plugins. As HTTP requests are made to the API server, plugins attempt to associate the following attributes with the request: From 2374e3d1bb4a175a2b0021b593e3171bb89a4c71 Mon Sep 17 00:00:00 2001 From: dongziming Date: Fri, 23 Dec 2016 13:44:14 +0800 Subject: [PATCH 18/24] Fixed some e.g. problems and some spelling errors. --- docs/admin/cluster-troubleshooting.md | 2 +- docs/admin/daemons.md | 2 +- docs/admin/multi-cluster.md | 2 +- docs/admin/node-problem.md | 4 ++-- 4 files changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/admin/cluster-troubleshooting.md b/docs/admin/cluster-troubleshooting.md index 89cd99926b..ff8358a7a9 100644 --- a/docs/admin/cluster-troubleshooting.md +++ b/docs/admin/cluster-troubleshooting.md @@ -89,7 +89,7 @@ Mitigations: - Mitigates: Apiserver VM shutdown or apiserver crashing - Mitigates: Supporting services VM shutdown or crashes -- Action use IaaS providers reliable storage (e.g GCE PD or AWS EBS volume) for VMs with apiserver+etcd +- Action use IaaS providers reliable storage (e.g. GCE PD or AWS EBS volume) for VMs with apiserver+etcd - Mitigates: Apiserver backing storage lost - Action: Use (experimental) [high-availability](/docs/admin/high-availability) configuration diff --git a/docs/admin/daemons.md b/docs/admin/daemons.md index 819636ba99..4682a62b71 100644 --- a/docs/admin/daemons.md +++ b/docs/admin/daemons.md @@ -129,7 +129,7 @@ Support for updating DaemonSets and controlled updating of nodes is planned. ### Init Scripts -It is certainly possible to run daemon processes by directly starting them on a node (e.g using +It is certainly possible to run daemon processes by directly starting them on a node (e.g. using `init`, `upstartd`, or `systemd`). This is perfectly fine. However, there are several advantages to running such processes via a DaemonSet: diff --git a/docs/admin/multi-cluster.md b/docs/admin/multi-cluster.md index 1d238d8e13..4473bb381d 100644 --- a/docs/admin/multi-cluster.md +++ b/docs/admin/multi-cluster.md @@ -52,7 +52,7 @@ Second, decide how many clusters should be able to be unavailable at the same ti the number that can be unavailable `U`. If you are not sure, then 1 is a fine choice. If it is allowable for load-balancing to direct traffic to any region in the event of a cluster failure, then -you need at least the larger of `R` or `U + 1` clusters. If it is not (e.g you want to ensure low latency for all +you need at least the larger of `R` or `U + 1` clusters. If it is not (e.g. you want to ensure low latency for all users in the event of a cluster failure), then you need to have `R * (U + 1)` clusters (`U + 1` in each of `R` regions). In any case, try to put each cluster in a different zone. 
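To make the cluster-count arithmetic above concrete with an invented example: with `R = 3` regions and `U = 1` cluster allowed to be unavailable, cross-region failover needs only the larger of `R` and `U + 1`, i.e. `max(3, 2) = 3` clusters, while keeping traffic in-region during a failure needs `R * (U + 1) = 3 * 2 = 6` clusters — two per region.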
diff --git a/docs/admin/node-problem.md b/docs/admin/node-problem.md index 0d7b57005e..b4f3e6ee31 100644 --- a/docs/admin/node-problem.md +++ b/docs/admin/node-problem.md @@ -49,7 +49,7 @@ either `kubectl` or addon pod. ### Kubectl -This is the recommanded way to start node problem detector outside of GCE. It +This is the recommended way to start node problem detector outside of GCE. It provides more flexible management, such as overwriting the default configuration to fit it into your environment or detect customized node problems. @@ -238,7 +238,7 @@ implement a new translator for a new log format. ## Caveats -It is recommanded to run the node problem detector in your cluster to monitor +It is recommended to run the node problem detector in your cluster to monitor the node health. However, you should be aware that this will introduce extra resource overhead on each node. Usually this is fine, because: From 0328b6d591bc698050dd4935771ba1775e68063e Mon Sep 17 00:00:00 2001 From: Martially <21651061@zju.edu.cn> Date: Fri, 23 Dec 2016 14:06:54 +0800 Subject: [PATCH 19/24] fix typo Signed-off-by: Martially <21651061@zju.edu.cn> --- docs/whatisk8s.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/whatisk8s.md b/docs/whatisk8s.md index 7c1e637b6d..2e53554863 100644 --- a/docs/whatisk8s.md +++ b/docs/whatisk8s.md @@ -52,7 +52,7 @@ Summary of container benefits: * **Cloud and OS distribution portability**: Runs on Ubuntu, RHEL, CoreOS, on-prem, Google Container Engine, and anywhere else. * **Application-centric management**: - Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources. + Raises the level of abstraction from running an OS on virtual hardware to run an application on an OS using logical resources. * **Loosely coupled, distributed, elastic, liberated [micro-services](http://martinfowler.com/articles/microservices.html)**: Applications are broken into smaller, independent pieces and can be deployed and managed dynamically -- not a fat monolithic stack running on one big single-purpose machine. * **Resource isolation**: From 996b4343dcd6a563dbdf876a3d06f1894cc3a532 Mon Sep 17 00:00:00 2001 From: Martially <21651061@zju.edu.cn> Date: Fri, 23 Dec 2016 14:17:29 +0800 Subject: [PATCH 20/24] link error Signed-off-by: Martially <21651061@zju.edu.cn> --- docs/whatisk8s.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/whatisk8s.md b/docs/whatisk8s.md index 2e53554863..dde25433de 100644 --- a/docs/whatisk8s.md +++ b/docs/whatisk8s.md @@ -106,7 +106,7 @@ Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) syst * Kubernetes does not provide nor mandate a comprehensive application configuration language/system (e.g., [jsonnet](https://github.com/google/jsonnet)). * Kubernetes does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems. -On the other hand, a number of PaaS systems run *on* Kubernetes, such as [Openshift](https://github.com/openshift/origin), [Deis](http://deis.io/), and [Gondor](https://gondor.io/). You could also roll your own custom PaaS, integrate with a CI system of your choice, or get along just fine with just Kubernetes: bring your container images and deploy them on Kubernetes. +On the other hand, a number of PaaS systems run *on* Kubernetes, such as [Openshift](https://github.com/openshift/origin), [Deis](http://deis.io/), and [Eldarion Cloud](http://eldarion.cloud/). 
You could also roll your own custom PaaS, integrate with a CI system of your choice, or get along just fine with just Kubernetes: bring your container images and deploy them on Kubernetes. Since Kubernetes operates at the application level rather than at just the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, load balancing, logging, monitoring, etc. However, Kubernetes is not monolithic, and these default solutions are optional and pluggable. From 41a9b7c55d302096c9f0d62119722299cdecfb58 Mon Sep 17 00:00:00 2001 From: Martially <21651061@zju.edu.cn> Date: Fri, 23 Dec 2016 15:26:12 +0800 Subject: [PATCH 21/24] fix typo Signed-off-by: Martially <21651061@zju.edu.cn> --- .../stateful-application/run-replicated-stateful-application.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tutorials/stateful-application/run-replicated-stateful-application.md b/docs/tutorials/stateful-application/run-replicated-stateful-application.md index 29f0d68242..30d22e1cce 100644 --- a/docs/tutorials/stateful-application/run-replicated-stateful-application.md +++ b/docs/tutorials/stateful-application/run-replicated-stateful-application.md @@ -180,7 +180,7 @@ replicating. In general, when a new Pod joins the set as a slave, it must assume the MySQL master might already have data on it. It also must assume that the replication logs might not go all the way back to the beginning of time. -These conservative assumptions are the key to allowing a running StatefulSet +These conservative assumptions are the key to allow a running StatefulSet to scale up and down over time, rather than being fixed at its initial size. The second Init Container, named `clone-mysql`, performs a clone operation on From a129e1c4c1d6569b1e9abb79cb2908fcd1a4601e Mon Sep 17 00:00:00 2001 From: dongziming Date: Fri, 23 Dec 2016 15:59:02 +0800 Subject: [PATCH 22/24] Spelling errors in /docs/ --- docs/getting-started-guides/ubuntu/automated.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/getting-started-guides/ubuntu/automated.md b/docs/getting-started-guides/ubuntu/automated.md index 6a1e905bf8..d82c4ca545 100644 --- a/docs/getting-started-guides/ubuntu/automated.md +++ b/docs/getting-started-guides/ubuntu/automated.md @@ -206,7 +206,7 @@ Congratulations, you've now set up a Kubernetes cluster! Want larger Kubernetes nodes? It is easy to request different sizes of cloud resources from Juju by using **constraints**. You can increase the amount of CPU or memory (RAM) in any of the systems requested by Juju. This allows you -to fine tune th Kubernetes cluster to fit your workload. Use flags on the +to fine tune the Kubernetes cluster to fit your workload. Use flags on the bootstrap command or as a separate `juju constraints` command. Look to the [Juju documentation for machine](https://jujucharms.com/docs/2.0/charms-constraints) details. 
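A rough sketch of the constraints idea above — the application name and constraint keys are assumptions, and the exact Juju subcommands may differ between Juju releases:

```shell
# Ask Juju for larger machines when bootstrapping the controller...
juju bootstrap --constraints "mem=8G cores=4"
# ...or raise the constraints for an already-deployed application.
juju set-constraints kubernetes-worker mem=8G cores=4
```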
From 5c5ebf0bf159b45e69e5e28c8450c3af6f46b97e Mon Sep 17 00:00:00 2001 From: dongziming Date: Fri, 23 Dec 2016 16:45:56 +0800 Subject: [PATCH 23/24] Some spelling errors in /docs/ --- docs/getting-started-guides/clc.md | 2 +- docs/getting-started-guides/kops.md | 2 +- docs/getting-started-guides/photon-controller.md | 2 +- docs/getting-started-guides/scratch.md | 4 ++-- docs/tutorials/stateful-application/basic-stateful-set.md | 4 ++-- 5 files changed, 7 insertions(+), 7 deletions(-) diff --git a/docs/getting-started-guides/clc.md b/docs/getting-started-guides/clc.md index 121b6c1030..bd0804eb9b 100644 --- a/docs/getting-started-guides/clc.md +++ b/docs/getting-started-guides/clc.md @@ -218,7 +218,7 @@ We configure the Kubernetes cluster with the following features: We use the following to create the kubernetes cluster: * Kubernetes 1.1.7 -* Unbuntu 14.04 +* Ubuntu 14.04 * Flannel 0.5.4 * Docker 1.9.1-0~trusty * Etcd 2.2.2 diff --git a/docs/getting-started-guides/kops.md b/docs/getting-started-guides/kops.md index 90ea6546a2..0b2381dd18 100644 --- a/docs/getting-started-guides/kops.md +++ b/docs/getting-started-guides/kops.md @@ -57,7 +57,7 @@ kops uses DNS for discovery, both inside the cluster and so that you can reach t from clients. kops has a strong opinion on the cluster name: it should be a valid DNS name. By doing so you will -no longer get your clusters confused, you can share clusters with your colleagues unambigiously, +no longer get your clusters confused, you can share clusters with your colleagues unambiguously, and you can reach them without relying on remembering an IP address. You can, and probably should, use subdomains to divide your clusters. As our example we will use diff --git a/docs/getting-started-guides/photon-controller.md b/docs/getting-started-guides/photon-controller.md index ec9d9511fa..df9d14326a 100644 --- a/docs/getting-started-guides/photon-controller.md +++ b/docs/getting-started-guides/photon-controller.md @@ -163,7 +163,7 @@ balancer. Specifically: Configure your service with the NodePort option. For example, this service uses the NodePort option. All Kubernetes nodes will listen on a port and forward network traffic to any pods in the service. In this -case, Kubernets will choose a random port, but it will be the same +case, Kubernetes will choose a random port, but it will be the same port on all nodes. ```yaml diff --git a/docs/getting-started-guides/scratch.md b/docs/getting-started-guides/scratch.md index dd554c5715..2ef0f75dc3 100644 --- a/docs/getting-started-guides/scratch.md +++ b/docs/getting-started-guides/scratch.md @@ -69,7 +69,7 @@ accomplished in two ways: - **Using an overlay network** - An overlay network obscures the underlying network architecture from the - pod network through traffic encapsulation (e.g vxlan). + pod network through traffic encapsulation (e.g. vxlan). - Encapsulation reduces performance, though exactly how much depends on your solution. - **Without an overlay network** - Configure the underlying network fabric (switches, routers, etc.) to be aware of pod IP addresses. @@ -180,7 +180,7 @@ we recommend that you run these as containers, so you need an image to be built. You have several choices for Kubernetes images: - Use images hosted on Google Container Registry (GCR): - - e.g `gcr.io/google_containers/hyperkube:$TAG`, where `TAG` is the latest + - e.g. 
`gcr.io/google_containers/hyperkube:$TAG`, where `TAG` is the latest release tag, which can be found on the [latest releases page](https://github.com/kubernetes/kubernetes/releases/latest). - Ensure $TAG is the same tag as the release tag you are using for kubelet and kube-proxy. - The [hyperkube](https://releases.k8s.io/{{page.githubbranch}}/cmd/hyperkube) binary is an all in one binary diff --git a/docs/tutorials/stateful-application/basic-stateful-set.md b/docs/tutorials/stateful-application/basic-stateful-set.md index 07e41cd56d..0edf9f9d38 100644 --- a/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/docs/tutorials/stateful-application/basic-stateful-set.md @@ -122,7 +122,7 @@ launching `web-1`. In fact, `web-1` is not launched until `web-0` is [Running and Ready](/docs/user-guide/pod-states). ### Pods in a StatefulSet -Unlike Pods in other controllers, the Pods in a StatefulSet have a unqiue +Unlike Pods in other controllers, the Pods in a StatefulSet have a unique ordinal index and a stable network identity. #### Examining the Pod's Ordinal Index @@ -177,7 +177,7 @@ Name: web-1.nginx Address 1: 10.244.2.6 ``` -The CNAME of the headless serivce points to SRV records (one for each Pod that +The CNAME of the headless service points to SRV records (one for each Pod that is Running and Ready). The SRV records point to A record entries that contain the Pods' IP addresses. From 83a6c52ddc9c425df24a6dbed2545601aea46cdb Mon Sep 17 00:00:00 2001 From: dongziming Date: Fri, 23 Dec 2016 17:25:27 +0800 Subject: [PATCH 24/24] Spelling errors in /docs/ --- docs/user-guide/federation/events.md | 2 +- docs/user-guide/federation/federated-services.md | 6 +++--- docs/user-guide/pods/init-container.md | 2 +- docs/user-guide/secrets/index.md | 2 +- docs/user-guide/services/index.md | 2 +- 5 files changed, 7 insertions(+), 7 deletions(-) diff --git a/docs/user-guide/federation/events.md b/docs/user-guide/federation/events.md index f1f8868466..60c78ad9c5 100644 --- a/docs/user-guide/federation/events.md +++ b/docs/user-guide/federation/events.md @@ -24,7 +24,7 @@ general. ## Overview -Events in federation control plane (refered to as "federation events" in +Events in federation control plane (referred to as "federation events" in this guide) are very similar to the traditional Kubernetes Events providing the same functionality. Federation Events are stored only in federation control plane and are not passed on to the underlying kubernetes clusters. diff --git a/docs/user-guide/federation/federated-services.md b/docs/user-guide/federation/federated-services.md index 354fbeca01..b163ad18e8 100644 --- a/docs/user-guide/federation/federated-services.md +++ b/docs/user-guide/federation/federated-services.md @@ -232,7 +232,7 @@ due to caching by intermediate DNS servers. The above set of DNS records is automatically kept in sync with the current state of health of all service shards globally by the Federated Service system. DNS resolver libraries (which are invoked by -all clients) automatically traverse the hiearchy of 'CNAME' and 'A' +all clients) automatically traverse the hierarchy of 'CNAME' and 'A' records to return the correct set of healthy IP addresses. Clients can then select any one of the returned addresses to initiate a network connection (and fail over automatically to one of the other equivalent @@ -295,7 +295,7 @@ availability zones and regions other than the ones local to a Pod by specifying the appropriate DNS names explicitly, and not relying on automatic DNS expansion. 
For example, "nginx.mynamespace.myfederation.svc.europe-west1.example.com" will -resolve to all of the currently healthy service shards in europe, even +resolve to all of the currently healthy service shards in Europe, even if the Pod issuing the lookup is located in the U.S., and irrespective of whether or not there are healthy shards of the service in the U.S. This is useful for remote monitoring and other similar applications. @@ -316,7 +316,7 @@ us.nginx.acme.com CNAME nginx.mynamespace.myfederation.svc.us-central1.ex nginx.acme.com CNAME nginx.mynamespace.myfederation.svc.example.com. ``` That way your clients can always use the short form on the left, and -always be automatcally routed to the closest healthy shard on their +always be automatically routed to the closest healthy shard on their home continent. All of the required failover is handled for you automatically by Kubernetes Cluster Federation. Future releases will improve upon this even further. diff --git a/docs/user-guide/pods/init-container.md b/docs/user-guide/pods/init-container.md index 75b6efcac3..2f4748f9cf 100644 --- a/docs/user-guide/pods/init-container.md +++ b/docs/user-guide/pods/init-container.md @@ -159,7 +159,7 @@ reasons: * This is uncommon and would have to be done by someone with root access to nodes. * All containers in a pod are terminated, requiring a restart (RestartPolicyAlways) AND the record of init container completion has been lost due to garbage collection. -## Support and compatibilty +## Support and compatibility A cluster with Kubelet and Apiserver version 1.4.0 or greater supports init containers with the beta annotations. Support varies for other combinations of diff --git a/docs/user-guide/secrets/index.md b/docs/user-guide/secrets/index.md index 79b6d93a7d..6f6728db42 100644 --- a/docs/user-guide/secrets/index.md +++ b/docs/user-guide/secrets/index.md @@ -666,7 +666,7 @@ one called, say, `prod-user` with the `prod-db-secret`, and one called, say, ### Use-case: Dotfiles in secret volume -In order to make piece of data 'hidden' (ie, in a file whose name begins with a dot character), simply +In order to make piece of data 'hidden' (i.e., in a file whose name begins with a dot character), simply make that key begin with a dot. For example, when the following secret is mounted into a volume: ```json diff --git a/docs/user-guide/services/index.md b/docs/user-guide/services/index.md index 8151eebab9..60fa80e2e7 100644 --- a/docs/user-guide/services/index.md +++ b/docs/user-guide/services/index.md @@ -500,7 +500,7 @@ within AWS Certificate Manager. }, ``` -The second annotation specificies which protocol a pod speaks. For HTTPS and +The second annotation specifies which protocol a pod speaks. For HTTPS and SSL, the ELB will expect the pod to authenticate itself over the encrypted connection.