From 383e40f978334792e7ca30399845c9bf38ea813c Mon Sep 17 00:00:00 2001 From: SRaddict Date: Thu, 22 Dec 2016 11:24:05 +0800 Subject: [PATCH 01/14] fix a series punctuation errors --- LICENSE | 2 +- case-studies/index.html | 4 ++-- case-studies/pearson.html | 10 +++++----- case-studies/wikimedia.html | 8 ++++---- docs/admin/admission-controllers.md | 4 ++-- docs/admin/rescheduler.md | 2 +- docs/getting-started-guides/windows/index.md | 6 +++--- docs/tutorials/kubernetes-basics/explore-intro.html | 2 +- docs/user-guide/replicasets.md | 2 +- 9 files changed, 20 insertions(+), 20 deletions(-) diff --git a/LICENSE b/LICENSE index 06c608dcf4..b6988e7edc 100644 --- a/LICENSE +++ b/LICENSE @@ -378,7 +378,7 @@ Section 8 -- Interpretation. Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances -will be considered the “Licensor.” The text of the Creative Commons +will be considered the "Licensor." The text of the Creative Commons public licenses is dedicated to the public domain under the CC0 Public Domain Dedication. Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as diff --git a/case-studies/index.html b/case-studies/index.html index ce14542424..6d288a8bb2 100644 --- a/case-studies/index.html +++ b/case-studies/index.html @@ -17,13 +17,13 @@ title: Case Studies
Pearson -

“We chose Kubernetes because of its flexibility, ease of management and the way it improves our engineers’ productivity.”

+

"We chose Kubernetes because of its flexibility, ease of management and the way it improves our engineers’ productivity."

Read about Pearson
Wikimedia -

“With Kubernetes, we’re simplifying our environment and making it easier for developers to build the tools that make wikis run better.”

+

"With Kubernetes, we’re simplifying our environment and making it easier for developers to build the tools that make wikis run better."

Read about Wikimedia
diff --git a/case-studies/pearson.html b/case-studies/pearson.html index bf871789b9..5eecc6f349 100644 --- a/case-studies/pearson.html +++ b/case-studies/pearson.html @@ -19,7 +19,7 @@ title: Pearson Case Study
Pearson

- “To transform our infrastructure, we had to think beyond simply enabling automated provisioning, we realized we had to build a platform that would allow Pearson developers to build manage and deploy applications in a completely different way. We chose Kubernetes because of its flexibility, ease of management and the way it would improve our engineers’ productivity.”

+ "To transform our infrastructure, we had to think beyond simply enabling automated provisioning, we realized we had to build a platform that would allow Pearson developers to build manage and deploy applications in a completely different way. We chose Kubernetes because of its flexibility, ease of management and the way it would improve our engineers’ productivity."

— Chris Jackson, Director for Cloud Product Engineering, Pearson

@@ -63,9 +63,9 @@ title: Pearson Case Study

Kubernetes powers a comprehensive developer experience

-

Pearson wanted to use as much open source technology as possible for the platform given that it provides both technical and commercial benefits over the duration of the project. Jackson says, “Building an infrastructure platform based on open source technology in Pearson was a no-brainer, the sharing of technical challenges and advanced use cases in a community of people with talent far beyond what we could hire independently allows us to innovate at a level we could not reach on our own. Our engineers enjoy returning code to the community and participating in talks, blogs and meetings, it’s a great way for us to allow our team to express themselves and share the pride they have in their work.”

-

It also wanted to use a container-focused platform. Pearson has 400 development groups and diverse brands with varying business and technical needs. With containers, each brand could experiment with building new types of content using their preferred technologies, and then deliver it using containers. Pearson chose Kubernetes because it believes that is the best technology for managing containers, has the widest community support and offers the most flexible and powerful tools.“

-

Kubernetes is at the core of the platform we’ve built for developers. After we get our big spike in back-to-school in traffic, much of Pearson’s traffic will interact with Kubernetes. It is proving to be as effective as we had hoped,” Jackson says.

+

Pearson wanted to use as much open source technology as possible for the platform given that it provides both technical and commercial benefits over the duration of the project. Jackson says, "Building an infrastructure platform based on open source technology in Pearson was a no-brainer, the sharing of technical challenges and advanced use cases in a community of people with talent far beyond what we could hire independently allows us to innovate at a level we could not reach on our own. Our engineers enjoy returning code to the community and participating in talks, blogs and meetings, it’s a great way for us to allow our team to express themselves and share the pride they have in their work."

+

It also wanted to use a container-focused platform. Pearson has 400 development groups and diverse brands with varying business and technical needs. With containers, each brand could experiment with building new types of content using their preferred technologies, and then deliver it using containers. Pearson chose Kubernetes because it believes it is the best technology for managing containers, has the widest community support and offers the most flexible and powerful tools.

+

"Kubernetes is at the core of the platform we’ve built for developers. After we get our big spike in back-to-school traffic, much of Pearson’s traffic will interact with Kubernetes. It is proving to be as effective as we had hoped," Jackson says.

@@ -76,7 +76,7 @@ title: Pearson Case Study

Encouraging experimentation, saving engineers time

With the new platform, Pearson will increase stability and performance, and to bring products to market more quickly. The company says its engineers will also get a productivity boost because they won’t spend time managing infrastructure. Jackson estimates 15 to 20 percent in productivity savings.

Beyond that, Pearson says the platform will encourage innovation because of the ease with which new applications can be developed, and because applications will be deployed far more quickly than in the past. It expects that will help the company meet its goal of reaching 200 million learners within the next 10 years.

-

“We’re already seeing tremendous benefits with Kubernetes — improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online,” says Jackson.

+

"We’re already seeing tremendous benefits with Kubernetes — improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online," says Jackson.

diff --git a/case-studies/wikimedia.html b/case-studies/wikimedia.html index 00eb47e3e0..0dc910fbe4 100644 --- a/case-studies/wikimedia.html +++ b/case-studies/wikimedia.html @@ -20,7 +20,7 @@ title: Wikimedia Case Study
Wikimedia

- “Wikimedia Tool Labs is vital for making sure wikis all around the world work as well as they possibly can. Because it’s grown organically for almost 10 years, it has become an extremely challenging environment and difficult to maintain. It’s like a big ball of mud — you really can’t see through it. With Kubernetes, we’re simplifying the environment and making it easier for developers to build the tools that make wikis run better.” + "Wikimedia Tool Labs is vital for making sure wikis all around the world work as well as they possibly can. Because it’s grown organically for almost 10 years, it has become an extremely challenging environment and difficult to maintain. It’s like a big ball of mud — you really can’t see through it. With Kubernetes, we’re simplifying the environment and making it easier for developers to build the tools that make wikis run better."

— Yuvi Panda, operations engineer at Wikimedia Foundation and Wikimedia Tool Labs

@@ -67,13 +67,13 @@ title: Wikimedia Case Study

Using Kubernetes to provide tools for maintaining wikis

- Wikimedia Tool Labs is run by a staff of four-and-a-half paid employees and two volunteers. The infrastructure didn't make it easy or intuitive for developers to build bots and other tools to make wikis work more easily. Yuvi says, “It’s incredibly chaotic. We have lots of Perl and Bash duct tape on top of it. Everything is super fragile.” + Wikimedia Tool Labs is run by a staff of four-and-a-half paid employees and two volunteers. The infrastructure didn't make it easy or intuitive for developers to build bots and other tools to make wikis work more easily. Yuvi says, "It’s incredibly chaotic. We have lots of Perl and Bash duct tape on top of it. Everything is super fragile."

To solve the problem, Wikimedia Tool Labs migrated parts of its infrastructure to Kubernetes, in preparation for eventually moving its entire system. Yuvi said Kubernetes greatly simplifies maintenance. The goal is to allow developers creating bots and other tools to use whatever development methods they want, but make it easier for the Wikimedia Tool Labs to maintain the required infrastructure for hosting and sharing them.

- “With Kubernetes, I’ve been able to remove a lot of our custom-made code, which makes everything easier to maintain. Our users’ code also runs in a more stable way than previously,” says Yuvi. + "With Kubernetes, I’ve been able to remove a lot of our custom-made code, which makes everything easier to maintain. Our users’ code also runs in a more stable way than previously," says Yuvi.

@@ -90,7 +90,7 @@ title: Wikimedia Case Study In the future, with a more complete migration to Kubernetes, Wikimedia Tool Labs expects to make it even easier to host and maintain the bots and tools that help run wikis across the world. The tool labs already host approximately 1,300 tools and bots from 800 volunteers, with many more being submitted every day. Twenty percent of the tool labs’ web tools that account for more than 60 percent of web traffic now run on Kubernetes. The tool labs has a 25-node cluster that keeps up with each new Kubernetes release. Many existing web tools are migrating to Kubernetes.

- “Our goal is to make sure that people all over the world can share knowledge as easily as possible. Kubernetes helps with that, by making it easier for wikis everywhere to have the tools they need to thrive,” says Yuvi. + "Our goal is to make sure that people all over the world can share knowledge as easily as possible. Kubernetes helps with that, by making it easier for wikis everywhere to have the tools they need to thrive," says Yuvi.

diff --git a/docs/admin/admission-controllers.md b/docs/admin/admission-controllers.md index 475f2e4be9..de544e3d8b 100644 --- a/docs/admin/admission-controllers.md +++ b/docs/admin/admission-controllers.md @@ -126,7 +126,7 @@ For additional HTTP configuration, refer to the [kubeconfig](/docs/user-guide/ku When faced with an admission decision, the API Server POSTs a JSON serialized api.imagepolicy.v1alpha1.ImageReview object describing the action. This object contains fields describing the containers being admitted, as well as any pod annotations that match `*.image-policy.k8s.io/*`. -Note that webhook API objects are subject to the same versioning compatibility rules as other Kubernetes API objects. Implementers should be aware of looser compatibility promises for alpha objects and check the “apiVersion” field of the request to ensure correct deserialization. Additionally, the API Server must enable the imagepolicy.k8s.io/v1alpha1 API extensions group (`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`). +Note that webhook API objects are subject to the same versioning compatibility rules as other Kubernetes API objects. Implementers should be aware of looser compatibility promises for alpha objects and check the "apiVersion" field of the request to ensure correct deserialization. Additionally, the API Server must enable the imagepolicy.k8s.io/v1alpha1 API extensions group (`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`). An example request body: @@ -151,7 +151,7 @@ An example request body: } ``` -The remote service is expected to fill the ImageReviewStatus field of the request and respond to either allow or disallow access. The response body’s “spec” field is ignored and may be omitted. A permissive response would return: +The remote service is expected to fill the ImageReviewStatus field of the request and respond to either allow or disallow access. The response body’s "spec" field is ignored and may be omitted. A permissive response would return: ``` { diff --git a/docs/admin/rescheduler.md b/docs/admin/rescheduler.md index c9a3bd074c..651fdf15b5 100644 --- a/docs/admin/rescheduler.md +++ b/docs/admin/rescheduler.md @@ -30,7 +30,7 @@ given the pods that are already running in the cluster the rescheduler tries to free up space for the add-on by evicting some pods; then the scheduler will schedule the add-on pod. To avoid situation when another pod is scheduled into the space prepared for the critical add-on, -the chosen node gets a temporary taint “CriticalAddonsOnly” before the eviction(s) +the chosen node gets a temporary taint "CriticalAddonsOnly" before the eviction(s) (see [more details](https://github.com/kubernetes/kubernetes/blob/master/docs/design/taint-toleration-dedicated.md)). Each critical add-on has to tolerate it, the other pods shouldn't tolerate the taint. The tain is removed once the add-on is successfully scheduled. diff --git a/docs/getting-started-guides/windows/index.md b/docs/getting-started-guides/windows/index.md index 3096bed7eb..b5926744ae 100644 --- a/docs/getting-started-guides/windows/index.md +++ b/docs/getting-started-guides/windows/index.md @@ -18,15 +18,15 @@ In Kubernetes version 1.5, Windows Server Containers for Kubernetes is supported Network is achieved using L3 routing. Because third-party networking plugins (e.g. flannel, calico, etc) don’t natively work on Windows Server, existing technology that is built into the Windows and Linux operating systems is relied on. 
In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node will be connected to the /24 subnet. This allows pods on the same node to communicate with each other. In order to enable networking between pods running on different nodes, routing features that are built into Windows Server 2016 and Linux are used. ### Linux -The above networking approach is already supported on Linux using a bridge interface, which essentially creates a private network local to the node. Similar to the Windows side, routes to all other pod CIDRs must be created in order to send packets via the “public” NIC. +The above networking approach is already supported on Linux using a bridge interface, which essentially creates a private network local to the node. Similar to the Windows side, routes to all other pod CIDRs must be created in order to send packets via the "public" NIC. ### Windows Each Window Server node should have the following configuration: 1. Two NICs (virtual networking adapters) are required on each Windows Server node - The two Windows container networking modes of interest (transparent and L2 bridge) use an external Hyper-V virtual switch. This means that one of the NICs is entirely allocated to the bridge, creating the need for the second NIC. 2. Transparent container network created - This is a manual configuration step and is shown in **_Route Setup_** section below -3. RRAS (Routing) Windows feature enabled - Allows routing between NICs on the box, and also “captures” packets that have the destination IP of a POD running on the node. To enable, open “Server Manager”. Click on “Roles”, “Add Roles”. Click “Next”. Select “Network Policy and Access Services”. Click on “Routing and Remote Access Service” and the underlying checkboxes -4. Routes defined pointing to the other pod CIDRs via the “public” NIC - These routes are added to the built-in routing table as shown in **_Route Setup_** section below +3. RRAS (Routing) Windows feature enabled - Allows routing between NICs on the box, and also "captures" packets that have the destination IP of a POD running on the node. To enable, open "Server Manager". Click on "Roles", "Add Roles". Click "Next". Select "Network Policy and Access Services". Click on "Routing and Remote Access Service" and the underlying checkboxes +4. Routes defined pointing to the other pod CIDRs via the "public" NIC - These routes are added to the built-in routing table as shown in **_Route Setup_** section below The following diagram illustrates the Windows Server networking setup for Kubernetes Setup ![Windows Setup](windows-setup.png) diff --git a/docs/tutorials/kubernetes-basics/explore-intro.html b/docs/tutorials/kubernetes-basics/explore-intro.html index edc813d3d4..56bde41cfd 100644 --- a/docs/tutorials/kubernetes-basics/explore-intro.html +++ b/docs/tutorials/kubernetes-basics/explore-intro.html @@ -34,7 +34,7 @@ title: Viewing Pods and Nodes
  • Networking, as a unique cluster IP address
  • Information about how to run each container, such as the container image version or specific ports to use
  • -

    A Pod models an application-specific “logical host” and can contain different application containers which are relatively tightly coupled. For example, a Pod might include both the container with your Node.js app as well as a different container that feeds the data to be published by the Node.js webserver. The containers in a Pod share an IP Address and port space, are always co-located and co-scheduled, and run in a shared context on the same Node.

    +

    A Pod models an application-specific "logical host" and can contain different application containers which are relatively tightly coupled. For example, a Pod might include both the container with your Node.js app as well as a different container that feeds the data to be published by the Node.js webserver. The containers in a Pod share an IP Address and port space, are always co-located and co-scheduled, and run in a shared context on the same Node.

    Pods are the atomic unit on the Kubernetes platform. When we create a Deployment on Kubernetes, that Deployment creates Pods with containers inside them (as opposed to creating containers directly). Each Pod is tied to the Node where it is scheduled, and remains there until termination (according to restart policy) or deletion. In case of a Node failure, identical Pods are scheduled on other available Nodes in the cluster.
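As a rough sketch of such a multi-container Pod, the manifest below declares a web container alongside a container that feeds it data; the Pod name, container names and images are illustrative placeholders, not part of the tutorial.

```shell
# Hypothetical two-container Pod; all names and images below are placeholders.
# Both containers share the Pod's IP address and port space, so the data feeder
# can reach the web server on localhost.
$ kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web-with-feeder
spec:
  containers:
  - name: web
    image: example.com/nodejs-web:1.0       # e.g. the Node.js app mentioned above
    ports:
    - containerPort: 8080
  - name: data-feeder
    image: example.com/data-feeder:1.0      # container that feeds data to the web server
EOF
```

Both containers are created, scheduled and removed together, which is what makes the Pod the atomic unit described above.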

    diff --git a/docs/user-guide/replicasets.md b/docs/user-guide/replicasets.md index f0aa08bf04..86e60cffda 100644 --- a/docs/user-guide/replicasets.md +++ b/docs/user-guide/replicasets.md @@ -35,7 +35,7 @@ their Replica Sets. ## When to use a Replica Set? -A Replica Set ensures that a specified number of pod “replicas” are running at any given +A Replica Set ensures that a specified number of pod "replicas" are running at any given time. However, a Deployment is a higher-level concept that manages Replica Sets and provides declarative updates to pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using Replica Sets, unless From b27310d81a1d9404781de7dea669f71805d0c20a Mon Sep 17 00:00:00 2001 From: tim-zju <21651152@zju.edu.cn> Date: Thu, 22 Dec 2016 14:37:10 +0800 Subject: [PATCH 02/14] symbol errors Signed-off-by: tim-zju <21651152@zju.edu.cn> --- CONTRIBUTING.md | 2 +- LICENSE | 2 +- _includes/partner-script.js | 6 +- case-studies/index.html | 8 +- case-studies/pearson.html | 20 +- case-studies/wikimedia.html | 12 +- community.html | 4 +- docs/admin/admission-controllers.md | 4 +- docs/admin/networking.md | 2 +- docs/admin/rescheduler.md | 3 +- docs/getting-started-guides/logging.md | 2 +- docs/getting-started-guides/meanstack.md | 26 +- docs/getting-started-guides/windows/index.md | 14 +- docs/hellonode.md | 22 +- .../kubernetes-basics/cluster-intro.html | 2 +- .../kubernetes-basics/deploy-intro.html | 4 +- .../kubernetes-basics/explore-intro.html | 4 +- .../kubernetes-basics/expose-intro.html | 6 +- .../kubernetes-basics/scale-intro.html | 2 +- .../kubernetes-basics/update-intro.html | 2 +- .../stateful-application/zookeeper.md | 352 +++++++++--------- docs/user-guide/configuring-containers.md | 2 +- docs/user-guide/managing-deployments.md | 40 +- docs/user-guide/pod-security-policy/index.md | 38 +- docs/user-guide/prereqs.md | 2 +- docs/user-guide/replicasets.md | 2 +- .../replication-controller/index.md | 8 +- docs/user-guide/security-context.md | 7 +- index.html | 16 +- 29 files changed, 306 insertions(+), 308 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 9dd8149a15..934d7947ae 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -33,4 +33,4 @@ Note that code issues should be filed against the main kubernetes repository, wh ### Submitting Documentation Pull Requests -If you’re fixing an issue in the existing documentation, you should submit a PR against the master branch. Follow [these instructions to create a documentation pull request against the kubernetes.io repository](http://kubernetes.io/docs/contribute/create-pull-request/). +If you're fixing an issue in the existing documentation, you should submit a PR against the master branch. Follow [these instructions to create a documentation pull request against the kubernetes.io repository](http://kubernetes.io/docs/contribute/create-pull-request/). diff --git a/LICENSE b/LICENSE index 06c608dcf4..b6988e7edc 100644 --- a/LICENSE +++ b/LICENSE @@ -378,7 +378,7 @@ Section 8 -- Interpretation. Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances -will be considered the “Licensor.” The text of the Creative Commons +will be considered the "Licensor." The text of the Creative Commons public licenses is dedicated to the public domain under the CC0 Public Domain Dedication. 
Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as diff --git a/_includes/partner-script.js b/_includes/partner-script.js index 4d0a117620..c291521a01 100644 --- a/_includes/partner-script.js +++ b/_includes/partner-script.js @@ -54,7 +54,7 @@ name: 'Skippbox', logo: 'skippbox', link: 'http://www.skippbox.com/tag/products/', - blurb: 'Creator of Cabin the first mobile application for Kubernetes, and kompose. Skippbox’s solutions distill all the power of k8s in simple easy to use interfaces.' + blurb: 'Creator of Cabin the first mobile application for Kubernetes, and kompose. Skippbox's solutions distill all the power of k8s in simple easy to use interfaces.' }, { type: 0, @@ -89,7 +89,7 @@ name: 'Intel', logo: 'intel', link: 'https://tectonic.com/press/intel-coreos-collaborate-on-openstack-with-kubernetes.html', - blurb: 'Powering the GIFEE (Google’s Infrastructure for Everyone Else), to run OpenStack deployments on Kubernetes.' + blurb: 'Powering the GIFEE (Google's Infrastructure for Everyone Else), to run OpenStack deployments on Kubernetes.' }, { type: 0, @@ -243,7 +243,7 @@ name: 'Samsung SDS', logo: 'samsung_sds', link: 'http://www.samsungsdsa.com/cloud-infrastructure_kubernetes', - blurb: 'Samsung SDS’s Cloud Native Computing Team offers expert consulting across the range of technical aspects involved in building services targeted at a Kubernetes cluster.' + blurb: 'Samsung SDS's Cloud Native Computing Team offers expert consulting across the range of technical aspects involved in building services targeted at a Kubernetes cluster.' }, { type: 1, diff --git a/case-studies/index.html b/case-studies/index.html index ce14542424..f593d73fb9 100644 --- a/case-studies/index.html +++ b/case-studies/index.html @@ -17,19 +17,19 @@ title: Case Studies
    Pearson -

    “We chose Kubernetes because of its flexibility, ease of management and the way it improves our engineers’ productivity.”

    +

    "We chose Kubernetes because of its flexibility, ease of management and the way it improves our engineers' productivity."

    Read about Pearson
    Wikimedia -

    “With Kubernetes, we’re simplifying our environment and making it easier for developers to build the tools that make wikis run better.”

    +

    "With Kubernetes, we're simplifying our environment and making it easier for developers to build the tools that make wikis run better."

    Read about Wikimedia
    eBay -

    Inside eBay’s shift to Kubernetes and containers atop OpenStack

    +

    Inside eBay's shift to Kubernetes and containers atop OpenStack

    Read about eBay
    @@ -45,7 +45,7 @@ title: Case Studies
    - + diff --git a/case-studies/pearson.html b/case-studies/pearson.html index bf871789b9..50f16ce7ae 100644 --- a/case-studies/pearson.html +++ b/case-studies/pearson.html @@ -13,13 +13,13 @@ title: Pearson Case Study
    -

    Using Kubernetes to reinvent the world’s largest educational company

    +

    Using Kubernetes to reinvent the world's largest educational company

    - Pearson, the world’s education company, serving 75 million learners worldwide, set a goal to more than double that number to 200 million by 2025. A key part of this growth is in digital learning experiences, and that requires an infrastructure platform that is able to scale quickly and deliver products to market faster. So Pearson’s Cloud Technology team chose Kubernetes to help build a platform to meet the business requirements.

    + Pearson, the world's education company, serving 75 million learners worldwide, set a goal to more than double that number to 200 million by 2025. A key part of this growth is in digital learning experiences, and that requires an infrastructure platform that is able to scale quickly and deliver products to market faster. So Pearson's Cloud Technology team chose Kubernetes to help build a platform to meet the business requirements.

    Pearson

    - “To transform our infrastructure, we had to think beyond simply enabling automated provisioning, we realized we had to build a platform that would allow Pearson developers to build manage and deploy applications in a completely different way. We chose Kubernetes because of its flexibility, ease of management and the way it would improve our engineers’ productivity.”

    + "To transform our infrastructure, we had to think beyond simply enabling automated provisioning, we realized we had to build a platform that would allow Pearson developers to build manage and deploy applications in a completely different way. We chose Kubernetes because of its flexibility, ease of management and the way it would improve our engineers' productivity."

    — Chris Jackson, Director for Cloud Product Engineering, Pearson

    @@ -38,7 +38,7 @@ title: Pearson Case Study

    Why Kubernetes:

      -
    • Kubernetes will allow Pearson’s teams to develop their apps in a consistent manner, saving time and minimizing complexity.
    • +
    • Kubernetes will allow Pearson's teams to develop their apps in a consistent manner, saving time and minimizing complexity.
    @@ -52,7 +52,7 @@ title: Pearson Case Study

    Results:

      -
    • Pearson is building an enterprise-wide platform for delivering innovative, web-based educational content. They expect engineers’ productivity to increase by up to 20 percent.
    • +
    • Pearson is building an enterprise-wide platform for delivering innovative, web-based educational content. They expect engineers' productivity to increase by up to 20 percent.
    @@ -63,9 +63,9 @@ title: Pearson Case Study

    Kubernetes powers a comprehensive developer experience

    -

    Pearson wanted to use as much open source technology as possible for the platform given that it provides both technical and commercial benefits over the duration of the project. Jackson says, “Building an infrastructure platform based on open source technology in Pearson was a no-brainer, the sharing of technical challenges and advanced use cases in a community of people with talent far beyond what we could hire independently allows us to innovate at a level we could not reach on our own. Our engineers enjoy returning code to the community and participating in talks, blogs and meetings, it’s a great way for us to allow our team to express themselves and share the pride they have in their work.”

    -

    It also wanted to use a container-focused platform. Pearson has 400 development groups and diverse brands with varying business and technical needs. With containers, each brand could experiment with building new types of content using their preferred technologies, and then deliver it using containers. Pearson chose Kubernetes because it believes that is the best technology for managing containers, has the widest community support and offers the most flexible and powerful tools.“

    -

    Kubernetes is at the core of the platform we’ve built for developers. After we get our big spike in back-to-school in traffic, much of Pearson’s traffic will interact with Kubernetes. It is proving to be as effective as we had hoped,” Jackson says.

    +

    Pearson wanted to use as much open source technology as possible for the platform given that it provides both technical and commercial benefits over the duration of the project. Jackson says, "Building an infrastructure platform based on open source technology in Pearson was a no-brainer, the sharing of technical challenges and advanced use cases in a community of people with talent far beyond what we could hire independently allows us to innovate at a level we could not reach on our own. Our engineers enjoy returning code to the community and participating in talks, blogs and meetings, it's a great way for us to allow our team to express themselves and share the pride they have in their work."

    +

    It also wanted to use a container-focused platform. Pearson has 400 development groups and diverse brands with varying business and technical needs. With containers, each brand could experiment with building new types of content using their preferred technologies, and then deliver it using containers. Pearson chose Kubernetes because it believes it is the best technology for managing containers, has the widest community support and offers the most flexible and powerful tools.

    +

    "Kubernetes is at the core of the platform we've built for developers. After we get our big spike in back-to-school traffic, much of Pearson's traffic will interact with Kubernetes. It is proving to be as effective as we had hoped," Jackson says.

    @@ -74,9 +74,9 @@ title: Pearson Case Study

    Encouraging experimentation, saving engineers time

    -

    With the new platform, Pearson will increase stability and performance, and to bring products to market more quickly. The company says its engineers will also get a productivity boost because they won’t spend time managing infrastructure. Jackson estimates 15 to 20 percent in productivity savings.

    +

    With the new platform, Pearson will increase stability and performance, and bring products to market more quickly. The company says its engineers will also get a productivity boost because they won't spend time managing infrastructure. Jackson estimates 15 to 20 percent in productivity savings.

    Beyond that, Pearson says the platform will encourage innovation because of the ease with which new applications can be developed, and because applications will be deployed far more quickly than in the past. It expects that will help the company meet its goal of reaching 200 million learners within the next 10 years.

    -

    “We’re already seeing tremendous benefits with Kubernetes — improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online,” says Jackson.

    +

    "We're already seeing tremendous benefits with Kubernetes — improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online," says Jackson.

    diff --git a/case-studies/wikimedia.html b/case-studies/wikimedia.html index 00eb47e3e0..2d3b686128 100644 --- a/case-studies/wikimedia.html +++ b/case-studies/wikimedia.html @@ -20,7 +20,7 @@ title: Wikimedia Case Study
    Wikimedia

    - “Wikimedia Tool Labs is vital for making sure wikis all around the world work as well as they possibly can. Because it’s grown organically for almost 10 years, it has become an extremely challenging environment and difficult to maintain. It’s like a big ball of mud — you really can’t see through it. With Kubernetes, we’re simplifying the environment and making it easier for developers to build the tools that make wikis run better.” + "Wikimedia Tool Labs is vital for making sure wikis all around the world work as well as they possibly can. Because it's grown organically for almost 10 years, it has become an extremely challenging environment and difficult to maintain. It's like a big ball of mud — you really can't see through it. With Kubernetes, we're simplifying the environment and making it easier for developers to build the tools that make wikis run better."

    — Yuvi Panda, operations engineer at Wikimedia Foundation and Wikimedia Tool Labs

    @@ -67,13 +67,13 @@ title: Wikimedia Case Study

    Using Kubernetes to provide tools for maintaining wikis

    - Wikimedia Tool Labs is run by a staff of four-and-a-half paid employees and two volunteers. The infrastructure didn't make it easy or intuitive for developers to build bots and other tools to make wikis work more easily. Yuvi says, “It’s incredibly chaotic. We have lots of Perl and Bash duct tape on top of it. Everything is super fragile.” + Wikimedia Tool Labs is run by a staff of four-and-a-half paid employees and two volunteers. The infrastructure didn't make it easy or intuitive for developers to build bots and other tools to make wikis work more easily. Yuvi says, "It's incredibly chaotic. We have lots of Perl and Bash duct tape on top of it. Everything is super fragile."

    To solve the problem, Wikimedia Tool Labs migrated parts of its infrastructure to Kubernetes, in preparation for eventually moving its entire system. Yuvi said Kubernetes greatly simplifies maintenance. The goal is to allow developers creating bots and other tools to use whatever development methods they want, but make it easier for the Wikimedia Tool Labs to maintain the required infrastructure for hosting and sharing them.

    - “With Kubernetes, I’ve been able to remove a lot of our custom-made code, which makes everything easier to maintain. Our users’ code also runs in a more stable way than previously,” says Yuvi. + "With Kubernetes, I've been able to remove a lot of our custom-made code, which makes everything easier to maintain. Our users' code also runs in a more stable way than previously," says Yuvi.

    @@ -84,13 +84,13 @@ title: Wikimedia Case Study

    Simplifying infrastructure and keeping wikis running better

    - Wikimedia Tool Labs has seen great success with the initial Kubernetes deployment. Old code is being simplified and eliminated, contributing developers don’t have to change the way they write their tools and bots, and those tools and bots run in a more stable fashion than they have in the past. The paid staff and volunteers are able to better keep up with fixing issues. + Wikimedia Tool Labs has seen great success with the initial Kubernetes deployment. Old code is being simplified and eliminated, contributing developers don't have to change the way they write their tools and bots, and those tools and bots run in a more stable fashion than they have in the past. The paid staff and volunteers are able to better keep up with fixing issues.

    - In the future, with a more complete migration to Kubernetes, Wikimedia Tool Labs expects to make it even easier to host and maintain the bots and tools that help run wikis across the world. The tool labs already host approximately 1,300 tools and bots from 800 volunteers, with many more being submitted every day. Twenty percent of the tool labs’ web tools that account for more than 60 percent of web traffic now run on Kubernetes. The tool labs has a 25-node cluster that keeps up with each new Kubernetes release. Many existing web tools are migrating to Kubernetes. + In the future, with a more complete migration to Kubernetes, Wikimedia Tool Labs expects to make it even easier to host and maintain the bots and tools that help run wikis across the world. The tool labs already host approximately 1,300 tools and bots from 800 volunteers, with many more being submitted every day. Twenty percent of the tool labs' web tools that account for more than 60 percent of web traffic now run on Kubernetes. The tool labs has a 25-node cluster that keeps up with each new Kubernetes release. Many existing web tools are migrating to Kubernetes.

    - “Our goal is to make sure that people all over the world can share knowledge as easily as possible. Kubernetes helps with that, by making it easier for wikis everywhere to have the tools they need to thrive,” says Yuvi. + "Our goal is to make sure that people all over the world can share knowledge as easily as possible. Kubernetes helps with that, by making it easier for wikis everywhere to have the tools they need to thrive," says Yuvi.

    diff --git a/community.html b/community.html index 9ef63c1b66..a10a100375 100644 --- a/community.html +++ b/community.html @@ -24,8 +24,8 @@ title: Community

    SIGs

    Have a special interest in how Kubernetes works with another technology? See our ever growing lists of SIGs, - from AWS and Openstack to Big Data and Scalability, there’s a place for you to contribute and instructions - for forming a new SIG if your special interest isn’t covered (yet).

    + from AWS and Openstack to Big Data and Scalability, there's a place for you to contribute and instructions + for forming a new SIG if your special interest isn't covered (yet).

    Events

    diff --git a/docs/admin/admission-controllers.md b/docs/admin/admission-controllers.md index 475f2e4be9..089dce2605 100644 --- a/docs/admin/admission-controllers.md +++ b/docs/admin/admission-controllers.md @@ -126,7 +126,7 @@ For additional HTTP configuration, refer to the [kubeconfig](/docs/user-guide/ku When faced with an admission decision, the API Server POSTs a JSON serialized api.imagepolicy.v1alpha1.ImageReview object describing the action. This object contains fields describing the containers being admitted, as well as any pod annotations that match `*.image-policy.k8s.io/*`. -Note that webhook API objects are subject to the same versioning compatibility rules as other Kubernetes API objects. Implementers should be aware of looser compatibility promises for alpha objects and check the “apiVersion” field of the request to ensure correct deserialization. Additionally, the API Server must enable the imagepolicy.k8s.io/v1alpha1 API extensions group (`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`). +Note that webhook API objects are subject to the same versioning compatibility rules as other Kubernetes API objects. Implementers should be aware of looser compatibility promises for alpha objects and check the "apiVersion" field of the request to ensure correct deserialization. Additionally, the API Server must enable the imagepolicy.k8s.io/v1alpha1 API extensions group (`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`). An example request body: @@ -151,7 +151,7 @@ An example request body: } ``` -The remote service is expected to fill the ImageReviewStatus field of the request and respond to either allow or disallow access. The response body’s “spec” field is ignored and may be omitted. A permissive response would return: +The remote service is expected to fill the ImageReviewStatus field of the request and respond to either allow or disallow access. The response body's "spec" field is ignored and may be omitted. A permissive response would return: ``` { diff --git a/docs/admin/networking.md b/docs/admin/networking.md index 0b73e855bb..565005a991 100644 --- a/docs/admin/networking.md +++ b/docs/admin/networking.md @@ -173,7 +173,7 @@ Lars Kellogg-Stedman. [Nuage](http://www.nuagenetworks.net) provides a highly scalable policy-based Software-Defined Networking (SDN) platform. Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards. -The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Nuage’s policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications.The platform’s real-time analytics engine enables visibility and security monitoring for Kubernetes applications. +The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Nuage's policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications.The platform's real-time analytics engine enables visibility and security monitoring for Kubernetes applications. 
### OpenVSwitch diff --git a/docs/admin/rescheduler.md b/docs/admin/rescheduler.md index e1a2cca5de..11d15c10dd 100644 --- a/docs/admin/rescheduler.md +++ b/docs/admin/rescheduler.md @@ -30,7 +30,7 @@ given the pods that are already running in the cluster the rescheduler tries to free up space for the add-on by evicting some pods; then the scheduler will schedule the add-on pod. To avoid situation when another pod is scheduled into the space prepared for the critical add-on, -the chosen node gets a temporary taint “CriticalAddonsOnly” before the eviction(s) +the chosen node gets a temporary taint "CriticalAddonsOnly" before the eviction(s) (see [more details](https://github.com/kubernetes/kubernetes/blob/master/docs/design/taint-toleration-dedicated.md)). Each critical add-on has to tolerate it, the other pods shouldn't tolerate the taint. The tain is removed once the add-on is successfully scheduled. @@ -57,4 +57,3 @@ and have the following annotations specified: * `scheduler.alpha.kubernetes.io/tolerations` set to `[{"key":"CriticalAddonsOnly", "operator":"Exists"}]` The first one marks a pod a critical. The second one is required by Rescheduler algorithm. - diff --git a/docs/getting-started-guides/logging.md b/docs/getting-started-guides/logging.md index ff874e119d..05c41cd3c6 100644 --- a/docs/getting-started-guides/logging.md +++ b/docs/getting-started-guides/logging.md @@ -79,7 +79,7 @@ root 479 0.0 0.0 4348 812 ? S 00:05 0:00 sleep 1 root 480 0.0 0.0 15572 2212 ? R 00:05 0:00 ps aux ``` -What happens if for any reason the image in this pod is killed off and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines for the started container? Or will we lose the log lines from the original container's execution and only see the log lines for the new container? Let’s find out. First let's delete the currently running counter. +What happens if for any reason the image in this pod is killed off and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines for the started container? Or will we lose the log lines from the original container's execution and only see the log lines for the new container? Let's find out. First let's delete the currently running counter. ```shell $ kubectl delete pod counter diff --git a/docs/getting-started-guides/meanstack.md b/docs/getting-started-guides/meanstack.md index 37df0513f7..e1e7bd7696 100644 --- a/docs/getting-started-guides/meanstack.md +++ b/docs/getting-started-guides/meanstack.md @@ -17,12 +17,12 @@ Thankfully, there is a system we can use to manage our containers in a cluster e ## The Basics of Using Kubernetes -Before we jump in and start kube’ing it up, it’s important to understand some of the fundamentals of Kubernetes. +Before we jump in and start kube'ing it up, it's important to understand some of the fundamentals of Kubernetes. * Containers: These are the Docker, rtk, AppC, or whatever Container you are running. You can think of these like subatomic particles; everything is made up of them, but you rarely (if ever) interact with them directly. -* Pods: Pods are the basic component of Kubernetes. They are a group of Containers that are scheduled, live, and die together. Why would you want to have a group of containers instead of just a single container? Let’s say you had a log processor, a web server, and a database. 
If you couldn't use Pods, you would have to bundle the log processor in the web server and database containers, and each time you updated one you would have to update the other. With Pods, you can just reuse the same log processor for both the web server and database. +* Pods: Pods are the basic component of Kubernetes. They are a group of Containers that are scheduled, live, and die together. Why would you want to have a group of containers instead of just a single container? Let's say you had a log processor, a web server, and a database. If you couldn't use Pods, you would have to bundle the log processor in the web server and database containers, and each time you updated one you would have to update the other. With Pods, you can just reuse the same log processor for both the web server and database. * Deployments: A Deployment provides declarative updates for Pods. You can define Deployments to create new Pods, or replace existing Pods. You only need to describe the desired state in a Deployment object, and the deployment controller will change the actual state to the desired state at a controlled rate for you. You can define Deployments to create new resources, or replace existing ones by new ones. -* Services: A service is the single point of contact for a group of Pods. For example, let’s say you have a Deployment that creates four copies of a web server pod. A Service will split the traffic to each of the four copies. Services are "permanent" while the pods behind them can come and go, so it’s a good idea to use Services. +* Services: A service is the single point of contact for a group of Pods. For example, let's say you have a Deployment that creates four copies of a web server pod. A Service will split the traffic to each of the four copies. Services are "permanent" while the pods behind them can come and go, so it's a good idea to use Services. ## Step 1: Creating the Container @@ -37,7 +37,7 @@ To do this, you need to use more Docker. Make sure you have the latest version i Getting the code: -Before starting, let’s get some code to run. You can follow along on your personal machine or a Linux VM in the cloud. I recommend using Linux or a Linux VM; running Docker on Mac and Windows is outside the scope of this tutorial. +Before starting, let's get some code to run. You can follow along on your personal machine or a Linux VM in the cloud. I recommend using Linux or a Linux VM; running Docker on Mac and Windows is outside the scope of this tutorial. ```shell $ git clone https://github.com/ijason/NodeJS-Sample-App.git app @@ -45,7 +45,7 @@ $ mv app/EmployeeDB/* app/ $ sed -i -- 's/localhost/mongo/g' ./app/app.js ``` -This is the same sample app we ran before. The second line just moves everything from the `EmployeeDB` subfolder up into the app folder so it’s easier to access. The third line, once again, replaces the hardcoded `localhost` with the `mongo` proxy. +This is the same sample app we ran before. The second line just moves everything from the `EmployeeDB` subfolder up into the app folder so it's easier to access. The third line, once again, replaces the hardcoded `localhost` with the `mongo` proxy. Building the Docker image: @@ -83,7 +83,7 @@ $ ls Dockerfile app ``` -Let’s build. +Le's build. ```shell $ docker build -t myapp . @@ -139,7 +139,7 @@ After some time, it will finish. You can check the console to see the container ## **Step 4: Creating the Cluster** -So now you have the custom container, let’s create a cluster to run it. 
+So now you have the custom container, let's create a cluster to run it. Currently, a cluster can be as small as one machine to as big as 100 machines. You can pick any machine type you want, so you can have a cluster of a single `f1-micro` instance, 100 `n1-standard-32` instances (3,200 cores!), and anything in between. @@ -193,7 +193,7 @@ $ gcloud compute disks create \ Pick the same zone as your cluster and an appropriate disk size for your application. -Now, we need to create a Deployment that will run the database. I’m using a Deployment and not a Pod, because if a standalone Pod dies, it won't restart automatically. +Now, we need to create a Deployment that will run the database. I'm using a Deployment and not a Pod, because if a standalone Pod dies, it won't restart automatically. ### `db-deployment.yml` @@ -231,7 +231,7 @@ We call the deployment `mongo-deployment`, specify one replica, and open the app The `volumes` section creates the volume for Kubernetes to use. There is a Google Container Engine-specific `gcePersistentDisk` section that maps the disk we made into a Kubernetes volume, and we mount the volume into the `/data/db` directory (as described in the MongoDB Docker documentation) -Now we have the Deployment, let’s create the Service: +Now we have the Deployment, let's create the Service: ### `db-service.yml` @@ -267,7 +267,7 @@ db-service.yml ## Step 6: Running the Database -First, let’s "log in" to the cluster +First, let's "log in" to the cluster ```shell $ gcloud container clusters get-credentials mean-cluster @@ -305,14 +305,14 @@ mongo-deployment-xxxx 1/1 Running 0 3m ## Step 7: Creating the Web Server -Now the database is running, let’s start the web server. +Now the database is running, let's start the web server. We need two things: 1. Deployment to spin up and down web server pods 2. Service to expose our website to the interwebs -Let’s look at the Deployment configuration: +Let's look at the Deployment configuration: ### `web-deployment.yml` @@ -371,7 +371,7 @@ At this point, the local directory looks like this ```shell $ ls -Dockerfile +Dockerfile app db-deployment.yml db-service.yml diff --git a/docs/getting-started-guides/windows/index.md b/docs/getting-started-guides/windows/index.md index 511d125dcd..6349bc52ce 100644 --- a/docs/getting-started-guides/windows/index.md +++ b/docs/getting-started-guides/windows/index.md @@ -15,18 +15,18 @@ In Kubernetes version 1.5, Windows Server Containers for Kubernetes is supported 4. Docker Version 1.12.2-cs2-ws-beta or later ## Networking -Network is achieved using L3 routing. Because third-party networking plugins (e.g. flannel, calico, etc) don’t natively work on Windows Server, existing technology that is built into the Windows and Linux operating systems is relied on. In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node will be connected to the /24 subnet. This allows pods on the same node to communicate with each other. In order to enable networking between pods running on different nodes, routing features that are built into Windows Server 2016 and Linux are used. +Network is achieved using L3 routing. Because third-party networking plugins (e.g. flannel, calico, etc) don't natively work on Windows Server, existing technology that is built into the Windows and Linux operating systems is relied on. 
In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node will be connected to the /24 subnet. This allows pods on the same node to communicate with each other. In order to enable networking between pods running on different nodes, routing features that are built into Windows Server 2016 and Linux are used. ### Linux -The above networking approach is already supported on Linux using a bridge interface, which essentially creates a private network local to the node. Similar to the Windows side, routes to all other pod CIDRs must be created in order to send packets via the “public” NIC. +The above networking approach is already supported on Linux using a bridge interface, which essentially creates a private network local to the node. Similar to the Windows side, routes to all other pod CIDRs must be created in order to send packets via the "public" NIC. ### Windows Each Window Server node should have the following configuration: 1. Two NICs (virtual networking adapters) are required on each Windows Server node - The two Windows container networking modes of interest (transparent and L2 bridge) use an external Hyper-V virtual switch. This means that one of the NICs is entirely allocated to the bridge, creating the need for the second NIC. 2. Transparent container network created - This is a manual configuration step and is shown in **_Route Setup_** section below -3. RRAS (Routing) Windows feature enabled - Allows routing between NICs on the box, and also “captures” packets that have the destination IP of a POD running on the node. To enable, open “Server Manager”. Click on “Roles”, “Add Roles”. Click “Next”. Select “Network Policy and Access Services”. Click on “Routing and Remote Access Service” and the underlying checkboxes -4. Routes defined pointing to the other pod CIDRs via the “public” NIC - These routes are added to the built-in routing table as shown in **_Route Setup_** section below +3. RRAS (Routing) Windows feature enabled - Allows routing between NICs on the box, and also "captures" packets that have the destination IP of a POD running on the node. To enable, open "Server Manager". Click on "Roles", "Add Roles". Click "Next". Select "Network Policy and Access Services". Click on "Routing and Remote Access Service" and the underlying checkboxes +4. Routes defined pointing to the other pod CIDRs via the "public" NIC - These routes are added to the built-in routing table as shown in **_Route Setup_** section below The following diagram illustrates the Windows Server networking setup for Kubernetes Setup ![Windows Setup](windows-setup.png) @@ -38,12 +38,12 @@ To run Windows Server Containers on Kubernetes, you'll need to set up both your 1. Windows Server container host running Windows Server 2016 and Docker v1.12. Follow the setup instructions outlined by this blog post: https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/quick_start_windows_server 2. DNS support for Windows recently got merged to docker master and is currently not supported in a stable docker release. To use DNS build docker from master or download the binary from [Docker master](https://master.dockerproject.org/) -3. Pull the `apprenda/pause` image from `https://hub.docker.com/r/apprenda/pause` +3. Pull the `apprenda/pause` image from `https://hub.docker.com/r/apprenda/pause` 4. RRAS (Routing) Windows feature enabled **Linux Host Setup** -1. 
Linux hosts should be setup according to their respective distro documentation and the requirements of the Kubernetes version you will be using. +1. Linux hosts should be setup according to their respective distro documentation and the requirements of the Kubernetes version you will be using. 2. CNI network plugin installed. ### Component Setup @@ -110,7 +110,7 @@ route add 192.168.1.0 mask 255.255.255.0 192.168.1.1 if A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, Mac OS and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete. For this bootcamp, however, you'll use a provided online terminal with Minikube pre-installed.
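As a short sketch of the bootstrapping operations mentioned above (start, stop, status, and delete), assuming the minikube and kubectl binaries are already installed locally:

```shell
# Basic Minikube lifecycle commands (sketch; provider-specific flags omitted).
$ minikube start     # create the local VM and a single-node cluster
$ minikube status    # check whether the cluster is running
$ kubectl get nodes  # should list the single node of the Minikube cluster
$ minikube stop      # shut the cluster down, preserving its state
$ minikube delete    # remove the VM and the cluster entirely
```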

-Now that you know what Kubernetes is, let’s go to the online tutorial and start our first cluster!
+Now that you know what Kubernetes is, let's go to the online tutorial and start our first cluster!

    diff --git a/docs/tutorials/kubernetes-basics/deploy-intro.html b/docs/tutorials/kubernetes-basics/deploy-intro.html index 5352e0ea38..b8ca582f1d 100644 --- a/docs/tutorials/kubernetes-basics/deploy-intro.html +++ b/docs/tutorials/kubernetes-basics/deploy-intro.html @@ -86,9 +86,9 @@ title: Using kubectl to Create a Deployment
-For our first Deployment, we’ll use a Node.js application packaged in a Docker container. The source code and the Dockerfile are available in the GitHub repository for the Kubernetes Bootcamp.
+For our first Deployment, we'll use a Node.js application packaged in a Docker container. The source code and the Dockerfile are available in the GitHub repository for the Kubernetes Bootcamp.
-Now that you know what Deployments are, let’s go to the online tutorial and deploy our first app!
+Now that you know what Deployments are, let's go to the online tutorial and deploy our first app!
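If you are not using the online tutorial, a rough command-line equivalent is sketched below; `my-nginx` and the `nginx:1.7.9` image are placeholders borrowed from the kubectl examples later in this patch, not the Bootcamp's Node.js image.

```shell
# Create a Deployment that keeps three replicas of the image running
kubectl run my-nginx --image=nginx:1.7.9 --replicas=3

# Check the Deployment and the Pods it created
kubectl get deployments
kubectl get pods
```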

    diff --git a/docs/tutorials/kubernetes-basics/explore-intro.html b/docs/tutorials/kubernetes-basics/explore-intro.html index edc813d3d4..e16d2a0755 100644 --- a/docs/tutorials/kubernetes-basics/explore-intro.html +++ b/docs/tutorials/kubernetes-basics/explore-intro.html @@ -34,7 +34,7 @@ title: Viewing Pods and Nodes
  • Networking, as a unique cluster IP address
  • Information about how to run each container, such as the container image version or specific ports to use
-A Pod models an application-specific “logical host” and can contain different application containers which are relatively tightly coupled. For example, a Pod might include both the container with your Node.js app as well as a different container that feeds the data to be published by the Node.js webserver. The containers in a Pod share an IP Address and port space, are always co-located and co-scheduled, and run in a shared context on the same Node.
+A Pod models an application-specific "logical host" and can contain different application containers which are relatively tightly coupled. For example, a Pod might include both the container with your Node.js app as well as a different container that feeds the data to be published by the Node.js webserver. The containers in a Pod share an IP Address and port space, are always co-located and co-scheduled, and run in a shared context on the same Node.

    Pods are the atomic unit on the Kubernetes platform. When we create a Deployment on Kubernetes, that Deployment creates Pods with containers inside them (as opposed to creating containers directly). Each Pod is tied to the Node where it is scheduled, and remains there until termination (according to restart policy) or deletion. In case of a Node failure, identical Pods are scheduled on other available Nodes in the cluster.
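For readers who want to see what running a container in a Pod looks like as an object, here is a minimal, hypothetical Pod manifest; in the bootcamp itself you normally let a Deployment create Pods rather than creating them directly.

```shell
# A single-container Pod created directly from an inline manifest.
# The name, label and nginx image below are illustrative placeholders.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  containers:
  - name: web
    image: nginx:1.7.9
    ports:
    - containerPort: 80
EOF
```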

    @@ -117,7 +117,7 @@ title: Viewing Pods and Nodes

    You can use these commands to see when applications were deployed, what their current statuses are, where they are running and what their configurations are.
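The command list itself falls outside this hunk, but in practice the checks look roughly like this; `example-pod` is a placeholder name.

```shell
# List Pods, including the node each one is running on
kubectl get pods -o wide

# Show a Pod's configuration, status and recent events
kubectl describe pod example-pod

# Read the container's logs and run a command inside it
kubectl logs example-pod
kubectl exec example-pod -- env
```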

-Now that we know more about our cluster components and the command line, let’s explore our application.
+Now that we know more about our cluster components and the command line, let's explore our application.

    diff --git a/docs/tutorials/kubernetes-basics/expose-intro.html b/docs/tutorials/kubernetes-basics/expose-intro.html index 2d3c4d7bb4..d914fdf9a2 100644 --- a/docs/tutorials/kubernetes-basics/expose-intro.html +++ b/docs/tutorials/kubernetes-basics/expose-intro.html @@ -28,7 +28,7 @@ title: Using a Service to Expose Your App

    Kubernetes Services

-While Pods do have their own unique IP across the cluster, those IP’s are not exposed outside Kubernetes. Taking into account that over time Pods may be terminated, deleted or replaced by other Pods, we need a way to let other Pods and applications automatically discover each other. Kubernetes addresses this by grouping Pods in Services. A Kubernetes Service is an abstraction layer which defines a logical set of Pods and enables external traffic exposure, load balancing and service discovery for those Pods.
+While Pods do have their own unique IP across the cluster, those IPs are not exposed outside Kubernetes. Taking into account that over time Pods may be terminated, deleted or replaced by other Pods, we need a way to let other Pods and applications automatically discover each other. Kubernetes addresses this by grouping Pods in Services. A Kubernetes Service is an abstraction layer which defines a logical set of Pods and enables external traffic exposure, load balancing and service discovery for those Pods.
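As a sketch of that abstraction (not part of this page's diff), a Service that selects Pods by label and exposes them on every node's IP could look like the following; all names are placeholders.

```shell
# A NodePort Service routing port 80 traffic to any Pod labelled app=example
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 80
EOF
```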

    This abstraction will allow us to expose Pods to traffic originating from outside the cluster. Services have their own unique cluster-private IP address and expose a port to receive traffic. If you choose to expose the service outside the cluster, the options are:

      @@ -70,7 +70,7 @@ title: Using a Service to Expose Your App
-A Service provides load balancing of traffic across the contained set of Pods. This is useful when a service is created to group all Pods from a specific Deployment (our application will make use of this in the next module, when we’ll have multiple instances running).
+A Service provides load balancing of traffic across the contained set of Pods. This is useful when a service is created to group all Pods from a specific Deployment (our application will make use of this in the next module, when we'll have multiple instances running).

      Services are also responsible for service-discovery within the cluster (covered in Accessing the Service). This will for example allow a frontend service (like a web server) to receive traffic from a backend service (like a database) without worrying about Pods.
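One way to see that discovery in action, assuming a Service named `example-service` exists and the Pod's image ships `nslookup` (busybox does; many minimal images do not):

```shell
# Inside any Pod, the Service name resolves through the cluster DNS add-on
kubectl exec example-pod -- nslookup example-service

# The Service's cluster IP and ports are also visible to the operator
kubectl get svc example-service
```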

      @@ -120,7 +120,7 @@ title: Using a Service to Expose Your App

Labels can be attached to objects at creation time or later, and can be modified at any time. The kubectl run command sets some default Labels/Label Selectors on the new Pods/Deployment. The link between Labels and Label Selectors defines the relationship between the Deployment and the Pods it creates.
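To inspect those defaults yourself, the commands below assume a Deployment created with `kubectl run my-nginx ...`, which labels the resulting objects with `run=my-nginx`.

```shell
# Show the labels kubectl run attached to the Deployment and its Pods
kubectl get deployments --show-labels
kubectl get pods --show-labels

# Select only the Pods carrying that label
kubectl get pods -l run=my-nginx
```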

-Now let’s expose our application with the help of a Service, and apply some new Labels.
+Now let's expose our application with the help of a Service, and apply some new Labels.
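Outside the interactive tutorial, the same steps can be sketched with `kubectl expose` and `kubectl label`; the `my-nginx` Deployment name and the `tier=frontend` label are assumptions, not values from the tutorial.

```shell
# Expose the Deployment outside the cluster on an automatically chosen node port
kubectl expose deployment/my-nginx --type=NodePort --port=80

# Inspect the new Service and the port it was given
kubectl describe service my-nginx

# Add a new label to the Pods behind the Deployment and query by it
kubectl label pods -l run=my-nginx tier=frontend
kubectl get pods -l tier=frontend
```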


      diff --git a/docs/tutorials/kubernetes-basics/scale-intro.html b/docs/tutorials/kubernetes-basics/scale-intro.html index 49b9e49dec..cf3635eba1 100644 --- a/docs/tutorials/kubernetes-basics/scale-intro.html +++ b/docs/tutorials/kubernetes-basics/scale-intro.html @@ -101,7 +101,7 @@ title: Running Multiple Instances of Your App
-Once you have multiple instances of an Application running, you would be able to do Rolling updates without downtime. We’ll cover that in the next module. Now, let’s go to the online terminal and scale our application.
+Once you have multiple instances of an Application running, you would be able to do Rolling updates without downtime. We'll cover that in the next module. Now, let's go to the online terminal and scale our application.
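The non-interactive equivalent is a single `kubectl scale` call; `my-nginx` is again a placeholder Deployment name.

```shell
# Scale the Deployment out to four replicas, confirm, then scale back down
kubectl scale deployment/my-nginx --replicas=4
kubectl get deployments
kubectl get pods -o wide

kubectl scale deployment/my-nginx --replicas=2
```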


      diff --git a/docs/tutorials/kubernetes-basics/update-intro.html b/docs/tutorials/kubernetes-basics/update-intro.html index 9ed498ce35..d331e3f58c 100644 --- a/docs/tutorials/kubernetes-basics/update-intro.html +++ b/docs/tutorials/kubernetes-basics/update-intro.html @@ -116,7 +116,7 @@ title: Performing a Rolling Update
-In the following interactive tutorial we’ll update our application to a new version, and also perform a rollback.
+In the following interactive tutorial we'll update our application to a new version, and also perform a rollback.
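For reference, a command-line sketch of the same update and rollback; it assumes a Deployment created with `kubectl run my-nginx --image=nginx:1.7.9`, in which case the container is also named `my-nginx`.

```shell
# Roll the Deployment to a new image version, one batch of Pods at a time
kubectl set image deployment/my-nginx my-nginx=nginx:1.9.1

# Watch the rollout complete, then roll back to the previous revision
kubectl rollout status deployment/my-nginx
kubectl rollout undo deployment/my-nginx
```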


      diff --git a/docs/tutorials/stateful-application/zookeeper.md b/docs/tutorials/stateful-application/zookeeper.md index 90a78fdc31..b36ed0835b 100644 --- a/docs/tutorials/stateful-application/zookeeper.md +++ b/docs/tutorials/stateful-application/zookeeper.md @@ -11,15 +11,15 @@ title: Running ZooKeeper, A CP Distributed System --- {% capture overview %} -This tutorial demonstrates [Apache Zookeeper](https://zookeeper.apache.org) on -Kubernetes using [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/), -[PodDisruptionBudgets](/docs/admin/disruptions/#specifying-a-poddisruptionbudget), +This tutorial demonstrates [Apache Zookeeper](https://zookeeper.apache.org) on +Kubernetes using [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/), +[PodDisruptionBudgets](/docs/admin/disruptions/#specifying-a-poddisruptionbudget), and [PodAntiAffinity](/docs/user-guide/node-selection/). {% endcapture %} {% capture prerequisites %} -Before starting this tutorial, you should be familiar with the following +Before starting this tutorial, you should be familiar with the following Kubernetes concepts. * [Pods](/docs/user-guide/pods/single-container/) @@ -34,16 +34,16 @@ Kubernetes concepts. * [kubectl CLI](/docs/user-guide/kubectl) You will require a cluster with at least four nodes, and each node will require -at least 2 CPUs and 4 GiB of memory. In this tutorial you will cordon and -drain the cluster's nodes. **This means that all Pods on the cluster's nodes -will be terminated and evicted, and the nodes will, temporarily, become -unschedulable.** You should use a dedicated cluster for this tutorial, or you -should ensure that the disruption you cause will not interfere with other +at least 2 CPUs and 4 GiB of memory. In this tutorial you will cordon and +drain the cluster's nodes. **This means that all Pods on the cluster's nodes +will be terminated and evicted, and the nodes will, temporarily, become +unschedulable.** You should use a dedicated cluster for this tutorial, or you +should ensure that the disruption you cause will not interfere with other tenants. -This tutorial assumes that your cluster is configured to dynamically provision +This tutorial assumes that your cluster is configured to dynamically provision PersistentVolumes. If your cluster is not configured to do so, you -will have to manually provision three 20 GiB volumes prior to starting this +will have to manually provision three 20 GiB volumes prior to starting this tutorial. {% endcapture %} @@ -60,51 +60,51 @@ After this tutorial, you will know the following. #### ZooKeeper Basics -[Apache ZooKeeper](https://zookeeper.apache.org/doc/current/) is a +[Apache ZooKeeper](https://zookeeper.apache.org/doc/current/) is a distributed, open-source coordination service for distributed applications. -ZooKeeper allows you to read, write, and observe updates to data. Data are -organized in a file system like hierarchy and replicated to all ZooKeeper -servers in the ensemble (a set of ZooKeeper servers). All operations on data -are atomic and sequentially consistent. ZooKeeper ensures this by using the -[Zab](https://pdfs.semanticscholar.org/b02c/6b00bd5dbdbd951fddb00b906c82fa80f0b3.pdf) +ZooKeeper allows you to read, write, and observe updates to data. Data are +organized in a file system like hierarchy and replicated to all ZooKeeper +servers in the ensemble (a set of ZooKeeper servers). All operations on data +are atomic and sequentially consistent. 
ZooKeeper ensures this by using the +[Zab](https://pdfs.semanticscholar.org/b02c/6b00bd5dbdbd951fddb00b906c82fa80f0b3.pdf) consensus protocol to replicate a state machine across all servers in the ensemble. The ensemble uses the Zab protocol to elect a leader, and -data can not be written until a leader is elected. Once a leader is -elected, the ensemble uses Zab to ensure that all writes are replicated to a +data can not be written until a leader is elected. Once a leader is +elected, the ensemble uses Zab to ensure that all writes are replicated to a quorum before they are acknowledged and made visible to clients. Without respect -to weighted quorums, a quorum is a majority component of the ensemble containing -the current leader. For instance, if the ensemble has three servers, a component -that contains the leader and one other server constitutes a quorum. If the +to weighted quorums, a quorum is a majority component of the ensemble containing +the current leader. For instance, if the ensemble has three servers, a component +that contains the leader and one other server constitutes a quorum. If the ensemble can not achieve a quorum, data can not be written. -ZooKeeper servers keep their entire state machine in memory, but every mutation -is written to a durable WAL (Write Ahead Log) on storage media. When a server -crashes, it can recover its previous state by replaying the WAL. In order to -prevent the WAL from growing without bound, ZooKeeper servers will periodically -snapshot their in memory state to storage media. These snapshots can be loaded -directly into memory, and all WAL entries that preceded the snapshot may be +ZooKeeper servers keep their entire state machine in memory, but every mutation +is written to a durable WAL (Write Ahead Log) on storage media. When a server +crashes, it can recover its previous state by replaying the WAL. In order to +prevent the WAL from growing without bound, ZooKeeper servers will periodically +snapshot their in memory state to storage media. These snapshots can be loaded +directly into memory, and all WAL entries that preceded the snapshot may be safely discarded. ### Creating a ZooKeeper Ensemble -The manifest below contains a -[Headless Service](/docs/user-guide/services/#headless-services), -a [ConfigMap](/docs/user-guide/configmap/), -a [PodDisruptionBudget](/docs/admin/disruptions/#specifying-a-poddisruptionbudget), -and a [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/). +The manifest below contains a +[Headless Service](/docs/user-guide/services/#headless-services), +a [ConfigMap](/docs/user-guide/configmap/), +a [PodDisruptionBudget](/docs/admin/disruptions/#specifying-a-poddisruptionbudget), +and a [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/). {% include code.html language="yaml" file="zookeeper.yaml" ghlink="/docs/tutorials/stateful-application/zookeeper.yaml" %} -Open a command terminal, and use -[`kubectl create`](/docs/user-guide/kubectl/kubectl_create/) to create the +Open a command terminal, and use +[`kubectl create`](/docs/user-guide/kubectl/kubectl_create/) to create the manifest. ```shell kubectl create -f http://k8s.io/docs/tutorials/stateful-application/zookeeper.yaml ``` -This creates the `zk-headless` Headless Service, the `zk-config` ConfigMap, +This creates the `zk-headless` Headless Service, the `zk-config` ConfigMap, the `zk-budget` PodDisruptionBudget, and the `zk` StatefulSet. 
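As an optional check that is not part of the original walkthrough, each object created from the manifest can be listed individually before watching the Pods come up.

```shell
# The manifest should have produced one of each of these objects
kubectl get service zk-headless
kubectl get configmap zk-config
kubectl get poddisruptionbudget zk-budget
kubectl get statefulset zk
```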
```shell @@ -142,29 +142,29 @@ zk-2 0/1 Running 0 19s zk-2 1/1 Running 0 40s ``` -The StatefulSet controller creates three Pods, and each Pod has a container with +The StatefulSet controller creates three Pods, and each Pod has a container with a [ZooKeeper 3.4.9](http://www-us.apache.org/dist/zookeeper/zookeeper-3.4.9/) server. #### Facilitating Leader Election -As there is no terminating algorithm for electing a leader in an anonymous -network, Zab requires explicit membership configuration in order to perform -leader election. Each server in the ensemble needs to have a unique +As there is no terminating algorithm for electing a leader in an anonymous +network, Zab requires explicit membership configuration in order to perform +leader election. Each server in the ensemble needs to have a unique identifier, all servers need to know the global set of identifiers, and each identifier needs to be associated with a network address. -Use [`kubectl exec`](/docs/user-guide/kubectl/kubectl_exec/) to get the hostnames +Use [`kubectl exec`](/docs/user-guide/kubectl/kubectl_exec/) to get the hostnames of the Pods in the `zk` StatefulSet. ```shell for i in 0 1 2; do kubectl exec zk-$i -- hostname; done ``` -The StatefulSet controller provides each Pod with a unique hostname based on its -ordinal index. The hostnames take the form `-`. -As the `replicas` field of the `zk` StatefulSet is set to `3`, the Set's -controller creates three Pods with their hostnames set to `zk-0`, `zk-1`, and -`zk-2`. +The StatefulSet controller provides each Pod with a unique hostname based on its +ordinal index. The hostnames take the form `-`. +As the `replicas` field of the `zk` StatefulSet is set to `3`, the Set's +controller creates three Pods with their hostnames set to `zk-0`, `zk-1`, and +`zk-2`. ```shell zk-0 @@ -172,9 +172,9 @@ zk-1 zk-2 ``` -The servers in a ZooKeeper ensemble use natural numbers as unique identifiers, and -each server's identifier is stored in a file called `myid` in the server’s -data directory. +The servers in a ZooKeeper ensemble use natural numbers as unique identifiers, and +each server's identifier is stored in a file called `myid` in the server's +data directory. Examine the contents of the `myid` file for each server. @@ -182,7 +182,7 @@ Examine the contents of the `myid` file for each server. for i in 0 1 2; do echo "myid zk-$i";kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done ``` -As the identifiers are natural numbers and the ordinal indices are non-negative +As the identifiers are natural numbers and the ordinal indices are non-negative integers, you can generate an identifier by adding one to the ordinal. ```shell @@ -200,7 +200,7 @@ Get the FQDN (Fully Qualified Domain Name) of each Pod in the `zk` StatefulSet. for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done ``` -The `zk-headless` Service creates a domain for all of the Pods, +The `zk-headless` Service creates a domain for all of the Pods, `zk-headless.default.svc.cluster.local`. ```shell @@ -209,11 +209,11 @@ zk-1.zk-headless.default.svc.cluster.local zk-2.zk-headless.default.svc.cluster.local ``` -The A records in [Kubernetes DNS](/docs/admin/dns/) resolve the FQDNs to the Pods' IP addresses. -If the Pods are rescheduled, the A records will be updated with the Pods' new IP +The A records in [Kubernetes DNS](/docs/admin/dns/) resolve the FQDNs to the Pods' IP addresses. +If the Pods are rescheduled, the A records will be updated with the Pods' new IP addresses, but the A record's names will not change. 
-ZooKeeper stores its application configuration in a file named `zoo.cfg`. Use +ZooKeeper stores its application configuration in a file named `zoo.cfg`. Use `kubectl exec` to view the contents of the `zoo.cfg` file in the `zk-0` Pod. ``` @@ -222,8 +222,8 @@ kubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg For the `server.1`, `server.2`, and `server.3` properties at the bottom of the file, the `1`, `2`, and `3` correspond to the identifiers in the -ZooKeeper servers' `myid` files. They are set to the FQDNs for the Pods in -the `zk` StatefulSet. +ZooKeeper servers' `myid` files. They are set to the FQDNs for the Pods in +the `zk` StatefulSet. ```shell clientPort=2181 @@ -244,16 +244,16 @@ server.3=zk-2.zk-headless.default.svc.cluster.local:2888:3888 #### Achieving Consensus -Consensus protocols require that the identifiers of each participant be -unique. No two participants in the Zab protocol should claim the same unique -identifier. This is necessary to allow the processes in the system to agree on -which processes have committed which data. If two Pods were launched with the +Consensus protocols require that the identifiers of each participant be +unique. No two participants in the Zab protocol should claim the same unique +identifier. This is necessary to allow the processes in the system to agree on +which processes have committed which data. If two Pods were launched with the same ordinal, two ZooKeeper servers would both identify themselves as the same server. -When you created the `zk` StatefulSet, the StatefulSet's controller created -each Pod sequentially, in the order defined by the Pods' ordinal indices, and it -waited for each Pod to be Running and Ready before creating the next Pod. +When you created the `zk` StatefulSet, the StatefulSet's controller created +each Pod sequentially, in the order defined by the Pods' ordinal indices, and it +waited for each Pod to be Running and Ready before creating the next Pod. ```shell kubectl get pods -w -l app=zk @@ -277,7 +277,7 @@ zk-2 1/1 Running 0 40s The A records for each Pod are only entered when the Pod becomes Ready. Therefore, the FQDNs of the ZooKeeper servers will only resolve to a single endpoint, and that -endpoint will be the unique ZooKeeper server claiming the identity configured +endpoint will be the unique ZooKeeper server claiming the identity configured in its `myid` file. ```shell @@ -286,7 +286,7 @@ zk-1.zk-headless.default.svc.cluster.local zk-2.zk-headless.default.svc.cluster.local ``` -This ensures that the `servers` properties in the ZooKeepers' `zoo.cfg` files +This ensures that the `servers` properties in the ZooKeepers' `zoo.cfg` files represents a correctly configured ensemble. ```shell @@ -295,16 +295,16 @@ server.2=zk-1.zk-headless.default.svc.cluster.local:2888:3888 server.3=zk-2.zk-headless.default.svc.cluster.local:2888:3888 ``` -When the servers use the Zab protocol to attempt to commit a value, they will -either achieve consensus and commit the value (if leader election has succeeded -and at least two of the Pods are Running and Ready), or they will fail to do so -(if either of the aforementioned conditions are not met). No state will arise +When the servers use the Zab protocol to attempt to commit a value, they will +either achieve consensus and commit the value (if leader election has succeeded +and at least two of the Pods are Running and Ready), or they will fail to do so +(if either of the aforementioned conditions are not met). 
No state will arise where one server acknowledges a write on behalf of another. #### Sanity Testing the Ensemble -The most basic sanity test is to write some data to one ZooKeeper server and -to read the data from another. +The most basic sanity test is to write some data to one ZooKeeper server and +to read the data from another. Use the `zkCli.sh` script to write `world` to the path `/hello` on the `zk-0` Pod. @@ -327,7 +327,7 @@ Get the data from the `zk-1` Pod. kubectl exec zk-1 zkCli.sh get /hello ``` -The data that you created on `zk-0` is available on all of the servers in the +The data that you created on `zk-0` is available on all of the servers in the ensemble. ```shell @@ -351,12 +351,12 @@ numChildren = 0 #### Providing Durable Storage As mentioned in the [ZooKeeper Basics](#zookeeper-basics) section, -ZooKeeper commits all entries to a durable WAL, and periodically writes snapshots -in memory state, to storage media. Using WALs to provide durability is a common +ZooKeeper commits all entries to a durable WAL, and periodically writes snapshots +in memory state, to storage media. Using WALs to provide durability is a common technique for applications that use consensus protocols to achieve a replicated state machine and for storage applications in general. -Use [`kubectl delete`](/docs/user-guide/kubectl/kubectl_delete/) to delete the +Use [`kubectl delete`](/docs/user-guide/kubectl/kubectl_delete/) to delete the `zk` StatefulSet. ```shell @@ -392,7 +392,7 @@ Reapply the manifest in `zookeeper.yaml`. kubectl apply -f http://k8s.io/docs/tutorials/stateful-application/zookeeper.yaml ``` -The `zk` StatefulSet will be created, but, as they already exist, the other API +The `zk` StatefulSet will be created, but, as they already exist, the other API Objects in the manifest will not be modified. ```shell @@ -429,14 +429,14 @@ zk-2 0/1 Running 0 19s zk-2 1/1 Running 0 40s ``` -Get the value you entered during the [sanity test](#sanity-testing-the-ensemble), +Get the value you entered during the [sanity test](#sanity-testing-the-ensemble), from the `zk-2` Pod. ```shell kubectl exec zk-2 zkCli.sh get /hello ``` -Even though all of the Pods in the `zk` StatefulSet have been terminated and +Even though all of the Pods in the `zk` StatefulSet have been terminated and recreated, the ensemble still serves the original value. ```shell @@ -457,8 +457,8 @@ dataLength = 5 numChildren = 0 ``` -The `volumeClaimTemplates` field, of the `zk` StatefulSet's `spec`, specifies a -PersistentVolume that will be provisioned for each Pod. +The `volumeClaimTemplates` field, of the `zk` StatefulSet's `spec`, specifies a +PersistentVolume that will be provisioned for each Pod. ```yaml volumeClaimTemplates: @@ -474,8 +474,8 @@ volumeClaimTemplates: ``` -The StatefulSet controller generates a PersistentVolumeClaim for each Pod in -the StatefulSet. +The StatefulSet controller generates a PersistentVolumeClaim for each Pod in +the StatefulSet. Get the StatefulSet's PersistentVolumeClaims. @@ -483,7 +483,7 @@ Get the StatefulSet's PersistentVolumeClaims. kubectl get pvc -l app=zk ``` -When the StatefulSet recreated its Pods, the Pods' PersistentVolumes were +When the StatefulSet recreated its Pods, the Pods' PersistentVolumes were remounted. ```shell @@ -502,19 +502,19 @@ volumeMounts: mountPath: /var/lib/zookeeper ``` -When a Pod in the `zk` StatefulSet is (re)scheduled, it will always have the -same PersistentVolume mounted to the ZooKeeper server's data directory. 
-Even when the Pods are rescheduled, all of the writes made to the ZooKeeper +When a Pod in the `zk` StatefulSet is (re)scheduled, it will always have the +same PersistentVolume mounted to the ZooKeeper server's data directory. +Even when the Pods are rescheduled, all of the writes made to the ZooKeeper servers' WALs, and all of their snapshots, remain durable. ### Ensuring Consistent Configuration As noted in the [Facilitating Leader Election](#facilitating-leader-election) and -[Achieving Consensus](#achieving-consensus) sections, the servers in a -ZooKeeper ensemble require consistent configuration in order to elect a leader +[Achieving Consensus](#achieving-consensus) sections, the servers in a +ZooKeeper ensemble require consistent configuration in order to elect a leader and form a quorum. They also require consistent configuration of the Zab protocol -in order for the protocol to work correctly over a network. You can use -ConfigMaps to achieve this. +in order for the protocol to work correctly over a network. You can use +ConfigMaps to achieve this. Get the `zk-config` ConfigMap. @@ -532,8 +532,8 @@ data: tick: "2000" ``` -The `env` field of the `zk` StatefulSet's Pod `template` reads the ConfigMap -into environment variables. These variables are injected into the containers +The `env` field of the `zk` StatefulSet's Pod `template` reads the ConfigMap +into environment variables. These variables are injected into the containers environment. ```yaml @@ -581,7 +581,7 @@ env: ``` The entry point of the container invokes a bash script, `zkConfig.sh`, prior to -launching the ZooKeeper server process. This bash script generates the +launching the ZooKeeper server process. This bash script generates the ZooKeeper configuration files from the supplied environment variables. ```yaml @@ -597,8 +597,8 @@ Examine the environment of all of the Pods in the `zk` StatefulSet. for i in 0 1 2; do kubectl exec zk-$i env | grep ZK_*;echo""; done ``` -All of the variables populated from `zk-config` contain identical values. This -allows the `zkGenConfig.sh` script to create consistent configurations for all +All of the variables populated from `zk-config` contain identical values. This +allows the `zkGenConfig.sh` script to create consistent configurations for all of the ZooKeeper servers in the ensemble. ```shell @@ -653,16 +653,16 @@ ZK_LOG_DIR=/var/log/zookeeper #### Configuring Logging -One of the files generated by the `zkConfigGen.sh` script controls ZooKeeper's logging. -ZooKeeper uses [Log4j](http://logging.apache.org/log4j/2.x/), and, by default, -it uses a time and size based rolling file appender for its logging configuration. +One of the files generated by the `zkConfigGen.sh` script controls ZooKeeper's logging. +ZooKeeper uses [Log4j](http://logging.apache.org/log4j/2.x/), and, by default, +it uses a time and size based rolling file appender for its logging configuration. Get the logging configuration from one of Pods in the `zk` StatefulSet. ```shell kubectl exec zk-0 cat /usr/etc/zookeeper/log4j.properties ``` -The logging configuration below will cause the ZooKeeper process to write all +The logging configuration below will cause the ZooKeeper process to write all of its logs to the standard output file stream. ```shell @@ -675,20 +675,20 @@ log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n ``` -This is the simplest possible way to safely log inside the container. 
As the -application's logs are being written to standard out, Kubernetes will handle -log rotation for you. Kubernetes also implements a sane retention policy that -ensures application logs written to standard out and standard error do not +This is the simplest possible way to safely log inside the container. As the +application's logs are being written to standard out, Kubernetes will handle +log rotation for you. Kubernetes also implements a sane retention policy that +ensures application logs written to standard out and standard error do not exhaust local storage media. -Use [`kubectl logs`](/docs/user-guide/kubectl/kubectl_logs/) to retrieve the last +Use [`kubectl logs`](/docs/user-guide/kubectl/kubectl_logs/) to retrieve the last few log lines from one of the Pods. ```shell kubectl logs zk-0 --tail 20 ``` -Application logs that are written to standard out or standard error are viewable +Application logs that are written to standard out or standard error are viewable using `kubectl logs` and from the Kubernetes Dashboard. ```shell @@ -714,19 +714,19 @@ using `kubectl logs` and from the Kubernetes Dashboard. 2016-12-06 19:34:46,230 [myid:1] - INFO [Thread-1142:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52768 (no session established for client) ``` -Kubernetes also supports more powerful, but more complex, logging integrations -with [Google Cloud Logging](https://github.com/kubernetes/contrib/blob/master/logging/fluentd-sidecar-gcp/README.md) +Kubernetes also supports more powerful, but more complex, logging integrations +with [Google Cloud Logging](https://github.com/kubernetes/contrib/blob/master/logging/fluentd-sidecar-gcp/README.md) and [ELK](https://github.com/kubernetes/contrib/blob/master/logging/fluentd-sidecar-es/README.md). For cluster level log shipping and aggregation, you should consider deploying a -[sidecar](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html) +[sidecar](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html) container to rotate and ship your logs. #### Configuring a Non-Privileged User -The best practices with respect to allowing an application to run as a privileged -user inside of a container are a matter of debate. If your organization requires -that applications be run as a non-privileged user you can use a -[SecurityContext](/docs/user-guide/security-context/) to control the user that +The best practices with respect to allowing an application to run as a privileged +user inside of a container are a matter of debate. If your organization requires +that applications be run as a non-privileged user you can use a +[SecurityContext](/docs/user-guide/security-context/) to control the user that the entry point runs as. The `zk` StatefulSet's Pod `template` contains a SecurityContext. @@ -737,7 +737,7 @@ securityContext: fsGroup: 1000 ``` -In the Pods' containers, UID 1000 corresponds to the zookeeper user and GID 1000 +In the Pods' containers, UID 1000 corresponds to the zookeeper user and GID 1000 corresponds to the zookeeper group. Get the ZooKeeper process information from the `zk-0` Pod. @@ -746,7 +746,7 @@ Get the ZooKeeper process information from the `zk-0` Pod. kubectl exec zk-0 -- ps -elf ``` -As the `runAsUser` field of the `securityContext` object is set to 1000, +As the `runAsUser` field of the `securityContext` object is set to 1000, instead of running as root, the ZooKeeper process runs as the zookeeper user. 
```shell @@ -755,8 +755,8 @@ F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD 0 S zookeep+ 27 1 0 80 0 - 1155556 - 20:46 ? 00:00:19 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.9.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper: -Xmx2G -Xms2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/bin/../etc/zookeeper/zoo.cfg ``` -By default, when the Pod's PersistentVolume is mounted to the ZooKeeper server's -data directory, it is only accessible by the root user. This configuration +By default, when the Pod's PersistentVolume is mounted to the ZooKeeper server's +data directory, it is only accessible by the root user. This configuration prevents the ZooKeeper process from writing to its WAL and storing its snapshots. Get the file permissions of the ZooKeeper data directory on the `zk-0` Pod. @@ -765,8 +765,8 @@ Get the file permissions of the ZooKeeper data directory on the `zk-0` Pod. kubectl exec -ti zk-0 -- ls -ld /var/lib/zookeeper/data ``` -As the `fsGroup` field of the `securityContext` object is set to 1000, -the ownership of the Pods' PersistentVolumes is set to the zookeeper group, +As the `fsGroup` field of the `securityContext` object is set to 1000, +the ownership of the Pods' PersistentVolumes is set to the zookeeper group, and the ZooKeeper process is able to successfully read and write its data. ```shell @@ -775,21 +775,21 @@ drwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data ### Managing the ZooKeeper Process -The [ZooKeeper documentation](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_supervision) -documentation indicates that "You will want to have a supervisory process that -manages each of your ZooKeeper server processes (JVM)." Utilizing a watchdog -(supervisory process) to restart failed processes in a distributed system is a -common pattern. When deploying an application in Kubernetes, rather than using -an external utility as a supervisory process, you should use Kubernetes as the +The [ZooKeeper documentation](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_supervision) +documentation indicates that "You will want to have a supervisory process that +manages each of your ZooKeeper server processes (JVM)." Utilizing a watchdog +(supervisory process) to restart failed processes in a distributed system is a +common pattern. When deploying an application in Kubernetes, rather than using +an external utility as a supervisory process, you should use Kubernetes as the watchdog for your application. -#### Handling Process Failure +#### Handling Process Failure -[Restart Policies](/docs/user-guide/pod-states/#restartpolicy) control how +[Restart Policies](/docs/user-guide/pod-states/#restartpolicy) control how Kubernetes handles process failures for the entry point of the container in a Pod. For Pods in a StatefulSet, the only appropriate RestartPolicy is Always, and this -is the default value. For stateful applications you should **never** override +is the default value. 
For stateful applications you should **never** override the default policy. @@ -799,7 +799,7 @@ Examine the process tree for the ZooKeeper server running in the `zk-0` Pod. kubectl exec zk-0 -- ps -ef ``` -The command used as the container's entry point has PID 1, and the +The command used as the container's entry point has PID 1, and the the ZooKeeper process, a child of the entry point, has PID 23. @@ -824,8 +824,8 @@ In another terminal, kill the ZooKeeper process in Pod `zk-0`. ``` -The death of the ZooKeeper process caused its parent process to terminate. As -the RestartPolicy of the container is Always, the parent process was relaunched. +The death of the ZooKeeper process caused its parent process to terminate. As +the RestartPolicy of the container is Always, the parent process was relaunched. ```shell @@ -840,19 +840,19 @@ zk-0 1/1 Running 1 29m ``` -If your application uses a script (such as zkServer.sh) to launch the process +If your application uses a script (such as zkServer.sh) to launch the process that implements the application's business logic, the script must terminate with the child process. This ensures that Kubernetes will restart the application's -container when the process implementing the application's business logic fails. +container when the process implementing the application's business logic fails. #### Testing for Liveness -Configuring your application to restart failed processes is not sufficient to -keep a distributed system healthy. There are many scenarios where -a system's processes can be both alive and unresponsive, or otherwise -unhealthy. You should use liveness probes in order to notify Kubernetes +Configuring your application to restart failed processes is not sufficient to +keep a distributed system healthy. There are many scenarios where +a system's processes can be both alive and unresponsive, or otherwise +unhealthy. You should use liveness probes in order to notify Kubernetes that your application's processes are unhealthy and should be restarted. @@ -869,7 +869,7 @@ The Pod `template` for the `zk` StatefulSet specifies a liveness probe. ``` -The probe calls a simple bash script that uses the ZooKeeper `ruok` four letter +The probe calls a simple bash script that uses the ZooKeeper `ruok` four letter word to test the server's health. @@ -900,7 +900,7 @@ kubectl exec zk-0 -- rm /opt/zookeeper/bin/zkOk.sh ``` -When the liveness probe for the ZooKeeper process fails, Kubernetes will +When the liveness probe for the ZooKeeper process fails, Kubernetes will automatically restart the process for you, ensuring that unhealthy processes in the ensemble are restarted. @@ -921,10 +921,10 @@ zk-0 1/1 Running 1 1h #### Testing for Readiness -Readiness is not the same as liveness. If a process is alive, it is scheduled -and healthy. If a process is ready, it is able to process input. Liveness is +Readiness is not the same as liveness. If a process is alive, it is scheduled +and healthy. If a process is ready, it is able to process input. Liveness is a necessary, but not sufficient, condition for readiness. There are many cases, -particularly during initialization and termination, when a process can be +particularly during initialization and termination, when a process can be alive but not ready. @@ -932,8 +932,8 @@ If you specify a readiness probe, Kubernetes will ensure that your application's processes will not receive network traffic until their readiness checks pass. -For a ZooKeeper server, liveness implies readiness. 
Therefore, the readiness -probe from the `zookeeper.yaml` manifest is identical to the liveness probe. +For a ZooKeeper server, liveness implies readiness. Therefore, the readiness +probe from the `zookeeper.yaml` manifest is identical to the liveness probe. ```yaml @@ -946,28 +946,28 @@ probe from the `zookeeper.yaml` manifest is identical to the liveness probe. ``` -Even though the liveness and readiness probes are identical, it is important -to specify both. This ensures that only healthy servers in the ZooKeeper +Even though the liveness and readiness probes are identical, it is important +to specify both. This ensures that only healthy servers in the ZooKeeper ensemble receive network traffic. ### Tolerating Node Failure -ZooKeeper needs a quorum of servers in order to successfully commit mutations -to data. For a three server ensemble, two servers must be healthy in order for -writes to succeed. In quorum based systems, members are deployed across failure -domains to ensure availability. In order to avoid an outage, due to the loss of an -individual machine, best practices preclude co-locating multiple instances of the +ZooKeeper needs a quorum of servers in order to successfully commit mutations +to data. For a three server ensemble, two servers must be healthy in order for +writes to succeed. In quorum based systems, members are deployed across failure +domains to ensure availability. In order to avoid an outage, due to the loss of an +individual machine, best practices preclude co-locating multiple instances of the application on the same machine. -By default, Kubernetes may co-locate Pods in a StatefulSet on the same node. +By default, Kubernetes may co-locate Pods in a StatefulSet on the same node. For the three server ensemble you created, if two servers reside on the same node, and that node fails, the clients of your ZooKeeper service will experience -an outage until at least one of the Pods can be rescheduled. +an outage until at least one of the Pods can be rescheduled. You should always provision additional capacity to allow the processes of critical -systems to be rescheduled in the event of node failures. If you do so, then the -outage will only last until the Kubernetes scheduler reschedules one of the ZooKeeper +systems to be rescheduled in the event of node failures. If you do so, then the +outage will only last until the Kubernetes scheduler reschedules one of the ZooKeeper servers. However, if you want your service to tolerate node failures with no downtime, you should use a `PodAntiAffinity` annotation. @@ -985,7 +985,7 @@ kubernetes-minion-group-a5aq kubernetes-minion-group-2g2d ``` -This is because the Pods in the `zk` StatefulSet contain a +This is because the Pods in the `zk` StatefulSet contain a [PodAntiAffinity](/docs/user-guide/node-selection/) annotation. ```yaml @@ -1006,11 +1006,11 @@ scheduler.alpha.kubernetes.io/affinity: > } ``` -The `requiredDuringSchedulingRequiredDuringExecution` field tells the +The `requiredDuringSchedulingRequiredDuringExecution` field tells the Kubernetes Scheduler that it should never co-locate two Pods from the `zk-headless` Service in the domain defined by the `topologyKey`. The `topologyKey` -`kubernetes.io/hostname` indicates that the domain is an individual node. Using -different rules, labels, and selectors, you can extend this technique to spread +`kubernetes.io/hostname` indicates that the domain is an individual node. 
Using +different rules, labels, and selectors, you can extend this technique to spread your ensemble across physical, network, and power failure domains. ### Surviving Maintenance @@ -1018,8 +1018,8 @@ your ensemble across physical, network, and power failure domains. **In this section you will cordon and drain nodes. If you are using this tutorial on a shared cluster, be sure that this will not adversely affect other tenants.** -The previous section showed you how to spread your Pods across nodes to survive -unplanned node failures, but you also need to plan for temporary node failures +The previous section showed you how to spread your Pods across nodes to survive +unplanned node failures, but you also need to plan for temporary node failures that occur due to planned maintenance. Get the nodes in your cluster. @@ -1028,7 +1028,7 @@ Get the nodes in your cluster. kubectl get nodes ``` -Use [`kubectl cordon`](/docs/user-guide/kubectl/kubectl_cordon/) to +Use [`kubectl cordon`](/docs/user-guide/kubectl/kubectl_cordon/) to cordon all but four of the nodes in your cluster. ```shell{% raw %} @@ -1041,8 +1041,8 @@ Get the `zk-budget` PodDisruptionBudget. kubectl get poddisruptionbudget zk-budget ``` -The `min-available` field indicates to Kubernetes that at least two Pods from -`zk` StatefulSet must be available at any time. +The `min-available` field indicates to Kubernetes that at least two Pods from +`zk` StatefulSet must be available at any time. ```yaml NAME MIN-AVAILABLE ALLOWED-DISRUPTIONS AGE @@ -1065,7 +1065,7 @@ kubernetes-minion-group-ixsl kubernetes-minion-group-i4c4 {% endraw %}``` -Use [`kubectl drain`](/docs/user-guide/kubectl/kubectl_drain/) to cordon and +Use [`kubectl drain`](/docs/user-guide/kubectl/kubectl_drain/) to cordon and drain the node on which the `zk-0` Pod is scheduled. ```shell {% raw %} @@ -1075,7 +1075,7 @@ pod "zk-0" deleted node "kubernetes-minion-group-pb41" drained {% endraw %}``` -As there are four nodes in your cluster, `kubectl drain`, succeeds and the +As there are four nodes in your cluster, `kubectl drain`, succeeds and the `zk-0` is rescheduled to another node. ``` @@ -1095,7 +1095,7 @@ zk-0 0/1 Running 0 51s zk-0 1/1 Running 0 1m ``` -Keep watching the StatefulSet's Pods in the first terminal and drain the node on which +Keep watching the StatefulSet's Pods in the first terminal and drain the node on which `zk-1` is scheduled. ```shell{% raw %} @@ -1105,8 +1105,8 @@ pod "zk-1" deleted node "kubernetes-minion-group-ixsl" drained {% endraw %}``` -The `zk-1` Pod can not be scheduled. As the `zk` StatefulSet contains a -`PodAntiAffinity` annotation preventing co-location of the Pods, and as only +The `zk-1` Pod can not be scheduled. As the `zk` StatefulSet contains a +`PodAntiAffinity` annotation preventing co-location of the Pods, and as only two nodes are schedulable, the Pod will remain in a Pending state. ```shell @@ -1133,7 +1133,7 @@ zk-1 0/1 Pending 0 0s zk-1 0/1 Pending 0 0s ``` -Continue to watch the Pods of the stateful set, and drain the node on which +Continue to watch the Pods of the stateful set, and drain the node on which `zk-2` is scheduled. ```shell{% raw %} @@ -1145,9 +1145,9 @@ There are pending pods when an error occurred: Cannot evict pod as it would viol pod/zk-2 {% endraw %}``` -Use `CRTL-C` to terminate to kubectl. +Use `CRTL-C` to terminate to kubectl. -You can not drain the third node because evicting `zk-2` would violate `zk-budget`. However, +You can not drain the third node because evicting `zk-2` would violate `zk-budget`. 
However, the node will remain cordoned. Use `zkCli.sh` to retrieve the value you entered during the sanity test from `zk-0`. @@ -1232,9 +1232,9 @@ node "kubernetes-minion-group-ixsl" uncordoned ``` You can use `kubectl drain` in conjunction with PodDisruptionBudgets to ensure that your service -remains available during maintenance. If drain is used to cordon nodes and evict pods prior to -taking the node offline for maintenance, services that express a disruption budget will have that -budget respected. You should always allocate additional capacity for critical services so that +remains available during maintenance. If drain is used to cordon nodes and evict pods prior to +taking the node offline for maintenance, services that express a disruption budget will have that +budget respected. You should always allocate additional capacity for critical services so that their Pods can be immediately rescheduled. {% endcapture %} @@ -1242,8 +1242,8 @@ their Pods can be immediately rescheduled. {% capture cleanup %} * Use `kubectl uncordon` to uncordon all the nodes in your cluster. * You will need to delete the persistent storage media for the PersistentVolumes -used in this tutorial. Follow the necessary steps, based on your environment, -storage configuration, and provisioning method, to ensure that all storage is +used in this tutorial. Follow the necessary steps, based on your environment, +storage configuration, and provisioning method, to ensure that all storage is reclaimed. {% endcapture %} {% include templates/tutorial.md %} diff --git a/docs/user-guide/configuring-containers.md b/docs/user-guide/configuring-containers.md index 1fa82f52e9..51ac150f07 100644 --- a/docs/user-guide/configuring-containers.md +++ b/docs/user-guide/configuring-containers.md @@ -75,7 +75,7 @@ apiVersion: v1 kind: Pod metadata: name: hello-world -spec: # specification of the pod’s contents +spec: # specification of the pod's contents restartPolicy: Never containers: - name: hello diff --git a/docs/user-guide/managing-deployments.md b/docs/user-guide/managing-deployments.md index 2555e5601c..43e283b4a8 100644 --- a/docs/user-guide/managing-deployments.md +++ b/docs/user-guide/managing-deployments.md @@ -85,8 +85,8 @@ NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-nginx-svc 10.0.0.208 80/TCP 0s ``` -With the above commands, we first create resources under docs/user-guide/nginx/ and print the resources created with `-o name` output format -(print each resource as resource/name). Then we `grep` only the "service", and then print it with `kubectl get`. +With the above commands, we first create resources under docs/user-guide/nginx/ and print the resources created with `-o name` output format +(print each resource as resource/name). Then we `grep` only the "service", and then print it with `kubectl get`. If you happen to organize your resources across several subdirectories within a particular directory, you can recursively perform the operations on the subdirectories also, by specifying `--recursive` or `-R` alongside the `--filename,-f` flag. @@ -102,7 +102,7 @@ project/k8s/development └── my-pvc.yaml ``` -By default, performing a bulk operation on `project/k8s/development` will stop at the first level of the directory, not processing any subdirectories. If we tried to create the resources in this directory using the following command, we'd encounter an error: +By default, performing a bulk operation on `project/k8s/development` will stop at the first level of the directory, not processing any subdirectories. 
If we tried to create the resources in this directory using the following command, we'd encounter an error: ```shell $ kubectl create -f project/k8s/development @@ -131,7 +131,7 @@ deployment "my-deployment" created persistentvolumeclaim "my-pvc" created ``` -If you're interested in learning more about `kubectl`, go ahead and read [kubectl Overview](/docs/user-guide/kubectl-overview). +If you're interested in learning more about `kubectl`, go ahead and read [kubectl Overview](/docs/user-guide/kubectl-overview). ## Using labels effectively @@ -185,9 +185,9 @@ guestbook-redis-slave-qgazl 1/1 Running 0 3m ## Canary deployments -Another scenario where multiple labels are needed is to distinguish deployments of different releases or configurations of the same component. It is common practice to deploy a *canary* of a new application release (specified via image tag in the pod template) side by side with the previous release so that the new release can receive live production traffic before fully rolling it out. +Another scenario where multiple labels are needed is to distinguish deployments of different releases or configurations of the same component. It is common practice to deploy a *canary* of a new application release (specified via image tag in the pod template) side by side with the previous release so that the new release can receive live production traffic before fully rolling it out. -For instance, you can use a `track` label to differentiate different releases. +For instance, you can use a `track` label to differentiate different releases. The primary, stable release would have a `track` label with value as `stable`: @@ -227,13 +227,13 @@ The frontend service would span both sets of replicas by selecting the common su ``` You can tweak the number of replicas of the stable and canary releases to determine the ratio of each release that will receive live production traffic (in this case, 3:1). -Once you're confident, you can update the stable track to the new application release and remove the canary one. +Once you're confident, you can update the stable track to the new application release and remove the canary one. For a more concrete example, check the [tutorial of deploying Ghost](https://github.com/kelseyhightower/talks/tree/master/kubecon-eu-2016/demo#deploy-a-canary). ## Updating labels -Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`. +Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`. For example, if you want to label all your nginx pods as frontend tier, simply run: ```shell @@ -243,8 +243,8 @@ pod "my-nginx-2035384211-u2c7e" labeled pod "my-nginx-2035384211-u3t6x" labeled ``` -This first filters all pods with the label "app=nginx", and then labels them with the "tier=fe". -To see the pods you just labeled, run: +This first filters all pods with the label "app=nginx", and then labels them with the "tier=fe". +To see the pods you just labeled, run: ```shell $ kubectl get pods -l app=nginx -L tier @@ -284,7 +284,7 @@ $ kubectl scale deployment/my-nginx --replicas=1 deployment "my-nginx" scaled ``` -Now you only have one pod managed by the deployment. +Now you only have one pod managed by the deployment. 
```shell $ kubectl get pods -l app=nginx @@ -294,25 +294,25 @@ my-nginx-2035384211-j5fhi 1/1 Running 0 30m To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do: -```shell +```shell $ kubectl autoscale deployment/my-nginx --min=1 --max=3 deployment "my-nginx" autoscaled ``` -Now your nginx replicas will be scaled up and down as needed, automatically. +Now your nginx replicas will be scaled up and down as needed, automatically. For more information, please see [kubectl scale](/docs/user-guide/kubectl/kubectl_scale/), [kubectl autoscale](/docs/user-guide/kubectl/kubectl_autoscale/) and [horizontal pod autoscaler](/docs/user-guide/horizontal-pod-autoscaler/) document. ## In-place updates of resources -Sometimes it's necessary to make narrow, non-disruptive updates to resources you've created. +Sometimes it's necessary to make narrow, non-disruptive updates to resources you've created. ### kubectl apply It is suggested to maintain a set of configuration files in source control (see [configuration as code](http://martinfowler.com/bliki/InfrastructureAsCode.html)), so that they can be maintained and versioned along with the code for the resources they configure. -Then, you can use [`kubectl apply`](/docs/user-guide/kubectl/kubectl_apply/) to push your configuration changes to the cluster. +Then, you can use [`kubectl apply`](/docs/user-guide/kubectl/kubectl_apply/) to push your configuration changes to the cluster. This command will compare the version of the configuration that you're pushing with the previous version and apply the changes you've made, without overwriting any automated changes to properties you haven't specified. @@ -357,7 +357,7 @@ For more information, please see [kubectl edit](/docs/user-guide/kubectl/kubectl Suppose you want to fix a typo of the container's image of a Deployment. One way to do that is with `kubectl patch`: ```shell -# Suppose you have a Deployment with a container named "nginx" and its image "nignx" (typo), +# Suppose you have a Deployment with a container named "nginx" and its image "nignx" (typo), # use container name "nginx" as a key to update the image from "nignx" (typo) to "nginx" $ kubectl get deployment my-nginx -o yaml ``` @@ -396,7 +396,7 @@ spec: The patch is specified using json. -The system ensures that you don’t clobber changes made by other users or components by confirming that the `resourceVersion` doesn’t differ from the version you edited. If you want to update regardless of other changes, remove the `resourceVersion` field when you edit the resource. However, if you do this, don’t use your original configuration file as the source since additional fields most likely were set in the live state. +The system ensures that you don't clobber changes made by other users or components by confirming that the `resourceVersion` doesn't differ from the version you edited. If you want to update regardless of other changes, remove the `resourceVersion` field when you edit the resource. However, if you do this, don't use your original configuration file as the source since additional fields most likely were set in the live state. For more information, please see [kubectl patch](/docs/user-guide/kubectl/kubectl_patch/) document. @@ -414,8 +414,8 @@ deployment "my-nginx" replaced At some point, you'll eventually need to update your deployed application, typically by specifying a new image or image tag, as in the canary deployment scenario above. 
`kubectl` supports several update operations, each of which is applicable to different scenarios. -We'll guide you through how to create and update applications with Deployments. If your deployed application is managed by Replication Controllers, -you should read [how to use `kubectl rolling-update`](/docs/user-guide/rolling-updates/) instead. +We'll guide you through how to create and update applications with Deployments. If your deployed application is managed by Replication Controllers, +you should read [how to use `kubectl rolling-update`](/docs/user-guide/rolling-updates/) instead. Let's say you were running version 1.7.9 of nginx: @@ -424,7 +424,7 @@ $ kubectl run my-nginx --image=nginx:1.7.9 --replicas=3 deployment "my-nginx" created ``` -To update to version 1.9.1, simply change `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`, with the kubectl commands we learned above. +To update to version 1.9.1, simply change `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`, with the kubectl commands we learned above. ```shell $ kubectl edit deployment/my-nginx diff --git a/docs/user-guide/pod-security-policy/index.md b/docs/user-guide/pod-security-policy/index.md index c2de42162c..c7c58b8ec6 100644 --- a/docs/user-guide/pod-security-policy/index.md +++ b/docs/user-guide/pod-security-policy/index.md @@ -4,8 +4,8 @@ assignees: title: Pod Security Policies --- -Objects of type `podsecuritypolicy` govern the ability -to make requests on a pod that affect the `SecurityContext` that will be +Objects of type `podsecuritypolicy` govern the ability +to make requests on a pod that affect the `SecurityContext` that will be applied to a pod and container. See [PodSecurityPolicy proposal](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/proposals/security-context-constraints.md) for more information. @@ -15,10 +15,10 @@ See [PodSecurityPolicy proposal](https://github.com/kubernetes/kubernetes/blob/{ ## What is a Pod Security Policy? -A _Pod Security Policy_ is a cluster-level resource that controls the +A _Pod Security Policy_ is a cluster-level resource that controls the actions that a pod can perform and what it has the ability to access. The -`PodSecurityPolicy` objects define a set of conditions that a pod must -run with in order to be accepted into the system. They allow an +`PodSecurityPolicy` objects define a set of conditions that a pod must +run with in order to be accepted into the system. They allow an administrator to control the following: 1. Running of privileged containers. @@ -26,21 +26,21 @@ administrator to control the following: 1. The SELinux context of the container. 1. The user ID. 1. The use of host namespaces and networking. -1. Allocating an FSGroup that owns the pod’s volumes +1. Allocating an FSGroup that owns the pod's volumes 1. Configuring allowable supplemental groups 1. Requiring the use of a read only root file system 1. Controlling the usage of volume types -_Pod Security Policies_ are comprised of settings and strategies that -control the security features a pod has access to. These settings fall +_Pod Security Policies_ are comprised of settings and strategies that +control the security features a pod has access to. These settings fall into three categories: -- *Controlled by a boolean*: Fields of this type default to the most -restrictive value. 
-- *Controlled by an allowable set*: Fields of this type are checked +- *Controlled by a boolean*: Fields of this type default to the most +restrictive value. +- *Controlled by an allowable set*: Fields of this type are checked against the set to ensure their value is allowed. - *Controlled by a strategy*: Items that have a strategy to generate a value provide -a mechanism to generate the value and a mechanism to ensure that a +a mechanism to generate the value and a mechanism to ensure that a specified value falls into the set of allowable values. @@ -65,22 +65,22 @@ specified. ### SupplementalGroups -- *MustRunAs* - Requires at least one range to be specified. Uses the +- *MustRunAs* - Requires at least one range to be specified. Uses the minimum value of the first range as the default. Validates against all ranges. - *RunAsAny* - No default provided. Allows any `*supplementalGroups*` to be specified. ### FSGroup -- *MustRunAs* - Requires at least one range to be specified. Uses the -minimum value of the first range as the default. Validates against the +- *MustRunAs* - Requires at least one range to be specified. Uses the +minimum value of the first range as the default. Validates against the first ID in the first range. - *RunAsAny* - No default provided. Allows any `*fsGroup*` ID to be specified. ### Controlling Volumes -The usage of specific volume types can be controlled by setting the -volumes field of the PSP. The allowable values of this field correspond +The usage of specific volume types can be controlled by setting the +volumes field of the PSP. The allowable values of this field correspond to the volume sources that are defined when creating a volume: 1. azureFile @@ -104,7 +104,7 @@ to the volume sources that are defined when creating a volume: 1. configMap 1. \* (allow all volumes) -The recommended minimum set of allowed volumes for new PSPs are +The recommended minimum set of allowed volumes for new PSPs are configMap, downwardAPI, emptyDir, persistentVolumeClaim, and secret. ## Admission @@ -150,7 +150,7 @@ podsecuritypolicy "permissive" deleted ## Enabling Pod Security Policies -In order to use Pod Security Policies in your cluster you must ensure the +In order to use Pod Security Policies in your cluster you must ensure the following 1. You have enabled the api type `extensions/v1beta1/podsecuritypolicy` diff --git a/docs/user-guide/prereqs.md b/docs/user-guide/prereqs.md index 4be0d6a188..3b9688f1b8 100644 --- a/docs/user-guide/prereqs.md +++ b/docs/user-guide/prereqs.md @@ -5,7 +5,7 @@ assignees: title: Installing and Setting up kubectl --- -To deploy and manage applications on Kubernetes, you’ll use the Kubernetes command-line tool, [kubectl](/docs/user-guide/kubectl/). It lets you inspect your cluster resources, create, delete, and update components, and much more. You will use it to look at your new cluster and bring up example apps. +To deploy and manage applications on Kubernetes, you'll use the Kubernetes command-line tool, [kubectl](/docs/user-guide/kubectl/). It lets you inspect your cluster resources, create, delete, and update components, and much more. You will use it to look at your new cluster and bring up example apps. ## Install kubectl Binary Via curl diff --git a/docs/user-guide/replicasets.md b/docs/user-guide/replicasets.md index f0aa08bf04..86e60cffda 100644 --- a/docs/user-guide/replicasets.md +++ b/docs/user-guide/replicasets.md @@ -35,7 +35,7 @@ their Replica Sets. ## When to use a Replica Set? 
-A Replica Set ensures that a specified number of pod “replicas” are running at any given +A Replica Set ensures that a specified number of pod "replicas" are running at any given time. However, a Deployment is a higher-level concept that manages Replica Sets and provides declarative updates to pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using Replica Sets, unless diff --git a/docs/user-guide/replication-controller/index.md b/docs/user-guide/replication-controller/index.md index e69c55231b..95917b8f10 100644 --- a/docs/user-guide/replication-controller/index.md +++ b/docs/user-guide/replication-controller/index.md @@ -194,7 +194,7 @@ Ideally, the rolling update controller would take application readiness into acc The two replication controllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates. Rolling update is implemented in the client tool -[`kubectl rolling-update`](/docs/user-guide/kubectl/kubectl_rolling-update). Visit [`kubectl rolling-update` tutorial](/docs/user-guide/rolling-updates/) for more concrete examples. +[`kubectl rolling-update`](/docs/user-guide/kubectl/kubectl_rolling-update). Visit [`kubectl rolling-update` tutorial](/docs/user-guide/rolling-updates/) for more concrete examples. ### Multiple release tracks @@ -233,13 +233,13 @@ object](/docs/api-reference/v1/definitions/#_v1_replicationcontroller). ### ReplicaSet [`ReplicaSet`](/docs/user-guide/replicasets/) is the next-generation Replication Controller that supports the new [set-based label selector](/docs/user-guide/labels/#set-based-requirement). -It’s mainly used by [`Deployment`](/docs/user-guide/deployments/) as a mechanism to orchestrate pod creation, deletion and updates. -Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don’t require updates at all. +It's mainly used by [`Deployment`](/docs/user-guide/deployments/) as a mechanism to orchestrate pod creation, deletion and updates. +Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don't require updates at all. ### Deployment (Recommended) [`Deployment`](/docs/user-guide/deployments/) is a higher-level API object that updates its underlying Replica Sets and their Pods -in a similar fashion as `kubectl rolling-update`. Deployments are recommended if you want this rolling update functionality, +in a similar fashion as `kubectl rolling-update`. Deployments are recommended if you want this rolling update functionality, because unlike `kubectl rolling-update`, they are declarative, server-side, and have additional features. ### Bare Pods diff --git a/docs/user-guide/security-context.md b/docs/user-guide/security-context.md index 3d216447ca..52fe9f97eb 100644 --- a/docs/user-guide/security-context.md +++ b/docs/user-guide/security-context.md @@ -11,7 +11,7 @@ A security context defines the operating system security settings (uid, gid, cap There are two levels of security context: pod level security context, and container level security context. 
## Pod Level Security Context -Setting security context at the pod applies those settings to all containers in the pod +Setting security context at the pod applies those settings to all containers in the pod ```yaml apiVersion: v1 @@ -20,7 +20,7 @@ metadata: name: hello-world spec: containers: - # specification of the pod’s containers + # specification of the pod's containers # ... securityContext: fsGroup: 1234 @@ -82,7 +82,6 @@ spec: ``` Please refer to the -[API documentation](/docs/api-reference/v1/definitions/#_v1_securitycontext) +[API documentation](/docs/api-reference/v1/definitions/#_v1_securitycontext) for a detailed listing and description of all the fields available within the container security context. - diff --git a/index.html b/index.html index 728100db84..78b964ce39 100644 --- a/index.html +++ b/index.html @@ -80,7 +80,7 @@

      Self-healing

      Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers - that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.

      + that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.

    @@ -100,7 +100,7 @@

    Automated rollouts and rollbacks

    Kubernetes progressively rolls out changes to your application or its configuration, while monitoring - application health to ensure it doesn’t kill all your instances at the same time. If something goes + application health to ensure it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes will rollback the change for you. Take advantage of a growing ecosystem of deployment solutions.

    @@ -131,7 +131,7 @@

    Case Studies

    -

    Using Kubernetes to reinvent the world’s largest educational company

    +

    Using Kubernetes to reinvent the world's largest educational company

    Read more
    @@ -139,11 +139,11 @@ Read more
    -

    Inside eBay’s shift to Kubernetes and containers atop OpenStack

    +

    Inside eBay's shift to Kubernetes and containers atop OpenStack

    Read more
    -

    Migrating from a homegrown ‘cluster’ to Kubernetes

    +

    Migrating from a homegrown 'cluster' to Kubernetes

    Watch the video
    @@ -154,7 +154,7 @@ - + @@ -162,11 +162,11 @@ - + - + From 521ae621eb869b97d209f35f23ffe37801a92130 Mon Sep 17 00:00:00 2001 From: SRaddict Date: Thu, 22 Dec 2016 16:21:49 +0800 Subject: [PATCH 03/14] duplicated 'the' --- docs/admin/cluster-large.md | 16 ++++++++-------- docs/admin/ha-master-gce.md | 2 +- 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/docs/admin/cluster-large.md b/docs/admin/cluster-large.md index f41df12689..41393bc01d 100644 --- a/docs/admin/cluster-large.md +++ b/docs/admin/cluster-large.md @@ -1,10 +1,10 @@ ---- -assignees: -- davidopp -- lavalamp -title: Building Large Clusters ---- - +--- +assignees: +- davidopp +- lavalamp +title: Building Large Clusters +--- + ## Support At {{page.version}}, Kubernetes supports clusters with up to 1000 nodes. More specifically, we support configurations that meet *all* of the following criteria: @@ -21,7 +21,7 @@ At {{page.version}}, Kubernetes supports clusters with up to 1000 nodes. More sp A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by a "master" (the cluster-level control plane). -Normally the number of nodes in a cluster is controlled by the the value `NUM_NODES` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/{{page.githubbranch}}/cluster/gce/config-default.sh)). +Normally the number of nodes in a cluster is controlled by the value `NUM_NODES` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/{{page.githubbranch}}/cluster/gce/config-default.sh)). Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run in to quota issues and fail to bring the cluster up. diff --git a/docs/admin/ha-master-gce.md b/docs/admin/ha-master-gce.md index 262dafbe0a..871ce56606 100644 --- a/docs/admin/ha-master-gce.md +++ b/docs/admin/ha-master-gce.md @@ -24,7 +24,7 @@ If true, reads will be directed to leader etcd replica. Setting this value to true is optional: reads will be more reliable but will also be slower. Optionally, you can specify a GCE zone where the first master replica is to be created. -Set the the following flag: +Set the following flag: * `KUBE_GCE_ZONE=zone` - zone where the first master replica will run. 
From 5d6a3aaa53486021aea2e1fe5446a5a5081cf12d Mon Sep 17 00:00:00 2001 From: SRaddict Date: Thu, 22 Dec 2016 16:56:12 +0800 Subject: [PATCH 04/14] fix a series errors of using "a" and "an" --- _includes/v1.3/extensions-v1beta1-definitions.html | 6 +++--- _includes/v1.3/extensions-v1beta1-operations.html | 4 ++-- _includes/v1.3/v1-definitions.html | 6 +++--- _includes/v1.3/v1-operations.html | 8 ++++---- _includes/v1.4/extensions-v1beta1-operations.html | 4 ++-- _includes/v1.4/v1-operations.html | 10 +++++----- docs/admin/addons.md | 2 +- docs/admin/kubelet.md | 2 +- docs/getting-started-guides/libvirt-coreos.md | 2 +- docs/getting-started-guides/mesos/index.md | 2 +- docs/getting-started-guides/rackspace.md | 2 +- docs/tutorials/services/source-ip.md | 2 +- docs/user-guide/connecting-applications.md | 2 +- docs/user-guide/jobs/work-queue-2/rediswq.py | 2 +- docs/user-guide/load-balancer.md | 2 +- docs/user-guide/petset.md | 2 +- docs/user-guide/pods/init-container.md | 2 +- 17 files changed, 30 insertions(+), 30 deletions(-) diff --git a/_includes/v1.3/extensions-v1beta1-definitions.html b/_includes/v1.3/extensions-v1beta1-definitions.html index 7ecddc8d7b..92ce832083 100755 --- a/_includes/v1.3/extensions-v1beta1-definitions.html +++ b/_includes/v1.3/extensions-v1beta1-definitions.html @@ -2079,7 +2079,7 @@ Populated by the system when a graceful deletion is requested. Read-only. More i

    v1.FlexVolumeSource

    -

    FlexVolume represents a generic volume resource that is provisioned/attached using a exec based plugin. This is an alpha feature and may change in future.

    +

    FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. This is an alpha feature and may change in future.

    @@ -2535,7 +2535,7 @@ Populated by the system when a graceful deletion is requested. Read-only. More i - + @@ -5867,7 +5867,7 @@ Both these may change in the future. Incoming requests are matched against the h - + diff --git a/_includes/v1.3/extensions-v1beta1-operations.html b/_includes/v1.3/extensions-v1beta1-operations.html index be39609140..21f12fcf7a 100755 --- a/_includes/v1.3/extensions-v1beta1-operations.html +++ b/_includes/v1.3/extensions-v1beta1-operations.html @@ -5578,7 +5578,7 @@
    -

    create a Ingress

    +

    create an Ingress

    POST /apis/extensions/v1beta1/namespaces/{namespace}/ingresses
    @@ -5959,7 +5959,7 @@
    -

    delete a Ingress

    +

    delete an Ingress

    DELETE /apis/extensions/v1beta1/namespaces/{namespace}/ingresses/{name}
    diff --git a/_includes/v1.3/v1-definitions.html b/_includes/v1.3/v1-definitions.html index e833b003ea..693f3ce4c7 100755 --- a/_includes/v1.3/v1-definitions.html +++ b/_includes/v1.3/v1-definitions.html @@ -2560,7 +2560,7 @@ The resulting set of endpoints can be viewed as:

    v1.FlexVolumeSource

    -

    FlexVolume represents a generic volume resource that is provisioned/attached using a exec based plugin. This is an alpha feature and may change in future.

    +

    FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. This is an alpha feature and may change in future.

    flexVolume

    FlexVolume represents a generic volume resource that is provisioned/attached using a exec based plugin. This is an alpha feature and may change in future.

    FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. This is an alpha feature and may change in future.

    false

    v1.FlexVolumeSource

    path

    Path is a extended POSIX regex as defined by IEEE Std 1003.1, (i.e this follows the egrep/unix syntax, not the perl syntax) matched against the path of an incoming request. Currently it can contain characters disallowed from the conventional "path" part of a URL as defined by RFC 3986. Paths must begin with a /. If unspecified, the path defaults to a catch all sending traffic to the backend.

    Path is an extended POSIX regex as defined by IEEE Std 1003.1, (i.e this follows the egrep/unix syntax, not the perl syntax) matched against the path of an incoming request. Currently it can contain characters disallowed from the conventional "path" part of a URL as defined by RFC 3986. Paths must begin with a /. If unspecified, the path defaults to a catch all sending traffic to the backend.

    false

    string

    @@ -3268,7 +3268,7 @@ The resulting set of endpoints can be viewed as:
    - + @@ -5555,7 +5555,7 @@ The resulting set of endpoints can be viewed as:
    - + diff --git a/_includes/v1.3/v1-operations.html b/_includes/v1.3/v1-operations.html index 24e21c4f53..de6b5117e6 100755 --- a/_includes/v1.3/v1-operations.html +++ b/_includes/v1.3/v1-operations.html @@ -2676,7 +2676,7 @@
    -

    create a Endpoints

    +

    create an Endpoints

    POST /api/v1/namespaces/{namespace}/endpoints
    @@ -3057,7 +3057,7 @@
    -

    delete a Endpoints

    +

    delete an Endpoints

    DELETE /api/v1/namespaces/{namespace}/endpoints/{name}
    @@ -3619,7 +3619,7 @@
    -

    create a Event

    +

    create an Event

    POST /api/v1/namespaces/{namespace}/events
    @@ -4000,7 +4000,7 @@
    -

    delete a Event

    +

    delete an Event

    DELETE /api/v1/namespaces/{namespace}/events/{name}
    diff --git a/_includes/v1.4/extensions-v1beta1-operations.html b/_includes/v1.4/extensions-v1beta1-operations.html index a18a2f6030..ce55af43d9 100755 --- a/_includes/v1.4/extensions-v1beta1-operations.html +++ b/_includes/v1.4/extensions-v1beta1-operations.html @@ -5578,7 +5578,7 @@
    -

    create a Ingress

    +

    create an Ingress

    POST /apis/extensions/v1beta1/namespaces/{namespace}/ingresses
    @@ -5959,7 +5959,7 @@
    -

    delete a Ingress

    +

    delete an Ingress

    DELETE /apis/extensions/v1beta1/namespaces/{namespace}/ingresses/{name}
    diff --git a/_includes/v1.4/v1-operations.html b/_includes/v1.4/v1-operations.html index 875b464420..f866fc12fc 100755 --- a/_includes/v1.4/v1-operations.html +++ b/_includes/v1.4/v1-operations.html @@ -2676,7 +2676,7 @@
    -

    create a Endpoints

    +

    create an Endpoints

    POST /api/v1/namespaces/{namespace}/endpoints
    @@ -3057,7 +3057,7 @@
    -

    delete a Endpoints

    +

    delete an Endpoints

    DELETE /api/v1/namespaces/{namespace}/endpoints/{name}
    @@ -3619,7 +3619,7 @@
    -

    create a Event

    +

    create an Event

    POST /api/v1/namespaces/{namespace}/events
    @@ -4000,7 +4000,7 @@
    -

    delete a Event

    +

    delete an Event

    DELETE /api/v1/namespaces/{namespace}/events/{name}
    @@ -7885,7 +7885,7 @@
    -

    create eviction of a Eviction

    +

    create eviction of an Eviction

    POST /api/v1/namespaces/{namespace}/pods/{name}/eviction
    diff --git a/docs/admin/addons.md b/docs/admin/addons.md index f45aebeb09..aeee68cc30 100644 --- a/docs/admin/addons.md +++ b/docs/admin/addons.md @@ -14,7 +14,7 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply * [Calico](http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/) is a secure L3 networking and network policy provider. * [Canal](https://github.com/tigera/canal/tree/master/k8s-install/kubeadm) unites Flannel and Calico, providing networking and network policy. -* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml) is a overlay network provider that can be used with Kubernetes. +* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml) is an overlay network provider that can be used with Kubernetes. * [Romana](http://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/user-guide/networkpolicies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize). * [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database. diff --git a/docs/admin/kubelet.md b/docs/admin/kubelet.md index 342189ba94..e593cda77b 100644 --- a/docs/admin/kubelet.md +++ b/docs/admin/kubelet.md @@ -79,7 +79,7 @@ kubelet --experimental-bootstrap-kubeconfig string Path to a kubeconfig file that will be used to get client certificate for kubelet. If the file specified by --kubeconfig does not exist, the bootstrap kubeconfig is used to request a client certificate from the API server. On success, a kubeconfig file referencing the generated key and obtained certificate is written to the path specified by --kubeconfig. The certificate and key file will be stored in the directory pointed by --cert-dir. --experimental-cgroups-per-qos Enable creation of QoS cgroup hierarchy, if true top level QoS and pod cgroups are created. --experimental-check-node-capabilities-before-mount [Experimental] if set true, the kubelet will check the underlying node for required componenets (binaries, etc.) before performing the mount - --experimental-cri [Experimental] Enable the Container Runtime Interface (CRI) integration. If --container-runtime is set to "remote", Kubelet will communicate with the runtime/image CRI server listening on the endpoint specified by --remote-runtime-endpoint/--remote-image-endpoint. If --container-runtime is set to "docker", Kubelet will launch a in-process CRI server on behalf of docker, and communicate over a default endpoint. + --experimental-cri [Experimental] Enable the Container Runtime Interface (CRI) integration. If --container-runtime is set to "remote", Kubelet will communicate with the runtime/image CRI server listening on the endpoint specified by --remote-runtime-endpoint/--remote-image-endpoint. If --container-runtime is set to "docker", Kubelet will launch an in-process CRI server on behalf of docker, and communicate over a default endpoint. --experimental-fail-swap-on Makes the Kubelet fail to start if swap is enabled on the node. This is a temporary opton to maintain legacy behavior, failing due to swap enabled will happen by default in v1.6. 
--experimental-kernel-memcg-notification If enabled, the kubelet will integrate with the kernel memcg notification to determine if memory eviction thresholds are crossed rather than polling. --experimental-mounter-path string [Experimental] Path of mounter binary. Leave empty to use the default mount. diff --git a/docs/getting-started-guides/libvirt-coreos.md b/docs/getting-started-guides/libvirt-coreos.md index 73be7e4261..e5668dbf53 100644 --- a/docs/getting-started-guides/libvirt-coreos.md +++ b/docs/getting-started-guides/libvirt-coreos.md @@ -30,7 +30,7 @@ Another difference is that no security is enforced on `libvirt-coreos` at all. F * Kubernetes secrets are not protected as securely as they are on production environments; * etc. -So, an k8s application developer should not validate its interaction with Kubernetes on `libvirt-coreos` because he might technically succeed in doing things that are prohibited on a production environment like: +So, a k8s application developer should not validate its interaction with Kubernetes on `libvirt-coreos` because he might technically succeed in doing things that are prohibited on a production environment like: * un-authenticated access to Kube API server; * Access to Kubernetes private data structures inside etcd; diff --git a/docs/getting-started-guides/mesos/index.md b/docs/getting-started-guides/mesos/index.md index 948eae1a41..499ff0ba51 100644 --- a/docs/getting-started-guides/mesos/index.md +++ b/docs/getting-started-guides/mesos/index.md @@ -229,7 +229,7 @@ We assume that kube-dns will use Note that we have passed these two values already as parameter to the apiserver above. -A template for an replication controller spinning up the pod with the 3 containers can be found at [cluster/addons/dns/skydns-rc.yaml.in][11] in the repository. The following steps are necessary in order to get a valid replication controller yaml file: +A template for a replication controller spinning up the pod with the 3 containers can be found at [cluster/addons/dns/skydns-rc.yaml.in][11] in the repository. The following steps are necessary in order to get a valid replication controller yaml file: - replace `{% raw %}{{ pillar['dns_replicas'] }}{% endraw %}` with `1` - replace `{% raw %}{{ pillar['dns_domain'] }}{% endraw %}` with `cluster.local.` diff --git a/docs/getting-started-guides/rackspace.md b/docs/getting-started-guides/rackspace.md index 00c73a8e59..ff59f4d31b 100644 --- a/docs/getting-started-guides/rackspace.md +++ b/docs/getting-started-guides/rackspace.md @@ -45,7 +45,7 @@ There is a specific `cluster/rackspace` directory with the scripts for the follo 1. A cloud network will be created and all instances will be attached to this network. - flanneld uses this network for next hop routing. These routes allow the containers running on each node to communicate with one another on this private network. -2. A SSH key will be created and uploaded if needed. This key must be used to ssh into the machines (we do not capture the password). +2. An SSH key will be created and uploaded if needed. This key must be used to ssh into the machines (we do not capture the password). 3. The master server and additional nodes will be created via the `nova` CLI. A `cloud-config.yaml` is generated and provided as user-data with the entire configuration for the systems. 4. We then boot as many nodes as defined via `$NUM_NODES`. 
diff --git a/docs/tutorials/services/source-ip.md index 6657e42720..76548e68ad 100644 --- a/docs/tutorials/services/source-ip.md +++ b/docs/tutorials/services/source-ip.md @@ -29,7 +29,7 @@ This document makes use of the following terms: You must have a working Kubernetes 1.5 cluster to run the examples in this document. The examples use a small nginx webserver that echoes back the source -IP of requests it receives through a HTTP header. You can create it as follows: +IP of requests it receives through an HTTP header. You can create it as follows: ```console $ kubectl run source-ip-app --image=gcr.io/google_containers/echoserver:1.4 diff --git a/docs/user-guide/connecting-applications.md index 95d365bdb1..c4ca8d20f0 100644 --- a/docs/user-guide/connecting-applications.md +++ b/docs/user-guide/connecting-applications.md @@ -181,7 +181,7 @@ default-token-il9rc kubernetes.io/service-account-token 1 nginxsecret Opaque 2 ``` -Now modify your nginx replicas to start a https server using the certificate in the secret, and the Service, to expose both ports (80 and 443): +Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443): {% include code.html language="yaml" file="nginx-secure-app.yaml" ghlink="/docs/user-guide/nginx-secure-app" %} diff --git a/docs/user-guide/jobs/work-queue-2/rediswq.py index ebefa64311..ceda8bd1e3 100644 --- a/docs/user-guide/jobs/work-queue-2/rediswq.py +++ b/docs/user-guide/jobs/work-queue-2/rediswq.py @@ -95,7 +95,7 @@ class RedisWQ(object): # Record that we (this session id) are working on a key. Expire that # note after the lease timeout. # Note: if we crash at this line of the program, then GC will see no lease - # for this item an later return it to the main queue. + # for this item and later return it to the main queue. itemkey = self._itemkey(item) self._db.setex(self._lease_key_prefix + itemkey, lease_secs, self._session) return item diff --git a/docs/user-guide/load-balancer.md index d8540d98e5..fadeb38d5f 100644 --- a/docs/user-guide/load-balancer.md +++ b/docs/user-guide/load-balancer.md @@ -93,7 +93,7 @@ Due to the implementation of this feature, the source IP for sessions as seen in that will preserve the client Source IP for GCE/GKE environments. This feature will be phased in for other cloud providers in subsequent releases. ## Annotation to modify the LoadBalancer behavior for preservation of Source IP -In 1.5, an Beta feature has been added that changes the behavior of the external LoadBalancer feature. +In 1.5, a Beta feature has been added that changes the behavior of the external LoadBalancer feature. This feature can be activated by adding the beta annotation below to the metadata section of the Service Configuration file. diff --git a/docs/user-guide/petset.md index 3cba1fb31e..07934774b4 100644 --- a/docs/user-guide/petset.md +++ b/docs/user-guide/petset.md @@ -88,7 +88,7 @@ Only use PetSet if your application requires some or all of these properties.
Ma Example workloads for PetSet: -* Databases like MySQL or PostgreSQL that require a single instance attached to a NFS persistent volume at any time +* Databases like MySQL or PostgreSQL that require a single instance attached to an NFS persistent volume at any time * Clustered software like Zookeeper, Etcd, or Elasticsearch that require stable membership. ## Alpha limitations diff --git a/docs/user-guide/pods/init-container.md b/docs/user-guide/pods/init-container.md index 75b6efcac3..c9266baf67 100644 --- a/docs/user-guide/pods/init-container.md +++ b/docs/user-guide/pods/init-container.md @@ -105,7 +105,7 @@ If the pod is [restarted](#pod-restart-reasons) all init containers must execute again. Changes to the init container spec are limited to the container image field. -Altering a init container image field is equivalent to restarting the pod. +Altering an init container image field is equivalent to restarting the pod. Because init containers can be restarted, retried, or reexecuted, init container code should be idempotent. In particular, code that writes to files on EmptyDirs From e0a6a2c835c54c3a5508d43158d3c0d21e97f1e5 Mon Sep 17 00:00:00 2001 From: SRaddict Date: Thu, 22 Dec 2016 17:57:06 +0800 Subject: [PATCH 05/14] revert --- docs/admin/cluster-large.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/admin/cluster-large.md b/docs/admin/cluster-large.md index 41393bc01d..f41df12689 100644 --- a/docs/admin/cluster-large.md +++ b/docs/admin/cluster-large.md @@ -1,10 +1,10 @@ ---- -assignees: -- davidopp -- lavalamp -title: Building Large Clusters ---- - +--- +assignees: +- davidopp +- lavalamp +title: Building Large Clusters +--- + ## Support At {{page.version}}, Kubernetes supports clusters with up to 1000 nodes. More specifically, we support configurations that meet *all* of the following criteria: @@ -21,7 +21,7 @@ At {{page.version}}, Kubernetes supports clusters with up to 1000 nodes. More sp A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by a "master" (the cluster-level control plane). -Normally the number of nodes in a cluster is controlled by the value `NUM_NODES` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/{{page.githubbranch}}/cluster/gce/config-default.sh)). +Normally the number of nodes in a cluster is controlled by the the value `NUM_NODES` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/{{page.githubbranch}}/cluster/gce/config-default.sh)). Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run in to quota issues and fail to bring the cluster up. 
From 8edcfc05b80b5c7fa14b8f8280d6a7cb997a5c71 Mon Sep 17 00:00:00 2001 From: tim-zju <21651152@zju.edu.cn> Date: Thu, 22 Dec 2016 18:17:35 +0800 Subject: [PATCH 06/14] revert Signed-off-by: tim-zju <21651152@zju.edu.cn> --- _includes/partner-script.js | 6 +++--- docs/getting-started-guides/meanstack.md | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/_includes/partner-script.js b/_includes/partner-script.js index c291521a01..4d0a117620 100644 --- a/_includes/partner-script.js +++ b/_includes/partner-script.js @@ -54,7 +54,7 @@ name: 'Skippbox', logo: 'skippbox', link: 'http://www.skippbox.com/tag/products/', - blurb: 'Creator of Cabin the first mobile application for Kubernetes, and kompose. Skippbox's solutions distill all the power of k8s in simple easy to use interfaces.' + blurb: 'Creator of Cabin the first mobile application for Kubernetes, and kompose. Skippbox’s solutions distill all the power of k8s in simple easy to use interfaces.' }, { type: 0, @@ -89,7 +89,7 @@ name: 'Intel', logo: 'intel', link: 'https://tectonic.com/press/intel-coreos-collaborate-on-openstack-with-kubernetes.html', - blurb: 'Powering the GIFEE (Google's Infrastructure for Everyone Else), to run OpenStack deployments on Kubernetes.' + blurb: 'Powering the GIFEE (Google’s Infrastructure for Everyone Else), to run OpenStack deployments on Kubernetes.' }, { type: 0, @@ -243,7 +243,7 @@ name: 'Samsung SDS', logo: 'samsung_sds', link: 'http://www.samsungsdsa.com/cloud-infrastructure_kubernetes', - blurb: 'Samsung SDS's Cloud Native Computing Team offers expert consulting across the range of technical aspects involved in building services targeted at a Kubernetes cluster.' + blurb: 'Samsung SDS’s Cloud Native Computing Team offers expert consulting across the range of technical aspects involved in building services targeted at a Kubernetes cluster.' }, { type: 1, diff --git a/docs/getting-started-guides/meanstack.md b/docs/getting-started-guides/meanstack.md index e1e7bd7696..ee9afc1483 100644 --- a/docs/getting-started-guides/meanstack.md +++ b/docs/getting-started-guides/meanstack.md @@ -83,7 +83,7 @@ $ ls Dockerfile app ``` -Le's build. +Let's build. ```shell $ docker build -t myapp . From 1f7b3148f2a22c511a82e5613afdf43a9be4ba5a Mon Sep 17 00:00:00 2001 From: tim-zju <21651152@zju.edu.cn> Date: Thu, 22 Dec 2016 19:47:40 +0800 Subject: [PATCH 07/14] fix space problems which ide results in Signed-off-by: tim-zju <21651152@zju.edu.cn> --- docs/getting-started-guides/meanstack.md | 6 +- docs/getting-started-guides/windows/index.md | 2 +- .../stateful-application/zookeeper.md | 352 +++++++++--------- docs/user-guide/managing-deployments.md | 4 +- 4 files changed, 182 insertions(+), 182 deletions(-) diff --git a/docs/getting-started-guides/meanstack.md b/docs/getting-started-guides/meanstack.md index ee9afc1483..e5ae6a297e 100644 --- a/docs/getting-started-guides/meanstack.md +++ b/docs/getting-started-guides/meanstack.md @@ -20,9 +20,9 @@ Thankfully, there is a system we can use to manage our containers in a cluster e Before we jump in and start kube'ing it up, it's important to understand some of the fundamentals of Kubernetes. * Containers: These are the Docker, rtk, AppC, or whatever Container you are running. You can think of these like subatomic particles; everything is made up of them, but you rarely (if ever) interact with them directly. -* Pods: Pods are the basic component of Kubernetes. They are a group of Containers that are scheduled, live, and die together. 
Why would you want to have a group of containers instead of just a single container? Let's say you had a log processor, a web server, and a database. If you couldn't use Pods, you would have to bundle the log processor in the web server and database containers, and each time you updated one you would have to update the other. With Pods, you can just reuse the same log processor for both the web server and database. +* Pods: Pods are the basic component of Kubernetes. They are a group of Containers that are scheduled, live, and die together. Why would you want to have a group of containers instead of just a single container? Let’s say you had a log processor, a web server, and a database. If you couldn't use Pods, you would have to bundle the log processor in the web server and database containers, and each time you updated one you would have to update the other. With Pods, you can just reuse the same log processor for both the web server and database. * Deployments: A Deployment provides declarative updates for Pods. You can define Deployments to create new Pods, or replace existing Pods. You only need to describe the desired state in a Deployment object, and the deployment controller will change the actual state to the desired state at a controlled rate for you. You can define Deployments to create new resources, or replace existing ones by new ones. -* Services: A service is the single point of contact for a group of Pods. For example, let's say you have a Deployment that creates four copies of a web server pod. A Service will split the traffic to each of the four copies. Services are "permanent" while the pods behind them can come and go, so it's a good idea to use Services. +* Services: A service is the single point of contact for a group of Pods. For example, let’s say you have a Deployment that creates four copies of a web server pod. A Service will split the traffic to each of the four copies. Services are "permanent" while the pods behind them can come and go, so it’s a good idea to use Services. ## Step 1: Creating the Container @@ -371,7 +371,7 @@ At this point, the local directory looks like this ```shell $ ls -Dockerfile +Dockerfile app db-deployment.yml db-service.yml diff --git a/docs/getting-started-guides/windows/index.md b/docs/getting-started-guides/windows/index.md index af990c445b..e2c6748330 100644 --- a/docs/getting-started-guides/windows/index.md +++ b/docs/getting-started-guides/windows/index.md @@ -15,7 +15,7 @@ In Kubernetes version 1.5, Windows Server Containers for Kubernetes is supported 4. Docker Version 1.12.2-cs2-ws-beta or later for Windows Server nodes (Linux nodes and Kubernetes control plane can run any Kubernetes supported Docker Version) ## Networking -Network is achieved using L3 routing. Because third-party networking plugins (e.g. flannel, calico, etc) don't natively work on Windows Server, existing technology that is built into the Windows and Linux operating systems is relied on. In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node will be connected to the /24 subnet. This allows pods on the same node to communicate with each other. In order to enable networking between pods running on different nodes, routing features that are built into Windows Server 2016 and Linux are used. +Network is achieved using L3 routing. Because third-party networking plugins (e.g. 
flannel, calico, etc) don’t natively work on Windows Server, existing technology that is built into the Windows and Linux operating systems is relied on. In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node will be connected to the /24 subnet. This allows pods on the same node to communicate with each other. In order to enable networking between pods running on different nodes, routing features that are built into Windows Server 2016 and Linux are used. ### Linux The above networking approach is already supported on Linux using a bridge interface, which essentially creates a private network local to the node. Similar to the Windows side, routes to all other pod CIDRs must be created in order to send packets via the "public" NIC. diff --git a/docs/tutorials/stateful-application/zookeeper.md b/docs/tutorials/stateful-application/zookeeper.md index b36ed0835b..90a78fdc31 100644 --- a/docs/tutorials/stateful-application/zookeeper.md +++ b/docs/tutorials/stateful-application/zookeeper.md @@ -11,15 +11,15 @@ title: Running ZooKeeper, A CP Distributed System --- {% capture overview %} -This tutorial demonstrates [Apache Zookeeper](https://zookeeper.apache.org) on -Kubernetes using [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/), -[PodDisruptionBudgets](/docs/admin/disruptions/#specifying-a-poddisruptionbudget), +This tutorial demonstrates [Apache Zookeeper](https://zookeeper.apache.org) on +Kubernetes using [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/), +[PodDisruptionBudgets](/docs/admin/disruptions/#specifying-a-poddisruptionbudget), and [PodAntiAffinity](/docs/user-guide/node-selection/). {% endcapture %} {% capture prerequisites %} -Before starting this tutorial, you should be familiar with the following +Before starting this tutorial, you should be familiar with the following Kubernetes concepts. * [Pods](/docs/user-guide/pods/single-container/) @@ -34,16 +34,16 @@ Kubernetes concepts. * [kubectl CLI](/docs/user-guide/kubectl) You will require a cluster with at least four nodes, and each node will require -at least 2 CPUs and 4 GiB of memory. In this tutorial you will cordon and -drain the cluster's nodes. **This means that all Pods on the cluster's nodes -will be terminated and evicted, and the nodes will, temporarily, become -unschedulable.** You should use a dedicated cluster for this tutorial, or you -should ensure that the disruption you cause will not interfere with other +at least 2 CPUs and 4 GiB of memory. In this tutorial you will cordon and +drain the cluster's nodes. **This means that all Pods on the cluster's nodes +will be terminated and evicted, and the nodes will, temporarily, become +unschedulable.** You should use a dedicated cluster for this tutorial, or you +should ensure that the disruption you cause will not interfere with other tenants. -This tutorial assumes that your cluster is configured to dynamically provision +This tutorial assumes that your cluster is configured to dynamically provision PersistentVolumes. If your cluster is not configured to do so, you -will have to manually provision three 20 GiB volumes prior to starting this +will have to manually provision three 20 GiB volumes prior to starting this tutorial. {% endcapture %} @@ -60,51 +60,51 @@ After this tutorial, you will know the following. 
#### ZooKeeper Basics -[Apache ZooKeeper](https://zookeeper.apache.org/doc/current/) is a +[Apache ZooKeeper](https://zookeeper.apache.org/doc/current/) is a distributed, open-source coordination service for distributed applications. -ZooKeeper allows you to read, write, and observe updates to data. Data are -organized in a file system like hierarchy and replicated to all ZooKeeper -servers in the ensemble (a set of ZooKeeper servers). All operations on data -are atomic and sequentially consistent. ZooKeeper ensures this by using the -[Zab](https://pdfs.semanticscholar.org/b02c/6b00bd5dbdbd951fddb00b906c82fa80f0b3.pdf) +ZooKeeper allows you to read, write, and observe updates to data. Data are +organized in a file system like hierarchy and replicated to all ZooKeeper +servers in the ensemble (a set of ZooKeeper servers). All operations on data +are atomic and sequentially consistent. ZooKeeper ensures this by using the +[Zab](https://pdfs.semanticscholar.org/b02c/6b00bd5dbdbd951fddb00b906c82fa80f0b3.pdf) consensus protocol to replicate a state machine across all servers in the ensemble. The ensemble uses the Zab protocol to elect a leader, and -data can not be written until a leader is elected. Once a leader is -elected, the ensemble uses Zab to ensure that all writes are replicated to a +data can not be written until a leader is elected. Once a leader is +elected, the ensemble uses Zab to ensure that all writes are replicated to a quorum before they are acknowledged and made visible to clients. Without respect -to weighted quorums, a quorum is a majority component of the ensemble containing -the current leader. For instance, if the ensemble has three servers, a component -that contains the leader and one other server constitutes a quorum. If the +to weighted quorums, a quorum is a majority component of the ensemble containing +the current leader. For instance, if the ensemble has three servers, a component +that contains the leader and one other server constitutes a quorum. If the ensemble can not achieve a quorum, data can not be written. -ZooKeeper servers keep their entire state machine in memory, but every mutation -is written to a durable WAL (Write Ahead Log) on storage media. When a server -crashes, it can recover its previous state by replaying the WAL. In order to -prevent the WAL from growing without bound, ZooKeeper servers will periodically -snapshot their in memory state to storage media. These snapshots can be loaded -directly into memory, and all WAL entries that preceded the snapshot may be +ZooKeeper servers keep their entire state machine in memory, but every mutation +is written to a durable WAL (Write Ahead Log) on storage media. When a server +crashes, it can recover its previous state by replaying the WAL. In order to +prevent the WAL from growing without bound, ZooKeeper servers will periodically +snapshot their in memory state to storage media. These snapshots can be loaded +directly into memory, and all WAL entries that preceded the snapshot may be safely discarded. ### Creating a ZooKeeper Ensemble -The manifest below contains a -[Headless Service](/docs/user-guide/services/#headless-services), -a [ConfigMap](/docs/user-guide/configmap/), -a [PodDisruptionBudget](/docs/admin/disruptions/#specifying-a-poddisruptionbudget), -and a [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/). 
+The manifest below contains a +[Headless Service](/docs/user-guide/services/#headless-services), +a [ConfigMap](/docs/user-guide/configmap/), +a [PodDisruptionBudget](/docs/admin/disruptions/#specifying-a-poddisruptionbudget), +and a [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/). {% include code.html language="yaml" file="zookeeper.yaml" ghlink="/docs/tutorials/stateful-application/zookeeper.yaml" %} -Open a command terminal, and use -[`kubectl create`](/docs/user-guide/kubectl/kubectl_create/) to create the +Open a command terminal, and use +[`kubectl create`](/docs/user-guide/kubectl/kubectl_create/) to create the manifest. ```shell kubectl create -f http://k8s.io/docs/tutorials/stateful-application/zookeeper.yaml ``` -This creates the `zk-headless` Headless Service, the `zk-config` ConfigMap, +This creates the `zk-headless` Headless Service, the `zk-config` ConfigMap, the `zk-budget` PodDisruptionBudget, and the `zk` StatefulSet. ```shell @@ -142,29 +142,29 @@ zk-2 0/1 Running 0 19s zk-2 1/1 Running 0 40s ``` -The StatefulSet controller creates three Pods, and each Pod has a container with +The StatefulSet controller creates three Pods, and each Pod has a container with a [ZooKeeper 3.4.9](http://www-us.apache.org/dist/zookeeper/zookeeper-3.4.9/) server. #### Facilitating Leader Election -As there is no terminating algorithm for electing a leader in an anonymous -network, Zab requires explicit membership configuration in order to perform -leader election. Each server in the ensemble needs to have a unique +As there is no terminating algorithm for electing a leader in an anonymous +network, Zab requires explicit membership configuration in order to perform +leader election. Each server in the ensemble needs to have a unique identifier, all servers need to know the global set of identifiers, and each identifier needs to be associated with a network address. -Use [`kubectl exec`](/docs/user-guide/kubectl/kubectl_exec/) to get the hostnames +Use [`kubectl exec`](/docs/user-guide/kubectl/kubectl_exec/) to get the hostnames of the Pods in the `zk` StatefulSet. ```shell for i in 0 1 2; do kubectl exec zk-$i -- hostname; done ``` -The StatefulSet controller provides each Pod with a unique hostname based on its -ordinal index. The hostnames take the form `-`. -As the `replicas` field of the `zk` StatefulSet is set to `3`, the Set's -controller creates three Pods with their hostnames set to `zk-0`, `zk-1`, and -`zk-2`. +The StatefulSet controller provides each Pod with a unique hostname based on its +ordinal index. The hostnames take the form `-`. +As the `replicas` field of the `zk` StatefulSet is set to `3`, the Set's +controller creates three Pods with their hostnames set to `zk-0`, `zk-1`, and +`zk-2`. ```shell zk-0 @@ -172,9 +172,9 @@ zk-1 zk-2 ``` -The servers in a ZooKeeper ensemble use natural numbers as unique identifiers, and -each server's identifier is stored in a file called `myid` in the server's -data directory. +The servers in a ZooKeeper ensemble use natural numbers as unique identifiers, and +each server's identifier is stored in a file called `myid` in the server’s +data directory. Examine the contents of the `myid` file for each server. @@ -182,7 +182,7 @@ Examine the contents of the `myid` file for each server. 
for i in 0 1 2; do echo "myid zk-$i";kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done ``` -As the identifiers are natural numbers and the ordinal indices are non-negative +As the identifiers are natural numbers and the ordinal indices are non-negative integers, you can generate an identifier by adding one to the ordinal. ```shell @@ -200,7 +200,7 @@ Get the FQDN (Fully Qualified Domain Name) of each Pod in the `zk` StatefulSet. for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done ``` -The `zk-headless` Service creates a domain for all of the Pods, +The `zk-headless` Service creates a domain for all of the Pods, `zk-headless.default.svc.cluster.local`. ```shell @@ -209,11 +209,11 @@ zk-1.zk-headless.default.svc.cluster.local zk-2.zk-headless.default.svc.cluster.local ``` -The A records in [Kubernetes DNS](/docs/admin/dns/) resolve the FQDNs to the Pods' IP addresses. -If the Pods are rescheduled, the A records will be updated with the Pods' new IP +The A records in [Kubernetes DNS](/docs/admin/dns/) resolve the FQDNs to the Pods' IP addresses. +If the Pods are rescheduled, the A records will be updated with the Pods' new IP addresses, but the A record's names will not change. -ZooKeeper stores its application configuration in a file named `zoo.cfg`. Use +ZooKeeper stores its application configuration in a file named `zoo.cfg`. Use `kubectl exec` to view the contents of the `zoo.cfg` file in the `zk-0` Pod. ``` @@ -222,8 +222,8 @@ kubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg For the `server.1`, `server.2`, and `server.3` properties at the bottom of the file, the `1`, `2`, and `3` correspond to the identifiers in the -ZooKeeper servers' `myid` files. They are set to the FQDNs for the Pods in -the `zk` StatefulSet. +ZooKeeper servers' `myid` files. They are set to the FQDNs for the Pods in +the `zk` StatefulSet. ```shell clientPort=2181 @@ -244,16 +244,16 @@ server.3=zk-2.zk-headless.default.svc.cluster.local:2888:3888 #### Achieving Consensus -Consensus protocols require that the identifiers of each participant be -unique. No two participants in the Zab protocol should claim the same unique -identifier. This is necessary to allow the processes in the system to agree on -which processes have committed which data. If two Pods were launched with the +Consensus protocols require that the identifiers of each participant be +unique. No two participants in the Zab protocol should claim the same unique +identifier. This is necessary to allow the processes in the system to agree on +which processes have committed which data. If two Pods were launched with the same ordinal, two ZooKeeper servers would both identify themselves as the same server. -When you created the `zk` StatefulSet, the StatefulSet's controller created -each Pod sequentially, in the order defined by the Pods' ordinal indices, and it -waited for each Pod to be Running and Ready before creating the next Pod. +When you created the `zk` StatefulSet, the StatefulSet's controller created +each Pod sequentially, in the order defined by the Pods' ordinal indices, and it +waited for each Pod to be Running and Ready before creating the next Pod. ```shell kubectl get pods -w -l app=zk @@ -277,7 +277,7 @@ zk-2 1/1 Running 0 40s The A records for each Pod are only entered when the Pod becomes Ready. 
Therefore, the FQDNs of the ZooKeeper servers will only resolve to a single endpoint, and that -endpoint will be the unique ZooKeeper server claiming the identity configured +endpoint will be the unique ZooKeeper server claiming the identity configured in its `myid` file. ```shell @@ -286,7 +286,7 @@ zk-1.zk-headless.default.svc.cluster.local zk-2.zk-headless.default.svc.cluster.local ``` -This ensures that the `servers` properties in the ZooKeepers' `zoo.cfg` files +This ensures that the `servers` properties in the ZooKeepers' `zoo.cfg` files represents a correctly configured ensemble. ```shell @@ -295,16 +295,16 @@ server.2=zk-1.zk-headless.default.svc.cluster.local:2888:3888 server.3=zk-2.zk-headless.default.svc.cluster.local:2888:3888 ``` -When the servers use the Zab protocol to attempt to commit a value, they will -either achieve consensus and commit the value (if leader election has succeeded -and at least two of the Pods are Running and Ready), or they will fail to do so -(if either of the aforementioned conditions are not met). No state will arise +When the servers use the Zab protocol to attempt to commit a value, they will +either achieve consensus and commit the value (if leader election has succeeded +and at least two of the Pods are Running and Ready), or they will fail to do so +(if either of the aforementioned conditions are not met). No state will arise where one server acknowledges a write on behalf of another. #### Sanity Testing the Ensemble -The most basic sanity test is to write some data to one ZooKeeper server and -to read the data from another. +The most basic sanity test is to write some data to one ZooKeeper server and +to read the data from another. Use the `zkCli.sh` script to write `world` to the path `/hello` on the `zk-0` Pod. @@ -327,7 +327,7 @@ Get the data from the `zk-1` Pod. kubectl exec zk-1 zkCli.sh get /hello ``` -The data that you created on `zk-0` is available on all of the servers in the +The data that you created on `zk-0` is available on all of the servers in the ensemble. ```shell @@ -351,12 +351,12 @@ numChildren = 0 #### Providing Durable Storage As mentioned in the [ZooKeeper Basics](#zookeeper-basics) section, -ZooKeeper commits all entries to a durable WAL, and periodically writes snapshots -in memory state, to storage media. Using WALs to provide durability is a common +ZooKeeper commits all entries to a durable WAL, and periodically writes snapshots +in memory state, to storage media. Using WALs to provide durability is a common technique for applications that use consensus protocols to achieve a replicated state machine and for storage applications in general. -Use [`kubectl delete`](/docs/user-guide/kubectl/kubectl_delete/) to delete the +Use [`kubectl delete`](/docs/user-guide/kubectl/kubectl_delete/) to delete the `zk` StatefulSet. ```shell @@ -392,7 +392,7 @@ Reapply the manifest in `zookeeper.yaml`. kubectl apply -f http://k8s.io/docs/tutorials/stateful-application/zookeeper.yaml ``` -The `zk` StatefulSet will be created, but, as they already exist, the other API +The `zk` StatefulSet will be created, but, as they already exist, the other API Objects in the manifest will not be modified. ```shell @@ -429,14 +429,14 @@ zk-2 0/1 Running 0 19s zk-2 1/1 Running 0 40s ``` -Get the value you entered during the [sanity test](#sanity-testing-the-ensemble), +Get the value you entered during the [sanity test](#sanity-testing-the-ensemble), from the `zk-2` Pod. 
```shell kubectl exec zk-2 zkCli.sh get /hello ``` -Even though all of the Pods in the `zk` StatefulSet have been terminated and +Even though all of the Pods in the `zk` StatefulSet have been terminated and recreated, the ensemble still serves the original value. ```shell @@ -457,8 +457,8 @@ dataLength = 5 numChildren = 0 ``` -The `volumeClaimTemplates` field, of the `zk` StatefulSet's `spec`, specifies a -PersistentVolume that will be provisioned for each Pod. +The `volumeClaimTemplates` field, of the `zk` StatefulSet's `spec`, specifies a +PersistentVolume that will be provisioned for each Pod. ```yaml volumeClaimTemplates: @@ -474,8 +474,8 @@ volumeClaimTemplates: ``` -The StatefulSet controller generates a PersistentVolumeClaim for each Pod in -the StatefulSet. +The StatefulSet controller generates a PersistentVolumeClaim for each Pod in +the StatefulSet. Get the StatefulSet's PersistentVolumeClaims. @@ -483,7 +483,7 @@ Get the StatefulSet's PersistentVolumeClaims. kubectl get pvc -l app=zk ``` -When the StatefulSet recreated its Pods, the Pods' PersistentVolumes were +When the StatefulSet recreated its Pods, the Pods' PersistentVolumes were remounted. ```shell @@ -502,19 +502,19 @@ volumeMounts: mountPath: /var/lib/zookeeper ``` -When a Pod in the `zk` StatefulSet is (re)scheduled, it will always have the -same PersistentVolume mounted to the ZooKeeper server's data directory. -Even when the Pods are rescheduled, all of the writes made to the ZooKeeper +When a Pod in the `zk` StatefulSet is (re)scheduled, it will always have the +same PersistentVolume mounted to the ZooKeeper server's data directory. +Even when the Pods are rescheduled, all of the writes made to the ZooKeeper servers' WALs, and all of their snapshots, remain durable. ### Ensuring Consistent Configuration As noted in the [Facilitating Leader Election](#facilitating-leader-election) and -[Achieving Consensus](#achieving-consensus) sections, the servers in a -ZooKeeper ensemble require consistent configuration in order to elect a leader +[Achieving Consensus](#achieving-consensus) sections, the servers in a +ZooKeeper ensemble require consistent configuration in order to elect a leader and form a quorum. They also require consistent configuration of the Zab protocol -in order for the protocol to work correctly over a network. You can use -ConfigMaps to achieve this. +in order for the protocol to work correctly over a network. You can use +ConfigMaps to achieve this. Get the `zk-config` ConfigMap. @@ -532,8 +532,8 @@ data: tick: "2000" ``` -The `env` field of the `zk` StatefulSet's Pod `template` reads the ConfigMap -into environment variables. These variables are injected into the containers +The `env` field of the `zk` StatefulSet's Pod `template` reads the ConfigMap +into environment variables. These variables are injected into the containers environment. ```yaml @@ -581,7 +581,7 @@ env: ``` The entry point of the container invokes a bash script, `zkConfig.sh`, prior to -launching the ZooKeeper server process. This bash script generates the +launching the ZooKeeper server process. This bash script generates the ZooKeeper configuration files from the supplied environment variables. ```yaml @@ -597,8 +597,8 @@ Examine the environment of all of the Pods in the `zk` StatefulSet. for i in 0 1 2; do kubectl exec zk-$i env | grep ZK_*;echo""; done ``` -All of the variables populated from `zk-config` contain identical values. 
This -allows the `zkGenConfig.sh` script to create consistent configurations for all +All of the variables populated from `zk-config` contain identical values. This +allows the `zkGenConfig.sh` script to create consistent configurations for all of the ZooKeeper servers in the ensemble. ```shell @@ -653,16 +653,16 @@ ZK_LOG_DIR=/var/log/zookeeper #### Configuring Logging -One of the files generated by the `zkConfigGen.sh` script controls ZooKeeper's logging. -ZooKeeper uses [Log4j](http://logging.apache.org/log4j/2.x/), and, by default, -it uses a time and size based rolling file appender for its logging configuration. +One of the files generated by the `zkConfigGen.sh` script controls ZooKeeper's logging. +ZooKeeper uses [Log4j](http://logging.apache.org/log4j/2.x/), and, by default, +it uses a time and size based rolling file appender for its logging configuration. Get the logging configuration from one of Pods in the `zk` StatefulSet. ```shell kubectl exec zk-0 cat /usr/etc/zookeeper/log4j.properties ``` -The logging configuration below will cause the ZooKeeper process to write all +The logging configuration below will cause the ZooKeeper process to write all of its logs to the standard output file stream. ```shell @@ -675,20 +675,20 @@ log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n ``` -This is the simplest possible way to safely log inside the container. As the -application's logs are being written to standard out, Kubernetes will handle -log rotation for you. Kubernetes also implements a sane retention policy that -ensures application logs written to standard out and standard error do not +This is the simplest possible way to safely log inside the container. As the +application's logs are being written to standard out, Kubernetes will handle +log rotation for you. Kubernetes also implements a sane retention policy that +ensures application logs written to standard out and standard error do not exhaust local storage media. -Use [`kubectl logs`](/docs/user-guide/kubectl/kubectl_logs/) to retrieve the last +Use [`kubectl logs`](/docs/user-guide/kubectl/kubectl_logs/) to retrieve the last few log lines from one of the Pods. ```shell kubectl logs zk-0 --tail 20 ``` -Application logs that are written to standard out or standard error are viewable +Application logs that are written to standard out or standard error are viewable using `kubectl logs` and from the Kubernetes Dashboard. ```shell @@ -714,19 +714,19 @@ using `kubectl logs` and from the Kubernetes Dashboard. 2016-12-06 19:34:46,230 [myid:1] - INFO [Thread-1142:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52768 (no session established for client) ``` -Kubernetes also supports more powerful, but more complex, logging integrations -with [Google Cloud Logging](https://github.com/kubernetes/contrib/blob/master/logging/fluentd-sidecar-gcp/README.md) +Kubernetes also supports more powerful, but more complex, logging integrations +with [Google Cloud Logging](https://github.com/kubernetes/contrib/blob/master/logging/fluentd-sidecar-gcp/README.md) and [ELK](https://github.com/kubernetes/contrib/blob/master/logging/fluentd-sidecar-es/README.md). 
For cluster level log shipping and aggregation, you should consider deploying a -[sidecar](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html) +[sidecar](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html) container to rotate and ship your logs. #### Configuring a Non-Privileged User -The best practices with respect to allowing an application to run as a privileged -user inside of a container are a matter of debate. If your organization requires -that applications be run as a non-privileged user you can use a -[SecurityContext](/docs/user-guide/security-context/) to control the user that +The best practices with respect to allowing an application to run as a privileged +user inside of a container are a matter of debate. If your organization requires +that applications be run as a non-privileged user you can use a +[SecurityContext](/docs/user-guide/security-context/) to control the user that the entry point runs as. The `zk` StatefulSet's Pod `template` contains a SecurityContext. @@ -737,7 +737,7 @@ securityContext: fsGroup: 1000 ``` -In the Pods' containers, UID 1000 corresponds to the zookeeper user and GID 1000 +In the Pods' containers, UID 1000 corresponds to the zookeeper user and GID 1000 corresponds to the zookeeper group. Get the ZooKeeper process information from the `zk-0` Pod. @@ -746,7 +746,7 @@ Get the ZooKeeper process information from the `zk-0` Pod. kubectl exec zk-0 -- ps -elf ``` -As the `runAsUser` field of the `securityContext` object is set to 1000, +As the `runAsUser` field of the `securityContext` object is set to 1000, instead of running as root, the ZooKeeper process runs as the zookeeper user. ```shell @@ -755,8 +755,8 @@ F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD 0 S zookeep+ 27 1 0 80 0 - 1155556 - 20:46 ? 00:00:19 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.9.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper: -Xmx2G -Xms2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/bin/../etc/zookeeper/zoo.cfg ``` -By default, when the Pod's PersistentVolume is mounted to the ZooKeeper server's -data directory, it is only accessible by the root user. This configuration +By default, when the Pod's PersistentVolume is mounted to the ZooKeeper server's +data directory, it is only accessible by the root user. This configuration prevents the ZooKeeper process from writing to its WAL and storing its snapshots. Get the file permissions of the ZooKeeper data directory on the `zk-0` Pod. @@ -765,8 +765,8 @@ Get the file permissions of the ZooKeeper data directory on the `zk-0` Pod. kubectl exec -ti zk-0 -- ls -ld /var/lib/zookeeper/data ``` -As the `fsGroup` field of the `securityContext` object is set to 1000, -the ownership of the Pods' PersistentVolumes is set to the zookeeper group, +As the `fsGroup` field of the `securityContext` object is set to 1000, +the ownership of the Pods' PersistentVolumes is set to the zookeeper group, and the ZooKeeper process is able to successfully read and write its data. 
```shell @@ -775,21 +775,21 @@ drwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data ### Managing the ZooKeeper Process -The [ZooKeeper documentation](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_supervision) -documentation indicates that "You will want to have a supervisory process that -manages each of your ZooKeeper server processes (JVM)." Utilizing a watchdog -(supervisory process) to restart failed processes in a distributed system is a -common pattern. When deploying an application in Kubernetes, rather than using -an external utility as a supervisory process, you should use Kubernetes as the +The [ZooKeeper documentation](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_supervision) +documentation indicates that "You will want to have a supervisory process that +manages each of your ZooKeeper server processes (JVM)." Utilizing a watchdog +(supervisory process) to restart failed processes in a distributed system is a +common pattern. When deploying an application in Kubernetes, rather than using +an external utility as a supervisory process, you should use Kubernetes as the watchdog for your application. -#### Handling Process Failure +#### Handling Process Failure -[Restart Policies](/docs/user-guide/pod-states/#restartpolicy) control how +[Restart Policies](/docs/user-guide/pod-states/#restartpolicy) control how Kubernetes handles process failures for the entry point of the container in a Pod. For Pods in a StatefulSet, the only appropriate RestartPolicy is Always, and this -is the default value. For stateful applications you should **never** override +is the default value. For stateful applications you should **never** override the default policy. @@ -799,7 +799,7 @@ Examine the process tree for the ZooKeeper server running in the `zk-0` Pod. kubectl exec zk-0 -- ps -ef ``` -The command used as the container's entry point has PID 1, and the +The command used as the container's entry point has PID 1, and the the ZooKeeper process, a child of the entry point, has PID 23. @@ -824,8 +824,8 @@ In another terminal, kill the ZooKeeper process in Pod `zk-0`. ``` -The death of the ZooKeeper process caused its parent process to terminate. As -the RestartPolicy of the container is Always, the parent process was relaunched. +The death of the ZooKeeper process caused its parent process to terminate. As +the RestartPolicy of the container is Always, the parent process was relaunched. ```shell @@ -840,19 +840,19 @@ zk-0 1/1 Running 1 29m ``` -If your application uses a script (such as zkServer.sh) to launch the process +If your application uses a script (such as zkServer.sh) to launch the process that implements the application's business logic, the script must terminate with the child process. This ensures that Kubernetes will restart the application's -container when the process implementing the application's business logic fails. +container when the process implementing the application's business logic fails. #### Testing for Liveness -Configuring your application to restart failed processes is not sufficient to -keep a distributed system healthy. There are many scenarios where -a system's processes can be both alive and unresponsive, or otherwise -unhealthy. You should use liveness probes in order to notify Kubernetes +Configuring your application to restart failed processes is not sufficient to +keep a distributed system healthy. 
There are many scenarios where +a system's processes can be both alive and unresponsive, or otherwise +unhealthy. You should use liveness probes in order to notify Kubernetes that your application's processes are unhealthy and should be restarted. @@ -869,7 +869,7 @@ The Pod `template` for the `zk` StatefulSet specifies a liveness probe. ``` -The probe calls a simple bash script that uses the ZooKeeper `ruok` four letter +The probe calls a simple bash script that uses the ZooKeeper `ruok` four letter word to test the server's health. @@ -900,7 +900,7 @@ kubectl exec zk-0 -- rm /opt/zookeeper/bin/zkOk.sh ``` -When the liveness probe for the ZooKeeper process fails, Kubernetes will +When the liveness probe for the ZooKeeper process fails, Kubernetes will automatically restart the process for you, ensuring that unhealthy processes in the ensemble are restarted. @@ -921,10 +921,10 @@ zk-0 1/1 Running 1 1h #### Testing for Readiness -Readiness is not the same as liveness. If a process is alive, it is scheduled -and healthy. If a process is ready, it is able to process input. Liveness is +Readiness is not the same as liveness. If a process is alive, it is scheduled +and healthy. If a process is ready, it is able to process input. Liveness is a necessary, but not sufficient, condition for readiness. There are many cases, -particularly during initialization and termination, when a process can be +particularly during initialization and termination, when a process can be alive but not ready. @@ -932,8 +932,8 @@ If you specify a readiness probe, Kubernetes will ensure that your application's processes will not receive network traffic until their readiness checks pass. -For a ZooKeeper server, liveness implies readiness. Therefore, the readiness -probe from the `zookeeper.yaml` manifest is identical to the liveness probe. +For a ZooKeeper server, liveness implies readiness. Therefore, the readiness +probe from the `zookeeper.yaml` manifest is identical to the liveness probe. ```yaml @@ -946,28 +946,28 @@ probe from the `zookeeper.yaml` manifest is identical to the liveness probe. ``` -Even though the liveness and readiness probes are identical, it is important -to specify both. This ensures that only healthy servers in the ZooKeeper +Even though the liveness and readiness probes are identical, it is important +to specify both. This ensures that only healthy servers in the ZooKeeper ensemble receive network traffic. ### Tolerating Node Failure -ZooKeeper needs a quorum of servers in order to successfully commit mutations -to data. For a three server ensemble, two servers must be healthy in order for -writes to succeed. In quorum based systems, members are deployed across failure -domains to ensure availability. In order to avoid an outage, due to the loss of an -individual machine, best practices preclude co-locating multiple instances of the +ZooKeeper needs a quorum of servers in order to successfully commit mutations +to data. For a three server ensemble, two servers must be healthy in order for +writes to succeed. In quorum based systems, members are deployed across failure +domains to ensure availability. In order to avoid an outage, due to the loss of an +individual machine, best practices preclude co-locating multiple instances of the application on the same machine. -By default, Kubernetes may co-locate Pods in a StatefulSet on the same node. +By default, Kubernetes may co-locate Pods in a StatefulSet on the same node. 
For the three server ensemble you created, if two servers reside on the same node, and that node fails, the clients of your ZooKeeper service will experience -an outage until at least one of the Pods can be rescheduled. +an outage until at least one of the Pods can be rescheduled. You should always provision additional capacity to allow the processes of critical -systems to be rescheduled in the event of node failures. If you do so, then the -outage will only last until the Kubernetes scheduler reschedules one of the ZooKeeper +systems to be rescheduled in the event of node failures. If you do so, then the +outage will only last until the Kubernetes scheduler reschedules one of the ZooKeeper servers. However, if you want your service to tolerate node failures with no downtime, you should use a `PodAntiAffinity` annotation. @@ -985,7 +985,7 @@ kubernetes-minion-group-a5aq kubernetes-minion-group-2g2d ``` -This is because the Pods in the `zk` StatefulSet contain a +This is because the Pods in the `zk` StatefulSet contain a [PodAntiAffinity](/docs/user-guide/node-selection/) annotation. ```yaml @@ -1006,11 +1006,11 @@ scheduler.alpha.kubernetes.io/affinity: > } ``` -The `requiredDuringSchedulingRequiredDuringExecution` field tells the +The `requiredDuringSchedulingRequiredDuringExecution` field tells the Kubernetes Scheduler that it should never co-locate two Pods from the `zk-headless` Service in the domain defined by the `topologyKey`. The `topologyKey` -`kubernetes.io/hostname` indicates that the domain is an individual node. Using -different rules, labels, and selectors, you can extend this technique to spread +`kubernetes.io/hostname` indicates that the domain is an individual node. Using +different rules, labels, and selectors, you can extend this technique to spread your ensemble across physical, network, and power failure domains. ### Surviving Maintenance @@ -1018,8 +1018,8 @@ your ensemble across physical, network, and power failure domains. **In this section you will cordon and drain nodes. If you are using this tutorial on a shared cluster, be sure that this will not adversely affect other tenants.** -The previous section showed you how to spread your Pods across nodes to survive -unplanned node failures, but you also need to plan for temporary node failures +The previous section showed you how to spread your Pods across nodes to survive +unplanned node failures, but you also need to plan for temporary node failures that occur due to planned maintenance. Get the nodes in your cluster. @@ -1028,7 +1028,7 @@ Get the nodes in your cluster. kubectl get nodes ``` -Use [`kubectl cordon`](/docs/user-guide/kubectl/kubectl_cordon/) to +Use [`kubectl cordon`](/docs/user-guide/kubectl/kubectl_cordon/) to cordon all but four of the nodes in your cluster. ```shell{% raw %} @@ -1041,8 +1041,8 @@ Get the `zk-budget` PodDisruptionBudget. kubectl get poddisruptionbudget zk-budget ``` -The `min-available` field indicates to Kubernetes that at least two Pods from -`zk` StatefulSet must be available at any time. +The `min-available` field indicates to Kubernetes that at least two Pods from +`zk` StatefulSet must be available at any time. ```yaml NAME MIN-AVAILABLE ALLOWED-DISRUPTIONS AGE @@ -1065,7 +1065,7 @@ kubernetes-minion-group-ixsl kubernetes-minion-group-i4c4 {% endraw %}``` -Use [`kubectl drain`](/docs/user-guide/kubectl/kubectl_drain/) to cordon and +Use [`kubectl drain`](/docs/user-guide/kubectl/kubectl_drain/) to cordon and drain the node on which the `zk-0` Pod is scheduled. 
```shell {% raw %} @@ -1075,7 +1075,7 @@ pod "zk-0" deleted node "kubernetes-minion-group-pb41" drained {% endraw %}``` -As there are four nodes in your cluster, `kubectl drain`, succeeds and the +As there are four nodes in your cluster, `kubectl drain`, succeeds and the `zk-0` is rescheduled to another node. ``` @@ -1095,7 +1095,7 @@ zk-0 0/1 Running 0 51s zk-0 1/1 Running 0 1m ``` -Keep watching the StatefulSet's Pods in the first terminal and drain the node on which +Keep watching the StatefulSet's Pods in the first terminal and drain the node on which `zk-1` is scheduled. ```shell{% raw %} @@ -1105,8 +1105,8 @@ pod "zk-1" deleted node "kubernetes-minion-group-ixsl" drained {% endraw %}``` -The `zk-1` Pod can not be scheduled. As the `zk` StatefulSet contains a -`PodAntiAffinity` annotation preventing co-location of the Pods, and as only +The `zk-1` Pod can not be scheduled. As the `zk` StatefulSet contains a +`PodAntiAffinity` annotation preventing co-location of the Pods, and as only two nodes are schedulable, the Pod will remain in a Pending state. ```shell @@ -1133,7 +1133,7 @@ zk-1 0/1 Pending 0 0s zk-1 0/1 Pending 0 0s ``` -Continue to watch the Pods of the stateful set, and drain the node on which +Continue to watch the Pods of the stateful set, and drain the node on which `zk-2` is scheduled. ```shell{% raw %} @@ -1145,9 +1145,9 @@ There are pending pods when an error occurred: Cannot evict pod as it would viol pod/zk-2 {% endraw %}``` -Use `CRTL-C` to terminate to kubectl. +Use `CRTL-C` to terminate to kubectl. -You can not drain the third node because evicting `zk-2` would violate `zk-budget`. However, +You can not drain the third node because evicting `zk-2` would violate `zk-budget`. However, the node will remain cordoned. Use `zkCli.sh` to retrieve the value you entered during the sanity test from `zk-0`. @@ -1232,9 +1232,9 @@ node "kubernetes-minion-group-ixsl" uncordoned ``` You can use `kubectl drain` in conjunction with PodDisruptionBudgets to ensure that your service -remains available during maintenance. If drain is used to cordon nodes and evict pods prior to -taking the node offline for maintenance, services that express a disruption budget will have that -budget respected. You should always allocate additional capacity for critical services so that +remains available during maintenance. If drain is used to cordon nodes and evict pods prior to +taking the node offline for maintenance, services that express a disruption budget will have that +budget respected. You should always allocate additional capacity for critical services so that their Pods can be immediately rescheduled. {% endcapture %} @@ -1242,8 +1242,8 @@ their Pods can be immediately rescheduled. {% capture cleanup %} * Use `kubectl uncordon` to uncordon all the nodes in your cluster. * You will need to delete the persistent storage media for the PersistentVolumes -used in this tutorial. Follow the necessary steps, based on your environment, -storage configuration, and provisioning method, to ensure that all storage is +used in this tutorial. Follow the necessary steps, based on your environment, +storage configuration, and provisioning method, to ensure that all storage is reclaimed. 
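The exact commands depend on your provisioner, but a minimal cleanup sketch, assuming dynamically
provisioned volumes and the `app=zk` label used throughout this tutorial, might look like this:

```shell
# Uncordon every node that was cordoned during the maintenance exercise.
for node in $(kubectl get nodes -o name); do kubectl uncordon "${node#node/}"; done

# Deleting the PersistentVolumeClaims releases the dynamically provisioned
# PersistentVolumes, subject to their reclaim policy.
kubectl delete pvc -l app=zk
```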
{% endcapture %} {% include templates/tutorial.md %} diff --git a/docs/user-guide/managing-deployments.md b/docs/user-guide/managing-deployments.md index 43e283b4a8..c20a9ceeb5 100644 --- a/docs/user-guide/managing-deployments.md +++ b/docs/user-guide/managing-deployments.md @@ -85,8 +85,8 @@ NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-nginx-svc 10.0.0.208 80/TCP 0s ``` -With the above commands, we first create resources under docs/user-guide/nginx/ and print the resources created with `-o name` output format -(print each resource as resource/name). Then we `grep` only the "service", and then print it with `kubectl get`. +With the above commands, we first create resources under docs/user-guide/nginx/ and print the resources created with `-o name` output format +(print each resource as resource/name). Then we `grep` only the "service", and then print it with `kubectl get`. If you happen to organize your resources across several subdirectories within a particular directory, you can recursively perform the operations on the subdirectories also, by specifying `--recursive` or `-R` alongside the `--filename,-f` flag. From df7eb8128d2b6e64dbe100b1578ad5c1d77b93d3 Mon Sep 17 00:00:00 2001 From: tim-zju <21651152@zju.edu.cn> Date: Thu, 22 Dec 2016 20:01:22 +0800 Subject: [PATCH 08/14] fix space problems which ide results in Signed-off-by: tim-zju <21651152@zju.edu.cn> --- docs/getting-started-guides/windows/index.md | 6 ++-- docs/user-guide/managing-deployments.md | 34 +++++++++--------- docs/user-guide/pod-security-policy/index.md | 36 +++++++++---------- .../replication-controller/index.md | 4 +-- docs/user-guide/security-context.md | 4 +-- 5 files changed, 42 insertions(+), 42 deletions(-) diff --git a/docs/getting-started-guides/windows/index.md b/docs/getting-started-guides/windows/index.md index e2c6748330..b5926744ae 100644 --- a/docs/getting-started-guides/windows/index.md +++ b/docs/getting-started-guides/windows/index.md @@ -38,13 +38,13 @@ To run Windows Server Containers on Kubernetes, you'll need to set up both your 1. Windows Server container host running Windows Server 2016 and Docker v1.12. Follow the setup instructions outlined by this blog post: https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/quick_start_windows_server 2. DNS support for Windows recently got merged to docker master and is currently not supported in a stable docker release. To use DNS build docker from master or download the binary from [Docker master](https://master.dockerproject.org/) -3. Pull the `apprenda/pause` image from `https://hub.docker.com/r/apprenda/pause` +3. Pull the `apprenda/pause` image from `https://hub.docker.com/r/apprenda/pause` 4. RRAS (Routing) Windows feature enabled 5. Install a VMSwitch of type `Internal`, by running `New-VMSwitch -Name KubeProxySwitch -SwitchType Internal` command in *PowerShell* window. This will create a new Network Interface with name `vEthernet (KubeProxySwitch)`. This interface will be used by kube-proxy to add Service IPs. **Linux Host Setup** -1. Linux hosts should be setup according to their respective distro documentation and the requirements of the Kubernetes version you will be using. +1. Linux hosts should be setup according to their respective distro documentation and the requirements of the Kubernetes version you will be using. 2. CNI network plugin installed. 
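Before moving on to component setup, it can be worth a quick pre-flight check on each Linux host. The sketch below assumes the conventional `/opt/cni/bin` location for CNI plugin binaries; adjust it to wherever your distribution installs them:

```shell
# Confirm the Docker daemon is reachable and report its version.
docker version --format '{{.Server.Version}}'

# Confirm CNI plugin binaries are present (conventional path assumed).
ls /opt/cni/bin
```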
### Component Setup @@ -111,7 +111,7 @@ route add 192.168.1.0 mask 255.255.255.0 192.168.1.1 if Date: Thu, 22 Dec 2016 20:28:51 +0800 Subject: [PATCH 09/14] fix space problems which ide results in Signed-off-by: tim-zju <21651152@zju.edu.cn> --- docs/getting-started-guides/meanstack.md | 4 ++-- docs/getting-started-guides/windows/index.md | 2 +- docs/tutorials/stateful-application/zookeeper.md | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/getting-started-guides/meanstack.md b/docs/getting-started-guides/meanstack.md index e5ae6a297e..ca34d32753 100644 --- a/docs/getting-started-guides/meanstack.md +++ b/docs/getting-started-guides/meanstack.md @@ -20,9 +20,9 @@ Thankfully, there is a system we can use to manage our containers in a cluster e Before we jump in and start kube'ing it up, it's important to understand some of the fundamentals of Kubernetes. * Containers: These are the Docker, rtk, AppC, or whatever Container you are running. You can think of these like subatomic particles; everything is made up of them, but you rarely (if ever) interact with them directly. -* Pods: Pods are the basic component of Kubernetes. They are a group of Containers that are scheduled, live, and die together. Why would you want to have a group of containers instead of just a single container? Let’s say you had a log processor, a web server, and a database. If you couldn't use Pods, you would have to bundle the log processor in the web server and database containers, and each time you updated one you would have to update the other. With Pods, you can just reuse the same log processor for both the web server and database. +* Pods: Pods are the basic component of Kubernetes. They are a group of Containers that are scheduled, live, and die together. Why would you want to have a group of containers instead of just a single container? Let's say you had a log processor, a web server, and a database. If you couldn't use Pods, you would have to bundle the log processor in the web server and database containers, and each time you updated one you would have to update the other. With Pods, you can just reuse the same log processor for both the web server and database. * Deployments: A Deployment provides declarative updates for Pods. You can define Deployments to create new Pods, or replace existing Pods. You only need to describe the desired state in a Deployment object, and the deployment controller will change the actual state to the desired state at a controlled rate for you. You can define Deployments to create new resources, or replace existing ones by new ones. -* Services: A service is the single point of contact for a group of Pods. For example, let’s say you have a Deployment that creates four copies of a web server pod. A Service will split the traffic to each of the four copies. Services are "permanent" while the pods behind them can come and go, so it’s a good idea to use Services. +* Services: A service is the single point of contact for a group of Pods. For example, let's say you have a Deployment that creates four copies of a web server pod. A Service will split the traffic to each of the four copies. Services are "permanent" while the pods behind them can come and go, so it's a good idea to use Services. 
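To make the relationship concrete, here is a rough sketch (not part of this tutorial's application) of a Deployment that runs four copies of a web server and a Service that load-balances across them:

```shell
# Create a Deployment that manages four identical nginx Pods.
kubectl run web --image=nginx --replicas=4 --port=80

# Expose them behind a single Service; clients talk to the Service,
# which spreads traffic across whichever Pods currently exist.
kubectl expose deployment web --port=80
```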
## Step 1: Creating the Container diff --git a/docs/getting-started-guides/windows/index.md b/docs/getting-started-guides/windows/index.md index b5926744ae..dd775b81af 100644 --- a/docs/getting-started-guides/windows/index.md +++ b/docs/getting-started-guides/windows/index.md @@ -15,7 +15,7 @@ In Kubernetes version 1.5, Windows Server Containers for Kubernetes is supported 4. Docker Version 1.12.2-cs2-ws-beta or later for Windows Server nodes (Linux nodes and Kubernetes control plane can run any Kubernetes supported Docker Version) ## Networking -Network is achieved using L3 routing. Because third-party networking plugins (e.g. flannel, calico, etc) don’t natively work on Windows Server, existing technology that is built into the Windows and Linux operating systems is relied on. In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node will be connected to the /24 subnet. This allows pods on the same node to communicate with each other. In order to enable networking between pods running on different nodes, routing features that are built into Windows Server 2016 and Linux are used. +Network is achieved using L3 routing. Because third-party networking plugins (e.g. flannel, calico, etc) don't natively work on Windows Server, existing technology that is built into the Windows and Linux operating systems is relied on. In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node will be connected to the /24 subnet. This allows pods on the same node to communicate with each other. In order to enable networking between pods running on different nodes, routing features that are built into Windows Server 2016 and Linux are used. ### Linux The above networking approach is already supported on Linux using a bridge interface, which essentially creates a private network local to the node. Similar to the Windows side, routes to all other pod CIDRs must be created in order to send packets via the "public" NIC. diff --git a/docs/tutorials/stateful-application/zookeeper.md b/docs/tutorials/stateful-application/zookeeper.md index 90a78fdc31..c6dcf705be 100644 --- a/docs/tutorials/stateful-application/zookeeper.md +++ b/docs/tutorials/stateful-application/zookeeper.md @@ -173,7 +173,7 @@ zk-2 ``` The servers in a ZooKeeper ensemble use natural numbers as unique identifiers, and -each server's identifier is stored in a file called `myid` in the server’s +each server's identifier is stored in a file called `myid` in the server's data directory. Examine the contents of the `myid` file for each server. From 0328b6d591bc698050dd4935771ba1775e68063e Mon Sep 17 00:00:00 2001 From: Martially <21651061@zju.edu.cn> Date: Fri, 23 Dec 2016 14:06:54 +0800 Subject: [PATCH 10/14] fix typo Signed-off-by: Martially <21651061@zju.edu.cn> --- docs/whatisk8s.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/whatisk8s.md b/docs/whatisk8s.md index 7c1e637b6d..2e53554863 100644 --- a/docs/whatisk8s.md +++ b/docs/whatisk8s.md @@ -52,7 +52,7 @@ Summary of container benefits: * **Cloud and OS distribution portability**: Runs on Ubuntu, RHEL, CoreOS, on-prem, Google Container Engine, and anywhere else. * **Application-centric management**: - Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources. 
+ Raises the level of abstraction from running an OS on virtual hardware to run an application on an OS using logical resources. * **Loosely coupled, distributed, elastic, liberated [micro-services](http://martinfowler.com/articles/microservices.html)**: Applications are broken into smaller, independent pieces and can be deployed and managed dynamically -- not a fat monolithic stack running on one big single-purpose machine. * **Resource isolation**: From 996b4343dcd6a563dbdf876a3d06f1894cc3a532 Mon Sep 17 00:00:00 2001 From: Martially <21651061@zju.edu.cn> Date: Fri, 23 Dec 2016 14:17:29 +0800 Subject: [PATCH 11/14] link error Signed-off-by: Martially <21651061@zju.edu.cn> --- docs/whatisk8s.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/whatisk8s.md b/docs/whatisk8s.md index 2e53554863..dde25433de 100644 --- a/docs/whatisk8s.md +++ b/docs/whatisk8s.md @@ -106,7 +106,7 @@ Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) syst * Kubernetes does not provide nor mandate a comprehensive application configuration language/system (e.g., [jsonnet](https://github.com/google/jsonnet)). * Kubernetes does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems. -On the other hand, a number of PaaS systems run *on* Kubernetes, such as [Openshift](https://github.com/openshift/origin), [Deis](http://deis.io/), and [Gondor](https://gondor.io/). You could also roll your own custom PaaS, integrate with a CI system of your choice, or get along just fine with just Kubernetes: bring your container images and deploy them on Kubernetes. +On the other hand, a number of PaaS systems run *on* Kubernetes, such as [Openshift](https://github.com/openshift/origin), [Deis](http://deis.io/), and [Eldarion Cloud](http://eldarion.cloud/). You could also roll your own custom PaaS, integrate with a CI system of your choice, or get along just fine with just Kubernetes: bring your container images and deploy them on Kubernetes. Since Kubernetes operates at the application level rather than at just the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, load balancing, logging, monitoring, etc. However, Kubernetes is not monolithic, and these default solutions are optional and pluggable. From 990bc41aac05df748eee6d9a4fe99db41e58e4e1 Mon Sep 17 00:00:00 2001 From: CandiceGuo Date: Fri, 23 Dec 2016 14:39:27 +0800 Subject: [PATCH 12/14] modify some grammar mistakes in /doc/ --- docs/admin/accessing-the-api.md | 2 +- docs/admin/apparmor/index.md | 2 +- docs/admin/federation/index.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/admin/accessing-the-api.md b/docs/admin/accessing-the-api.md index 5a57db23ce..a70f1c0920 100644 --- a/docs/admin/accessing-the-api.md +++ b/docs/admin/accessing-the-api.md @@ -86,7 +86,7 @@ For version 1.2, clusters created by `kube-up.sh` are configured so that no auth required for any request. As of version 1.3, clusters created by `kube-up.sh` are configured so that the ABAC authorization -modules is enabled. However, its input file is initially set to allow all users to do all +modules are enabled. However, its input file is initially set to allow all users to do all operations. The cluster administrator needs to edit that file, or configure a different authorizer to restrict what users can do. 
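As a rough sketch only (the user name and file path are placeholders, and the policy schema should be verified against the authorization documentation for your release), a restrictive policy file contains one JSON policy object per line and is wired up through API server flags:

```shell
# Hypothetical policy: give user "alice" access to the "dev" namespace only.
cat <<'EOF' > /srv/kubernetes/abac-policy.jsonl
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "dev", "resource": "*", "apiGroup": "*"}}
EOF

# kube-apiserver is then started with (abridged):
#   --authorization-mode=ABAC --authorization-policy-file=/srv/kubernetes/abac-policy.jsonl
```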
diff --git a/docs/admin/apparmor/index.md b/docs/admin/apparmor/index.md index 4c2d02d989..224f0bbdeb 100644 --- a/docs/admin/apparmor/index.md +++ b/docs/admin/apparmor/index.md @@ -384,7 +384,7 @@ Specifying the default profile to apply to containers when none is provided: - **key**: `apparmor.security.beta.kubernetes.io/defaultProfileName` - **value**: a profile reference, described above -Specifying the list of profiles Pod containers are allowed to specify: +Specifying the list of profiles Pod containers is allowed to specify: - **key**: `apparmor.security.beta.kubernetes.io/allowedProfileNames` - **value**: a comma-separated list of profile references (described above) diff --git a/docs/admin/federation/index.md b/docs/admin/federation/index.md index 478f7563de..f8fb5b6c4f 100644 --- a/docs/admin/federation/index.md +++ b/docs/admin/federation/index.md @@ -110,7 +110,7 @@ $ KUBE_REGISTRY="gcr.io/myrepository" federation/develop/develop.sh build_image $ KUBE_REGISTRY="gcr.io/myrepository" federation/develop/develop.sh push ``` -Note: This is going to overwite the values you might have set for +Note: This is going to overwrite the values you might have set for `apiserverRegistry`, `apiserverVersion`, `controllerManagerRegistry` and `controllerManagerVersion` in your `${FEDERATION_OUTPUT_ROOT}/values.yaml` file. Hence, it is not recommend to customize these values in From 41a9b7c55d302096c9f0d62119722299cdecfb58 Mon Sep 17 00:00:00 2001 From: Martially <21651061@zju.edu.cn> Date: Fri, 23 Dec 2016 15:26:12 +0800 Subject: [PATCH 13/14] fix typo Signed-off-by: Martially <21651061@zju.edu.cn> --- .../stateful-application/run-replicated-stateful-application.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tutorials/stateful-application/run-replicated-stateful-application.md b/docs/tutorials/stateful-application/run-replicated-stateful-application.md index 29f0d68242..30d22e1cce 100644 --- a/docs/tutorials/stateful-application/run-replicated-stateful-application.md +++ b/docs/tutorials/stateful-application/run-replicated-stateful-application.md @@ -180,7 +180,7 @@ replicating. In general, when a new Pod joins the set as a slave, it must assume the MySQL master might already have data on it. It also must assume that the replication logs might not go all the way back to the beginning of time. -These conservative assumptions are the key to allowing a running StatefulSet +These conservative assumptions are the key to allow a running StatefulSet to scale up and down over time, rather than being fixed at its initial size. The second Init Container, named `clone-mysql`, performs a clone operation on From c67b76a2f0607943ffc0685f210edbc47646d1c3 Mon Sep 17 00:00:00 2001 From: guohaifang Date: Fri, 23 Dec 2016 15:29:44 +0800 Subject: [PATCH 14/14] problems of grammer --- docs/tutorials/stateful-application/basic-stateful-set.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tutorials/stateful-application/basic-stateful-set.md b/docs/tutorials/stateful-application/basic-stateful-set.md index 07e41cd56d..60a8a30652 100644 --- a/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/docs/tutorials/stateful-application/basic-stateful-set.md @@ -11,7 +11,7 @@ title: StatefulSet Basics --- {% capture overview %} -This tutorial provides an introduction to managing applications with +This tutorial provides an introduction to manage applications with [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/). 
It demonstrates how to create, delete, scale, and update the container image of a StatefulSet.
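For example, scaling a StatefulSet up or down is a declarative change to `spec.replicas`. A sketch, assuming a StatefulSet named `web` (any existing StatefulSet name works):

```shell
# Scale up to 5 replicas, then back down to 3.
kubectl patch statefulset web -p '{"spec":{"replicas":5}}'
kubectl patch statefulset web -p '{"spec":{"replicas":3}}'
```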

    flexVolume

    FlexVolume represents a generic volume resource that is provisioned/attached using a exec based plugin. This is an alpha feature and may change in future.

    FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. This is an alpha feature and may change in future.

    false

    v1.FlexVolumeSource

    flexVolume

    FlexVolume represents a generic volume resource that is provisioned/attached using a exec based plugin. This is an alpha feature and may change in future.

    FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. This is an alpha feature and may change in future.

    false

    v1.FlexVolumeSource