diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 9dd8149a15..934d7947ae 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -33,4 +33,4 @@ Note that code issues should be filed against the main kubernetes repository, wh ### Submitting Documentation Pull Requests -If you’re fixing an issue in the existing documentation, you should submit a PR against the master branch. Follow [these instructions to create a documentation pull request against the kubernetes.io repository](http://kubernetes.io/docs/contribute/create-pull-request/). +If you're fixing an issue in the existing documentation, you should submit a PR against the master branch. Follow [these instructions to create a documentation pull request against the kubernetes.io repository](http://kubernetes.io/docs/contribute/create-pull-request/). diff --git a/LICENSE b/LICENSE index 06c608dcf4..b6988e7edc 100644 --- a/LICENSE +++ b/LICENSE @@ -378,7 +378,7 @@ Section 8 -- Interpretation. Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances -will be considered the “Licensor.” The text of the Creative Commons +will be considered the "Licensor." The text of the Creative Commons public licenses is dedicated to the public domain under the CC0 Public Domain Dedication. Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as diff --git a/_includes/v1.3/extensions-v1beta1-definitions.html b/_includes/v1.3/extensions-v1beta1-definitions.html index 0c4ab489a5..92ce832083 100755 --- a/_includes/v1.3/extensions-v1beta1-definitions.html +++ b/_includes/v1.3/extensions-v1beta1-definitions.html @@ -2079,7 +2079,7 @@ Populated by the system when a graceful deletion is requested. Read-only. More i

v1.FlexVolumeSource

-

FlexVolume represents a generic volume resource that is provisioned/attached using a exec based plugin. This is an alpha feature and may change in future.

+

FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. This is an alpha feature and may change in future.
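As a concrete illustration of the description above, here is a minimal sketch of a pod that mounts a flexVolume; the driver name, image, and mount path are hypothetical placeholders, not values taken from this reference.

```shell
# Hypothetical sketch: a pod using a flexVolume backed by an exec-based
# driver. The "vendor/example" driver name and paths are placeholders.
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: flex-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    flexVolume:
      driver: "vendor/example"  # exec-based plugin installed on each node
      fsType: ext4
EOF
```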

@@ -2535,7 +2535,7 @@ Populated by the system when a graceful deletion is requested. Read-only. More i - + @@ -5867,7 +5867,7 @@ Both these may change in the future. Incoming requests are matched against the h - + diff --git a/_includes/v1.3/extensions-v1beta1-operations.html b/_includes/v1.3/extensions-v1beta1-operations.html index be39609140..21f12fcf7a 100755 --- a/_includes/v1.3/extensions-v1beta1-operations.html +++ b/_includes/v1.3/extensions-v1beta1-operations.html @@ -5578,7 +5578,7 @@
-

create a Ingress

+

create an Ingress

POST /apis/extensions/v1beta1/namespaces/{namespace}/ingresses
@@ -5959,7 +5959,7 @@
-

delete a Ingress

+

delete an Ingress

DELETE /apis/extensions/v1beta1/namespaces/{namespace}/ingresses/{name}
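As a rough sketch of exercising the two operations above, the following assumes `kubectl proxy` is serving the API on localhost:8001; the namespace, Ingress name, and backend service are example values.

```shell
# Sketch only: create and then delete an Ingress through the paths shown above.
kubectl proxy &
sleep 1  # give the proxy a moment to start

# create an Ingress
curl -X POST -H "Content-Type: application/json" \
  http://localhost:8001/apis/extensions/v1beta1/namespaces/default/ingresses \
  -d '{
    "apiVersion": "extensions/v1beta1",
    "kind": "Ingress",
    "metadata": {"name": "demo-ingress"},
    "spec": {"backend": {"serviceName": "demo-svc", "servicePort": 80}}
  }'

# delete an Ingress
curl -X DELETE \
  http://localhost:8001/apis/extensions/v1beta1/namespaces/default/ingresses/demo-ingress
```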
diff --git a/_includes/v1.3/v1-definitions.html b/_includes/v1.3/v1-definitions.html index e833b003ea..693f3ce4c7 100755 --- a/_includes/v1.3/v1-definitions.html +++ b/_includes/v1.3/v1-definitions.html @@ -2560,7 +2560,7 @@ The resulting set of endpoints can be viewed as:

v1.FlexVolumeSource

-

FlexVolume represents a generic volume resource that is provisioned/attached using a exec based plugin. This is an alpha feature and may change in future.

+

FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. This is an alpha feature and may change in future.

flexVolume

FlexVolume represents a generic volume resource that is provisioned/attached using a exec based plugin. This is an alpha feature and may change in future.

FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. This is an alpha feature and may change in future.

false

v1.FlexVolumeSource

path

Path is a extended POSIX regex as defined by IEEE Std 1003.1, (i.e. this follows the egrep/unix syntax, not the perl syntax) matched against the path of an incoming request. Currently it can contain characters disallowed from the conventional "path" part of a URL as defined by RFC 3986. Paths must begin with a /. If unspecified, the path defaults to a catch all sending traffic to the backend.

Path is an extended POSIX regex as defined by IEEE Std 1003.1, (i.e. this follows the egrep/unix syntax, not the perl syntax) matched against the path of an incoming request. Currently it can contain characters disallowed from the conventional "path" part of a URL as defined by RFC 3986. Paths must begin with a /. If unspecified, the path defaults to a catch all sending traffic to the backend.

false

string
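A minimal sketch of an Ingress rule using such a regex path; the host, service name, and port are hypothetical examples.

```shell
# Example only: an egrep-style path (must begin with "/") routed to a
# placeholder backend service.
kubectl create -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: path-demo
spec:
  rules:
  - host: foo.example.com
    http:
      paths:
      - path: /api/.*          # extended POSIX regex, as described above
        backend:
          serviceName: api-svc
          servicePort: 8080
EOF
```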

@@ -3268,7 +3268,7 @@ The resulting set of endpoints can be viewed as:
- + @@ -5555,7 +5555,7 @@ The resulting set of endpoints can be viewed as:
- + diff --git a/_includes/v1.3/v1-operations.html b/_includes/v1.3/v1-operations.html index 24e21c4f53..de6b5117e6 100755 --- a/_includes/v1.3/v1-operations.html +++ b/_includes/v1.3/v1-operations.html @@ -2676,7 +2676,7 @@
-

create a Endpoints

+

create an Endpoints

POST /api/v1/namespaces/{namespace}/endpoints
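For illustration, a minimal Endpoints object that could be created through this path, e.g. to back a selector-less Service; the name, IP, and port are placeholders.

```shell
# Sketch: manually created Endpoints. The object name must match the Service
# it backs; 192.0.2.10 is a documentation-range placeholder address.
kubectl create -f - <<EOF
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db
subsets:
- addresses:
  - ip: 192.0.2.10
  ports:
  - port: 5432
EOF
```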
@@ -3057,7 +3057,7 @@
-

delete a Endpoints

+

delete an Endpoints

DELETE /api/v1/namespaces/{namespace}/endpoints/{name}
@@ -3619,7 +3619,7 @@
-

create a Event

+

create an Event

POST /api/v1/namespaces/{namespace}/events
@@ -4000,7 +4000,7 @@
-

delete a Event

+

delete an Event

DELETE /api/v1/namespaces/{namespace}/events/{name}
diff --git a/_includes/v1.4/extensions-v1beta1-operations.html b/_includes/v1.4/extensions-v1beta1-operations.html index a18a2f6030..ce55af43d9 100755 --- a/_includes/v1.4/extensions-v1beta1-operations.html +++ b/_includes/v1.4/extensions-v1beta1-operations.html @@ -5578,7 +5578,7 @@
-

create a Ingress

+

create an Ingress

POST /apis/extensions/v1beta1/namespaces/{namespace}/ingresses
@@ -5959,7 +5959,7 @@
-

delete a Ingress

+

delete an Ingress

DELETE /apis/extensions/v1beta1/namespaces/{namespace}/ingresses/{name}
diff --git a/_includes/v1.4/v1-operations.html b/_includes/v1.4/v1-operations.html index 875b464420..f866fc12fc 100755 --- a/_includes/v1.4/v1-operations.html +++ b/_includes/v1.4/v1-operations.html @@ -2676,7 +2676,7 @@
-

create a Endpoints

+

create an Endpoints

POST /api/v1/namespaces/{namespace}/endpoints
@@ -3057,7 +3057,7 @@
-

delete a Endpoints

+

delete an Endpoints

DELETE /api/v1/namespaces/{namespace}/endpoints/{name}
@@ -3619,7 +3619,7 @@
-

create a Event

+

create an Event

POST /api/v1/namespaces/{namespace}/events
@@ -4000,7 +4000,7 @@
-

delete a Event

+

delete an Event

DELETE /api/v1/namespaces/{namespace}/events/{name}
@@ -7885,7 +7885,7 @@
-

create eviction of a Eviction

+

create eviction of an Eviction

POST /api/v1/namespaces/{namespace}/pods/{name}/eviction
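A sketch of what a client might POST to this subresource, assuming `kubectl proxy` on localhost:8001; the pod name and namespace are examples, and the policy API group version shown is an assumption to check against your cluster.

```shell
# Sketch only: request eviction of a pod via the eviction subresource.
curl -X POST -H "Content-Type: application/json" \
  http://localhost:8001/api/v1/namespaces/default/pods/my-pod/eviction \
  -d '{
    "apiVersion": "policy/v1beta1",
    "kind": "Eviction",
    "metadata": {"name": "my-pod", "namespace": "default"}
  }'
```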
diff --git a/case-studies/index.html b/case-studies/index.html index ce14542424..f593d73fb9 100644 --- a/case-studies/index.html +++ b/case-studies/index.html @@ -17,19 +17,19 @@ title: Case Studies
Pearson -

“We chose Kubernetes because of its flexibility, ease of management and the way it improves our engineers’ productivity.”

+

"We chose Kubernetes because of its flexibility, ease of management and the way it improves our engineers' productivity."

Read about Pearson
Wikimedia -

“With Kubernetes, we’re simplifying our environment and making it easier for developers to build the tools that make wikis run better.”

+

"With Kubernetes, we're simplifying our environment and making it easier for developers to build the tools that make wikis run better."

Read about Wikimedia
eBay -

Inside eBay’s shift to Kubernetes and containers atop OpenStack

+

Inside eBay's shift to Kubernetes and containers atop OpenStack

Read about eBay
@@ -45,7 +45,7 @@ title: Case Studies
- + diff --git a/case-studies/pearson.html b/case-studies/pearson.html index bf871789b9..50f16ce7ae 100644 --- a/case-studies/pearson.html +++ b/case-studies/pearson.html @@ -13,13 +13,13 @@ title: Pearson Case Study
-

Using Kubernetes to reinvent the world’s largest educational company

+

Using Kubernetes to reinvent the world's largest educational company

- Pearson, the world’s education company, serving 75 million learners worldwide, set a goal to more than double that number to 200 million by 2025. A key part of this growth is in digital learning experiences, and that requires an infrastructure platform that is able to scale quickly and deliver products to market faster. So Pearson’s Cloud Technology team chose Kubernetes to help build a platform to meet the business requirements.

+ Pearson, the world's education company, serving 75 million learners worldwide, set a goal to more than double that number to 200 million by 2025. A key part of this growth is in digital learning experiences, and that requires an infrastructure platform that is able to scale quickly and deliver products to market faster. So Pearson's Cloud Technology team chose Kubernetes to help build a platform to meet the business requirements.

Pearson

- “To transform our infrastructure, we had to think beyond simply enabling automated provisioning, we realized we had to build a platform that would allow Pearson developers to build manage and deploy applications in a completely different way. We chose Kubernetes because of its flexibility, ease of management and the way it would improve our engineers’ productivity.”

+ "To transform our infrastructure, we had to think beyond simply enabling automated provisioning, we realized we had to build a platform that would allow Pearson developers to build manage and deploy applications in a completely different way. We chose Kubernetes because of its flexibility, ease of management and the way it would improve our engineers' productivity."

— Chris Jackson, Director for Cloud Product Engineering, Pearson

@@ -38,7 +38,7 @@ title: Pearson Case Study

Why Kubernetes:

    -
  • Kubernetes will allow Pearson’s teams to develop their apps in a consistent manner, saving time and minimizing complexity.
  • +
  • Kubernetes will allow Pearson's teams to develop their apps in a consistent manner, saving time and minimizing complexity.
@@ -52,7 +52,7 @@ title: Pearson Case Study

Results:

    -
  • Pearson is building an enterprise-wide platform for delivering innovative, web-based educational content. They expect engineers’ productivity to increase by up to 20 percent.
  • +
  • Pearson is building an enterprise-wide platform for delivering innovative, web-based educational content. They expect engineers' productivity to increase by up to 20 percent.
@@ -63,9 +63,9 @@ title: Pearson Case Study

Kubernetes powers a comprehensive developer experience

-

Pearson wanted to use as much open source technology as possible for the platform given that it provides both technical and commercial benefits over the duration of the project. Jackson says, “Building an infrastructure platform based on open source technology in Pearson was a no-brainer, the sharing of technical challenges and advanced use cases in a community of people with talent far beyond what we could hire independently allows us to innovate at a level we could not reach on our own. Our engineers enjoy returning code to the community and participating in talks, blogs and meetings, it’s a great way for us to allow our team to express themselves and share the pride they have in their work.”

-

It also wanted to use a container-focused platform. Pearson has 400 development groups and diverse brands with varying business and technical needs. With containers, each brand could experiment with building new types of content using their preferred technologies, and then deliver it using containers. Pearson chose Kubernetes because it believes that is the best technology for managing containers, has the widest community support and offers the most flexible and powerful tools.“

-

Kubernetes is at the core of the platform we’ve built for developers. After we get our big spike in back-to-school in traffic, much of Pearson’s traffic will interact with Kubernetes. It is proving to be as effective as we had hoped,” Jackson says.

+

Pearson wanted to use as much open source technology as possible for the platform given that it provides both technical and commercial benefits over the duration of the project. Jackson says, "Building an infrastructure platform based on open source technology in Pearson was a no-brainer, the sharing of technical challenges and advanced use cases in a community of people with talent far beyond what we could hire independently allows us to innovate at a level we could not reach on our own. Our engineers enjoy returning code to the community and participating in talks, blogs and meetings, it's a great way for us to allow our team to express themselves and share the pride they have in their work."

+

It also wanted to use a container-focused platform. Pearson has 400 development groups and diverse brands with varying business and technical needs. With containers, each brand could experiment with building new types of content using their preferred technologies, and then deliver it using containers. Pearson chose Kubernetes because it believes that is the best technology for managing containers, has the widest community support and offers the most flexible and powerful tools.

+

"Kubernetes is at the core of the platform we've built for developers. After we get our big spike in back-to-school traffic, much of Pearson's traffic will interact with Kubernetes. It is proving to be as effective as we had hoped," Jackson says.

@@ -74,9 +74,9 @@ title: Pearson Case Study

Encouraging experimentation, saving engineers time

-

With the new platform, Pearson will increase stability and performance, and to bring products to market more quickly. The company says its engineers will also get a productivity boost because they won’t spend time managing infrastructure. Jackson estimates 15 to 20 percent in productivity savings.

+

With the new platform, Pearson will increase stability and performance, and bring products to market more quickly. The company says its engineers will also get a productivity boost because they won't spend time managing infrastructure. Jackson estimates 15 to 20 percent in productivity savings.

Beyond that, Pearson says the platform will encourage innovation because of the ease with which new applications can be developed, and because applications will be deployed far more quickly than in the past. It expects that will help the company meet its goal of reaching 200 million learners within the next 10 years.

-

“We’re already seeing tremendous benefits with Kubernetes — improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online,” says Jackson.

+

"We're already seeing tremendous benefits with Kubernetes — improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online," says Jackson.

diff --git a/case-studies/wikimedia.html b/case-studies/wikimedia.html index 00eb47e3e0..2d3b686128 100644 --- a/case-studies/wikimedia.html +++ b/case-studies/wikimedia.html @@ -20,7 +20,7 @@ title: Wikimedia Case Study
Wikimedia

- “Wikimedia Tool Labs is vital for making sure wikis all around the world work as well as they possibly can. Because it’s grown organically for almost 10 years, it has become an extremely challenging environment and difficult to maintain. It’s like a big ball of mud — you really can’t see through it. With Kubernetes, we’re simplifying the environment and making it easier for developers to build the tools that make wikis run better.” + "Wikimedia Tool Labs is vital for making sure wikis all around the world work as well as they possibly can. Because it's grown organically for almost 10 years, it has become an extremely challenging environment and difficult to maintain. It's like a big ball of mud — you really can't see through it. With Kubernetes, we're simplifying the environment and making it easier for developers to build the tools that make wikis run better."

— Yuvi Panda, operations engineer at Wikimedia Foundation and Wikimedia Tool Labs

@@ -67,13 +67,13 @@ title: Wikimedia Case Study

Using Kubernetes to provide tools for maintaining wikis

- Wikimedia Tool Labs is run by a staff of four-and-a-half paid employees and two volunteers. The infrastructure didn't make it easy or intuitive for developers to build bots and other tools to make wikis work more easily. Yuvi says, “It’s incredibly chaotic. We have lots of Perl and Bash duct tape on top of it. Everything is super fragile.” + Wikimedia Tool Labs is run by a staff of four-and-a-half paid employees and two volunteers. The infrastructure didn't make it easy or intuitive for developers to build bots and other tools to make wikis work more easily. Yuvi says, "It's incredibly chaotic. We have lots of Perl and Bash duct tape on top of it. Everything is super fragile."

To solve the problem, Wikimedia Tool Labs migrated parts of its infrastructure to Kubernetes, in preparation for eventually moving its entire system. Yuvi said Kubernetes greatly simplifies maintenance. The goal is to allow developers creating bots and other tools to use whatever development methods they want, but make it easier for the Wikimedia Tool Labs to maintain the required infrastructure for hosting and sharing them.

- “With Kubernetes, I’ve been able to remove a lot of our custom-made code, which makes everything easier to maintain. Our users’ code also runs in a more stable way than previously,” says Yuvi. + "With Kubernetes, I've been able to remove a lot of our custom-made code, which makes everything easier to maintain. Our users' code also runs in a more stable way than previously," says Yuvi.

@@ -84,13 +84,13 @@ title: Wikimedia Case Study

Simplifying infrastructure and keeping wikis running better

- Wikimedia Tool Labs has seen great success with the initial Kubernetes deployment. Old code is being simplified and eliminated, contributing developers don’t have to change the way they write their tools and bots, and those tools and bots run in a more stable fashion than they have in the past. The paid staff and volunteers are able to better keep up with fixing issues. + Wikimedia Tool Labs has seen great success with the initial Kubernetes deployment. Old code is being simplified and eliminated, contributing developers don't have to change the way they write their tools and bots, and those tools and bots run in a more stable fashion than they have in the past. The paid staff and volunteers are able to better keep up with fixing issues.

- In the future, with a more complete migration to Kubernetes, Wikimedia Tool Labs expects to make it even easier to host and maintain the bots and tools that help run wikis across the world. The tool labs already host approximately 1,300 tools and bots from 800 volunteers, with many more being submitted every day. Twenty percent of the tool labs’ web tools that account for more than 60 percent of web traffic now run on Kubernetes. The tool labs has a 25-node cluster that keeps up with each new Kubernetes release. Many existing web tools are migrating to Kubernetes. + In the future, with a more complete migration to Kubernetes, Wikimedia Tool Labs expects to make it even easier to host and maintain the bots and tools that help run wikis across the world. The tool labs already host approximately 1,300 tools and bots from 800 volunteers, with many more being submitted every day. Twenty percent of the tool labs' web tools that account for more than 60 percent of web traffic now run on Kubernetes. The tool labs has a 25-node cluster that keeps up with each new Kubernetes release. Many existing web tools are migrating to Kubernetes.

- “Our goal is to make sure that people all over the world can share knowledge as easily as possible. Kubernetes helps with that, by making it easier for wikis everywhere to have the tools they need to thrive,” says Yuvi. + "Our goal is to make sure that people all over the world can share knowledge as easily as possible. Kubernetes helps with that, by making it easier for wikis everywhere to have the tools they need to thrive," says Yuvi.

diff --git a/community.html b/community.html index 9ef63c1b66..a10a100375 100644 --- a/community.html +++ b/community.html @@ -24,8 +24,8 @@ title: Community

SIGs

Have a special interest in how Kubernetes works with another technology? See our ever growing lists of SIGs, - from AWS and Openstack to Big Data and Scalability, there’s a place for you to contribute and instructions - for forming a new SIG if your special interest isn’t covered (yet).

+ from AWS and Openstack to Big Data and Scalability, there's a place for you to contribute and instructions + for forming a new SIG if your special interest isn't covered (yet).

Events

diff --git a/docs/admin/accessing-the-api.md b/docs/admin/accessing-the-api.md index 5a57db23ce..a70f1c0920 100644 --- a/docs/admin/accessing-the-api.md +++ b/docs/admin/accessing-the-api.md @@ -86,7 +86,7 @@ For version 1.2, clusters created by `kube-up.sh` are configured so that no auth required for any request. As of version 1.3, clusters created by `kube-up.sh` are configured so that the ABAC authorization -modules is enabled. However, its input file is initially set to allow all users to do all +module is enabled. However, its input file is initially set to allow all users to do all operations. The cluster administrator needs to edit that file, or configure a different authorizer to restrict what users can do. diff --git a/docs/admin/addons.md b/docs/admin/addons.md index f45aebeb09..aeee68cc30 100644 --- a/docs/admin/addons.md +++ b/docs/admin/addons.md @@ -14,7 +14,7 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply * [Calico](http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/) is a secure L3 networking and network policy provider. * [Canal](https://github.com/tigera/canal/tree/master/k8s-install/kubeadm) unites Flannel and Calico, providing networking and network policy. -* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml) is a overlay network provider that can be used with Kubernetes. +* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml) is an overlay network provider that can be used with Kubernetes. * [Romana](http://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/user-guide/networkpolicies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize). * [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database. diff --git a/docs/admin/admission-controllers.md b/docs/admin/admission-controllers.md index 475f2e4be9..089dce2605 100644 --- a/docs/admin/admission-controllers.md +++ b/docs/admin/admission-controllers.md @@ -126,7 +126,7 @@ For additional HTTP configuration, refer to the [kubeconfig](/docs/user-guide/ku When faced with an admission decision, the API Server POSTs a JSON serialized api.imagepolicy.v1alpha1.ImageReview object describing the action. This object contains fields describing the containers being admitted, as well as any pod annotations that match `*.image-policy.k8s.io/*`. -Note that webhook API objects are subject to the same versioning compatibility rules as other Kubernetes API objects. Implementers should be aware of looser compatibility promises for alpha objects and check the “apiVersion” field of the request to ensure correct deserialization. Additionally, the API Server must enable the imagepolicy.k8s.io/v1alpha1 API extensions group (`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`). +Note that webhook API objects are subject to the same versioning compatibility rules as other Kubernetes API objects. Implementers should be aware of looser compatibility promises for alpha objects and check the "apiVersion" field of the request to ensure correct deserialization. Additionally, the API Server must enable the imagepolicy.k8s.io/v1alpha1 API extensions group (`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`).
An example request body: @@ -151,7 +151,7 @@ An example request body: } ``` -The remote service is expected to fill the ImageReviewStatus field of the request and respond to either allow or disallow access. The response body’s “spec” field is ignored and may be omitted. A permissive response would return: +The remote service is expected to fill the ImageReviewStatus field of the request and respond to either allow or disallow access. The response body's "spec" field is ignored and may be omitted. A permissive response would return: ``` { diff --git a/docs/admin/apparmor/index.md b/docs/admin/apparmor/index.md index 4c2d02d989..224f0bbdeb 100644 --- a/docs/admin/apparmor/index.md +++ b/docs/admin/apparmor/index.md @@ -384,7 +384,7 @@ Specifying the default profile to apply to containers when none is provided: - **key**: `apparmor.security.beta.kubernetes.io/defaultProfileName` - **value**: a profile reference, described above -Specifying the list of profiles Pod containers is allowed to specify: +Specifying the list of profiles Pod containers are allowed to specify: - **key**: `apparmor.security.beta.kubernetes.io/allowedProfileNames` - **value**: a comma-separated list of profile references (described above) diff --git a/docs/admin/federation/index.md b/docs/admin/federation/index.md index 478f7563de..f8fb5b6c4f 100644 --- a/docs/admin/federation/index.md +++ b/docs/admin/federation/index.md @@ -110,7 +110,7 @@ $ KUBE_REGISTRY="gcr.io/myrepository" federation/develop/develop.sh build_image $ KUBE_REGISTRY="gcr.io/myrepository" federation/develop/develop.sh push ``` -Note: This is going to overwite the values you might have set for +Note: This is going to overwrite the values you might have set for `apiserverRegistry`, `apiserverVersion`, `controllerManagerRegistry` and `controllerManagerVersion` in your `${FEDERATION_OUTPUT_ROOT}/values.yaml` file. Hence, it is not recommended to customize these values in diff --git a/docs/admin/ha-master-gce.md b/docs/admin/ha-master-gce.md index 262dafbe0a..871ce56606 100644 --- a/docs/admin/ha-master-gce.md +++ b/docs/admin/ha-master-gce.md @@ -24,7 +24,7 @@ If true, reads will be directed to leader etcd replica. Setting this value to true is optional: reads will be more reliable but will also be slower. Optionally, you can specify a GCE zone where the first master replica is to be created. -Set the the following flag: +Set the following flag: * `KUBE_GCE_ZONE=zone` - zone where the first master replica will run. diff --git a/docs/admin/networking.md b/docs/admin/networking.md index c8a8c53d9c..e1de39fdbd 100644 --- a/docs/admin/networking.md +++ b/docs/admin/networking.md @@ -173,7 +173,7 @@ Lars Kellogg-Stedman. [Nuage](http://www.nuagenetworks.net) provides a highly scalable policy-based Software-Defined Networking (SDN) platform. Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards. -The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Nuage’s policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications.The platform’s real-time analytics engine enables visibility and security monitoring for Kubernetes applications. +The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers).
Nuage's policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications. The platform's real-time analytics engine enables visibility and security monitoring for Kubernetes applications. ### OpenVSwitch diff --git a/docs/admin/rescheduler.md b/docs/admin/rescheduler.md index 27c512bff9..ba3633e83b 100644 --- a/docs/admin/rescheduler.md +++ b/docs/admin/rescheduler.md @@ -30,7 +30,7 @@ given the pods that are already running in the cluster the rescheduler tries to free up space for the add-on by evicting some pods; then the scheduler will schedule the add-on pod. To avoid a situation when another pod is scheduled into the space prepared for the critical add-on, -the chosen node gets a temporary taint “CriticalAddonsOnly” before the eviction(s) +the chosen node gets a temporary taint "CriticalAddonsOnly" before the eviction(s) (see [more details](https://github.com/kubernetes/kubernetes/blob/master/docs/design/taint-toleration-dedicated.md)). Each critical add-on has to tolerate it, the other pods shouldn't tolerate the taint. The taint is removed once the add-on is successfully scheduled. @@ -57,4 +57,3 @@ and have the following annotations specified: * `scheduler.alpha.kubernetes.io/tolerations` set to `[{"key":"CriticalAddonsOnly", "operator":"Exists"}]` The first one marks a pod as critical. The second one is required by Rescheduler algorithm. - diff --git a/docs/getting-started-guides/libvirt-coreos.md b/docs/getting-started-guides/libvirt-coreos.md index 33c6c6be67..ca2e9e7d75 100644 --- a/docs/getting-started-guides/libvirt-coreos.md +++ b/docs/getting-started-guides/libvirt-coreos.md @@ -30,7 +30,7 @@ Another difference is that no security is enforced on `libvirt-coreos` at all. F * Kubernetes secrets are not protected as securely as they are on production environments; * etc. -So, an k8s application developer should not validate its interaction with Kubernetes on `libvirt-coreos` because he might technically succeed in doing things that are prohibited on a production environment like: +So, a k8s application developer should not validate its interaction with Kubernetes on `libvirt-coreos` because he might technically succeed in doing things that are prohibited on a production environment like: * un-authenticated access to Kube API server; * Access to Kubernetes private data structures inside etcd; diff --git a/docs/getting-started-guides/logging.md b/docs/getting-started-guides/logging.md index ff874e119d..05c41cd3c6 100644 --- a/docs/getting-started-guides/logging.md +++ b/docs/getting-started-guides/logging.md @@ -79,7 +79,7 @@ root 479 0.0 0.0 4348 812 ? S 00:05 0:00 sleep 1 root 480 0.0 0.0 15572 2212 ? R 00:05 0:00 ps aux ``` -What happens if for any reason the image in this pod is killed off and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines for the started container? Or will we lose the log lines from the original container's execution and only see the log lines for the new container? Let’s find out. First let's delete the currently running counter. +What happens if for any reason the image in this pod is killed off and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines for the started container? Or will we lose the log lines from the original container's execution and only see the log lines for the new container? Let's find out.
First let's delete the currently running counter. ```shell $ kubectl delete pod counter diff --git a/docs/getting-started-guides/meanstack.md b/docs/getting-started-guides/meanstack.md index 37df0513f7..ca34d32753 100644 --- a/docs/getting-started-guides/meanstack.md +++ b/docs/getting-started-guides/meanstack.md @@ -17,12 +17,12 @@ Thankfully, there is a system we can use to manage our containers in a cluster e ## The Basics of Using Kubernetes -Before we jump in and start kube’ing it up, it’s important to understand some of the fundamentals of Kubernetes. +Before we jump in and start kube'ing it up, it's important to understand some of the fundamentals of Kubernetes. * Containers: These are the Docker, rtk, AppC, or whatever Container you are running. You can think of these like subatomic particles; everything is made up of them, but you rarely (if ever) interact with them directly. -* Pods: Pods are the basic component of Kubernetes. They are a group of Containers that are scheduled, live, and die together. Why would you want to have a group of containers instead of just a single container? Let’s say you had a log processor, a web server, and a database. If you couldn't use Pods, you would have to bundle the log processor in the web server and database containers, and each time you updated one you would have to update the other. With Pods, you can just reuse the same log processor for both the web server and database. +* Pods: Pods are the basic component of Kubernetes. They are a group of Containers that are scheduled, live, and die together. Why would you want to have a group of containers instead of just a single container? Let's say you had a log processor, a web server, and a database. If you couldn't use Pods, you would have to bundle the log processor in the web server and database containers, and each time you updated one you would have to update the other. With Pods, you can just reuse the same log processor for both the web server and database. * Deployments: A Deployment provides declarative updates for Pods. You can define Deployments to create new Pods, or replace existing Pods. You only need to describe the desired state in a Deployment object, and the deployment controller will change the actual state to the desired state at a controlled rate for you. You can define Deployments to create new resources, or replace existing ones by new ones. -* Services: A service is the single point of contact for a group of Pods. For example, let’s say you have a Deployment that creates four copies of a web server pod. A Service will split the traffic to each of the four copies. Services are "permanent" while the pods behind them can come and go, so it’s a good idea to use Services. +* Services: A service is the single point of contact for a group of Pods. For example, let's say you have a Deployment that creates four copies of a web server pod. A Service will split the traffic to each of the four copies. Services are "permanent" while the pods behind them can come and go, so it's a good idea to use Services. ## Step 1: Creating the Container @@ -37,7 +37,7 @@ To do this, you need to use more Docker. Make sure you have the latest version i Getting the code: -Before starting, let’s get some code to run. You can follow along on your personal machine or a Linux VM in the cloud. I recommend using Linux or a Linux VM; running Docker on Mac and Windows is outside the scope of this tutorial. +Before starting, let's get some code to run. 
You can follow along on your personal machine or a Linux VM in the cloud. I recommend using Linux or a Linux VM; running Docker on Mac and Windows is outside the scope of this tutorial. ```shell $ git clone https://github.com/ijason/NodeJS-Sample-App.git app @@ -45,7 +45,7 @@ $ mv app/EmployeeDB/* app/ $ sed -i -- 's/localhost/mongo/g' ./app/app.js ``` -This is the same sample app we ran before. The second line just moves everything from the `EmployeeDB` subfolder up into the app folder so it’s easier to access. The third line, once again, replaces the hardcoded `localhost` with the `mongo` proxy. +This is the same sample app we ran before. The second line just moves everything from the `EmployeeDB` subfolder up into the app folder so it's easier to access. The third line, once again, replaces the hardcoded `localhost` with the `mongo` proxy. Building the Docker image: @@ -83,7 +83,7 @@ $ ls Dockerfile app ``` -Let’s build. +Let's build. ```shell $ docker build -t myapp . @@ -139,7 +139,7 @@ After some time, it will finish. You can check the console to see the container ## **Step 4: Creating the Cluster** -So now you have the custom container, let’s create a cluster to run it. +So now you have the custom container, let's create a cluster to run it. Currently, a cluster can be as small as one machine to as big as 100 machines. You can pick any machine type you want, so you can have a cluster of a single `f1-micro` instance, 100 `n1-standard-32` instances (3,200 cores!), and anything in between. @@ -193,7 +193,7 @@ $ gcloud compute disks create \ Pick the same zone as your cluster and an appropriate disk size for your application. -Now, we need to create a Deployment that will run the database. I’m using a Deployment and not a Pod, because if a standalone Pod dies, it won't restart automatically. +Now, we need to create a Deployment that will run the database. I'm using a Deployment and not a Pod, because if a standalone Pod dies, it won't restart automatically. ### `db-deployment.yml` @@ -231,7 +231,7 @@ We call the deployment `mongo-deployment`, specify one replica, and open the app The `volumes` section creates the volume for Kubernetes to use. There is a Google Container Engine-specific `gcePersistentDisk` section that maps the disk we made into a Kubernetes volume, and we mount the volume into the `/data/db` directory (as described in the MongoDB Docker documentation) -Now we have the Deployment, let’s create the Service: +Now we have the Deployment, let's create the Service: ### `db-service.yml` @@ -267,7 +267,7 @@ db-service.yml ## Step 6: Running the Database -First, let’s "log in" to the cluster +First, let's "log in" to the cluster ```shell $ gcloud container clusters get-credentials mean-cluster @@ -305,14 +305,14 @@ mongo-deployment-xxxx 1/1 Running 0 3m ## Step 7: Creating the Web Server -Now the database is running, let’s start the web server. +Now the database is running, let's start the web server. We need two things: 1. Deployment to spin up and down web server pods 2. 
Service to expose our website to the interwebs -Let’s look at the Deployment configuration: +Let's look at the Deployment configuration: ### `web-deployment.yml` diff --git a/docs/getting-started-guides/mesos/index.md b/docs/getting-started-guides/mesos/index.md index 948eae1a41..499ff0ba51 100644 --- a/docs/getting-started-guides/mesos/index.md +++ b/docs/getting-started-guides/mesos/index.md @@ -229,7 +229,7 @@ We assume that kube-dns will use Note that we have passed these two values already as parameter to the apiserver above. -A template for an replication controller spinning up the pod with the 3 containers can be found at [cluster/addons/dns/skydns-rc.yaml.in][11] in the repository. The following steps are necessary in order to get a valid replication controller yaml file: +A template for a replication controller spinning up the pod with the 3 containers can be found at [cluster/addons/dns/skydns-rc.yaml.in][11] in the repository. The following steps are necessary in order to get a valid replication controller yaml file: - replace `{% raw %}{{ pillar['dns_replicas'] }}{% endraw %}` with `1` - replace `{% raw %}{{ pillar['dns_domain'] }}{% endraw %}` with `cluster.local.` diff --git a/docs/getting-started-guides/rackspace.md b/docs/getting-started-guides/rackspace.md index 00c73a8e59..ff59f4d31b 100644 --- a/docs/getting-started-guides/rackspace.md +++ b/docs/getting-started-guides/rackspace.md @@ -45,7 +45,7 @@ There is a specific `cluster/rackspace` directory with the scripts for the follo 1. A cloud network will be created and all instances will be attached to this network. - flanneld uses this network for next hop routing. These routes allow the containers running on each node to communicate with one another on this private network. -2. A SSH key will be created and uploaded if needed. This key must be used to ssh into the machines (we do not capture the password). +2. An SSH key will be created and uploaded if needed. This key must be used to ssh into the machines (we do not capture the password). 3. The master server and additional nodes will be created via the `nova` CLI. A `cloud-config.yaml` is generated and provided as user-data with the entire configuration for the systems. 4. We then boot as many nodes as defined via `$NUM_NODES`. diff --git a/docs/getting-started-guides/windows/index.md b/docs/getting-started-guides/windows/index.md index 3096bed7eb..dd775b81af 100644 --- a/docs/getting-started-guides/windows/index.md +++ b/docs/getting-started-guides/windows/index.md @@ -15,18 +15,18 @@ In Kubernetes version 1.5, Windows Server Containers for Kubernetes is supported 4. Docker Version 1.12.2-cs2-ws-beta or later for Windows Server nodes (Linux nodes and Kubernetes control plane can run any Kubernetes supported Docker Version) ## Networking -Network is achieved using L3 routing. Because third-party networking plugins (e.g. flannel, calico, etc) don’t natively work on Windows Server, existing technology that is built into the Windows and Linux operating systems is relied on. In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node will be connected to the /24 subnet. This allows pods on the same node to communicate with each other. In order to enable networking between pods running on different nodes, routing features that are built into Windows Server 2016 and Linux are used. +Network is achieved using L3 routing. Because third-party networking plugins (e.g. 
flannel, calico, etc) don't natively work on Windows Server, existing technology that is built into the Windows and Linux operating systems is relied on. In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node will be connected to the /24 subnet. This allows pods on the same node to communicate with each other. In order to enable networking between pods running on different nodes, routing features that are built into Windows Server 2016 and Linux are used. ### Linux -The above networking approach is already supported on Linux using a bridge interface, which essentially creates a private network local to the node. Similar to the Windows side, routes to all other pod CIDRs must be created in order to send packets via the “public” NIC. +The above networking approach is already supported on Linux using a bridge interface, which essentially creates a private network local to the node. Similar to the Windows side, routes to all other pod CIDRs must be created in order to send packets via the "public" NIC. ### Windows Each Window Server node should have the following configuration: 1. Two NICs (virtual networking adapters) are required on each Windows Server node - The two Windows container networking modes of interest (transparent and L2 bridge) use an external Hyper-V virtual switch. This means that one of the NICs is entirely allocated to the bridge, creating the need for the second NIC. 2. Transparent container network created - This is a manual configuration step and is shown in **_Route Setup_** section below -3. RRAS (Routing) Windows feature enabled - Allows routing between NICs on the box, and also “captures” packets that have the destination IP of a POD running on the node. To enable, open “Server Manager”. Click on “Roles”, “Add Roles”. Click “Next”. Select “Network Policy and Access Services”. Click on “Routing and Remote Access Service” and the underlying checkboxes -4. Routes defined pointing to the other pod CIDRs via the “public” NIC - These routes are added to the built-in routing table as shown in **_Route Setup_** section below +3. RRAS (Routing) Windows feature enabled - Allows routing between NICs on the box, and also "captures" packets that have the destination IP of a POD running on the node. To enable, open "Server Manager". Click on "Roles", "Add Roles". Click "Next". Select "Network Policy and Access Services". Click on "Routing and Remote Access Service" and the underlying checkboxes +4. Routes defined pointing to the other pod CIDRs via the "public" NIC - These routes are added to the built-in routing table as shown in **_Route Setup_** section below The following diagram illustrates the Windows Server networking setup for Kubernetes Setup ![Windows Setup](windows-setup.png) diff --git a/docs/hellonode.md b/docs/hellonode.md index 412e495c11..fcb8eec480 100755 --- a/docs/hellonode.md +++ b/docs/hellonode.md @@ -12,7 +12,7 @@ title: Hello World on Google Container Engine The goal of this codelab is for you to turn a simple Hello World node.js app into a replicated application running on Kubernetes. We will show you how to take code that you have developed on your machine, turn it into a Docker container image, and then run that image on [Google Container Engine](https://cloud.google.com/container-engine/). -Here’s a diagram of the various parts in play in this codelab to help you understand how pieces fit with one another. 
Use this as a reference as we progress through the codelab; it should all make sense by the time we get to the end. +Here's a diagram of the various parts in play in this codelab to help you understand how pieces fit with one another. Use this as a reference as we progress through the codelab; it should all make sense by the time we get to the end. ![image](/images/hellonode/image_1.png) @@ -38,7 +38,7 @@ export PROJECT_ID="your-project-id" Next, [enable billing](https://console.cloud.google.com/billing) in the Cloud Console in order to use Google Cloud resources and [enable the Container Engine API](https://console.cloud.google.com/project/_/kubernetes/list). -New users of Google Cloud Platform receive a [$300 free trial](https://console.cloud.google.com/billing/freetrial?hl=en). Running through this codelab shouldn’t cost you more than a few dollars of that trial. Google Container Engine pricing is documented [here](https://cloud.google.com/container-engine/pricing). +New users of Google Cloud Platform receive a [$300 free trial](https://console.cloud.google.com/billing/freetrial?hl=en). Running through this codelab shouldn't cost you more than a few dollars of that trial. Google Container Engine pricing is documented [here](https://cloud.google.com/container-engine/pricing). Next, make sure you [download Node.js](https://nodejs.org/en/download/). You can skip this and the steps for installing Docker and Cloud SDK if you're using Cloud Shell. @@ -79,7 +79,7 @@ You should be able to see your "Hello World!" message at http://localhost:8080/. Stop the running node server by pressing Ctrl-C. -Now let’s package this application in a Docker container. +Now let's package this application in a Docker container. ## Create a Docker container image @@ -109,7 +109,7 @@ Let's try your image out with Docker: docker run -d -p 8080:8080 --name hello_tutorial gcr.io/$PROJECT_ID/hello-node:v1 ``` -Visit your app in the browser, or use `curl` or `wget` if you’d like : +Visit your app in the browser, or use `curl` or `wget` if you'd like : ```shell curl http://localhost:8080 @@ -123,7 +123,7 @@ You should see `Hello World!` curl "http://$(docker-machine ip YOUR-VM-MACHINE-NAME):8080" ``` -Let’s now stop the container. You can list the docker containers with: +Let's now stop the container. You can list the docker containers with: ```shell docker ps @@ -180,7 +180,7 @@ You should get a Kubernetes cluster with three nodes, ready to receive your cont ![image](/images/hellonode/image_11.png) -It’s now time to deploy your own containerized application to the Kubernetes cluster! +It's now time to deploy your own containerized application to the Kubernetes cluster! ```shell gcloud container clusters get-credentials hello-world @@ -258,7 +258,7 @@ kubectl expose deployment hello-node --type="LoadBalancer" **If this fails, make sure your client and server are both version 1.3. See the [Create your cluster](#create-your-cluster) section for details.** -The flag used in this command specifies that we’ll be using the load-balancer provided by the underlying infrastructure (in this case the [Compute Engine load balancer](https://cloud.google.com/compute/docs/load-balancing/)). Note that we expose the deployment, and not the pod directly. This will cause the resulting service to load balance traffic across all pods managed by the deployment (in this case only 1 pod, but we will add more replicas later). 
+The flag used in this command specifies that we'll be using the load-balancer provided by the underlying infrastructure (in this case the [Compute Engine load balancer](https://cloud.google.com/compute/docs/load-balancing/)). Note that we expose the deployment, and not the pod directly. This will cause the resulting service to load balance traffic across all pods managed by the deployment (in this case only 1 pod, but we will add more replicas later). The Kubernetes master creates the load balancer and related Compute Engine forwarding rules, target pools, and firewall rules to make the service fully accessible from outside of Google Cloud Platform. @@ -322,7 +322,7 @@ hello-node-714049816-ztzrb 1/1 Running 0 41m Note the **declarative approach** here - rather than starting or stopping new instances you declare how many instances you want to be running. Kubernetes reconciliation loops simply make sure the reality matches what you requested and take action if needed. -Here’s a diagram summarizing the state of our Kubernetes cluster: +Here's a diagram summarizing the state of our Kubernetes cluster: ![image](/images/hellonode/image_13.png) @@ -330,7 +330,7 @@ Here’s a diagram summarizing the state of our Kubernetes cluster: As always, the application you deployed to production requires bug fixes or additional features. Kubernetes is here to help you deploy a new version to production without impacting your users. -First, let’s modify the application. On the development machine, edit server.js and update the response message: +First, let's modify the application. On the development machine, edit server.js and update the response message: ```javascript response.end('Hello Kubernetes World!'); @@ -345,7 +345,7 @@ gcloud docker -- push gcr.io/$PROJECT_ID/hello-node:v2 Building and pushing this updated image should be much quicker as we take full advantage of the Docker cache. -We’re now ready for Kubernetes to smoothly update our deployment to the new version of the application. In order to change +We're now ready for Kubernetes to smoothly update our deployment to the new version of the application. In order to change the image label for our running container, we will need to edit the existing *hello-node deployment* and change the image from `gcr.io/$PROJECT_ID/hello-node:v1` to `gcr.io/$PROJECT_ID/hello-node:v2`. To do this, we will use the `kubectl set image` command. @@ -364,7 +364,7 @@ hello-node 4 5 4 3 1h While this is happening, the users of the services should not see any interruption. After a little while they will start accessing the new version of your application. You can find more details in the [deployment documentation](/docs/user-guide/deployments/). -Hopefully with these deployment, scaling and update features you’ll agree that once you’ve setup your environment (your GKE/Kubernetes cluster here), Kubernetes is here to help you focus on the application rather than the infrastructure. +Hopefully with these deployment, scaling and update features you'll agree that once you've set up your environment (your GKE/Kubernetes cluster here), Kubernetes is here to help you focus on the application rather than the infrastructure.
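The `kubectl set image` update described above amounts to a short sketch like the following, assuming the container inside the deployment is also named `hello-node`:

```shell
# Sketch of the rolling update discussed above, then watching it complete.
kubectl set image deployment/hello-node hello-node=gcr.io/$PROJECT_ID/hello-node:v2
kubectl rollout status deployment/hello-node
```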
## Observe the Kubernetes Web UI (optional) diff --git a/docs/tutorials/kubernetes-basics/cluster-intro.html b/docs/tutorials/kubernetes-basics/cluster-intro.html index 6009a55aeb..830b651594 100644 --- a/docs/tutorials/kubernetes-basics/cluster-intro.html +++ b/docs/tutorials/kubernetes-basics/cluster-intro.html @@ -90,7 +90,7 @@ title: Using Minikube to Create a Cluster

A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, Mac OS and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete. For this bootcamp, however, you'll use a provided online terminal with Minikube pre-installed.
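If you run Minikube locally rather than in the provided online terminal, the bootstrapping operations mentioned above map to commands like this sketch:

```shell
minikube start    # create a local single-node cluster inside a VM
minikube status   # check the state of the cluster
minikube stop     # stop the VM while preserving cluster state
minikube delete   # tear down the VM and the cluster
```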

-

Now that you know what Kubernetes is, let’s go to the online tutorial and start our first cluster!

+

Now that you know what Kubernetes is, let's go to the online tutorial and start our first cluster!

diff --git a/docs/tutorials/kubernetes-basics/deploy-intro.html b/docs/tutorials/kubernetes-basics/deploy-intro.html index 5352e0ea38..b8ca582f1d 100644 --- a/docs/tutorials/kubernetes-basics/deploy-intro.html +++ b/docs/tutorials/kubernetes-basics/deploy-intro.html @@ -86,9 +86,9 @@ title: Using kubectl to Create a Deployment
-

For our first Deployment, we’ll use a Node.js application packaged in a Docker container. The source code and the Dockerfile are available in the GitHub repository for the Kubernetes Bootcamp.

+

For our first Deployment, we'll use a Node.js application packaged in a Docker container. The source code and the Dockerfile are available in the GitHub repository for the Kubernetes Bootcamp.

-

Now that you know what Deployments are, let’s go to the online tutorial and deploy our first app!

+

Now that you know what Deployments are, let's go to the online tutorial and deploy our first app!

diff --git a/docs/tutorials/kubernetes-basics/explore-intro.html b/docs/tutorials/kubernetes-basics/explore-intro.html index edc813d3d4..e16d2a0755 100644 --- a/docs/tutorials/kubernetes-basics/explore-intro.html +++ b/docs/tutorials/kubernetes-basics/explore-intro.html @@ -34,7 +34,7 @@ title: Viewing Pods and Nodes
  • Networking, as a unique cluster IP address
  • Information about how to run each container, such as the container image version or specific ports to use
  • -

    A Pod models an application-specific “logical host” and can contain different application containers which are relatively tightly coupled. For example, a Pod might include both the container with your Node.js app as well as a different container that feeds the data to be published by the Node.js webserver. The containers in a Pod share an IP Address and port space, are always co-located and co-scheduled, and run in a shared context on the same Node.

    +

    A Pod models an application-specific "logical host" and can contain different application containers which are relatively tightly coupled. For example, a Pod might include both the container with your Node.js app as well as a different container that feeds the data to be published by the Node.js webserver. The containers in a Pod share an IP Address and port space, are always co-located and co-scheduled, and run in a shared context on the same Node.

    Pods are the atomic unit on the Kubernetes platform. When we create a Deployment on Kubernetes, that Deployment creates Pods with containers inside them (as opposed to creating containers directly). Each Pod is tied to the Node where it is scheduled, and remains there until termination (according to restart policy) or deletion. In case of a Node failure, identical Pods are scheduled on other available Nodes in the cluster.
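As a sketch of the "logical host" idea above, here is a hypothetical two-container Pod; the names, images, and command are illustrative only.

```shell
# Sketch: two tightly coupled containers sharing one Pod (and thus one IP
# address and port space).
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web-with-helper
spec:
  containers:
  - name: web
    image: nginx
  - name: helper
    image: busybox
    command: ["sh", "-c", "while true; do date >> /tmp/feed; sleep 5; done"]
EOF
```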

    @@ -117,7 +117,7 @@ title: Viewing Pods and Nodes

    You can use these commands to see when applications were deployed, what their current statuses are, where they are running and what their configurations are.

    -

    Now that we know more about our cluster components and the command line, let’s explore our application.

    +

    Now that we know more about our cluster components and the command line, let's explore our application.

    diff --git a/docs/tutorials/kubernetes-basics/expose-intro.html b/docs/tutorials/kubernetes-basics/expose-intro.html index f426009b23..9ee7a4117a 100644 --- a/docs/tutorials/kubernetes-basics/expose-intro.html +++ b/docs/tutorials/kubernetes-basics/expose-intro.html @@ -28,7 +28,7 @@ title: Using a Service to Expose Your App

    Kubernetes Services

-While Pods do have their own unique IP across the cluster, those IP’s are not exposed outside Kubernetes. Taking into account that over time Pods may be terminated, deleted or replaced by other Pods, we need a way to let other Pods and applications automatically discover each other. Kubernetes addresses this by grouping Pods in Services. A Kubernetes Service is an abstraction layer which defines a logical set of Pods and enables external traffic exposure, load balancing and service discovery for those Pods.
+While Pods do have their own unique IP across the cluster, those IPs are not exposed outside Kubernetes. Because Pods may be terminated, deleted, or replaced by other Pods over time, we need a way for Pods and applications to discover each other automatically. Kubernetes addresses this by grouping Pods in Services. A Kubernetes Service is an abstraction layer which defines a logical set of Pods and enables external traffic exposure, load balancing, and service discovery for those Pods.

    This abstraction will allow us to expose Pods to traffic originating from outside the cluster. Services have their own unique cluster-private IP address and expose a port to receive traffic. If you choose to expose the service outside the cluster, the options are:

      @@ -70,7 +70,7 @@ title: Using a Service to Expose Your App
-A Service provides load balancing of traffic across the contained set of Pods. This is useful when a service is created to group all Pods from a specific Deployment (our application will make use of this in the next module, when we’ll have multiple instances running).
+A Service provides load balancing of traffic across the contained set of Pods. This is useful when a service is created to group all Pods from a specific Deployment (our application will make use of this in the next module, when we'll have multiple instances running).

    Services are also responsible for service discovery within the cluster (covered in Accessing the Service). This will, for example, allow a frontend service (like a web server) to receive traffic from a backend service (like a database) without worrying about Pods.

      @@ -120,7 +120,7 @@ title: Using a Service to Expose Your App

      Labels can be attached to objects at creation time or later, and can be modified at any time. The kubectl run command sets some default Labels/Label Selectors on the new Pods/Deployment. The link between Labels and Label Selectors defines the relationship between the Deployment and the Pods it creates.

-Now let’s expose our application with the help of a Service, and apply some new Labels.
+Now let's expose our application with the help of a Service, and apply some new Labels.


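Putting this page's two ideas — Services and Labels — into commands, a hedged sketch assuming the bootcamp Deployment from the earlier modules (`$POD_NAME` is a placeholder):

```console
# Expose the Deployment outside the cluster via a NodePort Service
$ kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
$ kubectl get services
# Apply a new Label to a Pod, then query by Label Selector
$ kubectl label pod $POD_NAME app=v1
$ kubectl get pods -l app=v1
```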
      diff --git a/docs/tutorials/kubernetes-basics/scale-intro.html b/docs/tutorials/kubernetes-basics/scale-intro.html index 49b9e49dec..cf3635eba1 100644 --- a/docs/tutorials/kubernetes-basics/scale-intro.html +++ b/docs/tutorials/kubernetes-basics/scale-intro.html @@ -101,7 +101,7 @@ title: Running Multiple Instances of Your App
-Once you have multiple instances of an Application running, you would be able to do Rolling updates without downtime. We’ll cover that in the next module. Now, let’s go to the online terminal and scale our application.
+Once you have multiple instances of an application running, you can perform rolling updates without downtime. We'll cover those in the next module. Now, let's go to the online terminal and scale our application.


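The scaling step itself is a one-liner; a minimal sketch, again assuming the bootcamp Deployment:

```console
# Scale the Deployment to 4 replicas, then watch the Pods spread across Nodes
$ kubectl scale deployments/kubernetes-bootcamp --replicas=4
$ kubectl get deployments
$ kubectl get pods -o wide
```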
      diff --git a/docs/tutorials/kubernetes-basics/update-intro.html b/docs/tutorials/kubernetes-basics/update-intro.html index 9ed498ce35..d331e3f58c 100644 --- a/docs/tutorials/kubernetes-basics/update-intro.html +++ b/docs/tutorials/kubernetes-basics/update-intro.html @@ -116,7 +116,7 @@ title: Performing a Rolling Update
-In the following interactive tutorial we’ll update our application to a new version, and also perform a rollback.
+In the following interactive tutorial we'll update our application to a new version, and also perform a rollback.


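The update and rollback reduce to two kubectl verbs; a sketch with an illustrative image tag:

```console
# Roll out a new image version, watch it progress, then revert it
$ kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2
$ kubectl rollout status deployments/kubernetes-bootcamp
$ kubectl rollout undo deployments/kubernetes-bootcamp
```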
diff --git a/docs/tutorials/services/source-ip.md b/docs/tutorials/services/source-ip.md index 6657e42720..76548e68ad 100644 --- a/docs/tutorials/services/source-ip.md +++ b/docs/tutorials/services/source-ip.md @@ -29,7 +29,7 @@ This document makes use of the following terms:

 You must have a working Kubernetes 1.5 cluster to run the examples in this
 document. The examples use a small nginx webserver that echoes back the source
-IP of requests it receives through a HTTP header. You can create it as follows:
+IP of requests it receives through an HTTP header. You can create it as follows:

 ```console
 $ kubectl run source-ip-app --image=gcr.io/google_containers/echoserver:1.4
 ```

diff --git a/docs/tutorials/stateful-application/basic-stateful-set.md b/docs/tutorials/stateful-application/basic-stateful-set.md index 0edf9f9d38..ff16a5c62b 100644 --- a/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/docs/tutorials/stateful-application/basic-stateful-set.md @@ -11,7 +11,7 @@ title: StatefulSet Basics
 ---
 {% capture overview %}
-This tutorial provides an introduction to managing applications with
+This tutorial provides an introduction to managing applications with
 [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/). It
 demonstrates how to create, delete, scale, and update the container image of a
 StatefulSet.

diff --git a/docs/tutorials/stateful-application/run-replicated-stateful-application.md b/docs/tutorials/stateful-application/run-replicated-stateful-application.md index 29f0d68242..30d22e1cce 100644 --- a/docs/tutorials/stateful-application/run-replicated-stateful-application.md +++ b/docs/tutorials/stateful-application/run-replicated-stateful-application.md @@ -180,7 +180,7 @@ replicating.
 In general, when a new Pod joins the set as a slave, it must assume the MySQL
 master might already have data on it. It also must assume that the replication
 logs might not go all the way back to the beginning of time.
-These conservative assumptions are the key to allowing a running StatefulSet
+These conservative assumptions are the key to allowing a running StatefulSet
 to scale up and down over time, rather than being fixed at its initial size.

 The second Init Container, named `clone-mysql`, performs a clone operation on

diff --git a/docs/tutorials/stateful-application/zookeeper.md b/docs/tutorials/stateful-application/zookeeper.md index 90a78fdc31..c6dcf705be 100644 --- a/docs/tutorials/stateful-application/zookeeper.md +++ b/docs/tutorials/stateful-application/zookeeper.md @@ -173,7 +173,7 @@ zk-2
 ```

 The servers in a ZooKeeper ensemble use natural numbers as unique identifiers, and
-each server's identifier is stored in a file called `myid` in the server’s
+each server's identifier is stored in a file called `myid` in the server's
 data directory.

 Examine the contents of the `myid` file for each server.
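One way to examine those `myid` files is a short loop — a sketch assuming the Pod names `zk-0` through `zk-2` and the data directory `/var/lib/zookeeper/data` used by this tutorial's manifests:

```console
# Print each ensemble member's identifier
$ for i in 0 1 2; do echo "myid zk-$i"; kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
```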
diff --git a/docs/user-guide/configuring-containers.md b/docs/user-guide/configuring-containers.md index 1fa82f52e9..51ac150f07 100644 --- a/docs/user-guide/configuring-containers.md +++ b/docs/user-guide/configuring-containers.md @@ -75,7 +75,7 @@
 apiVersion: v1
 kind: Pod
 metadata:
   name: hello-world
-spec:  # specification of the pod’s contents
+spec:  # specification of the pod's contents
   restartPolicy: Never
   containers:
   - name: hello

diff --git a/docs/user-guide/connecting-applications.md b/docs/user-guide/connecting-applications.md index c0cb825a3b..89a711ee6d 100644 --- a/docs/user-guide/connecting-applications.md +++ b/docs/user-guide/connecting-applications.md @@ -181,7 +181,7 @@
 default-token-il9rc kubernetes.io/service-account-token 1
 nginxsecret          Opaque                              2
 ```

-Now modify your nginx replicas to start a https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):
+Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):

 {% include code.html language="yaml" file="nginx-secure-app.yaml" ghlink="/docs/user-guide/nginx-secure-app" %}

diff --git a/docs/user-guide/jobs/work-queue-2/rediswq.py b/docs/user-guide/jobs/work-queue-2/rediswq.py index ebefa64311..ceda8bd1e3 100644 --- a/docs/user-guide/jobs/work-queue-2/rediswq.py +++ b/docs/user-guide/jobs/work-queue-2/rediswq.py @@ -95,7 +95,7 @@ class RedisWQ(object):
         # Record that we (this session id) are working on a key.  Expire that
         # note after the lease timeout.
         # Note: if we crash at this line of the program, then GC will see no lease
-        # for this item an later return it to the main queue.
+        # for this item and later return it to the main queue.
         itemkey = self._itemkey(item)
         self._db.setex(self._lease_key_prefix + itemkey, lease_secs, self._session)
         return item

diff --git a/docs/user-guide/load-balancer.md b/docs/user-guide/load-balancer.md index d8540d98e5..fadeb38d5f 100644 --- a/docs/user-guide/load-balancer.md +++ b/docs/user-guide/load-balancer.md @@ -93,7 +93,7 @@ Due to the implementation of this feature, the source IP for sessions as seen in
 that will preserve the client Source IP for GCE/GKE environments. This feature
 will be phased in for other cloud providers in subsequent releases.

 ## Annotation to modify the LoadBalancer behavior for preservation of Source IP
-In 1.5, an Beta feature has been added that changes the behavior of the external LoadBalancer feature.
+In 1.5, a Beta feature has been added that changes the behavior of the external LoadBalancer feature.
 This feature can be activated by adding the beta annotation below to the metadata section of the Service Configuration file.

diff --git a/docs/user-guide/managing-deployments.md b/docs/user-guide/managing-deployments.md index 2555e5601c..13be7b0238 100644 --- a/docs/user-guide/managing-deployments.md +++ b/docs/user-guide/managing-deployments.md @@ -396,7 +396,7 @@ spec:

 The patch is specified using json.

-The system ensures that you don’t clobber changes made by other users or components by confirming that the `resourceVersion` doesn’t differ from the version you edited. If you want to update regardless of other changes, remove the `resourceVersion` field when you edit the resource. However, if you do this, don’t use your original configuration file as the source since additional fields most likely were set in the live state.
+The system ensures that you don't clobber changes made by other users or components by confirming that the `resourceVersion` doesn't differ from the version you edited. If you want to update regardless of other changes, remove the `resourceVersion` field when you edit the resource. However, if you do this, don't use your original configuration file as the source since additional fields most likely were set in the live state.

 For more information, please see the [kubectl patch](/docs/user-guide/kubectl/kubectl_patch/) document.

diff --git a/docs/user-guide/petset.md b/docs/user-guide/petset.md index 0f6abcde1a..3247cf5a42 100644 --- a/docs/user-guide/petset.md +++ b/docs/user-guide/petset.md @@ -88,7 +88,7 @@ Only use PetSet if your application requires some or all of these properties. Ma

 Example workloads for PetSet:

-* Databases like MySQL or PostgreSQL that require a single instance attached to a NFS persistent volume at any time
+* Databases like MySQL or PostgreSQL that require a single instance attached to an NFS persistent volume at any time
 * Clustered software like Zookeeper, Etcd, or Elasticsearch that require stable membership.

 ## Alpha limitations

diff --git a/docs/user-guide/pod-security-policy/index.md b/docs/user-guide/pod-security-policy/index.md index 46db299311..6a756c4766 100644 --- a/docs/user-guide/pod-security-policy/index.md +++ b/docs/user-guide/pod-security-policy/index.md @@ -26,7 +26,7 @@ administrator to control the following:

 1. The SELinux context of the container.
 1. The user ID.
 1. The use of host namespaces and networking.
-1. Allocating an FSGroup that owns the pod’s volumes
+1. Allocating an FSGroup that owns the pod's volumes
 1. Configuring allowable supplemental groups
 1. Requiring the use of a read only root file system
 1. Controlling the usage of volume types

diff --git a/docs/user-guide/pods/init-container.md b/docs/user-guide/pods/init-container.md index 2f4748f9cf..e5319b7018 100644 --- a/docs/user-guide/pods/init-container.md +++ b/docs/user-guide/pods/init-container.md @@ -105,7 +105,7 @@ If the pod is [restarted](#pod-restart-reasons) all init containers must
 execute again.

 Changes to the init container spec are limited to the container image field.
-Altering a init container image field is equivalent to restarting the pod.
+Altering an init container image field is equivalent to restarting the pod.

 Because init containers can be restarted, retried, or reexecuted, init container
 code should be idempotent. In particular, code that writes to files on EmptyDirs

diff --git a/docs/user-guide/prereqs.md b/docs/user-guide/prereqs.md index 4be0d6a188..3b9688f1b8 100644 --- a/docs/user-guide/prereqs.md +++ b/docs/user-guide/prereqs.md @@ -5,7 +5,7 @@ assignees:
 title: Installing and Setting up kubectl
 ---

-To deploy and manage applications on Kubernetes, you’ll use the Kubernetes command-line tool, [kubectl](/docs/user-guide/kubectl/). It lets you inspect your cluster resources, create, delete, and update components, and much more. You will use it to look at your new cluster and bring up example apps.
+To deploy and manage applications on Kubernetes, you'll use the Kubernetes command-line tool, [kubectl](/docs/user-guide/kubectl/). It lets you inspect your cluster resources, create, delete, and update components, and much more. You will use it to look at your new cluster and bring up example apps.
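A few illustrative first commands, as a hedged sketch of that inspection workflow:

```console
$ kubectl version                    # client and server versions
$ kubectl get nodes                  # the machines in your cluster
$ kubectl get pods --all-namespaces  # everything currently running
```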
 ## Install kubectl Binary Via curl

diff --git a/docs/user-guide/replicasets.md b/docs/user-guide/replicasets.md index ea3e7bde14..769ea58c02 100644 --- a/docs/user-guide/replicasets.md +++ b/docs/user-guide/replicasets.md @@ -37,6 +37,7 @@ their ReplicaSets.

 A ReplicaSet ensures that a specified number of pod “replicas” are running
 at any given time. However, a Deployment is a higher-level concept that manages ReplicaSets and
+provides declarative updates to pods along with a lot of other useful features.
 Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless
 you require custom update orchestration or don't require updates at all.

diff --git a/docs/user-guide/replication-controller/index.md b/docs/user-guide/replication-controller/index.md index 3b91828535..32e1cea9dd 100644 --- a/docs/user-guide/replication-controller/index.md +++ b/docs/user-guide/replication-controller/index.md @@ -236,6 +236,7 @@ object](/docs/api-reference/v1/definitions/#_v1_replicationcontroller).
 It’s mainly used by [`Deployment`](/docs/user-guide/deployments/) as a mechanism to orchestrate pod
 creation, deletion and updates. Note that we recommend using Deployments instead of directly using
 Replica Sets, unless you require custom update orchestration or don’t require updates at all.
+
 ### Deployment (Recommended)

 [`Deployment`](/docs/user-guide/deployments/) is a higher-level API object that updates its underlying Replica Sets and their Pods

diff --git a/docs/user-guide/security-context.md b/docs/user-guide/security-context.md index 3d216447ca..c95a14b9b0 100644 --- a/docs/user-guide/security-context.md +++ b/docs/user-guide/security-context.md @@ -20,7 +20,7 @@ metadata:
   name: hello-world
 spec:
   containers:
-  # specification of the pod’s containers
+  # specification of the pod's containers
   # ...
   securityContext:
     fsGroup: 1234
@@ -85,4 +85,3 @@ Please refer to the
 [API documentation](/docs/api-reference/v1/definitions/#_v1_securitycontext)
 for a detailed listing and description of all the fields available within the
 container security context.
-

diff --git a/docs/whatisk8s.md b/docs/whatisk8s.md index 7c1e637b6d..dde25433de 100644 --- a/docs/whatisk8s.md +++ b/docs/whatisk8s.md @@ -52,7 +52,7 @@ Summary of container benefits:
 * **Cloud and OS distribution portability**:
     Runs on Ubuntu, RHEL, CoreOS, on-prem, Google Container Engine, and anywhere else.
 * **Application-centric management**:
-    Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
+    Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
 * **Loosely coupled, distributed, elastic, liberated [micro-services](http://martinfowler.com/articles/microservices.html)**:
     Applications are broken into smaller, independent pieces and can be deployed and managed dynamically -- not a fat monolithic stack running on one big single-purpose machine.
 * **Resource isolation**:
@@ -106,7 +106,7 @@ Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) syst
 * Kubernetes does not provide nor mandate a comprehensive application configuration language/system (e.g., [jsonnet](https://github.com/google/jsonnet)).
 * Kubernetes does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems.
-On the other hand, a number of PaaS systems run *on* Kubernetes, such as [Openshift](https://github.com/openshift/origin), [Deis](http://deis.io/), and [Gondor](https://gondor.io/). You could also roll your own custom PaaS, integrate with a CI system of your choice, or get along just fine with just Kubernetes: bring your container images and deploy them on Kubernetes.
+On the other hand, a number of PaaS systems run *on* Kubernetes, such as [Openshift](https://github.com/openshift/origin), [Deis](http://deis.io/), and [Eldarion Cloud](http://eldarion.cloud/). You could also roll your own custom PaaS, integrate with a CI system of your choice, or get along just fine with just Kubernetes: bring your container images and deploy them on Kubernetes.

 Since Kubernetes operates at the application level rather than at just the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, load balancing, logging, monitoring, etc. However, Kubernetes is not monolithic, and these default solutions are optional and pluggable.

diff --git a/index.html b/index.html index 728100db84..78b964ce39 100644 --- a/index.html +++ b/index.html @@ -80,7 +80,7 @@

      Self-healing

      Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers
-      that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.
+      that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.

    @@ -100,7 +100,7 @@

    Automated rollouts and rollbacks

    Kubernetes progressively rolls out changes to your application or its configuration, while monitoring
-    application health to ensure it doesn’t kill all your instances at the same time. If something goes
+    application health to ensure it doesn't kill all your instances at the same time. If something goes
    wrong, Kubernetes will rollback the change for you. Take advantage of a growing ecosystem of deployment solutions.

    @@ -131,7 +131,7 @@

    Case Studies

-    Using Kubernetes to reinvent the world’s largest educational company
+    Using Kubernetes to reinvent the world's largest educational company

    Read more
    @@ -139,11 +139,11 @@ Read more
-    Inside eBay’s shift to Kubernetes and containers atop OpenStack
+    Inside eBay's shift to Kubernetes and containers atop OpenStack

    Read more
-    Migrating from a homegrown ‘cluster’ to Kubernetes
+    Migrating from a homegrown 'cluster' to Kubernetes

    Watch the video
@@ -154,7 +154,7 @@

 flexVolume
-FlexVolume represents a generic volume resource that is provisioned/attached using a exec based plugin. This is an alpha feature and may change in future.
+FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. This is an alpha feature and may change in future.
 false
 v1.FlexVolumeSource

@@ -162,11 +162,11 @@

 flexVolume
-FlexVolume represents a generic volume resource that is provisioned/attached using a exec based plugin. This is an alpha feature and may change in future.
+FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. This is an alpha feature and may change in future.
 false
 v1.FlexVolumeSource