@@ -45,7 +45,7 @@ title: Case Studies
-
+
diff --git a/case-studies/pearson.html b/case-studies/pearson.html
index bf871789b9..50f16ce7ae 100644
--- a/case-studies/pearson.html
+++ b/case-studies/pearson.html
@@ -13,13 +13,13 @@ title: Pearson Case Study
-
Using Kubernetes to reinvent the world’s largest educational company
+
Using Kubernetes to reinvent the world's largest educational company
- Pearson, the world’s education company, serving 75 million learners worldwide, set a goal to more than double that number to 200 million by 2025. A key part of this growth is in digital learning experiences, and that requires an infrastructure platform that is able to scale quickly and deliver products to market faster. So Pearson’s Cloud Technology team chose Kubernetes to help build a platform to meet the business requirements.
+ Pearson, the world's education company, serving 75 million learners worldwide, set a goal to more than double that number to 200 million by 2025. A key part of this growth is in digital learning experiences, and that requires an infrastructure platform that is able to scale quickly and deliver products to market faster. So Pearson's Cloud Technology team chose Kubernetes to help build a platform to meet the business requirements.
- “To transform our infrastructure, we had to think beyond simply enabling automated provisioning, we realized we had to build a platform that would allow Pearson developers to build manage and deploy applications in a completely different way. We chose Kubernetes because of its flexibility, ease of management and the way it would improve our engineers’ productivity.”
+ "To transform our infrastructure, we had to think beyond simply enabling automated provisioning, we realized we had to build a platform that would allow Pearson developers to build manage and deploy applications in a completely different way. We chose Kubernetes because of its flexibility, ease of management and the way it would improve our engineers' productivity."
— Chris Jackson, Director for Cloud Product Engineering, Pearson
@@ -38,7 +38,7 @@ title: Pearson Case Study
Why Kubernetes:
- - Kubernetes will allow Pearson’s teams to develop their apps in a consistent manner, saving time and minimizing complexity.
+ - Kubernetes will allow Pearson's teams to develop their apps in a consistent manner, saving time and minimizing complexity.
@@ -52,7 +52,7 @@ title: Pearson Case Study
Results:
- - Pearson is building an enterprise-wide platform for delivering innovative, web-based educational content. They expect engineers’ productivity to increase by up to 20 percent.
+ - Pearson is building an enterprise-wide platform for delivering innovative, web-based educational content. They expect engineers' productivity to increase by up to 20 percent.
@@ -63,9 +63,9 @@ title: Pearson Case Study
Kubernetes powers a comprehensive developer experience
-
Pearson wanted to use as much open source technology as possible for the platform given that it provides both technical and commercial benefits over the duration of the project. Jackson says, “Building an infrastructure platform based on open source technology in Pearson was a no-brainer, the sharing of technical challenges and advanced use cases in a community of people with talent far beyond what we could hire independently allows us to innovate at a level we could not reach on our own. Our engineers enjoy returning code to the community and participating in talks, blogs and meetings, it’s a great way for us to allow our team to express themselves and share the pride they have in their work.”
-
It also wanted to use a container-focused platform. Pearson has 400 development groups and diverse brands with varying business and technical needs. With containers, each brand could experiment with building new types of content using their preferred technologies, and then deliver it using containers. Pearson chose Kubernetes because it believes that is the best technology for managing containers, has the widest community support and offers the most flexible and powerful tools.“
-
Kubernetes is at the core of the platform we’ve built for developers. After we get our big spike in back-to-school in traffic, much of Pearson’s traffic will interact with Kubernetes. It is proving to be as effective as we had hoped,” Jackson says.
+
Pearson wanted to use as much open source technology as possible for the platform given that it provides both technical and commercial benefits over the duration of the project. Jackson says, "Building an infrastructure platform based on open source technology in Pearson was a no-brainer, the sharing of technical challenges and advanced use cases in a community of people with talent far beyond what we could hire independently allows us to innovate at a level we could not reach on our own. Our engineers enjoy returning code to the community and participating in talks, blogs and meetings, it's a great way for us to allow our team to express themselves and share the pride they have in their work."
+
It also wanted to use a container-focused platform. Pearson has 400 development groups and diverse brands with varying business and technical needs. With containers, each brand could experiment with building new types of content using their preferred technologies, and then deliver it using containers. Pearson chose Kubernetes because it believes Kubernetes is the best technology for managing containers, has the widest community support and offers the most flexible and powerful tools.
+
"Kubernetes is at the core of the platform we've built for developers. After we get our big spike in back-to-school traffic, much of Pearson's traffic will interact with Kubernetes. It is proving to be as effective as we had hoped," Jackson says.
@@ -74,9 +74,9 @@ title: Pearson Case Study
Encouraging experimentation, saving engineers time
-
With the new platform, Pearson will increase stability and performance, and to bring products to market more quickly. The company says its engineers will also get a productivity boost because they won’t spend time managing infrastructure. Jackson estimates 15 to 20 percent in productivity savings.
+
With the new platform, Pearson will increase stability and performance, and bring products to market more quickly. The company says its engineers will also get a productivity boost because they won't spend time managing infrastructure. Jackson estimates 15 to 20 percent in productivity savings.
Beyond that, Pearson says the platform will encourage innovation because of the ease with which new applications can be developed, and because applications will be deployed far more quickly than in the past. It expects that will help the company meet its goal of reaching 200 million learners within the next 10 years.
-
“We’re already seeing tremendous benefits with Kubernetes — improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online,” says Jackson.
+
"We're already seeing tremendous benefits with Kubernetes — improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online," says Jackson.
diff --git a/case-studies/wikimedia.html b/case-studies/wikimedia.html
index 00eb47e3e0..2d3b686128 100644
--- a/case-studies/wikimedia.html
+++ b/case-studies/wikimedia.html
@@ -20,7 +20,7 @@ title: Wikimedia Case Study
- “Wikimedia Tool Labs is vital for making sure wikis all around the world work as well as they possibly can. Because it’s grown organically for almost 10 years, it has become an extremely challenging environment and difficult to maintain. It’s like a big ball of mud — you really can’t see through it. With Kubernetes, we’re simplifying the environment and making it easier for developers to build the tools that make wikis run better.”
+ "Wikimedia Tool Labs is vital for making sure wikis all around the world work as well as they possibly can. Because it's grown organically for almost 10 years, it has become an extremely challenging environment and difficult to maintain. It's like a big ball of mud — you really can't see through it. With Kubernetes, we're simplifying the environment and making it easier for developers to build the tools that make wikis run better."
— Yuvi Panda, operations engineer at Wikimedia Foundation and Wikimedia Tool Labs
@@ -67,13 +67,13 @@ title: Wikimedia Case Study
Using Kubernetes to provide tools for maintaining wikis
- Wikimedia Tool Labs is run by a staff of four-and-a-half paid employees and two volunteers. The infrastructure didn't make it easy or intuitive for developers to build bots and other tools to make wikis work more easily. Yuvi says, “It’s incredibly chaotic. We have lots of Perl and Bash duct tape on top of it. Everything is super fragile.”
+ Wikimedia Tool Labs is run by a staff of four-and-a-half paid employees and two volunteers. The infrastructure didn't make it easy or intuitive for developers to build bots and other tools to make wikis work more easily. Yuvi says, "It's incredibly chaotic. We have lots of Perl and Bash duct tape on top of it. Everything is super fragile."
To solve the problem, Wikimedia Tool Labs migrated parts of its infrastructure to Kubernetes, in preparation for eventually moving its entire system. Yuvi says Kubernetes greatly simplifies maintenance. The goal is to allow developers creating bots and other tools to use whatever development methods they want, but make it easier for Wikimedia Tool Labs to maintain the required infrastructure for hosting and sharing them.
- “With Kubernetes, I’ve been able to remove a lot of our custom-made code, which makes everything easier to maintain. Our users’ code also runs in a more stable way than previously,” says Yuvi.
+ "With Kubernetes, I've been able to remove a lot of our custom-made code, which makes everything easier to maintain. Our users' code also runs in a more stable way than previously," says Yuvi.
@@ -84,13 +84,13 @@ title: Wikimedia Case Study
Simplifying infrastructure and keeping wikis running better
- Wikimedia Tool Labs has seen great success with the initial Kubernetes deployment. Old code is being simplified and eliminated, contributing developers don’t have to change the way they write their tools and bots, and those tools and bots run in a more stable fashion than they have in the past. The paid staff and volunteers are able to better keep up with fixing issues.
+ Wikimedia Tool Labs has seen great success with the initial Kubernetes deployment. Old code is being simplified and eliminated, contributing developers don't have to change the way they write their tools and bots, and those tools and bots run in a more stable fashion than they have in the past. The paid staff and volunteers are able to better keep up with fixing issues.
- In the future, with a more complete migration to Kubernetes, Wikimedia Tool Labs expects to make it even easier to host and maintain the bots and tools that help run wikis across the world. The tool labs already host approximately 1,300 tools and bots from 800 volunteers, with many more being submitted every day. Twenty percent of the tool labs’ web tools that account for more than 60 percent of web traffic now run on Kubernetes. The tool labs has a 25-node cluster that keeps up with each new Kubernetes release. Many existing web tools are migrating to Kubernetes.
+ In the future, with a more complete migration to Kubernetes, Wikimedia Tool Labs expects to make it even easier to host and maintain the bots and tools that help run wikis across the world. The tool labs already host approximately 1,300 tools and bots from 800 volunteers, with many more being submitted every day. Twenty percent of the tool labs' web tools, which account for more than 60 percent of web traffic, now run on Kubernetes. The tool labs have a 25-node cluster that keeps up with each new Kubernetes release. Many existing web tools are migrating to Kubernetes.
- “Our goal is to make sure that people all over the world can share knowledge as easily as possible. Kubernetes helps with that, by making it easier for wikis everywhere to have the tools they need to thrive,” says Yuvi.
+ "Our goal is to make sure that people all over the world can share knowledge as easily as possible. Kubernetes helps with that, by making it easier for wikis everywhere to have the tools they need to thrive," says Yuvi.
diff --git a/community.html b/community.html
index 9ef63c1b66..a10a100375 100644
--- a/community.html
+++ b/community.html
@@ -24,8 +24,8 @@ title: Community
SIGs
Have a special interest in how Kubernetes works with another technology? See our ever-growing
lists of SIGs,
- from AWS and Openstack to Big Data and Scalability, there’s a place for you to contribute and instructions
- for forming a new SIG if your special interest isn’t covered (yet).
+ from AWS and OpenStack to Big Data and Scalability; there's a place for you to contribute and instructions
+ for forming a new SIG if your special interest isn't covered (yet).
Events
diff --git a/docs/admin/accessing-the-api.md b/docs/admin/accessing-the-api.md
index 5a57db23ce..a70f1c0920 100644
--- a/docs/admin/accessing-the-api.md
+++ b/docs/admin/accessing-the-api.md
@@ -86,7 +86,7 @@ For version 1.2, clusters created by `kube-up.sh` are configured so that no auth
required for any request.
As of version 1.3, clusters created by `kube-up.sh` are configured so that the ABAC authorization
-modules is enabled. However, its input file is initially set to allow all users to do all
+module is enabled. However, its input file is initially set to allow all users to do all
operations. The cluster administrator needs to edit that file, or configure a different authorizer
to restrict what users can do.
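
For a concrete sense of that file's format, here is a minimal sketch; the file path and user name are assumptions for illustration, not the defaults used by `kube-up.sh`. Each line of an ABAC policy file is a single JSON policy object:

```shell
# Illustrative only: the path and user are placeholders.
# This policy grants user "alice" read-only access to every resource:
echo '{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "*", "resource": "*", "apiGroup": "*", "readonly": true}}' \
  >> /etc/kubernetes/abac-policy.jsonl
```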
diff --git a/docs/admin/addons.md b/docs/admin/addons.md
index f45aebeb09..aeee68cc30 100644
--- a/docs/admin/addons.md
+++ b/docs/admin/addons.md
@@ -14,7 +14,7 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply
* [Calico](http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/) is a secure L3 networking and network policy provider.
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install/kubeadm) unites Flannel and Calico, providing networking and network policy.
-* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml) is a overlay network provider that can be used with Kubernetes.
+* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml) is an overlay network provider that can be used with Kubernetes.
* [Romana](http://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/user-guide/networkpolicies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize).
* [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.
diff --git a/docs/admin/admission-controllers.md b/docs/admin/admission-controllers.md
index 475f2e4be9..089dce2605 100644
--- a/docs/admin/admission-controllers.md
+++ b/docs/admin/admission-controllers.md
@@ -126,7 +126,7 @@ For additional HTTP configuration, refer to the [kubeconfig](/docs/user-guide/ku
When faced with an admission decision, the API Server POSTs a JSON serialized api.imagepolicy.v1alpha1.ImageReview object describing the action. This object contains fields describing the containers being admitted, as well as any pod annotations that match `*.image-policy.k8s.io/*`.
-Note that webhook API objects are subject to the same versioning compatibility rules as other Kubernetes API objects. Implementers should be aware of looser compatibility promises for alpha objects and check the “apiVersion” field of the request to ensure correct deserialization. Additionally, the API Server must enable the imagepolicy.k8s.io/v1alpha1 API extensions group (`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`).
+Note that webhook API objects are subject to the same versioning compatibility rules as other Kubernetes API objects. Implementers should be aware of looser compatibility promises for alpha objects and check the "apiVersion" field of the request to ensure correct deserialization. Additionally, the API Server must enable the imagepolicy.k8s.io/v1alpha1 API extensions group (`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`).
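
As a rough sketch, wiring both pieces together on the API server might look like the following; the admission plugin name matches this page, but every other required apiserver flag is omitted here:

```shell
# Illustrative fragment; a real invocation needs the rest of your apiserver flags.
kube-apiserver \
  --admission-control=ImagePolicyWebhook \
  --runtime-config=imagepolicy.k8s.io/v1alpha1=true
```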
An example request body:
@@ -151,7 +151,7 @@ An example request body:
}
```
-The remote service is expected to fill the ImageReviewStatus field of the request and respond to either allow or disallow access. The response body’s “spec” field is ignored and may be omitted. A permissive response would return:
+The remote service is expected to fill the ImageReviewStatus field of the request and respond to either allow or disallow access. The response body's "spec" field is ignored and may be omitted. A permissive response would return:
```
{
diff --git a/docs/admin/apparmor/index.md b/docs/admin/apparmor/index.md
index 4c2d02d989..224f0bbdeb 100644
--- a/docs/admin/apparmor/index.md
+++ b/docs/admin/apparmor/index.md
@@ -384,7 +384,7 @@ Specifying the default profile to apply to containers when none is provided:
- **key**: `apparmor.security.beta.kubernetes.io/defaultProfileName`
- **value**: a profile reference, described above
-Specifying the list of profiles Pod containers are allowed to specify:
+Specifying the list of profiles that Pod containers are allowed to specify:
- **key**: `apparmor.security.beta.kubernetes.io/allowedProfileNames`
- **value**: a comma-separated list of profile references (described above)
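
As a sketch, and assuming (as in this section) that the annotations are applied to a PodSecurityPolicy, they can be set with `kubectl annotate`; the policy name and profile names below are placeholders:

```shell
# Illustrative only; "example-psp" and the profile names are placeholders.
kubectl annotate podsecuritypolicy example-psp \
  apparmor.security.beta.kubernetes.io/defaultProfileName='runtime/default' \
  apparmor.security.beta.kubernetes.io/allowedProfileNames='runtime/default,localhost/my-profile'
```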
diff --git a/docs/admin/federation/index.md b/docs/admin/federation/index.md
index 478f7563de..f8fb5b6c4f 100644
--- a/docs/admin/federation/index.md
+++ b/docs/admin/federation/index.md
@@ -110,7 +110,7 @@ $ KUBE_REGISTRY="gcr.io/myrepository" federation/develop/develop.sh build_image
$ KUBE_REGISTRY="gcr.io/myrepository" federation/develop/develop.sh push
```
-Note: This is going to overwite the values you might have set for
+Note: This is going to overwrite the values you might have set for
`apiserverRegistry`, `apiserverVersion`, `controllerManagerRegistry` and
`controllerManagerVersion` in your `${FEDERATION_OUTPUT_ROOT}/values.yaml`
file. Hence, it is not recommended to customize these values in
diff --git a/docs/admin/ha-master-gce.md b/docs/admin/ha-master-gce.md
index 262dafbe0a..871ce56606 100644
--- a/docs/admin/ha-master-gce.md
+++ b/docs/admin/ha-master-gce.md
If true, reads will be directed to the leader etcd replica.
Setting this value to true is optional: reads will be more reliable but will also be slower.
Optionally, you can specify a GCE zone where the first master replica is to be created.
-Set the the following flag:
+Set the following flag:
* `KUBE_GCE_ZONE=zone` - zone where the first master replica will run.
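
For example (the zone value here is only an example):

```shell
# Illustrative; pick whichever GCE zone should host the first master replica.
KUBE_GCE_ZONE=europe-west1-c ./cluster/kube-up.sh
```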
diff --git a/docs/admin/networking.md b/docs/admin/networking.md
index c8a8c53d9c..e1de39fdbd 100644
--- a/docs/admin/networking.md
+++ b/docs/admin/networking.md
@@ -173,7 +173,7 @@ Lars Kellogg-Stedman.
[Nuage](http://www.nuagenetworks.net) provides a highly scalable policy-based Software-Defined Networking (SDN) platform. Nuage uses the open source Open vSwitch for the data plane along with a feature-rich SDN Controller built on open standards.
-The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Nuage’s policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications.The platform’s real-time analytics engine enables visibility and security monitoring for Kubernetes applications.
+The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Nuage's policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications. The platform's real-time analytics engine enables visibility and security monitoring for Kubernetes applications.
### OpenVSwitch
diff --git a/docs/admin/rescheduler.md b/docs/admin/rescheduler.md
index 27c512bff9..ba3633e83b 100644
--- a/docs/admin/rescheduler.md
+++ b/docs/admin/rescheduler.md
@@ -30,7 +30,7 @@ given the pods that are already running in the cluster
the rescheduler tries to free up space for the add-on by evicting some pods; then the scheduler will schedule the add-on pod.
To avoid a situation where another pod is scheduled into the space prepared for the critical add-on,
-the chosen node gets a temporary taint “CriticalAddonsOnly” before the eviction(s)
+the chosen node gets a temporary taint "CriticalAddonsOnly" before the eviction(s)
(see [more details](https://github.com/kubernetes/kubernetes/blob/master/docs/design/taint-toleration-dedicated.md)).
Each critical add-on has to tolerate it,
the other pods shouldn't tolerate the taint. The taint is removed once the add-on is successfully scheduled.
@@ -57,4 +57,3 @@ and have the following annotations specified:
* `scheduler.alpha.kubernetes.io/tolerations` set to `[{"key":"CriticalAddonsOnly", "operator":"Exists"}]`
The first one marks a pod as critical. The second one is required by the Rescheduler algorithm.
-
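
A minimal sketch of a pod carrying the toleration annotation quoted above; the pod name, namespace and image are placeholders, and the marker annotation discussed earlier is not repeated here:

```shell
# Illustrative only; name, namespace and image are placeholders.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: my-addon
  namespace: kube-system
  annotations:
    scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
  containers:
  - name: addon
    image: gcr.io/google-containers/pause:2.0
EOF
```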
diff --git a/docs/getting-started-guides/libvirt-coreos.md b/docs/getting-started-guides/libvirt-coreos.md
index 33c6c6be67..ca2e9e7d75 100644
--- a/docs/getting-started-guides/libvirt-coreos.md
+++ b/docs/getting-started-guides/libvirt-coreos.md
@@ -30,7 +30,7 @@ Another difference is that no security is enforced on `libvirt-coreos` at all. F
* Kubernetes secrets are not protected as securely as they are on production environments;
* etc.
-So, an k8s application developer should not validate its interaction with Kubernetes on `libvirt-coreos` because he might technically succeed in doing things that are prohibited on a production environment like:
+So, a k8s application developer should not validate their interaction with Kubernetes on `libvirt-coreos` because they might technically succeed in doing things that are prohibited in a production environment, like:
* un-authenticated access to Kube API server;
* Access to Kubernetes private data structures inside etcd;
diff --git a/docs/getting-started-guides/logging.md b/docs/getting-started-guides/logging.md
index ff874e119d..05c41cd3c6 100644
--- a/docs/getting-started-guides/logging.md
+++ b/docs/getting-started-guides/logging.md
@@ -79,7 +79,7 @@ root 479 0.0 0.0 4348 812 ? S 00:05 0:00 sleep 1
root 480 0.0 0.0 15572 2212 ? R 00:05 0:00 ps aux
```
-What happens if for any reason the image in this pod is killed off and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines for the started container? Or will we lose the log lines from the original container's execution and only see the log lines for the new container? Let’s find out. First let's delete the currently running counter.
+What happens if for any reason the image in this pod is killed off and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines for the started container? Or will we lose the log lines from the original container's execution and only see the log lines for the new container? Let's find out. First let's delete the currently running counter.
```shell
$ kubectl delete pod counter
diff --git a/docs/getting-started-guides/meanstack.md b/docs/getting-started-guides/meanstack.md
index 37df0513f7..ca34d32753 100644
--- a/docs/getting-started-guides/meanstack.md
+++ b/docs/getting-started-guides/meanstack.md
@@ -17,12 +17,12 @@ Thankfully, there is a system we can use to manage our containers in a cluster e
## The Basics of Using Kubernetes
-Before we jump in and start kube’ing it up, it’s important to understand some of the fundamentals of Kubernetes.
+Before we jump in and start kube'ing it up, it's important to understand some of the fundamentals of Kubernetes.
* Containers: These are the Docker, rkt, AppC, or whatever Container you are running. You can think of these like subatomic particles; everything is made up of them, but you rarely (if ever) interact with them directly.
-* Pods: Pods are the basic component of Kubernetes. They are a group of Containers that are scheduled, live, and die together. Why would you want to have a group of containers instead of just a single container? Let’s say you had a log processor, a web server, and a database. If you couldn't use Pods, you would have to bundle the log processor in the web server and database containers, and each time you updated one you would have to update the other. With Pods, you can just reuse the same log processor for both the web server and database.
+* Pods: Pods are the basic component of Kubernetes. They are a group of Containers that are scheduled, live, and die together. Why would you want to have a group of containers instead of just a single container? Let's say you had a log processor, a web server, and a database. If you couldn't use Pods, you would have to bundle the log processor in the web server and database containers, and each time you updated one you would have to update the other. With Pods, you can just reuse the same log processor for both the web server and database.
* Deployments: A Deployment provides declarative updates for Pods. You can define Deployments to create new Pods, or replace existing Pods with new ones. You only need to describe the desired state in a Deployment object, and the deployment controller will change the actual state to the desired state at a controlled rate for you.
-* Services: A service is the single point of contact for a group of Pods. For example, let’s say you have a Deployment that creates four copies of a web server pod. A Service will split the traffic to each of the four copies. Services are "permanent" while the pods behind them can come and go, so it’s a good idea to use Services.
+* Services: A service is the single point of contact for a group of Pods. For example, let's say you have a Deployment that creates four copies of a web server pod. A Service will split the traffic to each of the four copies. Services are "permanent" while the pods behind them can come and go, so it's a good idea to use Services.
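
As a quick sketch of that last idea (the names and ports are placeholders, not part of this tutorial):

```shell
# Illustrative: one Service in front of a Deployment's pods; traffic to
# port 80 is spread across however many replicas currently exist.
kubectl expose deployment web --port=80 --target-port=3000
```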
## Step 1: Creating the Container
@@ -37,7 +37,7 @@ To do this, you need to use more Docker. Make sure you have the latest version i
Getting the code:
-Before starting, let’s get some code to run. You can follow along on your personal machine or a Linux VM in the cloud. I recommend using Linux or a Linux VM; running Docker on Mac and Windows is outside the scope of this tutorial.
+Before starting, let's get some code to run. You can follow along on your personal machine or a Linux VM in the cloud. I recommend using Linux or a Linux VM; running Docker on Mac and Windows is outside the scope of this tutorial.
```shell
$ git clone https://github.com/ijason/NodeJS-Sample-App.git app
@@ -45,7 +45,7 @@ $ mv app/EmployeeDB/* app/
$ sed -i -- 's/localhost/mongo/g' ./app/app.js
```
-This is the same sample app we ran before. The second line just moves everything from the `EmployeeDB` subfolder up into the app folder so it’s easier to access. The third line, once again, replaces the hardcoded `localhost` with the `mongo` proxy.
+This is the same sample app we ran before. The second line just moves everything from the `EmployeeDB` subfolder up into the app folder so it's easier to access. The third line, once again, replaces the hardcoded `localhost` with the `mongo` proxy.
Building the Docker image:
@@ -83,7 +83,7 @@ $ ls
Dockerfile app
```
-Let’s build.
+Let's build.
```shell
$ docker build -t myapp .
@@ -139,7 +139,7 @@ After some time, it will finish. You can check the console to see the container
## Step 4: Creating the Cluster
-So now you have the custom container, let’s create a cluster to run it.
+So now you have the custom container, let's create a cluster to run it.
Currently, a cluster can be as small as one machine to as big as 100 machines. You can pick any machine type you want, so you can have a cluster of a single `f1-micro` instance, 100 `n1-standard-32` instances (3,200 cores!), and anything in between.
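
As a rough sketch, creating a modest cluster might look like this; the machine type, node count and zone are example values:

```shell
# Illustrative; adjust machine type, node count and zone to your needs.
gcloud container clusters create mean-cluster \
  --machine-type=n1-standard-1 \
  --num-nodes=3 \
  --zone=us-central1-f
```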
@@ -193,7 +193,7 @@ $ gcloud compute disks create \
Pick the same zone as your cluster and an appropriate disk size for your application.
-Now, we need to create a Deployment that will run the database. I’m using a Deployment and not a Pod, because if a standalone Pod dies, it won't restart automatically.
+Now, we need to create a Deployment that will run the database. I'm using a Deployment and not a Pod, because if a standalone Pod dies, it won't restart automatically.
### `db-deployment.yml`
@@ -231,7 +231,7 @@ We call the deployment `mongo-deployment`, specify one replica, and open the app
The `volumes` section creates the volume for Kubernetes to use. There is a Google Container Engine-specific `gcePersistentDisk` section that maps the disk we made into a Kubernetes volume, and we mount the volume into the `/data/db` directory (as described in the MongoDB Docker documentation).
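
A minimal sketch of what that wiring can look like in the deployment spec, printed here as a fragment for reference; the disk name is an assumption and must match the disk created earlier:

```shell
# Illustrative fragment only; "mongo-disk" must match your gcloud disk name.
cat <<'EOF'
      volumes:
        - name: mongo-persistent-storage
          gcePersistentDisk:
            pdName: mongo-disk
            fsType: ext4
EOF
```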
-Now we have the Deployment, let’s create the Service:
+Now we have the Deployment, let's create the Service:
### `db-service.yml`
@@ -267,7 +267,7 @@ db-service.yml
## Step 6: Running the Database
-First, let’s "log in" to the cluster
+First, let's "log in" to the cluster
```shell
$ gcloud container clusters get-credentials mean-cluster
@@ -305,14 +305,14 @@ mongo-deployment-xxxx 1/1 Running 0 3m
## Step 7: Creating the Web Server
-Now the database is running, let’s start the web server.
+Now the database is running, let's start the web server.
We need two things:
1. Deployment to spin up and down web server pods
2. Service to expose our website to the interwebs
-Let’s look at the Deployment configuration:
+Let's look at the Deployment configuration:
### `web-deployment.yml`
diff --git a/docs/getting-started-guides/mesos/index.md b/docs/getting-started-guides/mesos/index.md
index 948eae1a41..499ff0ba51 100644
--- a/docs/getting-started-guides/mesos/index.md
+++ b/docs/getting-started-guides/mesos/index.md
@@ -229,7 +229,7 @@ We assume that kube-dns will use
Note that we have already passed these two values as parameters to the apiserver above.
-A template for an replication controller spinning up the pod with the 3 containers can be found at [cluster/addons/dns/skydns-rc.yaml.in][11] in the repository. The following steps are necessary in order to get a valid replication controller yaml file:
+A template for a replication controller spinning up the pod with the 3 containers can be found at [cluster/addons/dns/skydns-rc.yaml.in][11] in the repository. The following steps are necessary in order to get a valid replication controller yaml file:
- replace `{% raw %}{{ pillar['dns_replicas'] }}{% endraw %}` with `1`
- replace `{% raw %}{{ pillar['dns_domain'] }}{% endraw %}` with `cluster.local.`
diff --git a/docs/getting-started-guides/rackspace.md b/docs/getting-started-guides/rackspace.md
index 00c73a8e59..ff59f4d31b 100644
--- a/docs/getting-started-guides/rackspace.md
+++ b/docs/getting-started-guides/rackspace.md
@@ -45,7 +45,7 @@ There is a specific `cluster/rackspace` directory with the scripts for the follo
1. A cloud network will be created and all instances will be attached to this network.
- flanneld uses this network for next hop routing. These routes allow the containers running on each node to communicate with one another on this private network.
-2. A SSH key will be created and uploaded if needed. This key must be used to ssh into the machines (we do not capture the password).
+2. An SSH key will be created and uploaded if needed. This key must be used to ssh into the machines (we do not capture the password).
3. The master server and additional nodes will be created via the `nova` CLI. A `cloud-config.yaml` is generated and provided as user-data with the entire configuration for the systems.
4. We then boot as many nodes as defined via `$NUM_NODES`.
diff --git a/docs/getting-started-guides/windows/index.md b/docs/getting-started-guides/windows/index.md
index 3096bed7eb..dd775b81af 100644
--- a/docs/getting-started-guides/windows/index.md
+++ b/docs/getting-started-guides/windows/index.md
@@ -15,18 +15,18 @@ In Kubernetes version 1.5, Windows Server Containers for Kubernetes is supported
4. Docker Version 1.12.2-cs2-ws-beta or later for Windows Server nodes (Linux nodes and Kubernetes control plane can run any Kubernetes supported Docker Version)
## Networking
-Network is achieved using L3 routing. Because third-party networking plugins (e.g. flannel, calico, etc) don’t natively work on Windows Server, existing technology that is built into the Windows and Linux operating systems is relied on. In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node will be connected to the /24 subnet. This allows pods on the same node to communicate with each other. In order to enable networking between pods running on different nodes, routing features that are built into Windows Server 2016 and Linux are used.
+Networking is achieved using L3 routing. Because third-party networking plugins (e.g. flannel, calico) don't natively work on Windows Server, this approach relies on existing technology built into the Windows and Linux operating systems. In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node will be connected to the /24 subnet. This allows pods on the same node to communicate with each other. In order to enable networking between pods running on different nodes, routing features that are built into Windows Server 2016 and Linux are used.
### Linux
-The above networking approach is already supported on Linux using a bridge interface, which essentially creates a private network local to the node. Similar to the Windows side, routes to all other pod CIDRs must be created in order to send packets via the “public” NIC.
+The above networking approach is already supported on Linux using a bridge interface, which essentially creates a private network local to the node. Similar to the Windows side, routes to all other pod CIDRs must be created in order to send packets via the "public" NIC.
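
For instance, a route to another node's pod subnet might be added like this; the CIDR and next-hop address are placeholders:

```shell
# Illustrative only; 10.244.2.0/24 stands in for another node's pod subnet
# and 10.124.24.3 for that node's address on the cluster network.
ip route add 10.244.2.0/24 via 10.124.24.3
```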
### Windows
Each Windows Server node should have the following configuration:
1. Two NICs (virtual networking adapters) are required on each Windows Server node - The two Windows container networking modes of interest (transparent and L2 bridge) use an external Hyper-V virtual switch. This means that one of the NICs is entirely allocated to the bridge, creating the need for the second NIC.
2. Transparent container network created - This is a manual configuration step and is shown in **_Route Setup_** section below
-3. RRAS (Routing) Windows feature enabled - Allows routing between NICs on the box, and also “captures” packets that have the destination IP of a POD running on the node. To enable, open “Server Manager”. Click on “Roles”, “Add Roles”. Click “Next”. Select “Network Policy and Access Services”. Click on “Routing and Remote Access Service” and the underlying checkboxes
-4. Routes defined pointing to the other pod CIDRs via the “public” NIC - These routes are added to the built-in routing table as shown in **_Route Setup_** section below
+3. RRAS (Routing) Windows feature enabled - Allows routing between NICs on the box, and also "captures" packets that have the destination IP of a POD running on the node. To enable, open "Server Manager". Click on "Roles", "Add Roles". Click "Next". Select "Network Policy and Access Services". Click on "Routing and Remote Access Service" and the underlying checkboxes
+4. Routes defined pointing to the other pod CIDRs via the "public" NIC - These routes are added to the built-in routing table as shown in **_Route Setup_** section below
The following diagram illustrates the Windows Server networking setup for Kubernetes:
![Windows Setup](windows-setup.png)
diff --git a/docs/hellonode.md b/docs/hellonode.md
index 412e495c11..fcb8eec480 100755
--- a/docs/hellonode.md
+++ b/docs/hellonode.md
@@ -12,7 +12,7 @@ title: Hello World on Google Container Engine
The goal of this codelab is for you to turn a simple Hello World Node.js app into a replicated application running on Kubernetes. We will show you how to take code that you have developed on your machine, turn it into a Docker container image, and then run that image on [Google Container Engine](https://cloud.google.com/container-engine/).
-Here’s a diagram of the various parts in play in this codelab to help you understand how pieces fit with one another. Use this as a reference as we progress through the codelab; it should all make sense by the time we get to the end.
+Here's a diagram of the various parts in play in this codelab to help you understand how pieces fit with one another. Use this as a reference as we progress through the codelab; it should all make sense by the time we get to the end.
![image](/images/hellonode/image_1.png)
@@ -38,7 +38,7 @@ export PROJECT_ID="your-project-id"
Next, [enable billing](https://console.cloud.google.com/billing) in the Cloud Console in order to use Google Cloud resources and [enable the Container Engine API](https://console.cloud.google.com/project/_/kubernetes/list).
-New users of Google Cloud Platform receive a [$300 free trial](https://console.cloud.google.com/billing/freetrial?hl=en). Running through this codelab shouldn’t cost you more than a few dollars of that trial. Google Container Engine pricing is documented [here](https://cloud.google.com/container-engine/pricing).
+New users of Google Cloud Platform receive a [$300 free trial](https://console.cloud.google.com/billing/freetrial?hl=en). Running through this codelab shouldn't cost you more than a few dollars of that trial. Google Container Engine pricing is documented [here](https://cloud.google.com/container-engine/pricing).
Next, make sure you [download Node.js](https://nodejs.org/en/download/). You can skip this and the steps for installing Docker and Cloud SDK if you're using Cloud Shell.
@@ -79,7 +79,7 @@ You should be able to see your "Hello World!" message at http://localhost:8080/.
Stop the running node server by pressing Ctrl-C.
-Now let’s package this application in a Docker container.
+Now let's package this application in a Docker container.
## Create a Docker container image
@@ -109,7 +109,7 @@ Let's try your image out with Docker:
docker run -d -p 8080:8080 --name hello_tutorial gcr.io/$PROJECT_ID/hello-node:v1
```
-Visit your app in the browser, or use `curl` or `wget` if you’d like :
+Visit your app in the browser, or use `curl` or `wget` if you'd like:
```shell
curl http://localhost:8080
@@ -123,7 +123,7 @@ You should see `Hello World!`
curl "http://$(docker-machine ip YOUR-VM-MACHINE-NAME):8080"
```
-Let’s now stop the container. You can list the docker containers with:
+Let's now stop the container. You can list the docker containers with:
```shell
docker ps
@@ -180,7 +180,7 @@ You should get a Kubernetes cluster with three nodes, ready to receive your cont
![image](/images/hellonode/image_11.png)
-It’s now time to deploy your own containerized application to the Kubernetes cluster!
+It's now time to deploy your own containerized application to the Kubernetes cluster!
```shell
gcloud container clusters get-credentials hello-world
@@ -258,7 +258,7 @@ kubectl expose deployment hello-node --type="LoadBalancer"
**If this fails, make sure your client and server are both version 1.3. See the [Create your cluster](#create-your-cluster) section for details.**
-The flag used in this command specifies that we’ll be using the load-balancer provided by the underlying infrastructure (in this case the [Compute Engine load balancer](https://cloud.google.com/compute/docs/load-balancing/)). Note that we expose the deployment, and not the pod directly. This will cause the resulting service to load balance traffic across all pods managed by the deployment (in this case only 1 pod, but we will add more replicas later).
+The flag used in this command specifies that we'll be using the load-balancer provided by the underlying infrastructure (in this case the [Compute Engine load balancer](https://cloud.google.com/compute/docs/load-balancing/)). Note that we expose the deployment, and not the pod directly. This will cause the resulting service to load balance traffic across all pods managed by the deployment (in this case only 1 pod, but we will add more replicas later).
The Kubernetes master creates the load balancer and related Compute Engine forwarding rules, target pools, and firewall rules to make the service fully accessible from outside of Google Cloud Platform.
@@ -322,7 +322,7 @@ hello-node-714049816-ztzrb 1/1 Running 0 41m
Note the **declarative approach** here - rather than starting or stopping new instances you declare how many instances you want to be running. Kubernetes reconciliation loops simply make sure the reality matches what you requested and take action if needed.
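
A sketch of that declarative step (the replica count is an example):

```shell
# Illustrative: declare the desired count and let Kubernetes reconcile.
kubectl scale deployment hello-node --replicas=4
```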
-Here’s a diagram summarizing the state of our Kubernetes cluster:
+Here's a diagram summarizing the state of our Kubernetes cluster:
![image](/images/hellonode/image_13.png)
@@ -330,7 +330,7 @@ Here’s a diagram summarizing the state of our Kubernetes cluster:
As always, the application you deployed to production requires bug fixes or additional features. Kubernetes is here to help you deploy a new version to production without impacting your users.
-First, let’s modify the application. On the development machine, edit server.js and update the response message:
+First, let's modify the application. On the development machine, edit server.js and update the response message:
```javascript
response.end('Hello Kubernetes World!');
@@ -345,7 +345,7 @@ gcloud docker -- push gcr.io/$PROJECT_ID/hello-node:v2
Building and pushing this updated image should be much quicker as we take full advantage of the Docker cache.
-We’re now ready for Kubernetes to smoothly update our deployment to the new version of the application. In order to change
+We're now ready for Kubernetes to smoothly update our deployment to the new version of the application. In order to change
the image label for our running container, we will need to edit the existing *hello-node deployment* and change the image from
`gcr.io/$PROJECT_ID/hello-node:v1` to `gcr.io/$PROJECT_ID/hello-node:v2`. To do this, we will use the `kubectl set image` command.
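
A sketch of that command, assuming the container inside the deployment is also named `hello-node`:

```shell
# Illustrative; the container name "hello-node" is an assumption.
kubectl set image deployment/hello-node hello-node=gcr.io/$PROJECT_ID/hello-node:v2
```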
@@ -364,7 +364,7 @@ hello-node 4 5 4 3 1h
While this is happening, the users of the services should not see any interruption. After a little while they will start accessing the new version of your application. You can find more details in the [deployment documentation](/docs/user-guide/deployments/).
-Hopefully with these deployment, scaling and update features you’ll agree that once you’ve setup your environment (your GKE/Kubernetes cluster here), Kubernetes is here to help you focus on the application rather than the infrastructure.
+Hopefully with these deployment, scaling and update features you'll agree that once you've set up your environment (your GKE/Kubernetes cluster here), Kubernetes is here to help you focus on the application rather than the infrastructure.
## Observe the Kubernetes Web UI (optional)
diff --git a/docs/tutorials/kubernetes-basics/cluster-intro.html b/docs/tutorials/kubernetes-basics/cluster-intro.html
index 6009a55aeb..830b651594 100644
--- a/docs/tutorials/kubernetes-basics/cluster-intro.html
+++ b/docs/tutorials/kubernetes-basics/cluster-intro.html
@@ -90,7 +90,7 @@ title: Using Minikube to Create a Cluster
A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, Mac OS and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete. For this bootcamp, however, you'll use a provided online terminal with Minikube pre-installed.
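
A sketch of those bootstrapping operations, exactly as named above:

```shell
# The basic Minikube lifecycle commands mentioned in the paragraph above.
minikube start    # create the local VM and single-node cluster
minikube status   # check the cluster's state
minikube stop     # halt the VM, keeping its state
minikube delete   # remove the VM entirely
```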
-
Now that you know what Kubernetes is, let’s go to the online tutorial and start our first cluster!
+
Now that you know what Kubernetes is, let's go to the online tutorial and start our first cluster!