@@ -45,7 +45,7 @@ title: Case Studies
-
+
diff --git a/case-studies/pearson.html b/case-studies/pearson.html
index bf871789b9..50f16ce7ae 100644
--- a/case-studies/pearson.html
+++ b/case-studies/pearson.html
@@ -13,13 +13,13 @@ title: Pearson Case Study
-Using Kubernetes to reinvent the world’s largest educational company
+Using Kubernetes to reinvent the world's largest educational company
- Pearson, the world’s education company, serving 75 million learners worldwide, set a goal to more than double that number to 200 million by 2025. A key part of this growth is in digital learning experiences, and that requires an infrastructure platform that is able to scale quickly and deliver products to market faster. So Pearson’s Cloud Technology team chose Kubernetes to help build a platform to meet the business requirements.
+ Pearson, the world's education company, serving 75 million learners worldwide, set a goal to more than double that number to 200 million by 2025. A key part of this growth is in digital learning experiences, and that requires an infrastructure platform that is able to scale quickly and deliver products to market faster. So Pearson's Cloud Technology team chose Kubernetes to help build a platform to meet the business requirements.
- “To transform our infrastructure, we had to think beyond simply enabling automated provisioning, we realized we had to build a platform that would allow Pearson developers to build manage and deploy applications in a completely different way. We chose Kubernetes because of its flexibility, ease of management and the way it would improve our engineers’ productivity.”
+ "To transform our infrastructure, we had to think beyond simply enabling automated provisioning, we realized we had to build a platform that would allow Pearson developers to build manage and deploy applications in a completely different way. We chose Kubernetes because of its flexibility, ease of management and the way it would improve our engineers' productivity."
— Chris Jackson, Director for Cloud Product Engineering, Pearson
@@ -38,7 +38,7 @@ title: Pearson Case Study
Why Kubernetes:
- - Kubernetes will allow Pearson’s teams to develop their apps in a consistent manner, saving time and minimizing complexity.
+ - Kubernetes will allow Pearson's teams to develop their apps in a consistent manner, saving time and minimizing complexity.
@@ -52,7 +52,7 @@ title: Pearson Case Study
Results:
- - Pearson is building an enterprise-wide platform for delivering innovative, web-based educational content. They expect engineers’ productivity to increase by up to 20 percent.
+ - Pearson is building an enterprise-wide platform for delivering innovative, web-based educational content. They expect engineers' productivity to increase by up to 20 percent.
@@ -63,9 +63,9 @@ title: Pearson Case Study
Kubernetes powers a comprehensive developer experience
-Pearson wanted to use as much open source technology as possible for the platform given that it provides both technical and commercial benefits over the duration of the project. Jackson says, “Building an infrastructure platform based on open source technology in Pearson was a no-brainer, the sharing of technical challenges and advanced use cases in a community of people with talent far beyond what we could hire independently allows us to innovate at a level we could not reach on our own. Our engineers enjoy returning code to the community and participating in talks, blogs and meetings, it’s a great way for us to allow our team to express themselves and share the pride they have in their work.”
-It also wanted to use a container-focused platform. Pearson has 400 development groups and diverse brands with varying business and technical needs. With containers, each brand could experiment with building new types of content using their preferred technologies, and then deliver it using containers. Pearson chose Kubernetes because it believes that is the best technology for managing containers, has the widest community support and offers the most flexible and powerful tools.“
-Kubernetes is at the core of the platform we’ve built for developers. After we get our big spike in back-to-school in traffic, much of Pearson’s traffic will interact with Kubernetes. It is proving to be as effective as we had hoped,” Jackson says.
+Pearson wanted to use as much open source technology as possible for the platform, given that it provides both technical and commercial benefits over the duration of the project. Jackson says, "Building an infrastructure platform based on open source technology in Pearson was a no-brainer; the sharing of technical challenges and advanced use cases in a community of people with talent far beyond what we could hire independently allows us to innovate at a level we could not reach on our own. Our engineers enjoy returning code to the community and participating in talks, blogs and meetings; it's a great way for us to allow our team to express themselves and share the pride they have in their work."
+It also wanted to use a container-focused platform. Pearson has 400 development groups and diverse brands with varying business and technical needs. With containers, each brand could experiment with building new types of content using their preferred technologies, and then deliver it using containers. Pearson chose Kubernetes because it believes it is the best technology for managing containers, has the widest community support and offers the most flexible and powerful tools.
+"Kubernetes is at the core of the platform we've built for developers. After we get our big spike in back-to-school traffic, much of Pearson's traffic will interact with Kubernetes. It is proving to be as effective as we had hoped," Jackson says.
@@ -74,9 +74,9 @@ title: Pearson Case Study
Encouraging experimentation, saving engineers time
-With the new platform, Pearson will increase stability and performance, and to bring products to market more quickly. The company says its engineers will also get a productivity boost because they won’t spend time managing infrastructure. Jackson estimates 15 to 20 percent in productivity savings.
+With the new platform, Pearson will increase stability and performance, and bring products to market more quickly. The company says its engineers will also get a productivity boost because they won't spend time managing infrastructure. Jackson estimates 15 to 20 percent in productivity savings.
Beyond that, Pearson says the platform will encourage innovation because of the ease with which new applications can be developed, and because applications will be deployed far more quickly than in the past. It expects that will help the company meet its goal of reaching 200 million learners within the next 10 years.
-“We’re already seeing tremendous benefits with Kubernetes — improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online,” says Jackson.
+"We're already seeing tremendous benefits with Kubernetes — improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online," says Jackson.
diff --git a/case-studies/wikimedia.html b/case-studies/wikimedia.html
index 00eb47e3e0..2d3b686128 100644
--- a/case-studies/wikimedia.html
+++ b/case-studies/wikimedia.html
@@ -20,7 +20,7 @@ title: Wikimedia Case Study
- “Wikimedia Tool Labs is vital for making sure wikis all around the world work as well as they possibly can. Because it’s grown organically for almost 10 years, it has become an extremely challenging environment and difficult to maintain. It’s like a big ball of mud — you really can’t see through it. With Kubernetes, we’re simplifying the environment and making it easier for developers to build the tools that make wikis run better.”
+ "Wikimedia Tool Labs is vital for making sure wikis all around the world work as well as they possibly can. Because it's grown organically for almost 10 years, it has become an extremely challenging environment and difficult to maintain. It's like a big ball of mud — you really can't see through it. With Kubernetes, we're simplifying the environment and making it easier for developers to build the tools that make wikis run better."
— Yuvi Panda, operations engineer at Wikimedia Foundation and Wikimedia Tool Labs
@@ -67,13 +67,13 @@ title: Wikimedia Case Study
Using Kubernetes to provide tools for maintaining wikis
- Wikimedia Tool Labs is run by a staff of four-and-a-half paid employees and two volunteers. The infrastructure didn't make it easy or intuitive for developers to build bots and other tools to make wikis work more easily. Yuvi says, “It’s incredibly chaotic. We have lots of Perl and Bash duct tape on top of it. Everything is super fragile.”
+ Wikimedia Tool Labs is run by a staff of four-and-a-half paid employees and two volunteers. The infrastructure didn't make it easy or intuitive for developers to build bots and other tools to make wikis work more easily. Yuvi says, "It's incredibly chaotic. We have lots of Perl and Bash duct tape on top of it. Everything is super fragile."
To solve the problem, Wikimedia Tool Labs migrated parts of its infrastructure to Kubernetes, in preparation for eventually moving its entire system. Yuvi says Kubernetes greatly simplifies maintenance. The goal is to allow developers creating bots and other tools to use whatever development methods they want, but make it easier for the Wikimedia Tool Labs to maintain the required infrastructure for hosting and sharing them.
- “With Kubernetes, I’ve been able to remove a lot of our custom-made code, which makes everything easier to maintain. Our users’ code also runs in a more stable way than previously,” says Yuvi.
+ "With Kubernetes, I've been able to remove a lot of our custom-made code, which makes everything easier to maintain. Our users' code also runs in a more stable way than previously," says Yuvi.
@@ -84,13 +84,13 @@ title: Wikimedia Case Study
Simplifying infrastructure and keeping wikis running better
- Wikimedia Tool Labs has seen great success with the initial Kubernetes deployment. Old code is being simplified and eliminated, contributing developers don’t have to change the way they write their tools and bots, and those tools and bots run in a more stable fashion than they have in the past. The paid staff and volunteers are able to better keep up with fixing issues.
+ Wikimedia Tool Labs has seen great success with the initial Kubernetes deployment. Old code is being simplified and eliminated, contributing developers don't have to change the way they write their tools and bots, and those tools and bots run in a more stable fashion than they have in the past. The paid staff and volunteers are able to better keep up with fixing issues.
- In the future, with a more complete migration to Kubernetes, Wikimedia Tool Labs expects to make it even easier to host and maintain the bots and tools that help run wikis across the world. The tool labs already host approximately 1,300 tools and bots from 800 volunteers, with many more being submitted every day. Twenty percent of the tool labs’ web tools that account for more than 60 percent of web traffic now run on Kubernetes. The tool labs has a 25-node cluster that keeps up with each new Kubernetes release. Many existing web tools are migrating to Kubernetes.
+ In the future, with a more complete migration to Kubernetes, Wikimedia Tool Labs expects to make it even easier to host and maintain the bots and tools that help run wikis across the world. The tool labs already host approximately 1,300 tools and bots from 800 volunteers, with many more being submitted every day. Twenty percent of the tool labs' web tools, which account for more than 60 percent of web traffic, now run on Kubernetes. The tool labs has a 25-node cluster that keeps up with each new Kubernetes release. Many existing web tools are migrating to Kubernetes.
- “Our goal is to make sure that people all over the world can share knowledge as easily as possible. Kubernetes helps with that, by making it easier for wikis everywhere to have the tools they need to thrive,” says Yuvi.
+ "Our goal is to make sure that people all over the world can share knowledge as easily as possible. Kubernetes helps with that, by making it easier for wikis everywhere to have the tools they need to thrive," says Yuvi.
diff --git a/community.html b/community.html
index 9ef63c1b66..a10a100375 100644
--- a/community.html
+++ b/community.html
@@ -24,8 +24,8 @@ title: Community
SIGs
Have a special interest in how Kubernetes works with another technology? See our ever growing
lists of SIGs,
- from AWS and Openstack to Big Data and Scalability, there’s a place for you to contribute and instructions
- for forming a new SIG if your special interest isn’t covered (yet).
+ from AWS and Openstack to Big Data and Scalability; there's a place for you to contribute, and instructions
+ for forming a new SIG if your special interest isn't covered (yet).
Events
diff --git a/docs/admin/admission-controllers.md b/docs/admin/admission-controllers.md
index 475f2e4be9..089dce2605 100644
--- a/docs/admin/admission-controllers.md
+++ b/docs/admin/admission-controllers.md
@@ -126,7 +126,7 @@ For additional HTTP configuration, refer to the [kubeconfig](/docs/user-guide/ku
When faced with an admission decision, the API Server POSTs a JSON serialized api.imagepolicy.v1alpha1.ImageReview object describing the action. This object contains fields describing the containers being admitted, as well as any pod annotations that match `*.image-policy.k8s.io/*`.
-Note that webhook API objects are subject to the same versioning compatibility rules as other Kubernetes API objects. Implementers should be aware of looser compatibility promises for alpha objects and check the “apiVersion” field of the request to ensure correct deserialization. Additionally, the API Server must enable the imagepolicy.k8s.io/v1alpha1 API extensions group (`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`).
+Note that webhook API objects are subject to the same versioning compatibility rules as other Kubernetes API objects. Implementers should be aware of looser compatibility promises for alpha objects and check the "apiVersion" field of the request to ensure correct deserialization. Additionally, the API Server must enable the imagepolicy.k8s.io/v1alpha1 API extensions group (`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`).
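
As a minimal sketch of the flags involved (the admission controller name and the placement of other flags here are assumptions; only the `--runtime-config` value comes from the text above):

```shell
# Hedged sketch: enable the image policy webhook admission controller and the
# alpha API group it relies on. "..." stands in for your other admission
# controllers; adjust for your deployment.
kube-apiserver \
  --admission-control=...,ImagePolicyWebhook \
  --runtime-config=imagepolicy.k8s.io/v1alpha1=true
```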
An example request body:
@@ -151,7 +151,7 @@ An example request body:
}
```
-The remote service is expected to fill the ImageReviewStatus field of the request and respond to either allow or disallow access. The response body’s “spec” field is ignored and may be omitted. A permissive response would return:
+The remote service is expected to fill the ImageReviewStatus field of the request and respond to either allow or disallow access. The response body's "spec" field is ignored and may be omitted. A permissive response would return:
```
{
diff --git a/docs/admin/networking.md b/docs/admin/networking.md
index 0b73e855bb..565005a991 100644
--- a/docs/admin/networking.md
+++ b/docs/admin/networking.md
@@ -173,7 +173,7 @@ Lars Kellogg-Stedman.
[Nuage](http://www.nuagenetworks.net) provides a highly scalable policy-based Software-Defined Networking (SDN) platform. Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards.
-The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Nuage’s policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications.The platform’s real-time analytics engine enables visibility and security monitoring for Kubernetes applications.
+The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Nuage's policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications. The platform's real-time analytics engine enables visibility and security monitoring for Kubernetes applications.
### OpenVSwitch
diff --git a/docs/admin/rescheduler.md b/docs/admin/rescheduler.md
index e1a2cca5de..11d15c10dd 100644
--- a/docs/admin/rescheduler.md
+++ b/docs/admin/rescheduler.md
@@ -30,7 +30,7 @@ given the pods that are already running in the cluster
the rescheduler tries to free up space for the add-on by evicting some pods; then the scheduler will schedule the add-on pod.
To avoid situation when another pod is scheduled into the space prepared for the critical add-on,
-the chosen node gets a temporary taint “CriticalAddonsOnly” before the eviction(s)
+the chosen node gets a temporary taint "CriticalAddonsOnly" before the eviction(s)
(see [more details](https://github.com/kubernetes/kubernetes/blob/master/docs/design/taint-toleration-dedicated.md)).
Each critical add-on has to tolerate it,
the other pods shouldn't tolerate the taint. The taint is removed once the add-on is successfully scheduled.
@@ -57,4 +57,3 @@ and have the following annotations specified:
* `scheduler.alpha.kubernetes.io/tolerations` set to `[{"key":"CriticalAddonsOnly", "operator":"Exists"}]`
The first one marks a pod as critical. The second one is required by the Rescheduler algorithm.
-
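
For illustration, a hedged sketch of applying the tolerations annotation above with `kubectl annotate` (the pod name and namespace are hypothetical):

```shell
# Sketch only: "my-critical-addon" is a made-up pod name; the annotation
# value is copied verbatim from the list above.
kubectl annotate pod my-critical-addon --namespace=kube-system \
  'scheduler.alpha.kubernetes.io/tolerations=[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
```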
diff --git a/docs/getting-started-guides/logging.md b/docs/getting-started-guides/logging.md
index ff874e119d..05c41cd3c6 100644
--- a/docs/getting-started-guides/logging.md
+++ b/docs/getting-started-guides/logging.md
@@ -79,7 +79,7 @@ root 479 0.0 0.0 4348 812 ? S 00:05 0:00 sleep 1
root 480 0.0 0.0 15572 2212 ? R 00:05 0:00 ps aux
```
-What happens if for any reason the image in this pod is killed off and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines for the started container? Or will we lose the log lines from the original container's execution and only see the log lines for the new container? Let’s find out. First let's delete the currently running counter.
+What happens if for any reason the image in this pod is killed off and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines for the started container? Or will we lose the log lines from the original container's execution and only see the log lines for the new container? Let's find out. First let's delete the currently running counter.
```shell
$ kubectl delete pod counter
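# (Hedged sketch) Recreate the counter and inspect its logs; the manifest
# file name below is an assumption based on earlier steps in this guide.
$ kubectl create -f counter-pod.yaml
$ kubectl logs counter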
diff --git a/docs/getting-started-guides/meanstack.md b/docs/getting-started-guides/meanstack.md
index 37df0513f7..e1e7bd7696 100644
--- a/docs/getting-started-guides/meanstack.md
+++ b/docs/getting-started-guides/meanstack.md
@@ -17,12 +17,12 @@ Thankfully, there is a system we can use to manage our containers in a cluster e
## The Basics of Using Kubernetes
-Before we jump in and start kube’ing it up, it’s important to understand some of the fundamentals of Kubernetes.
+Before we jump in and start kube'ing it up, it's important to understand some of the fundamentals of Kubernetes.
* Containers: These are the Docker, rkt, AppC, or whatever Container you are running. You can think of these like subatomic particles; everything is made up of them, but you rarely (if ever) interact with them directly.
-* Pods: Pods are the basic component of Kubernetes. They are a group of Containers that are scheduled, live, and die together. Why would you want to have a group of containers instead of just a single container? Let’s say you had a log processor, a web server, and a database. If you couldn't use Pods, you would have to bundle the log processor in the web server and database containers, and each time you updated one you would have to update the other. With Pods, you can just reuse the same log processor for both the web server and database.
+* Pods: Pods are the basic component of Kubernetes. They are a group of Containers that are scheduled, live, and die together. Why would you want to have a group of containers instead of just a single container? Let's say you had a log processor, a web server, and a database. If you couldn't use Pods, you would have to bundle the log processor in the web server and database containers, and each time you updated one you would have to update the other. With Pods, you can just reuse the same log processor for both the web server and database.
* Deployments: A Deployment provides declarative updates for Pods. You can define Deployments to create new Pods, or replace existing Pods. You only need to describe the desired state in a Deployment object, and the deployment controller will change the actual state to the desired state at a controlled rate for you.
-* Services: A service is the single point of contact for a group of Pods. For example, let’s say you have a Deployment that creates four copies of a web server pod. A Service will split the traffic to each of the four copies. Services are "permanent" while the pods behind them can come and go, so it’s a good idea to use Services.
+* Services: A service is the single point of contact for a group of Pods. For example, let's say you have a Deployment that creates four copies of a web server pod. A Service will split the traffic to each of the four copies. Services are "permanent" while the pods behind them can come and go, so it's a good idea to use Services (see the sketch after this list).
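
To make the Deployment and Service ideas concrete, here is a minimal sketch using `kubectl`; the names and the nginx image are stand-ins chosen for illustration, not part of this tutorial's app:

```shell
# Sketch: a Deployment of four web server replicas, fronted by one Service.
kubectl run web --image=nginx --replicas=4   # creates a Deployment named "web"
kubectl expose deployment web --port=80      # creates a Service routing to all four pods
```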
## Step 1: Creating the Container
@@ -37,7 +37,7 @@ To do this, you need to use more Docker. Make sure you have the latest version i
Getting the code:
-Before starting, let’s get some code to run. You can follow along on your personal machine or a Linux VM in the cloud. I recommend using Linux or a Linux VM; running Docker on Mac and Windows is outside the scope of this tutorial.
+Before starting, let's get some code to run. You can follow along on your personal machine or a Linux VM in the cloud. I recommend using Linux or a Linux VM; running Docker on Mac and Windows is outside the scope of this tutorial.
```shell
$ git clone https://github.com/ijason/NodeJS-Sample-App.git app
@@ -45,7 +45,7 @@ $ mv app/EmployeeDB/* app/
$ sed -i -- 's/localhost/mongo/g' ./app/app.js
```
-This is the same sample app we ran before. The second line just moves everything from the `EmployeeDB` subfolder up into the app folder so it’s easier to access. The third line, once again, replaces the hardcoded `localhost` with the `mongo` proxy.
+This is the same sample app we ran before. The second line just moves everything from the `EmployeeDB` subfolder up into the app folder so it's easier to access. The third line, once again, replaces the hardcoded `localhost` with the `mongo` proxy.
Building the Docker image:
@@ -83,7 +83,7 @@ $ ls
Dockerfile app
```
-Let’s build.
+Let's build.
```shell
$ docker build -t myapp .
@@ -139,7 +139,7 @@ After some time, it will finish. You can check the console to see the container
## **Step 4: Creating the Cluster**
-So now you have the custom container, let’s create a cluster to run it.
+So now you have the custom container, let's create a cluster to run it.
Currently, a cluster can range from as small as one machine to as big as 100 machines. You can pick any machine type you want, so you can have a cluster of a single `f1-micro` instance, 100 `n1-standard-32` instances (3,200 cores!), and anything in between.
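
As a hedged sketch (the node count, machine type, and zone below are illustrative choices, not requirements):

```shell
# Create a three-node cluster; any machine type between f1-micro and
# n1-standard-32 would work just as well.
gcloud container clusters create mean-cluster \
  --num-nodes=3 \
  --machine-type=n1-standard-1 \
  --zone=us-central1-a
```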
@@ -193,7 +193,7 @@ $ gcloud compute disks create \
Pick the same zone as your cluster and an appropriate disk size for your application.
-Now, we need to create a Deployment that will run the database. I’m using a Deployment and not a Pod, because if a standalone Pod dies, it won't restart automatically.
+Now, we need to create a Deployment that will run the database. I'm using a Deployment and not a Pod, because if a standalone Pod dies, it won't restart automatically.
### `db-deployment.yml`
@@ -231,7 +231,7 @@ We call the deployment `mongo-deployment`, specify one replica, and open the app
The `volumes` section creates the volume for Kubernetes to use. There is a Google Container Engine-specific `gcePersistentDisk` section that maps the disk we made into a Kubernetes volume, and we mount the volume into the `/data/db` directory (as described in the MongoDB Docker documentation)
-Now we have the Deployment, let’s create the Service:
+Now we have the Deployment, let's create the Service:
### `db-service.yml`
@@ -267,7 +267,7 @@ db-service.yml
## Step 6: Running the Database
-First, let’s "log in" to the cluster
+First, let's "log in" to the cluster
```shell
$ gcloud container clusters get-credentials mean-cluster
@@ -305,14 +305,14 @@ mongo-deployment-xxxx 1/1 Running 0 3m
## Step 7: Creating the Web Server
-Now the database is running, let’s start the web server.
+Now the database is running, let's start the web server.
We need two things:
1. Deployment to spin up and down web server pods
2. Service to expose our website to the interwebs
-Let’s look at the Deployment configuration:
+Let's look at the Deployment configuration:
### `web-deployment.yml`
@@ -371,7 +371,7 @@ At this point, the local directory looks like this
```shell
$ ls
-Dockerfile
+Dockerfile
app
db-deployment.yml
db-service.yml
diff --git a/docs/getting-started-guides/windows/index.md b/docs/getting-started-guides/windows/index.md
index 511d125dcd..6349bc52ce 100644
--- a/docs/getting-started-guides/windows/index.md
+++ b/docs/getting-started-guides/windows/index.md
@@ -15,18 +15,18 @@ In Kubernetes version 1.5, Windows Server Containers for Kubernetes is supported
4. Docker Version 1.12.2-cs2-ws-beta or later
## Networking
-Network is achieved using L3 routing. Because third-party networking plugins (e.g. flannel, calico, etc) don’t natively work on Windows Server, existing technology that is built into the Windows and Linux operating systems is relied on. In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node will be connected to the /24 subnet. This allows pods on the same node to communicate with each other. In order to enable networking between pods running on different nodes, routing features that are built into Windows Server 2016 and Linux are used.
+Networking is achieved using L3 routing. Because third-party networking plugins (e.g. flannel, calico) don't natively work on Windows Server, existing technology that is built into the Windows and Linux operating systems is relied on. In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node will be connected to the /24 subnet. This allows pods on the same node to communicate with each other. In order to enable networking between pods running on different nodes, routing features that are built into Windows Server 2016 and Linux are used.
### Linux
-The above networking approach is already supported on Linux using a bridge interface, which essentially creates a private network local to the node. Similar to the Windows side, routes to all other pod CIDRs must be created in order to send packets via the “public” NIC.
+The above networking approach is already supported on Linux using a bridge interface, which essentially creates a private network local to the node. Similar to the Windows side, routes to all other pod CIDRs must be created in order to send packets via the "public" NIC.
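
For example, a hedged sketch of such a route on a Linux node (the pod CIDR and peer node IP below are made-up values for illustration):

```shell
# Route another node's /24 pod subnet via that node's "public" NIC address.
ip route add 192.168.2.0/24 via 10.0.0.3
```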
### Windows
Each Windows Server node should have the following configuration:
1. Two NICs (virtual networking adapters) are required on each Windows Server node - The two Windows container networking modes of interest (transparent and L2 bridge) use an external Hyper-V virtual switch. This means that one of the NICs is entirely allocated to the bridge, creating the need for the second NIC.
2. Transparent container network created - This is a manual configuration step and is shown in **_Route Setup_** section below
-3. RRAS (Routing) Windows feature enabled - Allows routing between NICs on the box, and also “captures” packets that have the destination IP of a POD running on the node. To enable, open “Server Manager”. Click on “Roles”, “Add Roles”. Click “Next”. Select “Network Policy and Access Services”. Click on “Routing and Remote Access Service” and the underlying checkboxes
-4. Routes defined pointing to the other pod CIDRs via the “public” NIC - These routes are added to the built-in routing table as shown in **_Route Setup_** section below
+3. RRAS (Routing) Windows feature enabled - Allows routing between NICs on the box, and also "captures" packets that have the destination IP of a POD running on the node. To enable, open "Server Manager". Click on "Roles", "Add Roles". Click "Next". Select "Network Policy and Access Services". Click on "Routing and Remote Access Service" and the underlying checkboxes
+4. Routes defined pointing to the other pod CIDRs via the "public" NIC - These routes are added to the built-in routing table as shown in **_Route Setup_** section below
The following diagram illustrates the Windows Server networking setup for Kubernetes
![Windows Setup](windows-setup.png)
@@ -38,12 +38,12 @@ To run Windows Server Containers on Kubernetes, you'll need to set up both your
1. Windows Server container host running Windows Server 2016 and Docker v1.12. Follow the setup instructions outlined by this blog post: https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/quick_start_windows_server
2. DNS support for Windows recently got merged to docker master and is currently not supported in a stable docker release. To use DNS, build Docker from master or download the binary from [Docker master](https://master.dockerproject.org/)
-3. Pull the `apprenda/pause` image from `https://hub.docker.com/r/apprenda/pause`
+3. Pull the `apprenda/pause` image from `https://hub.docker.com/r/apprenda/pause` (a pull command is sketched after this list)
4. RRAS (Routing) Windows feature enabled
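
For step 3, a minimal sketch of the pull, using the image named above:

```shell
# Pull the pause image referenced in step 3.
docker pull apprenda/pause
```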
**Linux Host Setup**
-1. Linux hosts should be setup according to their respective distro documentation and the requirements of the Kubernetes version you will be using.
+1. Linux hosts should be set up according to their respective distro documentation and the requirements of the Kubernetes version you will be using.
2. CNI network plugin installed.
### Component Setup
@@ -110,7 +110,7 @@ route add 192.168.1.0 mask 255.255.255.0 192.168.1.1 if
A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, Mac OS and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete. For this bootcamp, however, you'll use a provided online terminal with Minikube pre-installed.
- Now that you know what Kubernetes is, let’s go to the online tutorial and start our first cluster!
+ Now that you know what Kubernetes is, let's go to the online tutorial and start our first cluster!
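
As a quick sketch, the bootstrapping operations named above correspond to commands like:

```shell
minikube start    # create and start a local single-node cluster
minikube status   # check the state of the cluster
minikube stop     # stop the cluster VM, preserving state
minikube delete   # tear the cluster down entirely
```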