Fix relative links issue in English content (#13307)

* `http://kubernetes.io/docs/` -> `/docs/` in content/en folder

* `https://kubernetes.io/docs/` -> `/docs/` in content/en folder
pull/13326/head
chenrui 2019-03-20 19:05:05 -04:00 committed by Kubernetes Prow Robot
parent 89b2eb7f64
commit 5a5f77db64
166 changed files with 522 additions and 570 deletions

@ -5,7 +5,7 @@ slug: resource-usage-monitoring-kubernetes
url: /blog/2015/05/Resource-Usage-Monitoring-Kubernetes
---
Understanding how an application behaves when deployed is crucial to scaling the application and providing a reliable service. In a Kubernetes cluster, application performance can be examined at many different levels: containers, [pods](http://kubernetes.io/docs/user-guide/pods), [services](http://kubernetes.io/docs/user-guide/services), and whole clusters. As part of Kubernetes we want to provide users with detailed resource usage information about their running applications at all these levels. This will give users deep insights into how their applications are performing and where possible application bottlenecks may be found. In comes [Heapster](https://github.com/kubernetes/heapster), a project meant to provide a base monitoring platform on Kubernetes.
Understanding how an application behaves when deployed is crucial to scaling the application and providing a reliable service. In a Kubernetes cluster, application performance can be examined at many different levels: containers, [pods](/docs/user-guide/pods), [services](/docs/user-guide/services), and whole clusters. As part of Kubernetes we want to provide users with detailed resource usage information about their running applications at all these levels. This will give users deep insights into how their applications are performing and where possible application bottlenecks may be found. In comes [Heapster](https://github.com/kubernetes/heapster), a project meant to provide a base monitoring platform on Kubernetes.
**Overview**

@ -26,7 +26,7 @@ We say Kubernetes scales to a certain number of nodes only if both of these SLOs
### API responsiveness for user-level abstractions[2](https://www.blogger.com/blogger.g?blogID=112706738355446097#2) 
Kubernetes offers high-level abstractions for users to represent their applications. For example, the ReplicationController is an abstraction representing a collection of [pods](http://kubernetes.io/docs/user-guide/pods/). Listing all ReplicationControllers or listing all pods from a given ReplicationController is a very common use case. On the other hand, there is little reason someone would want to list all pods in the system; for example, 30,000 pods (1000 nodes with 30 pods per node) represent ~150MB of data (~5kB/pod \* 30k pods). So this test uses ReplicationControllers.
Kubernetes offers high-level abstractions for users to represent their applications. For example, the ReplicationController is an abstraction representing a collection of [pods](/docs/user-guide/pods/). Listing all ReplicationControllers or listing all pods from a given ReplicationController is a very common use case. On the other hand, there is little reason someone would want to list all pods in the system; for example, 30,000 pods (1000 nodes with 30 pods per node) represent ~150MB of data (~5kB/pod \* 30k pods). So this test uses ReplicationControllers.
For this test (assuming N to be number of nodes in the cluster), we:

@ -14,7 +14,7 @@ We'll take a look at some examples of this below, but first...
###
A quick intro to Kubernetes metadata 
Kubernetes metadata is abundant in the form of [_labels_](http://kubernetes.io/docs/user-guide/labels/) and [_annotations_](http://kubernetes.io/docs/user-guide/annotations/). Labels are designed to be identifying metadata for your infrastructure, whereas annotations are designed to be non-identifying. For both, they're simply generic key:value pairs that look like this:
Kubernetes metadata is abundant in the form of [_labels_](/docs/user-guide/labels/) and [_annotations_](/docs/user-guide/annotations/). Labels are designed to be identifying metadata for your infrastructure, whereas annotations are designed to be non-identifying. For both, they're simply generic key:value pairs that look like this:
```
"labels": {

@ -11,7 +11,7 @@ In Kubernetes, Services and Pods have IPs only routable by the cluster network,
### Ingress controllers
Today, with containers or VMs, configuring a web server or load balancer is harder than it should be. Most web server configuration files are very similar. There are some applications that have weird little quirks that tend to throw a wrench in things, but for the most part, you can apply the same logic to them and achieve a desired result. In Kubernetes 1.2, the Ingress resource embodies this idea, and an Ingress controller is meant to handle all the quirks associated with a specific "class" of Ingress (be it a single instance of a load balancer, or a more complicated setup of frontends that provide GSLB, CDN, DDoS protection etc). An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the ApiServer's /ingresses endpoint for updates to the [Ingress resource](http://kubernetes.io/docs/user-guide/ingress/). Its job is to satisfy requests for ingress.
Today, with containers or VMs, configuring a web server or load balancer is harder than it should be. Most web server configuration files are very similar. There are some applications that have weird little quirks that tend to throw a wrench in things, but for the most part, you can apply the same logic to them and achieve a desired result. In Kubernetes 1.2, the Ingress resource embodies this idea, and an Ingress controller is meant to handle all the quirks associated with a specific "class" of Ingress (be it a single instance of a load balancer, or a more complicated setup of frontends that provide GSLB, CDN, DDoS protection etc). An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the ApiServer's /ingresses endpoint for updates to the [Ingress resource](/docs/user-guide/ingress/). Its job is to satisfy requests for ingress.
Your Kubernetes cluster must have exactly one Ingress controller that supports TLS for the following example to work. If you're on a cloud-provider, first check the “kube-system” namespace for an Ingress controller RC. If there isn't one, you can deploy the [nginx controller](https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx), or [write your own](https://github.com/kubernetes/contrib/tree/master/ingress/controllers#writing-an-ingress-controller) in \< 100 lines of code.
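To make this concrete, here is a minimal sketch of an Ingress that requests TLS termination. This is an illustration only, not the manifest from the post: the Secret and Service names are placeholders, and the API group shown (extensions/v1beta1) is the one used in this Kubernetes 1.2-era content.

```yaml
# Hypothetical sketch of a 1.2-era Ingress with TLS; the Secret and Service
# names below are placeholders, not values from the original post.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-tls-ingress
spec:
  tls:
  - secretName: tls-secret        # must contain tls.crt and tls.key
  backend:
    serviceName: example-backend  # default backend Service
    servicePort: 80
```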
@ -102,7 +102,7 @@ $ curl 130.X.X.X -Lk
CLIENT VALUES:client\_address=10.48.0.1command=GETreal path=/
```
### Future work
You can read more about the [Ingress API](http://kubernetes.io/docs/user-guide/ingress/) or controllers by following the links. The Ingress is still in beta, and we would love your input to grow it. You can contribute by writing controllers or evolving the API. All things related to the meaning of the word “[ingress](https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=ingress%20meaning)” are in scope; this includes DNS, different TLS modes, SNI, load balancing at layer 4, content caching, more algorithms, better health checks; the list goes on.
You can read more about the [Ingress API](/docs/user-guide/ingress/) or controllers by following the links. The Ingress is still in beta, and we would love your input to grow it. You can contribute by writing controllers or evolving the API. All things related to the meaning of the word “[ingress](https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=ingress%20meaning)” are in scope; this includes DNS, different TLS modes, SNI, load balancing at layer 4, content caching, more algorithms, better health checks; the list goes on.
There are many ways to participate. If you're particularly interested in Kubernetes and networking, you'll be interested in:

@ -60,7 +60,7 @@ All of our work is done in the open, to learn the latest about the project j[oin
- Scheduled job&nbsp;
- Public dashboard that allows for nightly test runs across multiple cloud providers&nbsp;
- Lots, lots more!&nbsp;
Kubernetes 1.2 is available for download at [get.k8s.io](http://get.k8s.io/) and via the open source repository hosted on [GitHub](https://github.com/kubernetes/kubernetes). To get started with Kubernetes try our new [Hello World app](http://kubernetes.io/docs/hellonode/).&nbsp;
Kubernetes 1.2 is available for download at [get.k8s.io](http://get.k8s.io/) and via the open source repository hosted on [GitHub](https://github.com/kubernetes/kubernetes). To get started with Kubernetes try our new [Hello World app](/docs/hellonode/).&nbsp;

@ -27,7 +27,7 @@ Inference can be very resource intensive. Our server executes the following Tens
| [![](https://2.bp.blogspot.com/-Gcb6gxzqDkE/VvHJHE7yD3I/AAAAAAAAA4Y/4EZD83OV_8goqodV2pcaQKYeinokf9UuA/s640/tensorflowserving-3.png)](https://2.bp.blogspot.com/-Gcb6gxzqDkE/VvHJHE7yD3I/AAAAAAAAA4Y/4EZD83OV_8goqodV2pcaQKYeinokf9UuA/s1600/tensorflowserving-3.png) |
| Schematic diagram of Inception-v3 |
Fortunately, this is where Kubernetes can help us. Kubernetes distributes inference request processing across a cluster using its [External Load Balancer](http://kubernetes.io/docs/user-guide/load-balancer/). Each [pod](http://kubernetes.io/docs/user-guide/pods/) in the cluster contains a [TensorFlow Serving Docker image](https://tensorflow.github.io/serving/docker) with the TensorFlow Serving-based gRPC server and a trained Inception-v3 model. The model is represented as a [set of files](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/session_bundle/README.md) describing the shape of the TensorFlow graph, model weights, assets, and so on. Since everything is neatly packaged together, we can dynamically scale the number of replicated pods using the [Kubernetes Replication Controller](http://kubernetes.io/docs/user-guide/replication-controller/operations/) to keep up with the service demands.
Fortunately, this is where Kubernetes can help us. Kubernetes distributes inference request processing across a cluster using its [External Load Balancer](/docs/user-guide/load-balancer/). Each [pod](/docs/user-guide/pods/) in the cluster contains a [TensorFlow Serving Docker image](https://tensorflow.github.io/serving/docker) with the TensorFlow Serving-based gRPC server and a trained Inception-v3 model. The model is represented as a [set of files](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/session_bundle/README.md) describing the shape of the TensorFlow graph, model weights, assets, and so on. Since everything is neatly packaged together, we can dynamically scale the number of replicated pods using the [Kubernetes Replication Controller](/docs/user-guide/replication-controller/operations/) to keep up with the service demands.
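As a hedged illustration of the setup described here (an external load balancer in front of replicated serving pods), a Service of type LoadBalancer might look like the sketch below; the name, label, and port are placeholders, not values from the post.

```yaml
# Illustrative Service exposing replicated gRPC serving pods through an
# external load balancer; the name, selector label, and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: inception-service
spec:
  type: LoadBalancer
  selector:
    app: inception-serving       # label carried by the replicated serving pods
  ports:
  - port: 9000                   # gRPC port assumed for the serving container
    targetPort: 9000
```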
To help you try this out yourself, we've written a [step-by-step tutorial](https://tensorflow.github.io/serving/serving_inception), which shows you how to create the TensorFlow Serving Docker container to serve the Inception-v3 image classification model, configure a Kubernetes cluster and run classification requests against it. We hope this will make it easier for you to integrate machine learning into your own applications and scale it with Kubernetes! To learn more about TensorFlow Serving, check out [tensorflow.github.io/serving](http://tensorflow.github.io/serving).&nbsp;

@ -110,7 +110,7 @@ Computing the model and saving it is much slower than computing the model and th
{{< /note >}}
### Using Horizontal Pod Autoscaling with Spark (Optional)&nbsp;
Spark is somewhat elastic to workers coming and going, which means we have an opportunity: we can use [Kubernetes Horizontal Pod Autoscaling](http://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/) to scale out the Spark worker pool automatically, setting a target CPU threshold for the workers and a minimum/maximum pool size. This obviates the need to configure the number of worker replicas manually.
Spark is somewhat elastic to workers coming and going, which means we have an opportunity: we can use [Kubernetes Horizontal Pod Autoscaling](/docs/user-guide/horizontal-pod-autoscaling/) to scale out the Spark worker pool automatically, setting a target CPU threshold for the workers and a minimum/maximum pool size. This obviates the need to configure the number of worker replicas manually.
Create the Autoscaler like this (note: if you didn't change the machine type for the cluster, you probably want to limit the --max to something smaller):&nbsp;
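The command from the original post is truncated in this diff. Purely as a hedged stand-in, an equivalent HorizontalPodAutoscaler manifest could look like the sketch below; the target name spark-worker-controller and the thresholds are assumptions, not values from the post.

```yaml
# Hedged sketch only -- not the command from the original post. Targets a
# hypothetical "spark-worker-controller" ReplicationController.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: spark-worker-autoscaler
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: spark-worker-controller
  minReplicas: 1
  maxReplicas: 10                      # limit this to fit your cluster
  targetCPUUtilizationPercentage: 50   # illustrative CPU threshold
```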

@ -184,7 +184,7 @@ spec:
path: cfg/game.properties
restartPolicy: Never
```
In the above example, the Deployment uses keys of the ConfigMap via two of the different mechanisms available. The property-like keys of the ConfigMap are used as environment variables to the single container in the Deployment template, and the file-like keys populate a volume. For more details, please see the [ConfigMap docs](http://kubernetes.io/docs/user-guide/configmap/).
In the above example, the Deployment uses keys of the ConfigMap via two of the different mechanisms available. The property-like keys of the ConfigMap are used as environment variables to the single container in the Deployment template, and the file-like keys populate a volume. For more details, please see the [ConfigMap docs](/docs/user-guide/configmap/).
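As a hedged illustration of the two consumption mechanisms just described (this is not the manifest from the post; the ConfigMap name, keys, and image are placeholders):

```yaml
# Illustrative pod consuming a ConfigMap both as an env var and as a volume;
# all names and keys below are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo $LOG_LEVEL && cat /etc/config/game.properties && sleep 3600"]
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: game-config        # property-like key exposed as an env var
          key: log.level
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: game-config            # file-like keys populated as files in the volume
  restartPolicy: Never
```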
We hope that these basic primitives are easy to use and look forward to seeing what people build with ConfigMaps. Thanks to the community members that provided feedback about this feature. Special thanks also to Tamer Tas who made a great contribution to the proposal and implementation of ConfigMap.

@ -24,8 +24,8 @@ Without further ado, let's start playing around with Deployments!
### Getting started
If you want to try this example, basically you'll need 3 things:
1. **A running Kubernetes cluster** : If you don't already have one, check the [Getting Started guides](http://kubernetes.io/docs/getting-started-guides/) for a list of solutions on a range of platforms, from your laptop, to VMs on a cloud provider, to a rack of bare metal servers.
2. **Kubectl, the Kubernetes CLI** : If you see a URL response after running kubectl cluster-info, you're ready to go. Otherwise, follow the [instructions](http://kubernetes.io/docs/user-guide/prereqs/) to install and configure kubectl; or the [instructions for hosted solutions](https://cloud.google.com/container-engine/docs/before-you-begin) if you have a Google Container Engine cluster.
1. **A running Kubernetes cluster** : If you don't already have one, check the [Getting Started guides](/docs/getting-started-guides/) for a list of solutions on a range of platforms, from your laptop, to VMs on a cloud provider, to a rack of bare metal servers.
2. **Kubectl, the Kubernetes CLI** : If you see a URL response after running kubectl cluster-info, you're ready to go. Otherwise, follow the [instructions](/docs/user-guide/prereqs/) to install and configure kubectl; or the [instructions for hosted solutions](https://cloud.google.com/container-engine/docs/before-you-begin) if you have a Google Container Engine cluster.
3. The [configuration files for this demo](https://github.com/kubernetes/kubernetes.github.io/tree/master/docs/user-guide/update-demo).
If you choose not to run this example yourself, that's okay. Just watch this [video](https://youtu.be/eigalYy0v4w) to see what's going on in each step.
@ -99,7 +99,7 @@ deployment "update-demo" rolled back
Everything's back to normal, phew!
To learn more about rollback, visit [rolling back a Deployment](http://kubernetes.io/docs/user-guide/deployments/#rolling-back-a-deployment).
To learn more about rollback, visit [rolling back a Deployment](/docs/user-guide/deployments/#rolling-back-a-deployment).
### Updating your application (for real)
After a while, we finally figure out that the right image tag is “kitten”, instead of “kitty”. Now change .spec.template.spec.containers[0].image tag from “nautilus” to “kitten”.
@ -119,7 +119,7 @@ $ kubectl describe deployment/update-demo
[![](https://1.bp.blogspot.com/-3U1OTNqdz1s/Vv7Kfw4uGYI/AAAAAAAAChU/CgF6Mv5J6b8_lANXkpEIFytRGo9x0Bn_A/s640/deployment-API-6.png)](https://1.bp.blogspot.com/-3U1OTNqdz1s/Vv7Kfw4uGYI/AAAAAAAAChU/CgF6Mv5J6b8_lANXkpEIFytRGo9x0Bn_A/s1600/deployment-API-6.png)
From the events section, you'll find that the Deployment is managing another resource called [Replica Set](http://kubernetes.io/docs/user-guide/replicasets/), each of which controls the number of replicas of a different pod template. The Deployment enables progressive rollout by scaling up and down Replica Sets of new and old pod templates.
From the events section, you'll find that the Deployment is managing another resource called [Replica Set](/docs/user-guide/replicasets/), each of which controls the number of replicas of a different pod template. The Deployment enables progressive rollout by scaling up and down Replica Sets of new and old pod templates.
### Conclusion
Now, you've learned the basic use of Deployment objects:
@ -127,7 +127,7 @@ Now, you've learned the basic use of Deployment objects:
1. Deploying an app with a Deployment, using kubectl run
2. Updating the app by updating the Deployment with kubectl edit
3. Rolling back to a previously deployed app with kubectl rollout undo
But there's so much more in Deployment that this article didn't cover! To discover more, continue reading [Deployments introduction](http://kubernetes.io/docs/user-guide/deployments/).
But there's so much more in Deployment that this article didn't cover! To discover more, continue reading [Deployments introduction](/docs/user-guide/deployments/).
**_Note:_** _In Kubernetes 1.2, Deployment (beta release) is now feature-complete and enabled by default. For those of you who have tried Deployment in Kubernetes 1.1, please **delete all Deployment 1.1 resources** (including the Replication Controllers and Pods they manage) before trying out Deployments in 1.2. This is necessary because we made some non-backward-compatible changes to the API._

@ -105,7 +105,7 @@ spec:
```
If a Namespace does not have a Network spec, it will use the default Kubernetes network model instead, including the default kube-proxy. So if a user creates a Pod in a Namespace with an associated Network, Hypernetes will follow the [Kubernetes Network Plugin Model](http://kubernetes.io/docs/admin/network-plugins/) to set up a Neutron network for this Pod. Here is a high level example:
If a Namespace does not have a Network spec, it will use the default Kubernetes network model instead, including the default kube-proxy. So if a user creates a Pod in a Namespace with an associated Network, Hypernetes will follow the [Kubernetes Network Plugin Model](/docs/admin/network-plugins/) to set up a Neutron network for this Pod. Here is a high level example:

@ -17,11 +17,11 @@ In this blog post, we describe the journey we took to implement deployment scrip
**BACKGROUND**
While Kubernetes is designed to operate on any IaaS, and [solution guides](http://kubernetes.io/docs/getting-started-guides/#table-of-solutions) exist for many platforms including [Google Compute Engine](http://kubernetes.io/docs/getting-started-guides/gce/), [AWS](http://kubernetes.io/docs/getting-started-guides/aws/), [Azure](http://kubernetes.io/docs/getting-started-guides/coreos/azure/), and [Rackspace](http://kubernetes.io/docs/getting-started-guides/rackspace/), the Kubernetes project refers to these as “versioned distros,” as they are only tested against a particular binary release of Kubernetes. On the other hand, “development distros” are used daily by automated, e2e tests for the latest Kubernetes source code, and serve as gating checks to code submission.
While Kubernetes is designed to operate on any IaaS, and [solution guides](/docs/getting-started-guides/#table-of-solutions) exist for many platforms including [Google Compute Engine](/docs/getting-started-guides/gce/), [AWS](/docs/getting-started-guides/aws/), [Azure](/docs/getting-started-guides/coreos/azure/), and [Rackspace](/docs/getting-started-guides/rackspace/), the Kubernetes project refers to these as “versioned distros,” as they are only tested against a particular binary release of Kubernetes. On the other hand, “development distros” are used daily by automated, e2e tests for the latest Kubernetes source code, and serve as gating checks to code submission.
When we first surveyed existing support for Kubernetes on Azure, we found documentation for running Kubernetes on Azure using CoreOS and Weave. The documentation includes [scripts for deployment](http://kubernetes.io/docs/getting-started-guides/coreos/azure/), but the scripts do not conform to the cluster/kube-up.sh framework for automated cluster creation required by a “development distro.” Further, there did not exist a continuous integration job that utilized the scripts to validate Kubernetes using the end-to-end test scenarios (those found in test/e2e in the Kubernetes repository).
When we first surveyed existing support for Kubernetes on Azure, we found documentation for running Kubernetes on Azure using CoreOS and Weave. The documentation includes [scripts for deployment](/docs/getting-started-guides/coreos/azure/), but the scripts do not conform to the cluster/kube-up.sh framework for automated cluster creation required by a “development distro.” Further, there did not exist a continuous integration job that utilized the scripts to validate Kubernetes using the end-to-end test scenarios (those found in test/e2e in the Kubernetes repository).

@ -9,13 +9,13 @@ Kubernetes automates deployment, operations, and scaling of applications, but ou
Our work on the latter is just beginning, but you can already see it manifested in a few features of Kubernetes. For example:
- The “[graceful termination](http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_podspec)” mechanism provides a callback into the container a configurable amount of time before it is killed (due to a rolling update, node drain for maintenance, etc.). This allows the application to cleanly shut down, e.g. persist in-memory state and cleanly conclude open connections.
- [Liveness and readiness probes](http://kubernetes.io/docs/user-guide/production-pods/#liveness-and-readiness-probes-aka-health-checks) check a configurable application HTTP endpoint (other probe types are supported as well) to determine if the container is alive and/or ready to receive traffic. The response determines whether Kubernetes will restart the container, include it in the load-balancing pool for its Service, etc.
- [ConfigMap](http://kubernetes.io/docs/user-guide/configmap/) allows applications to read their configuration from a Kubernetes resource rather than using command-line flags.
- The “[graceful termination](/docs/api-reference/v1/definitions/#_v1_podspec)” mechanism provides a callback into the container a configurable amount of time before it is killed (due to a rolling update, node drain for maintenance, etc.). This allows the application to cleanly shut down, e.g. persist in-memory state and cleanly conclude open connections.
- [Liveness and readiness probes](/docs/user-guide/production-pods/#liveness-and-readiness-probes-aka-health-checks) check a configurable application HTTP endpoint (other probe types are supported as well) to determine if the container is alive and/or ready to receive traffic. The response determines whether Kubernetes will restart the container, include it in the load-balancing pool for its Service, etc.
- [ConfigMap](/docs/user-guide/configmap/) allows applications to read their configuration from a Kubernetes resource rather than using command-line flags.
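A minimal sketch of the probe mechanism from the list above, assuming an application that serves /healthz and /ready on port 8080; all paths, ports, and timings here are illustrative, not taken from the post.

```yaml
# Illustrative liveness/readiness probe configuration; the image, paths,
# port, and timings are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: probes-demo
spec:
  containers:
  - name: app
    image: example/app:latest      # placeholder image
    ports:
    - containerPort: 8080
    livenessProbe:                 # Kubernetes restarts the container if this fails
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10
    readinessProbe:                # the pod is removed from Service endpoints if this fails
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```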
More generally, we see Kubernetes enabling a new generation of design patterns, similar to [object oriented design patterns](https://en.wikipedia.org/wiki/Object-oriented_programming#Design_patterns), but this time for containerized applications. That design patterns would emerge from containerized architectures is not surprising -- containers provide many of the same benefits as software objects, in terms of modularity/packaging, abstraction, and reuse. Even better, because containers generally interact with each other via HTTP and widely available data formats like JSON, the benefits can be provided in a language-independent way.
This week Kubernetes co-founder Brendan Burns is presenting a [**paper**](https://www.usenix.org/conference/hotcloud16/workshop-program/presentation/burns) outlining our thoughts on this topic at the [8th Usenix Workshop on Hot Topics in Cloud Computing](https://www.usenix.org/conference/hotcloud16) (HotCloud 16), a venue where academic researchers and industry practitioners come together to discuss ideas at the forefront of research in private and public cloud technology. The paper describes three classes of patterns: management patterns (such as those described above), patterns involving multiple cooperating containers running on the same node, and patterns involving containers running across multiple nodes. We don't want to spoil the fun of reading the paper, but we will say that you'll see that the [Pod](http://kubernetes.io/docs/user-guide/pods/) abstraction is a key enabler for the last two types of patterns.
This week Kubernetes co-founder Brendan Burns is presenting a [**paper**](https://www.usenix.org/conference/hotcloud16/workshop-program/presentation/burns) outlining our thoughts on this topic at the [8th Usenix Workshop on Hot Topics in Cloud Computing](https://www.usenix.org/conference/hotcloud16) (HotCloud 16), a venue where academic researchers and industry practitioners come together to discuss ideas at the forefront of research in private and public cloud technology. The paper describes three classes of patterns: management patterns (such as those described above), patterns involving multiple cooperating containers running on the same node, and patterns involving containers running across multiple nodes. We don't want to spoil the fun of reading the paper, but we will say that you'll see that the [Pod](/docs/user-guide/pods/) abstraction is a key enabler for the last two types of patterns.
As the Kubernetes project continues to bring our decade of experience with [Borg](https://queue.acm.org/detail.cfm?id=2898444) to the open source community, we aim not only to make application deployment and operations at scale simple and reliable, but also to make it easy to create “cloud-native” applications in the first place. Our work on documenting our ideas around design patterns for container-based services, and Kubernetes's enabling of such patterns, is a first step in this direction. We look forward to working with the academic and practitioner communities to identify and codify additional patterns, with the aim of helping containers fulfill the promise of bringing increased simplicity and reliability to the entire software lifecycle, from development, to deployment, to operations.

@ -13,7 +13,7 @@ _Editor's note: this post is part of a [series of in-depth articles](https://k
Thanks to a large number of contributions from the community and project members, we were able to deliver many new features for [Kubernetes 1.3 release](https://kubernetes.io/blog/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads). We have been carefully listening to all the great feedback we have received from our users (see the [summary infographics](http://static.lwy.io/img/kubernetes_dashboard_infographic.png)) and addressed the highest priority requests and pain points.
The Dashboard UI now handles all workload resources. This means that no matter what workload type you run, it is visible in the web interface and you can do operational changes on it. For example, you can modify your stateful MySQL installation with [Pet Sets](http://kubernetes.io/docs/user-guide/petset/), do a rolling update of your web server with Deployments or install cluster monitoring with DaemonSets.&nbsp;
The Dashboard UI now handles all workload resources. This means that no matter what workload type you run, it is visible in the web interface and you can do operational changes on it. For example, you can modify your stateful MySQL installation with [Pet Sets](/docs/user-guide/petset/), do a rolling update of your web server with Deployments or install cluster monitoring with DaemonSets.&nbsp;

@ -39,7 +39,7 @@ We could not have achieved this milestone without the tireless effort of countle
**Availability**
Kubernetes 1.3 is available for download at [get.k8s.io](http://get.k8s.io/)&nbsp;and via the open source repository hosted on [GitHub](http://github.com/kubernetes/kubernetes). To get started with Kubernetes try our [Hello World app](http://kubernetes.io/docs/hellonode/).
Kubernetes 1.3 is available for download at [get.k8s.io](http://get.k8s.io/)&nbsp;and via the open source repository hosted on [GitHub](http://github.com/kubernetes/kubernetes). To get started with Kubernetes try our [Hello World app](/docs/hellonode/).

@ -168,7 +168,7 @@ When we launched Kubernetes support in Rancher we decided to maintain our own di
- Rancher as a CredentialProvider (to support Rancher private registries).
- Rancher Ingress controller to back up Kubernetes ingress resource.
So we've decided to eliminate the need for the Rancher Kubernetes distribution, and try to upstream all our changes to the Kubernetes repo. To do that, we will be reworking our networking integration, and support Rancher networking as a [CNI plugin for Kubernetes](http://kubernetes.io/docs/admin/network-plugins/#cni). More details on that will be shared as soon as the feature design is finalized, but expect it to come in the next 2-3 months. We will also continue investing in Rancher's core capabilities integrated with Kubernetes, including, but not limited to:
So we've decided to eliminate the need for the Rancher Kubernetes distribution, and try to upstream all our changes to the Kubernetes repo. To do that, we will be reworking our networking integration, and support Rancher networking as a [CNI plugin for Kubernetes](/docs/admin/network-plugins/#cni). More details on that will be shared as soon as the feature design is finalized, but expect it to come in the next 2-3 months. We will also continue investing in Rancher's core capabilities integrated with Kubernetes, including, but not limited to:
- Access rights management via Rancher environment that represents Kubernetes cluster
- Credential management and easy web-based access to standard kubectl cli

@ -71,10 +71,10 @@ This dual interface to the container environment is an area of very active devel
So what can you do with rktnetes today? Currently, rktnetes passes all of [the applicable Kubernetes “end-to-end” (aka “e2e”) tests](http://storage.googleapis.com/kubernetes-test-history/static/suite-rktnetes:kubernetes-e2e-gce.html), provides standard metrics to cAdvisor, manages networks using [CNI](https://github.com/containernetworking/cni), handles per-container/pod logs, and automatically garbage collects old containers and images. Kubernetes running on rkt already provides more than the basics of a modular, flexible container runtime for Kubernetes clusters, and it is already a functional part of our development environment at CoreOS.
Developers and early adopters can follow the known issues in the [rktnetes notes](http://kubernetes.io/docs/getting-started-guides/rkt/notes/) to get an idea &nbsp;of the wrinkles and bumps test-drivers can expect to encounter. This list groups the high-level pieces required to bring rktnetes to feature parity with the existing container runtime and API. We hope you'll try out rktnetes in your Kubernetes clusters, too.
Developers and early adopters can follow the known issues in the [rktnetes notes](/docs/getting-started-guides/rkt/notes/) to get an idea &nbsp;of the wrinkles and bumps test-drivers can expect to encounter. This list groups the high-level pieces required to bring rktnetes to feature parity with the existing container runtime and API. We hope you'll try out rktnetes in your Kubernetes clusters, too.
#### Use rkt with Kubernetes Today
The introductory guide [_Running Kubernetes on rkt_](http://kubernetes.io/docs/getting-started-guides/rkt/) walks through the steps to spin up a rktnetes cluster, from kubelet --container-runtime=rkt to networking and starting pods. This intro also sketches the configuration you'll need to start a cluster on GCE with the Kubernetes kube-up.sh script.
The introductory guide [_Running Kubernetes on rkt_](/docs/getting-started-guides/rkt/) walks through the steps to spin up a rktnetes cluster, from kubelet --container-runtime=rkt to networking and starting pods. This intro also sketches the configuration you'll need to start a cluster on GCE with the Kubernetes kube-up.sh script.
Recent work aims to make rktnetes cluster creation much easier, too. While not yet merged, an&nbsp;[in-progress pull request creates a single rktnetes configuration toggle](https://github.com/coreos/coreos-kubernetes/pull/551) to select rkt as the container engine when deploying a Kubernetes cluster with the [coreos-kubernetes](https://github.com/coreos/coreos-kubernetes#kubernetes-on-coreos) configuration tools. You can also check out the [rktnetes workshop project](https://github.com/coreos/rkt8s-workshop), which launches a single-node rktnetes cluster on just about any developer workstation with one vagrant up command.

@ -25,7 +25,7 @@ I believe that reach to be a validation of the vision underlying Kubernetes: to
- managing and maintaining clustered software like databases and message queues
Allow developers and operators to move to the next scale of abstraction, just like they have enabled Google and others in the tech ecosystem to scale to datacenter computers and beyond. From Kubernetes 1.0 to 1.3 we have continually improved the power and flexibility of the platform while ALSO improving performance, scalability, reliability, and usability. The explosion of integrations and tools that run on top of Kubernetes further validates core architectural decisions to be [composable](https://research.google.com/pubs/pub43438.html), to expose [open and flexible APIs](http://kubernetes.io/docs/api/), and to [deliberately limit the core platform](http://kubernetes.io/docs/whatisk8s/#kubernetes-is-not) and encourage extension.
Allow developers and operators to move to the next scale of abstraction, just like they have enabled Google and others in the tech ecosystem to scale to datacenter computers and beyond. From Kubernetes 1.0 to 1.3 we have continually improved the power and flexibility of the platform while ALSO improving performance, scalability, reliability, and usability. The explosion of integrations and tools that run on top of Kubernetes further validates core architectural decisions to be [composable](https://research.google.com/pubs/pub43438.html), to expose [open and flexible APIs](/docs/api/), and to [deliberately limit the core platform](/docs/whatisk8s/#kubernetes-is-not) and encourage extension.
Today Kubernetes has one of the largest and most vibrant communities in the open source ecosystem, with almost a thousand contributors, one of the highest human-generated commit rates of any single-repository project on GitHub, over a thousand projects based around Kubernetes, and correspondingly active Stack Overflow and Slack channels. Red Hat is proud to be part of this ecosystem as the largest contributor to Kubernetes after Google, and every day more companies and individuals join us. The idea of Kubernetes found fertile ground, and you, the community, provided the excitement and commitment that made it grow.

@ -13,7 +13,7 @@ _Editor's note: this post is part of a [series of in-depth articles](https://k
For the [Kubernetes 1.3 launch](https://kubernetes.io/blog/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads), we wanted to put the new Pet Set through its paces. By testing a thousand instances of [Cassandra](https://cassandra.apache.org/), we could make sure that Kubernetes 1.3 was production ready. Read on for how we adapted Cassandra to Kubernetes, and had our largest deployment ever.
It's fairly straightforward to use containers with basic stateful applications today. Using a persistent volume, you can mount a disk in a pod, and ensure that your data lasts beyond the life of your pod. However, with deployments of distributed stateful applications, things can become more tricky. With Kubernetes 1.3, the new [Pet Set](http://kubernetes.io/docs/user-guide/petset/) component makes everything much easier. To test this new feature out at scale, we decided to host the Greek Pet Monster Races! We raced Centaurs and other Ancient Greek Monsters over hundreds of thousands of races across multiple availability zones.
It's fairly straightforward to use containers with basic stateful applications today. Using a persistent volume, you can mount a disk in a pod, and ensure that your data lasts beyond the life of your pod. However, with deployments of distributed stateful applications, things can become more tricky. With Kubernetes 1.3, the new [Pet Set](/docs/user-guide/petset/) component makes everything much easier. To test this new feature out at scale, we decided to host the Greek Pet Monster Races! We raced Centaurs and other Ancient Greek Monsters over hundreds of thousands of races across multiple availability zones.
[![File:Cassandra1.jpeg](https://upload.wikimedia.org/wikipedia/commons/thumb/4/42/Cassandra1.jpeg/283px-Cassandra1.jpeg)](https://upload.wikimedia.org/wikipedia/commons/thumb/4/42/Cassandra1.jpeg/283px-Cassandra1.jpeg)
As many of you know, Kubernetes is from the Ancient Greek: κυβερνήτης. This means helmsman, pilot, steersman, or ship master. So in order to keep track of race results, we needed a data store, and we chose Cassandra. Κασσάνδρα: Cassandra was the daughter of King Priam and Queen Hecuba of Troy. With multiple references to the ancient Greek language, we thought it would be appropriate to race ancient Greek monsters.
@ -62,7 +62,7 @@ So back to our races!
As we have mentioned, Cassandra was a perfect candidate to deploy via a Pet Set. A Pet Set is much like a [Replication Controller](http://kubernetes.io/docs/user-guide/replication-controller/) with a few new bells and whistles. Here's an example YAML manifest:
As we have mentioned, Cassandra was a perfect candidate to deploy via a Pet Set. A Pet Set is much like a [Replication Controller](/docs/user-guide/replication-controller/) with a few new bells and whistles. Here's an example YAML manifest:
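The manifest from the original post is truncated in this diff. Purely as a hedged orientation sketch, a Pet Set in the 1.3-era alpha API looked roughly like the following; the names, image, and replica count are placeholders, not the post's values.

```yaml
# Hedged sketch only -- not the manifest from the original post. PetSet was
# an alpha API (apps/v1alpha1) in Kubernetes 1.3 and later became StatefulSet.
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra           # headless Service giving each pet a stable DNS name
  replicas: 3
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: cassandra:3.7       # placeholder image tag
        ports:
        - containerPort: 9042
```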
@ -367,7 +367,7 @@ Yes we deployed 1,000 pets, but one really did not want to join the party! Techn
- The source code for the demo is available on [GitHub](https://github.com/k8s-for-greeks/gpmr): (Pet Set examples will be merged into the Kubernetes Cassandra Examples).
- More information about [Jobs](http://kubernetes.io/docs/user-guide/jobs/)
- More information about [Jobs](/docs/user-guide/jobs/)
- [Documentation for Pet Set](https://github.com/kubernetes/kubernetes.github.io/blob/release-1.3/docs/user-guide/petset.md)
- Image credits: Cassandra [image](https://commons.wikimedia.org/wiki/File:Cassandra1.jpeg) and Cyclops [image](https://commons.wikimedia.org/wiki/File:Polyphemus.gif)

@ -15,7 +15,7 @@ You may have [heard me say before](https://www.diamanti.com/blog/the-next-great-
Beyond stateless containers like web servers (so-called “cattle” because they are interchangeable), users are increasingly deploying stateful workloads with containers to benefit from “build once, run anywhere” and to improve bare metal efficiency/utilization. These “pets” (so-called because each requires special handling) bring new requirements including longer life cycle, configuration dependencies, stateful failover, and performance sensitivity. Container orchestration must address these needs to successfully deploy and scale apps.
Enter [Pet Set](http://kubernetes.io/docs/user-guide/petset/), a new object in Kubernetes 1.3 for improved stateful application support. Pet Set sequences through the startup phase of each database replica (for example), ensuring orderly master/slave configuration. Pet Set also simplifies service discovery by leveraging ubiquitous DNS SRV records, a well-recognized and long-understood mechanism.
Enter [Pet Set](/docs/user-guide/petset/), a new object in Kubernetes 1.3 for improved stateful application support. Pet Set sequences through the startup phase of each database replica (for example), ensuring orderly master/slave configuration. Pet Set also simplifies service discovery by leveraging ubiquitous DNS SRV records, a well-recognized and long-understood mechanism.
Diamanti's [FlexVolume contribution](https://github.com/kubernetes/kubernetes/pull/13840) to Kubernetes enables stateful workloads by providing persistent volumes with low-latency storage and guaranteed performance, including enforced quality-of-service from container to media.

@ -37,7 +37,7 @@ The first time one or more nodes are attached to a cluster, PMK configures the n
**Containerized kubelet?**
Another hurdle we encountered resulted from our original decision to run kubelet as recommended by the [Multi-node Docker Deployment Guide](http://kubernetes.io/docs/getting-started-guides/docker-multinode/). We discovered that this approach introduces complexities that led to many difficult-to-troubleshoot bugs that were sensitive to the combined versions of&nbsp;Kubernetes, Docker, and the node OS. Example: kubelets need to&nbsp;mount directories containing secrets into containers to support the [Service Accounts](http://kubernetes.io/docs/user-guide/service-accounts/) mechanism. It turns out that [doing this from inside of a container is tricky](https://github.com/kubernetes/kubernetes/issues/6848), and requires a [complex sequence of steps](https://github.com/kubernetes/kubernetes/blob/release-1.0/pkg/util/mount/nsenter_mount.go#L37) that turned out to be fragile. After fixing a continuing stream of issues, we finally decided to run kubelet as a native program on the host OS, resulting in significantly better stability.
Another hurdle we encountered resulted from our original decision to run kubelet as recommended by the [Multi-node Docker Deployment Guide](/docs/getting-started-guides/docker-multinode/). We discovered that this approach introduces complexities that led to many difficult-to-troubleshoot bugs that were sensitive to the combined versions of&nbsp;Kubernetes, Docker, and the node OS. Example: kubelets need to&nbsp;mount directories containing secrets into containers to support the [Service Accounts](/docs/user-guide/service-accounts/) mechanism. It turns out that [doing this from inside of a container is tricky](https://github.com/kubernetes/kubernetes/issues/6848), and requires a [complex sequence of steps](https://github.com/kubernetes/kubernetes/blob/release-1.0/pkg/util/mount/nsenter_mount.go#L37) that turned out to be fragile. After fixing a continuing stream of issues, we finally decided to run kubelet as a native program on the host OS, resulting in significantly better stability.
**Overcoming networking hurdles**

@ -13,11 +13,11 @@ _[Who's on First?](https://www.youtube.com/watch?v=kTcRRaXV-fg) by Abbott and Co
**Introduction**
Kubernetes is a system with several concepts. Many of these concepts get manifested as “objects” in the RESTful API (often called “resources” or “kinds”). One of these concepts is [Namespaces](http://kubernetes.io/docs/user-guide/namespaces/). In Kubernetes, Namespaces are the way to partition a single Kubernetes cluster into multiple virtual clusters. In this post we'll highlight examples of how our customers are using Namespaces.&nbsp;
Kubernetes is a system with several concepts. Many of these concepts get manifested as “objects” in the RESTful API (often called “resources” or “kinds”). One of these concepts is [Namespaces](/docs/user-guide/namespaces/). In Kubernetes, Namespaces are the way to partition a single Kubernetes cluster into multiple virtual clusters. In this post we'll highlight examples of how our customers are using Namespaces.&nbsp;
But first, a metaphor: Namespaces are like human family names. A family name, e.g. Wong, identifies a family unit. Within the Wong family, one of its members, e.g. Sam Wong, is readily identified as just “Sam” by the family. Outside of the family, and to avoid “Which Sam?” problems, Sam would usually be referred to as “Sam Wong”, perhaps even “Sam Wong from San Francisco”. &nbsp;
Namespaces are a logical partitioning capability that enables one Kubernetes cluster to be used by multiple users, teams of users, or a single user with multiple applications without concern for undesired interaction. Each user, team of users, or application may exist within its Namespace, isolated from every other user of the cluster and operating as if it were the sole user of the cluster. (Furthermore, [Resource Quotas](http://kubernetes.io/docs/admin/resourcequota/) provide the ability to allocate a subset of a Kubernetes cluster's resources to a Namespace.)
Namespaces are a logical partitioning capability that enables one Kubernetes cluster to be used by multiple users, teams of users, or a single user with multiple applications without concern for undesired interaction. Each user, team of users, or application may exist within its Namespace, isolated from every other user of the cluster and operating as if it were the sole user of the cluster. (Furthermore, [Resource Quotas](/docs/admin/resourcequota/) provide the ability to allocate a subset of a Kubernetes cluster's resources to a Namespace.)
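As a hedged sketch of the quota mechanism mentioned above (the namespace name and the limits are illustrative, not from the post):

```yaml
# Illustrative ResourceQuota; the namespace name and limits are assumptions.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "20"        # at most 20 pods in this namespace
    cpu: "10"         # total CPU available to the namespace
    memory: 20Gi      # total memory available to the namespace
```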
For all but the most trivial uses of Kubernetes, you will benefit by using Namespaces. In this post, well cover the most common ways that weve seen Kubernetes users on Google Cloud Platform use Namespaces, but our list is not exhaustive and wed be interested to learn other examples from you.
@ -125,7 +125,7 @@ You may wish to, but you cannot create a hierarchy of namespaces. Namespaces can
Namespaces are easy to create and use, but it's also easy to deploy code inadvertently into the wrong namespace. Good DevOps hygiene suggests documenting and automating processes where possible and this will help. The other way to avoid using the wrong namespace is to set a [kubectl context](http://kubernetes.io/docs/user-guide/kubectl/kubectl_config_set-context/).&nbsp;
Namespaces are easy to create and use, but it's also easy to deploy code inadvertently into the wrong namespace. Good DevOps hygiene suggests documenting and automating processes where possible and this will help. The other way to avoid using the wrong namespace is to set a [kubectl context](/docs/user-guide/kubectl/kubectl_config_set-context/).&nbsp;
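One way to do that is a context entry with a default namespace in your kubeconfig; the sketch below is illustrative, and the cluster, user, and namespace names are placeholders.

```yaml
# Illustrative kubeconfig fragment; cluster, user, and namespace names are placeholders.
apiVersion: v1
kind: Config
contexts:
- name: dev-team-a
  context:
    cluster: my-cluster
    user: my-user
    namespace: team-a        # kubectl commands in this context default to team-a
current-context: dev-team-a
```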

@ -4,7 +4,7 @@ date: 2016-08-31
slug: security-best-practices-kubernetes-deployment
url: /blog/2016/08/Security-Best-Practices-Kubernetes-Deployment
---
_Note: some of the recommendations in this post are no longer current. Current cluster hardening options are described in this [documentation](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/)._
_Note: some of the recommendations in this post are no longer current. Current cluster hardening options are described in this [documentation](/docs/tasks/administer-cluster/securing-a-cluster/)._
_Editor's note: today's post is by Amir Jerbi and Michael Cherny of Aqua Security, describing security best practices for Kubernetes deployments, based on data they've collected from various use-cases seen in both on-premises and cloud deployments._
@ -32,7 +32,7 @@ There is work in progress being done in Kubernetes for image authorization plugi
**Limit Direct Access to Kubernetes Nodes**
You should limit SSH access to Kubernetes nodes, reducing the risk of unauthorized access to host resources. Instead, you should ask users to use "kubectl exec", which will provide direct access to the container environment without the ability to access the host.
You can use Kubernetes [Authorization Plugins](http://kubernetes.io/docs/reference/access-authn-authz/authorization/) to further control user access to resources. This allows defining fine-grained access control rules for specific namespaces, containers, and operations.
You can use Kubernetes [Authorization Plugins](/docs/reference/access-authn-authz/authorization/) to further control user access to resources. This allows defining fine-grained access control rules for specific namespaces, containers, and operations.
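As a hedged illustration of such a fine-grained, namespace-scoped rule, here is a Role using the RBAC authorization mode, one of the modes the linked page covers; the names below are placeholders, and RBAC postdates this 2016 post.

```yaml
# Illustrative RBAC Role granting read-only access to pods in one namespace;
# the role and namespace names are placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```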
**Create Administrative Boundaries between Resources**
Limiting the scope of user permissions can reduce the impact of mistakes or malicious activities. A Kubernetes namespace allows you to partition created resources into logically named groups. Resources created in one namespace can be hidden from other namespaces. By default, each resource created by a user in a Kubernetes cluster runs in a default namespace, called default. You can create additional namespaces and attach resources and users to them. You can use Kubernetes Authorization plugins to create policies that segregate access to namespace resources between different users.
@ -203,11 +203,11 @@ spec:
Reference [here](http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_podsecuritycontext).
Reference [here](/docs/api-reference/v1/definitions/#_v1_podsecuritycontext).
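A hedged sketch of the kind of restrictive security context the reference above describes; every value here is an assumption, not taken from the post.

```yaml
# Illustrative pod-level and container-level security context; values are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  securityContext:
    runAsNonRoot: true            # refuse to start containers that run as root
    runAsUser: 1000
  containers:
  - name: app
    image: example/app:latest     # placeholder image
    securityContext:
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]             # drop all Linux capabilities
```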
If you are running containers with elevated privileges (--privileged), you should consider using the “DenyEscalatingExec” admission control. This control denies exec and attach commands to pods that run with escalated privileges that allow host access. This includes pods that run as privileged, have access to the host IPC namespace, and have access to the host PID namespace. For more details on admission controls, see the Kubernetes [documentation](http://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/).
If you are running containers with elevated privileges (--privileged), you should consider using the “DenyEscalatingExec” admission control. This control denies exec and attach commands to pods that run with escalated privileges that allow host access. This includes pods that run as privileged, have access to the host IPC namespace, and have access to the host PID namespace. For more details on admission controls, see the Kubernetes [documentation](/docs/reference/access-authn-authz/admission-controllers/).

@ -8,9 +8,9 @@ _Editor's note: today's guest post is by Shailesh Mittal, Software Architect
**Introduction**
Persistent volumes in Kubernetes are foundational as customers move beyond stateless workloads to run stateful applications. While Kubernetes has supported stateful applications such as MySQL, Kafka, Cassandra, and Couchbase for a while, the introduction of Pet Sets has significantly improved this support. In particular, the ability of [Pet Sets](http://kubernetes.io/docs/user-guide/petset/) to sequence provisioning and startup, and to scale and associate replicas durably, makes it possible to automate the scaling of “Pets” (applications that require consistent handling and durable placement).
Persistent volumes in Kubernetes are foundational as customers move beyond stateless workloads to run stateful applications. While Kubernetes has supported stateful applications such as MySQL, Kafka, Cassandra, and Couchbase for a while, the introduction of Pet Sets has significantly improved this support. In particular, the ability of [Pet Sets](/docs/user-guide/petset/) to sequence provisioning and startup, and to scale and associate replicas durably, makes it possible to automate the scaling of “Pets” (applications that require consistent handling and durable placement).
Datera, elastic block storage for cloud deployments, has [seamlessly integrated with Kubernetes](http://datera.io/blog-library/8/19/datera-simplifies-stateful-containers-on-kubernetes-13) through the [FlexVolume](http://kubernetes.io/docs/user-guide/volumes/#flexvolume) framework. Based on the first principles of containers, Datera allows application resource provisioning to be decoupled from the underlying physical infrastructure. This brings clean contracts (aka, no dependency or direct knowledge of the underlying physical infrastructure), declarative formats, and eventually portability to stateful applications.
Datera, elastic block storage for cloud deployments, has [seamlessly integrated with Kubernetes](http://datera.io/blog-library/8/19/datera-simplifies-stateful-containers-on-kubernetes-13) through the [FlexVolume](/docs/user-guide/volumes/#flexvolume) framework. Based on the first principles of containers, Datera allows application resource provisioning to be decoupled from the underlying physical infrastructure. This brings clean contracts (aka, no dependency or direct knowledge of the underlying physical infrastructure), declarative formats, and eventually portability to stateful applications.
While Kubernetes allows for great flexibility to define the underlying application infrastructure through yaml configurations, Datera allows for that configuration to be passed to the storage infrastructure to provide persistence. Through the notion of Datera AppTemplates, in a Kubernetes environment, stateful applications can be automated to scale.
@ -22,7 +22,7 @@ While Kubernetes allows for great flexibility to define the underlying applicati
Persistent storage is defined using the Kubernetes [PersistentVolume](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistent-volumes) subsystem. PersistentVolumes are volume plugins and define volumes that live independently of the lifecycle of the pods that use them. They are implemented as NFS, iSCSI, or a cloud-provider-specific storage system. Datera has developed a volume plugin for PersistentVolumes that can provision iSCSI block storage on the Datera Data Fabric for Kubernetes pods.
Persistent storage is defined using the Kubernetes [PersistentVolume](/docs/user-guide/persistent-volumes/#persistent-volumes) subsystem. PersistentVolumes are volume plugins and define volumes that live independently of the lifecycle of the pods that use them. They are implemented as NFS, iSCSI, or a cloud-provider-specific storage system. Datera has developed a volume plugin for PersistentVolumes that can provision iSCSI block storage on the Datera Data Fabric for Kubernetes pods.
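For orientation, a hedged sketch of the PersistentVolume/PersistentVolumeClaim pairing described above, using an NFS-backed volume; the server address, path, and sizes are placeholders.

```yaml
# Illustrative PV/PVC pair; the NFS server, export path, and sizes are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 10.0.0.5
    path: /exports/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi        # bound to a PV of at least this size
```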

@ -20,7 +20,7 @@ Since the release of Kubernetes 1.3 back in July, users have been able to define
What does this mean for the application developer? At last, Kubernetes has gained the necessary capabilities to provide "[defence in depth](https://en.wikipedia.org/wiki/Defense_in_depth_(computing))". Traffic can be segmented and different parts of your application can be secured independently. For example, you can very easily protect each of your services via specific network policies: All the pods identified by a [Replication Controller](http://kubernetes.io/docs/user-guide/replication-controller/) behind a service are already identified by a specific label. Therefore, you can use this same label to apply a policy to those pods.
What does this mean for the application developer? At last, Kubernetes has gained the necessary capabilities to provide "[defence in depth](https://en.wikipedia.org/wiki/Defense_in_depth_(computing))". Traffic can be segmented and different parts of your application can be secured independently. For example, you can very easily protect each of your services via specific network policies: All the pods identified by a [Replication Controller](/docs/user-guide/replication-controller/) behind a service are already identified by a specific label. Therefore, you can use this same label to apply a policy to those pods.
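A hedged sketch of such a label-scoped policy follows; it uses the extensions/v1beta1 API group of this era (networking.k8s.io/v1 today), and the labels are placeholders.

```yaml
# Illustrative NetworkPolicy selecting pods by the same label their
# Replication Controller applies; API group and labels are assumptions.
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
spec:
  podSelector:
    matchLabels:
      app: db                  # the pods behind the service, selected by their label
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend        # only pods carrying this label may connect
```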

@ -22,7 +22,7 @@ To run performance tests, we had to find a system that could manage networked co
The performance improvement we observed was due to the number of containers we could “pack” on a single machine. Ironically, we began the Docker experiment wanting to avoid “noisy neighbor,” which we assumed was inevitable when several containers shared the same VM. However, that isolation also acted as a bottleneck, both in performance and cost. To use a real-world example, if a machine has 2 cores and you need 3 cores, you have a problem. It's rare to come across a public-cloud VM with 3 cores, so the typical solution is to buy 4 cores and not utilize them fully.
This is where Kubernetes really starts to shine. It has the concept of [requests and limits](http://kubernetes.io/docs/user-guide/compute-resources/), which provides granular control over resource sharing. Multiple containers can share an underlying host VM without the fear of “noisy neighbors”. They can request exclusive control over an amount of RAM, for example, and they can define a limit in anticipation of overflow. Its practical, performant, and cost-effective multi-tenancy. We were able to deliver the best of both the single-tenant and multi-tenant worlds.
This is where Kubernetes really starts to shine. It has the concept of [requests and limits](/docs/user-guide/compute-resources/), which provides granular control over resource sharing. Multiple containers can share an underlying host VM without the fear of “noisy neighbors”. They can request exclusive control over an amount of RAM, for example, and they can define a limit in anticipation of overflow. Its practical, performant, and cost-effective multi-tenancy. We were able to deliver the best of both the single-tenant and multi-tenant worlds.
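A minimal sketch of what requests and limits look like on a container spec (the image name and numbers are arbitrary examples):

```
apiVersion: v1
kind: Pod
metadata:
  name: es-data
spec:
  containers:
    - name: elasticsearch
      image: elasticsearch:2.4          # illustrative image
      resources:
        requests:
          cpu: "500m"                   # guaranteed share used for scheduling
          memory: "1Gi"
        limits:
          cpu: "1"                      # hard ceiling before CPU throttling
          memory: "2Gi"                 # exceeding this gets the container OOM-killed
```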
**Kubernetes + Supergiant**
We built [Supergiant](https://supergiant.io/) originally for our own Elasticsearch customers. Supergiant solves Kubernetes complications by allowing pre-packaged and re-deployable application topologies. In more specific terms, Supergiant lets you use Components, which are somewhat similar to a microservice. Components represent an almost-uniform set of Instances of software (e.g., Elasticsearch, MongoDB, your web application, etc.). They roll up all the various Kubernetes and cloud operations needed to deploy a complex topology into a compact entity that is easy to manage.


@ -16,26 +16,26 @@ There are three stages in setting up a Kubernetes cluster, and we decided to foc
3. **Add-ons** : installing necessary cluster add-ons like DNS and monitoring services, a pod network, etc
We realized early on that there's enormous variety in the way that users want to **provision** their machines.
They use lots of different cloud providers, private clouds, bare metal, or even Raspberry Pi's, and almost always have their own preferred tools for automating provisioning machines: Terraform or CloudFormation, Chef, Puppet or Ansible, or even PXE booting bare metal. So we made an important decision: **kubeadm would not provision machines**. Instead, the only assumption it makes is that the user has some [computers running Linux](http://kubernetes.io/docs/getting-started-guides/kubeadm/#prerequisites).
They use lots of different cloud providers, private clouds, bare metal, or even Raspberry Pi's, and almost always have their own preferred tools for automating provisioning machines: Terraform or CloudFormation, Chef, Puppet or Ansible, or even PXE booting bare metal. So we made an important decision: **kubeadm would not provision machines**. Instead, the only assumption it makes is that the user has some [computers running Linux](/docs/getting-started-guides/kubeadm/#prerequisites).
Another important constraint was we didn't want to just build another tool that "configures Kubernetes from the outside, by poking all the bits into place". There are many external projects out there for doing this, but we wanted to aim higher. We chose to actually improve the Kubernetes core itself to make it easier to install. Luckily, a lot of the groundwork for making this happen had already been started.
We realized that if we made Kubernetes insanely easy to install manually, it should be obvious to users how to automate that process using any tooling.
So, enter [kubeadm](http://kubernetes.io/docs/getting-started-guides/kubeadm/). It has no infrastructure dependencies, and satisfies the requirements above. It's easy to use and should be easy to automate. It's still in **alpha** , but it works like this:
So, enter [kubeadm](/docs/getting-started-guides/kubeadm/). It has no infrastructure dependencies, and satisfies the requirements above. It's easy to use and should be easy to automate. It's still in **alpha** , but it works like this:
- You install Docker and the official Kubernetes packages for your distribution.
- Select a master host, run kubeadm init.
- This sets up the control plane and outputs a kubeadm join [...] command which includes a secure token.
- On each host selected to be a worker node, run the kubeadm join [...] command from above.
- Install a pod network. [Weave Net](https://github.com/weaveworks/weave-kube) is a great place to start here. Install it using just kubectl apply -f https://git.io/weave-kube
Presto! You have a working Kubernetes cluster! [Try kubeadm today](http://kubernetes.io/docs/getting-started-guides/kubeadm/).&nbsp;
Presto! You have a working Kubernetes cluster! [Try kubeadm today](/docs/getting-started-guides/kubeadm/).&nbsp;
For a video walkthrough, check this out:
Follow the&nbsp;[kubeadm getting started guide](http://kubernetes.io/docs/getting-started-guides/kubeadm/) to try it yourself, and please give us [feedback on GitHub](https://github.com/kubernetes/kubernetes/issues/new), mentioning **@kubernetes/sig-cluster-lifecycle**!
Follow the&nbsp;[kubeadm getting started guide](/docs/getting-started-guides/kubeadm/) to try it yourself, and please give us [feedback on GitHub](https://github.com/kubernetes/kubernetes/issues/new), mentioning **@kubernetes/sig-cluster-lifecycle**!
Finally, I want to give a huge shout-out to so many people in the SIG-cluster-lifecycle, without whom this wouldn't have been possible. I'll mention just a few here:
@ -54,7 +54,7 @@ This truly has been an excellent cross-company and cross-timezone achievement, w
_--[Luke Marsden](https://twitter.com/lmarsden), Head of Developer Experience at [Weaveworks](https://twitter.com/weaveworks)_
- Try [kubeadm](http://kubernetes.io/docs/getting-started-guides/kubeadm/) to install Kubernetes today
- Try [kubeadm](/docs/getting-started-guides/kubeadm/) to install Kubernetes today
- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)&nbsp;
- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)&nbsp;
- Connect with the community on [Slack](http://slack.k8s.io/)


@ -17,38 +17,38 @@ Additional product highlights in this release include simplified cluster deploym
**Cluster creation with two commands -** To get started with Kubernetes a user must provision nodes, install Kubernetes and bootstrap the cluster. A common request from users is to have an easy, portable way to do this on any cloud (public, private, or bare metal).
- Kubernetes 1.4 introduces [kubeadm](http://kubernetes.io/docs/getting-started-guides/kubeadm/), which reduces bootstrapping to two commands, with no complex scripts involved. Once Kubernetes is installed, kubeadm init starts the master while kubeadm join joins the nodes to the cluster.
- Kubernetes 1.4 introduces [kubeadm](/docs/getting-started-guides/kubeadm/), which reduces bootstrapping to two commands, with no complex scripts involved. Once Kubernetes is installed, kubeadm init starts the master while kubeadm join joins the nodes to the cluster.
- Installation is also streamlined by packaging Kubernetes with its dependencies, for most major Linux distributions including Red Hat and Ubuntu Xenial. This means users can now install Kubernetes using familiar tools such as apt-get and yum.
- Add-on deployments, such as for an overlay network, can be reduced to one command by using a [DaemonSet](http://kubernetes.io/docs/admin/daemons/).
- Enabling this simplicity is a new certificates API and its use for kubelet [TLS bootstrap](http://kubernetes.io/docs/admin/master-node-communication/#kubelet-tls-bootstrap), as well as a new discovery API.
- Add-on deployments, such as for an overlay network, can be reduced to one command by using a [DaemonSet](/docs/admin/daemons/).
- Enabling this simplicity is a new certificates API and its use for kubelet [TLS bootstrap](/docs/admin/master-node-communication/#kubelet-tls-bootstrap), as well as a new discovery API.
**Expanded stateful application support -** While cloud-native applications are built to run in containers, many existing applications need additional features to make it easy to adopt containers. Most commonly, these include stateful applications such as batch processing, databases and key-value stores. In Kubernetes 1.4, we have introduced a number of features simplifying the deployment of such applications, including:&nbsp;
- [ScheduledJob](http://kubernetes.io/docs/user-guide/scheduled-jobs/) is introduced as Alpha so users can run batch jobs at regular intervals.
- [ScheduledJob](/docs/user-guide/scheduled-jobs/) is introduced as Alpha so users can run batch jobs at regular intervals.
- Init-containers are Beta, addressing the need to run one or more containers before starting the main application, for example to sequence dependencies when starting a database or multi-tier app.
- [Dynamic PVC Provisioning](http://kubernetes.io/docs/user-guide/persistent-volumes/) moved to Beta. This feature now enables cluster administrators to expose multiple storage provisioners and allows users to select them using a new Storage Class API object. &nbsp;
- [Dynamic PVC Provisioning](/docs/user-guide/persistent-volumes/) moved to Beta. This feature now enables cluster administrators to expose multiple storage provisioners and allows users to select them using a new Storage Class API object. &nbsp;
- Curated and pre-tested [Helm charts](https://github.com/kubernetes/charts) for common stateful applications such as MariaDB, MySQL and Jenkins will be available for one-command launches using version 2 of the Helm Package Manager.
**Cluster federation API additions -** One of the most requested capabilities from our global customers has been the ability to build applications with clusters that span regions and clouds.&nbsp;
- [Federated Replica Sets](http://kubernetes.io/docs/user-guide/federation/replicasets/) Beta - replicas can now span some or all clusters enabling cross region or cross cloud replication. The total federated replica count and relative cluster weights / replica counts are continually reconciled by a federated replica-set controller to ensure you have the pods you need in each region / cloud.
- Federated Services are now Beta, and [secrets](http://kubernetes.io/docs/user-guide/federation/secrets/), [events](http://kubernetes.io/docs/user-guide/federation/events) and [namespaces](http://kubernetes.io/docs/user-guide/federation/namespaces) have also been added to the federation API.
- [Federated Ingress](http://kubernetes.io/docs/user-guide/federation/federated-ingress/) Alpha - starting with Google Cloud Platform (GCP), users can create a single L7 globally load balanced VIP that spans services deployed across a federation of clusters within GCP. With Federated Ingress in GCP, external clients point to a single IP address and are sent to the closest cluster with usable capacity in any region or zone of the federation in GCP.
- [Federated Replica Sets](/docs/user-guide/federation/replicasets/) Beta - replicas can now span some or all clusters enabling cross region or cross cloud replication. The total federated replica count and relative cluster weights / replica counts are continually reconciled by a federated replica-set controller to ensure you have the pods you need in each region / cloud.
- Federated Services are now Beta, and [secrets](/docs/user-guide/federation/secrets/), [events](/docs/user-guide/federation/events) and [namespaces](/docs/user-guide/federation/namespaces) have also been added to the federation API.
- [Federated Ingress](/docs/user-guide/federation/federated-ingress/) Alpha - starting with Google Cloud Platform (GCP), users can create a single L7 globally load balanced VIP that spans services deployed across a federation of clusters within GCP. With Federated Ingress in GCP, external clients point to a single IP address and are sent to the closest cluster with usable capacity in any region or zone of the federation in GCP.
**Container security support -** Administrators of multi-tenant clusters require the ability to provide varying sets of permissions among tenants, infrastructure components, and end users of the system.
- [Pod Security Policy](http://kubernetes.io/docs/user-guide/pod-security-policy/) is a new object that enables cluster administrators to control the creation and validation of security contexts for pods/containers. Admins can associate service accounts, groups, and users with a set of constraints to define a security context.
- [AppArmor](http://kubernetes.io/docs/admin/apparmor/) support is added, enabling admins to run a more secure deployment, and provide better auditing and monitoring of their systems. Users can configure a container to run in an AppArmor profile by setting a single field.
- [Pod Security Policy](/docs/user-guide/pod-security-policy/) is a new object that enables cluster administrators to control the creation and validation of security contexts for pods/containers. Admins can associate service accounts, groups, and users with a set of constraints to define a security context.
- [AppArmor](/docs/admin/apparmor/) support is added, enabling admins to run a more secure deployment, and provide better auditing and monitoring of their systems. Users can configure a container to run in an AppArmor profile by setting a single field.
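To give a feel for the shape of the Pod Security Policy object mentioned above, a rough sketch is below; the field values are illustrative only, use the extensions/v1beta1 group of that era, and are not a recommended production policy.

```
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false                 # disallow privileged containers
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:                          # only these volume types may be used
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
```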
**Infrastructure enhancements -&nbsp;** We continue adding to the scheduler, storage and client capabilities in Kubernetes based on user and ecosystem needs.
- Scheduler - introducing [inter-pod affinity and anti-affinity](http://kubernetes.io/docs/user-guide/node-selection/)&nbsp;Alpha for users who want to customize how Kubernetes co-locates or spreads their pods. Also [priority scheduling capability for cluster add-ons](http://kubernetes.io/docs/admin/rescheduler/#guaranteed-scheduling-of-critical-add-on-pods) such as DNS, Heapster, and the Kube Dashboard.
- Scheduler - introducing [inter-pod affinity and anti-affinity](/docs/user-guide/node-selection/)&nbsp;Alpha for users who want to customize how Kubernetes co-locates or spreads their pods. Also [priority scheduling capability for cluster add-ons](/docs/admin/rescheduler/#guaranteed-scheduling-of-critical-add-on-pods) such as DNS, Heapster, and the Kube Dashboard.
- Disruption SLOs - Pod Disruption Budget is introduced to limit impact of pods deleted by cluster management operations (such as node upgrade) at any one time.
- Storage - New [volume plugins](http://kubernetes.io/docs/user-guide/volumes/) for Quobyte and Azure Data Disk have been added.
- Storage - New [volume plugins](/docs/user-guide/volumes/) for Quobyte and Azure Data Disk have been added.
- Clients - Swagger 2.0 support is added, enabling non-Go clients.
**Kubernetes Dashboard UI -** lastly, a great looking Kubernetes [Dashboard UI](https://github.com/kubernetes/dashboard#kubernetes-dashboard) with 90% CLI parity for at-a-glance management.
@ -56,7 +56,7 @@ Additional product highlights in this release include simplified cluster deploym
For a complete list of updates see the [release notes](https://github.com/kubernetes/kubernetes/pull/33410) on GitHub. Apart from features the most impressive aspect of Kubernetes development is the community of contributors. This is particularly true of the 1.4 release, the full breadth of which will unfold in upcoming weeks.
**Availability**
Kubernetes 1.4 is available for download at [get.k8s.io](http://get.k8s.io/) and via the open source repository hosted on [GitHub](http://github.com/kubernetes/kubernetes). To get started with Kubernetes try the [Hello World app](http://kubernetes.io/docs/hellonode/).
Kubernetes 1.4 is available for download at [get.k8s.io](http://get.k8s.io/) and via the open source repository hosted on [GitHub](http://github.com/kubernetes/kubernetes). To get started with Kubernetes try the [Hello World app](/docs/hellonode/).
To get involved with the project, join the [weekly community meeting](https://groups.google.com/forum/#!forum/kubernetes-community-video-chat) or start contributing to the project here (marked help).&nbsp;
@ -80,8 +80,3 @@ Were very grateful to our community of over 900 contributors who contributed
Thank you for your support!&nbsp;
_-- Aparna Sinha, Product Manager, Google_


@ -13,11 +13,11 @@ The alpha version of dynamic provisioning only allowed a single, hard-coded prov
Although the alpha version of the feature was limited in utility, it allowed us to “get some miles” on the idea, and helped determine the direction we wanted to take.
The beta version of dynamic provisioning, new in Kubernetes 1.4, introduces a [new API object](http://kubernetes.io/docs/user-guide/persistent-volumes/#storageclasses), StorageClass. Multiple StorageClass objects can be defined each specifying a volume plugin (aka provisioner) to use to provision a volume and the set of parameters to pass to that provisioner when provisioning. This design allows cluster administrators to define and expose multiple flavors of storage (from the same or different storage systems) within a cluster, each with a custom set of parameters. This design also ensures that end users dont have to worry about the complexity and nuances of how storage is provisioned, but still have the ability to select from multiple storage options.
The beta version of dynamic provisioning, new in Kubernetes 1.4, introduces a [new API object](/docs/user-guide/persistent-volumes/#storageclasses), StorageClass. Multiple StorageClass objects can be defined each specifying a volume plugin (aka provisioner) to use to provision a volume and the set of parameters to pass to that provisioner when provisioning. This design allows cluster administrators to define and expose multiple flavors of storage (from the same or different storage systems) within a cluster, each with a custom set of parameters. This design also ensures that end users dont have to worry about the complexity and nuances of how storage is provisioned, but still have the ability to select from multiple storage options.
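As a rough sketch of the new objects (the provisioner, parameters and claim below are one illustrative combination using the GCE PD plugin; in 1.4 a claim selected its class via a beta annotation):

```
apiVersion: storage.k8s.io/v1beta1      # beta API group in Kubernetes 1.4
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd                          # SSD-backed tier
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  annotations:
    volume.beta.kubernetes.io/storage-class: fast   # select the class (annotation form used in 1.4)
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
```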
**How Do I use It?**
Below is an example of how a cluster administrator would expose two tiers of storage, and how a user would select and use one. For more details, see the [reference](http://kubernetes.io/docs/user-guide/persistent-volumes/#storageclasses) and [example](https://github.com/kubernetes/kubernetes/tree/release-1.4/examples/experimental/persistent-volume-provisioning) docs.
Below is an example of how a cluster administrator would expose two tiers of storage, and how a user would select and use one. For more details, see the [reference](/docs/user-guide/persistent-volumes/#storageclasses) and [example](https://github.com/kubernetes/kubernetes/tree/release-1.4/examples/experimental/persistent-volume-provisioning) docs.
**Admin Configuration**


@ -10,15 +10,15 @@ In Kubernetes 1.3, we announced Kubernetes Cluster Federation and introduced the
In the latest release, [Kubernetes 1.4](https://kubernetes.io/blog/2016/09/kubernetes-1.4-making-it-easy-to-run-on-kuberentes-anywhere), we've extended Cluster Federation to support Replica Sets, Secrets, Namespaces and Ingress objects. This means that you no longer need to deploy and manage these objects individually in each of your federated clusters. Just create them once in the federation, and have its built-in controllers automatically handle that for you.
[**Federated Replica Sets**](http://kubernetes.io/docs/user-guide/federation/replicasets/) leverage the same configuration as non-federated Kubernetes Replica Sets and automatically distribute Pods across one or more federated clusters. By default, replicas are evenly distributed across all clusters, but for cases where that is not the desired behavior, we've introduced Replica Set preferences, which allow replicas to be distributed across only some clusters, or in non-equal proportions ([define annotations](https://github.com/kubernetes/kubernetes/blob/master/federation/apis/federation/types.go#L114)).
[**Federated Replica Sets**](/docs/user-guide/federation/replicasets/) leverage the same configuration as non-federated Kubernetes Replica Sets and automatically distribute Pods across one or more federated clusters. By default, replicas are evenly distributed across all clusters, but for cases where that is not the desired behavior, we've introduced Replica Set preferences, which allow replicas to be distributed across only some clusters, or in non-equal proportions ([define annotations](https://github.com/kubernetes/kubernetes/blob/master/federation/apis/federation/types.go#L114)).
Starting with Google Cloud Platform (GCP), we've introduced [**Federated Ingress**](http://kubernetes.io/docs/user-guide/federation/federated-ingress/) as a Kubernetes 1.4 alpha feature which enables external clients to point to a single IP address and have requests sent to the closest cluster with usable capacity in any region or zone of the Federation.
Starting with Google Cloud Platform (GCP), we've introduced [**Federated Ingress**](/docs/user-guide/federation/federated-ingress/) as a Kubernetes 1.4 alpha feature which enables external clients to point to a single IP address and have requests sent to the closest cluster with usable capacity in any region or zone of the Federation.
[**Federated Secrets**](http://kubernetes.io/docs/user-guide/federation/secrets/) automatically create and manage secrets across all clusters in a Federation, automatically ensuring that these are kept globally consistent and up-to-date, even if some clusters are offline when the original updates are applied.
[**Federated Secrets**](/docs/user-guide/federation/secrets/) automatically create and manage secrets across all clusters in a Federation, automatically ensuring that these are kept globally consistent and up-to-date, even if some clusters are offline when the original updates are applied.
[**Federated Namespaces**](http://kubernetes.io/docs/user-guide/federation/namespaces/) are similar to the traditional [Kubernetes Namespaces](http://kubernetes.io/docs/user-guide/namespaces/) providing the same functionality. Creating them in the Federation control plane ensures that they are synchronized across all the clusters in Federation.
[**Federated Namespaces**](/docs/user-guide/federation/namespaces/) are similar to the traditional [Kubernetes Namespaces](/docs/user-guide/namespaces/) providing the same functionality. Creating them in the Federation control plane ensures that they are synchronized across all the clusters in Federation.
[**Federated Events**](http://kubernetes.io/docs/user-guide/federation/events/) are similar to the traditional Kubernetes Events providing the same functionality. Federation Events are stored only in Federation control plane and are not passed on to the underlying kubernetes clusters.
[**Federated Events**](/docs/user-guide/federation/events/) are similar to the traditional Kubernetes Events providing the same functionality. Federation Events are stored only in Federation control plane and are not passed on to the underlying kubernetes clusters.
Let's walk through how all this works. We're going to provision 3 clusters per region, spanning 3 continents (Europe, North America and Asia).
@ -68,7 +68,7 @@ gce-us-central1-c Ready 39s
In our example, we'll be deploying the service and ingress object using the federated control plane. The [ConfigMap](http://kubernetes.io/docs/user-guide/configmap/) object isn't currently supported by Federation, so we'll be deploying it manually in each of the underlying Federation clusters. Our cluster deployment will look as follows:
In our example, we'll be deploying the service and ingress object using the federated control plane. The [ConfigMap](/docs/user-guide/configmap/) object isn't currently supported by Federation, so we'll be deploying it manually in each of the underlying Federation clusters. Our cluster deployment will look as follows:
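The per-cluster ConfigMap referred to here is an ordinary Kubernetes object; a minimal sketch with an illustrative name and key is:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: zone-config
data:
  zone: europe-west1-b        # each underlying cluster gets its own value
```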
@ -113,7 +113,7 @@ Session Affinity: None
Let's now create a Federated Ingress. Federated Ingresses are created in much the same way as traditional [Kubernetes Ingresses](http://kubernetes.io/docs/user-guide/ingress/): by making an API call which specifies the desired properties of your logical ingress point. In the case of Federated Ingress, this API call is directed to the Federation API endpoint, rather than a Kubernetes cluster API endpoint. The API for Federated Ingress is 100% compatible with the API for traditional Kubernetes Services.
Let's now create a Federated Ingress. Federated Ingresses are created in much the same way as traditional [Kubernetes Ingresses](/docs/user-guide/ingress/): by making an API call which specifies the desired properties of your logical ingress point. In the case of Federated Ingress, this API call is directed to the Federation API endpoint, rather than a Kubernetes cluster API endpoint. The API for Federated Ingress is 100% compatible with the API for traditional Kubernetes Services.
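The body of that API call is an ordinary Ingress manifest; a minimal sketch (the name and backend service are placeholders) submitted to the Federation API endpoint might be:

```
apiVersion: extensions/v1beta1          # Ingress API group at the time of writing
kind: Ingress
metadata:
  name: nginx
spec:
  backend:
    serviceName: nginx                  # placeholder: the federated Service created earlier
    servicePort: 80
```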


@ -6,7 +6,7 @@ url: /blog/2016/10/Production-Kubernetes-Dashboard-UI-1.4-improvements_3
---
With the release of [Kubernetes 1.4](https://kubernetes.io/blog/2016/09/kubernetes-1.4-making-it-easy-to-run-on-kuberentes-anywhere) last week, Dashboard, the official web UI for Kubernetes, has a number of exciting updates and improvements of its own. The past three months have been busy ones for the Dashboard team, and we're excited to share the resulting features of that effort here. If you're not familiar with Dashboard, the [GitHub repo](https://github.com/kubernetes/dashboard#kubernetes-dashboard) is a great place to get started.
A quick recap before unwrapping our shiny new features: Dashboard was initially released March 2016. One of the focuses for Dashboard throughout its lifetime has been the onboarding experience; its a less intimidating way for Kubernetes newcomers to get started, and by showing multiple resources at once, it provides contextualization lacking in [kubectl](http://kubernetes.io/docs/user-guide/kubectl-overview/) (the CLI). After that initial release though, the product team realized that fine-tuning for a beginner audience was getting ahead of ourselves: there were still fundamental product requirements that Dashboard needed to satisfy in order to have a productive UX to onboard new users too. That became our mission for this release: closing the gap between Dashboard and kubectl by showing more resources, leveraging a web UIs strengths in monitoring and troubleshooting, and architecting this all in a user friendly way.
A quick recap before unwrapping our shiny new features: Dashboard was initially released March 2016. One of the focuses for Dashboard throughout its lifetime has been the onboarding experience; its a less intimidating way for Kubernetes newcomers to get started, and by showing multiple resources at once, it provides contextualization lacking in [kubectl](/docs/user-guide/kubectl-overview/) (the CLI). After that initial release though, the product team realized that fine-tuning for a beginner audience was getting ahead of ourselves: there were still fundamental product requirements that Dashboard needed to satisfy in order to have a productive UX to onboard new users too. That became our mission for this release: closing the gap between Dashboard and kubectl by showing more resources, leveraging a web UIs strengths in monitoring and troubleshooting, and architecting this all in a user friendly way.
**Monitoring Graphs**
Real-time visualization is a strength that UIs have over CLIs, and with 1.4 we're happy to capitalize on that capability with the introduction of real-time CPU and memory usage graphs for all workloads running on your cluster. Even with the numerous third-party solutions for monitoring, Dashboard should include at least some basic out-of-the-box functionality in this area. Next up on the roadmap for graphs is extending the timespan the graph represents, adding drill-down capabilities to reveal more details, and improving the UX of correlating data between different graphs.
@ -49,7 +49,7 @@ Once in the relevant Namespace, I check out my Deployments to see if anything se
![](https://lh5.googleusercontent.com/rViAg6xFe219i7qxeBRU62-1SFBLI6VIg3pbU5HBmvIKsb3KJFr5RldP0vziVXao3u-hWM3EMvzTNnSFRQWCTViaQiVbAv_PTjd87s7GOZelroeL4gjcfFU3JljrOKKnWL3Wzy5c)
I realize we need to perform a rolling update to a newer version of that app that can handle the increased requests it's evidently getting, so I update this Deployment's image, which in turn creates a new [Replica Set](http://kubernetes.io/docs/user-guide/replicasets/).
I realize we need to perform a rolling update to a newer version of that app that can handle the increased requests it's evidently getting, so I update this Deployment's image, which in turn creates a new [Replica Set](/docs/user-guide/replicasets/).
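That rolling update is driven entirely by the Deployment spec; in the hedged sketch below (names and image tags are made up), changing the container image field is what causes Kubernetes to create a new Replica Set and progressively move Pods onto it.

```
apiVersion: extensions/v1beta1          # Deployment API group in the 1.4 era
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: example/frontend:2.0   # bumping this tag triggers the rolling update
```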


@ -6,13 +6,13 @@ url: /blog/2016/10/Tail-Kubernetes-With-Stern
---
_Editor's note: today's post is by Antti Kupila, Software Engineer at Wercker, about building a tool to tail multiple pods and containers on Kubernetes._
We love Kubernetes here at [Wercker](http://wercker.com/) and build all our infrastructure on top of it. When deploying anything you need to have good visibility to what's going on and logs are a first view into the inner workings of your application. Good old tail -f has been around for a long time and Kubernetes has this too, built right into [kubectl](http://kubernetes.io/docs/user-guide/kubectl-overview/).
We love Kubernetes here at [Wercker](http://wercker.com/) and build all our infrastructure on top of it. When deploying anything you need to have good visibility to what's going on and logs are a first view into the inner workings of your application. Good old tail -f has been around for a long time and Kubernetes has this too, built right into [kubectl](/docs/user-guide/kubectl-overview/).
I should say that tail is by no means the tool to use for debugging issues but instead you should feed the logs into a more persistent place, such as [Elasticsearch](https://www.elastic.co/products/elasticsearch). However, there's still a place for tail where you need to quickly debug something or perhaps you don't have persistent logging set up yet (such as when developing an app in [Minikube](https://github.com/kubernetes/minikube)).
**Multiple Pods**
Kubernetes has the concept of [Replication Controllers](http://kubernetes.io/docs/user-guide/replication-controller/) which ensure that n pods are running at the same time. This allows rolling updates and redundancy. Considering they're quite easy to set up there's really no reason not to do so.
Kubernetes has the concept of [Replication Controllers](/docs/user-guide/replication-controller/) which ensure that n pods are running at the same time. This allows rolling updates and redundancy. Considering they're quite easy to set up there's really no reason not to do so.
However, now there are multiple pods running and they all have a unique id. One issue here is that you'll need to know the exact pod id (kubectl get pods), but that changes every time a pod is created, so you'll need to do this every time. Another consideration is the fact that Kubernetes load balances the traffic, so you won't know which pod the request ends up at. If you're tailing pod A but the traffic ends up at pod B, you'll miss what happened.


@ -124,7 +124,7 @@ With dynamic reconfiguration of the network, the replication mechanics of Kubern
- Build a load analysis and scaling service (easy, right?)
- If load patterns match the configured triggers in the scaling service (for example, request rate or volume above certain bounds), issue: kubectl scale --replicas=COUNT rc NAME
This would allow us fine-grained control of autoscaling at the platform level, instead of from the applications themselves, but we'll also evaluate [**Horizontal Pod Autoscaling**](http://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/) in Kubernetes, which may suit our needs without a custom service.&nbsp;
This would allow us fine-grained control of autoscaling at the platform level, instead of from the applications themselves, but we'll also evaluate [**Horizontal Pod Autoscaling**](/docs/user-guide/horizontal-pod-autoscaling/) in Kubernetes, which may suit our needs without a custom service.&nbsp;
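For comparison, the built-in alternative is declarative; a minimal HorizontalPodAutoscaler sketch (the target name and thresholds are illustrative) looks like:

```
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: api-autoscaler
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: api                           # the RC we would otherwise scale by hand
  minReplicas: 2
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70    # scale out when average CPU exceeds 70%
```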


@ -9,7 +9,7 @@ In Kubernetes 1.4, we introduced a new node performance analysis tool, called th
**Background**
A Kubernetes cluster is made up of both master and worker nodes. The master node manages the clusters state, and the worker nodes do the actual work of running and managing pods. To do so, on each worker node, a binary, called [Kubelet](http://kubernetes.io/docs/admin/kubelet/), watches for any changes in pod configuration, and takes corresponding actions to make sure that containers run successfully. High performance of the Kubelet, such as low latency to converge with new pod configuration and efficient housekeeping with low resource usage, is essential for the entire Kubernetes cluster. To measure this performance, Kubernetes uses [end-to-end (e2e) tests](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/e2e-tests.md#overview) to continuously monitor benchmark changes of latest builds with new features.
A Kubernetes cluster is made up of both master and worker nodes. The master node manages the clusters state, and the worker nodes do the actual work of running and managing pods. To do so, on each worker node, a binary, called [Kubelet](/docs/admin/kubelet/), watches for any changes in pod configuration, and takes corresponding actions to make sure that containers run successfully. High performance of the Kubelet, such as low latency to converge with new pod configuration and efficient housekeeping with low resource usage, is essential for the entire Kubernetes cluster. To measure this performance, Kubernetes uses [end-to-end (e2e) tests](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/e2e-tests.md#overview) to continuously monitor benchmark changes of latest builds with new features.
**Kubernetes SLOs are defined by the following benchmarks** :


@ -8,16 +8,16 @@ _Editors note: this post is part of a [series of in-depth articles](https://k
In the latest [Kubernetes 1.5 release](https://kubernetes.io/blog/2016/12/kubernetes-1.5-supporting-production-workloads), you'll notice that support for Cluster Federation is maturing. That functionality was introduced in Kubernetes 1.3, and the 1.5 release includes a number of new features, including an easier setup experience and a step closer to supporting all Kubernetes API objects.
A new command line tool called **[kubefed](http://kubernetes.io/docs/admin/federation/kubefed/)** was introduced to make getting started with Cluster Federation much simpler. Also, alpha level support was added for Federated DaemonSets, Deployments and ConfigMaps. In summary:
A new command line tool called **[kubefed](/docs/admin/federation/kubefed/)** was introduced to make getting started with Cluster Federation much simpler. Also, alpha level support was added for Federated DaemonSets, Deployments and ConfigMaps. In summary:
- **DaemonSets** are Kubernetes deployment rules that guarantee that a given pod is always present at every node, as new nodes are added to the cluster (more [info](http://kubernetes.io/docs/admin/daemons/)).
- **Deployments** describe the desired state of Replica Sets (more [info](http://kubernetes.io/docs/user-guide/deployments/)).
- **ConfigMaps** are variables applied to Replica Sets (which greatly improves image reusability as their parameters can be externalized - more [info](http://kubernetes.io/docs/user-guide/configmap/)).
- **DaemonSets** are Kubernetes deployment rules that guarantee that a given pod is always present at every node, as new nodes are added to the cluster (more [info](/docs/admin/daemons/)).
- **Deployments** describe the desired state of Replica Sets (more [info](/docs/user-guide/deployments/)).
- **ConfigMaps** are variables applied to Replica Sets (which greatly improves image reusability as their parameters can be externalized - more [info](/docs/user-guide/configmap/)).
**Federated DaemonSets** , **Federated Deployments** , **Federated ConfigMaps** take the qualities of the base concepts to the next level. For instance, Federated DaemonSets guarantee that a pod is deployed on every node of the newly added cluster.
But what actually is “federation”? Let's explain it by the needs it satisfies. Imagine a service that operates globally. Naturally, all its users expect to get the same quality of service, whether they are located in Asia, Europe, or the US. What this means is that the service must respond equally fast to requests at each location. This sounds simple, but there's lots of logic involved behind the scenes. This is what Kubernetes Cluster Federation aims to do.
How does it work? One of the Kubernetes clusters must become a master by running a **Federation Control Plane**. In practice, this is a controller that monitors the health of other clusters, and provides a single entry point for administration. The entry point behaves like a typical Kubernetes cluster. It allows creating [Replica Sets](http://kubernetes.io/docs/user-guide/replicasets/), [Deployments](http://kubernetes.io/docs/user-guide/deployments/), [Services](http://kubernetes.io/docs/user-guide/services/), but the federated control plane passes the resources to underlying clusters. This means that if we request the federation control plane to create a Replica Set with 1,000 replicas, it will spread the request across all underlying clusters. If we have 5 clusters, then by default each will get its share of 200 replicas.
How does it work? One of the Kubernetes clusters must become a master by running a **Federation Control Plane**. In practice, this is a controller that monitors the health of other clusters, and provides a single entry point for administration. The entry point behaves like a typical Kubernetes cluster. It allows creating [Replica Sets](/docs/user-guide/replicasets/), [Deployments](/docs/user-guide/deployments/), [Services](/docs/user-guide/services/), but the federated control plane passes the resources to underlying clusters. This means that if we request the federation control plane to create a Replica Set with 1,000 replicas, it will spread the request across all underlying clusters. If we have 5 clusters, then by default each will get its share of 200 replicas.
This on its own is a powerful mechanism. But there's more. It's also possible to create a Federated Ingress. Effectively, this is a global application-layer load balancer. Thanks to an understanding of the application layer, it allows load balancing to be “smarter” -- for instance, by taking into account the geographical location of clients and servers, and routing the traffic between them in an optimal way.
@ -54,7 +54,7 @@ export FED\_DNS\_ZONE=\<YOUR DNS SUFFIX e.g. example.com\>
```
And get kubectl and kubefed binaries. (for installation instructions refer to guides [here](http://kubernetes.io/docs/user-guide/prereqs/) and [here](http://kubernetes.io/docs/admin/federation/kubefed/#getting-kubefed)).
And get kubectl and kubefed binaries. (for installation instructions refer to guides [here](/docs/user-guide/prereqs/) and [here](/docs/admin/federation/kubefed/#getting-kubefed)).
Now the setup is ready to create a few Google Container Engine (GKE) clusters with gcloud container clusters create (1-create.sh). In this case one is in US, one in Europe and one in Asia.
```
@ -210,7 +210,7 @@ As you can see, the two commands refer to the “federation” context, i.e. to
**Creating The Ingress**
After the Service is ready, we can create [Ingress](http://kubernetes.io/docs/user-guide/ingress/) - the global load balancer. The command is like this:
After the Service is ready, we can create [Ingress](/docs/user-guide/ingress/) - the global load balancer. The command is like this:


@ -9,7 +9,7 @@ _Editor's note: Todays post is by Bernard Van De Walle, Kubernetes Lead Engin
**Kubernetes Network Policies&nbsp;**
Kubernetes supports a [new API for network policies](http://kubernetes.io/docs/user-guide/networkpolicies/) that provides a sophisticated model for isolating applications and reducing their attack surface. This feature, which came out of the [SIG-Network group](https://github.com/kubernetes/community/wiki/SIG-Network), makes it very easy and elegant to define network policies by using the built-in labels and selectors Kubernetes constructs.
Kubernetes supports a [new API for network policies](/docs/user-guide/networkpolicies/) that provides a sophisticated model for isolating applications and reducing their attack surface. This feature, which came out of the [SIG-Network group](https://github.com/kubernetes/community/wiki/SIG-Network), makes it very easy and elegant to define network policies by using the built-in labels and selectors Kubernetes constructs.
Kubernetes has left it up to third parties to implement these network policies and does not provide a default implementation.
@ -28,7 +28,7 @@ The authentication and authorization function in Trireme is overlaid on the TCP
![](https://lh3.googleusercontent.com/PhkJ4eoRc50gm6oSTZbw138l3jzVKjjQrn2mNHjys9Cu7RG-q2X-f5PX07ZY6xjbIQT0ud8oMSX6yNwjDpmDq3a3lYWcc_gBYJBjvBLP8PIHZaTW54fJppDze9pYxOmZY-JNqQ1Y)
The Trireme implementation talks directly to the Kubernetes master without an external controller and receives notifications on policy updates and pod instantiations so that it can maintain a local cache of the policy and update the authorization rules as needed. There is no requirement for any shared state between Trireme components that needs to be synchronized. Trireme can be deployed either as a standalone process in every worker or by using [Daemon Sets](http://kubernetes.io/docs/admin/daemons/). In the latter case, Kubernetes takes ownership of the lifecycle of the Trireme pods.&nbsp;
The Trireme implementation talks directly to the Kubernetes master without an external controller and receives notifications on policy updates and pod instantiations so that it can maintain a local cache of the policy and update the authorization rules as needed. There is no requirement for any shared state between Trireme components that needs to be synchronized. Trireme can be deployed either as a standalone process in every worker or by using [Daemon Sets](/docs/admin/daemons/). In the latter case, Kubernetes takes ownership of the lifecycle of the Trireme pods.&nbsp;
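A generic sketch of that per-node DaemonSet pattern is below; this is not the actual Trireme manifest, and the image and names are placeholders.

```
apiVersion: extensions/v1beta1          # DaemonSet API group at the time
kind: DaemonSet
metadata:
  name: enforcer
spec:
  template:
    metadata:
      labels:
        app: enforcer
    spec:
      hostNetwork: true                 # per-node agents typically need host networking
      containers:
        - name: enforcer
          image: example/policy-enforcer:latest   # placeholder image
```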
Trireme's simplicity is derived from the separation of security policy from network transport. Policy enforcement is linked directly to the labels present on the connection, irrespective of the networking scheme used to make the pods communicate. This identity linkage enables tremendous flexibility to operators to use any networking scheme they like without tying security policy enforcement to network implementation details. Also, the implementation of security policy across the federated clusters becomes simple and viable.


@ -17,24 +17,24 @@ Lastly, for those interested in the internals of Kubernetes, 1.5 introduces Cont
**Whats New**
[**StatefulSet**](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) beta (formerly known as PetSet) allows workloads that require persistent identity or per-instance storage to be [created](http://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#creating-a-statefulset), [scaled](http://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#scaling-a-statefulset), [deleted](http://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#deleting-statefulsets) and [repaired](http://kubernetes.io/docs/tasks/manage-stateful-set/debugging-a-statefulset/) on Kubernetes. You can use StatefulSets to ease the deployment of any stateful service, and tutorial examples are available in the repository. In order to ensure that there are never two pods with the same identity, the Kubernetes node controller no longer force deletes pods on unresponsive nodes. Instead, it waits until the old pod is confirmed dead in one of several ways: automatically when the kubelet reports back and confirms the old pod is terminated; automatically when a cluster-admin deletes the node; or when a database admin confirms it is safe to proceed by force deleting the old pod. Users are now warned if they try to force delete pods via the CLI. For users who will be migrating from PetSets to StatefulSets, please follow the upgrade [guide](http://kubernetes.io/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set).
[**StatefulSet**](/docs/concepts/abstractions/controllers/statefulsets/) beta (formerly known as PetSet) allows workloads that require persistent identity or per-instance storage to be [created](/docs/tutorials/stateful-application/basic-stateful-set/#creating-a-statefulset), [scaled](/docs/tutorials/stateful-application/basic-stateful-set/#scaling-a-statefulset), [deleted](/docs/tutorials/stateful-application/basic-stateful-set/#deleting-statefulsets) and [repaired](/docs/tasks/manage-stateful-set/debugging-a-statefulset/) on Kubernetes. You can use StatefulSets to ease the deployment of any stateful service, and tutorial examples are available in the repository. In order to ensure that there are never two pods with the same identity, the Kubernetes node controller no longer force deletes pods on unresponsive nodes. Instead, it waits until the old pod is confirmed dead in one of several ways: automatically when the kubelet reports back and confirms the old pod is terminated; automatically when a cluster-admin deletes the node; or when a database admin confirms it is safe to proceed by force deleting the old pod. Users are now warned if they try to force delete pods via the CLI. For users who will be migrating from PetSets to StatefulSets, please follow the upgrade [guide](/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set).
**[PodDisruptionBudget](http://kubernetes.io/docs/admin/disruptions/)** beta is an API object that specifies the minimum number or minimum percentage of replicas of a collection of pods that must be up at any time. With PodDisruptionBudget, an application deployer can ensure that cluster operations that voluntarily evict pods will never take down so many simultaneously as to cause data loss, an outage, or an unacceptable service degradation. In Kubernetes 1.5 the “kubectl drain” command supports PodDisruptionBudget, allowing safe draining of nodes for maintenance activities, and it will soon also be used by node upgrade and cluster autoscaler (when removing nodes). This can be useful for a quorum based application to ensure the number of replicas running is never below the number needed for quorum, or for a web front end to ensure the number of replicas serving load never falls below a certain percentage.
**[PodDisruptionBudget](/docs/admin/disruptions/)** beta is an API object that specifies the minimum number or minimum percentage of replicas of a collection of pods that must be up at any time. With PodDisruptionBudget, an application deployer can ensure that cluster operations that voluntarily evict pods will never take down so many simultaneously as to cause data loss, an outage, or an unacceptable service degradation. In Kubernetes 1.5 the “kubectl drain” command supports PodDisruptionBudget, allowing safe draining of nodes for maintenance activities, and it will soon also be used by node upgrade and cluster autoscaler (when removing nodes). This can be useful for a quorum based application to ensure the number of replicas running is never below the number needed for quorum, or for a web front end to ensure the number of replicas serving load never falls below a certain percentage.
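A minimal sketch of such a budget (the label and threshold are illustrative) is:

```
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2            # an absolute count; a percentage such as "50%" also works
  selector:
    matchLabels:
      app: web               # the pods this budget protects
```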
**[Kubefed](http://kubernetes.io/docs/admin/federation/kubefed.md)** alpha is a new command line tool to help you manage federated clusters, making it easy to deploy new federation control planes and add or remove clusters from existing federations. Also new in cluster federation is the addition of [ConfigMaps](http://kubernetes.io/docs/user-guide/federation/configmap.md) alpha and [DaemonSets](http://kubernetes.io/docs/user-guide/federation/daemonsets.md) alpha and [deployments](http://kubernetes.io/docs/user-guide/federation/deployment.md) alpha to the [federation API](http://kubernetes.io/docs/user-guide/federation/index.md) allowing you to create, update and delete these objects across multiple clusters from a single endpoint.
**[Kubefed](/docs/admin/federation/kubefed.md)** alpha is a new command line tool to help you manage federated clusters, making it easy to deploy new federation control planes and add or remove clusters from existing federations. Also new in cluster federation is the addition of [ConfigMaps](/docs/user-guide/federation/configmap.md) alpha and [DaemonSets](/docs/user-guide/federation/daemonsets.md) alpha and [deployments](/docs/user-guide/federation/deployment.md) alpha to the [federation API](/docs/user-guide/federation/index.md) allowing you to create, update and delete these objects across multiple clusters from a single endpoint.
**[HA Masters](http://kubernetes.io/docs/admin/ha-master-gce.md)** alpha provides the ability to create and delete clusters with highly available (replicated) masters on GCE using the kube-up/kube-down scripts. Allows setup of zone distributed HA masters, with at least one etcd replica per zone, at least one API server per zone, and master-elected components like scheduler and controller-manager distributed across zones.
**[HA Masters](/docs/admin/ha-master-gce.md)** alpha provides the ability to create and delete clusters with highly available (replicated) masters on GCE using the kube-up/kube-down scripts. Allows setup of zone distributed HA masters, with at least one etcd replica per zone, at least one API server per zone, and master-elected components like scheduler and controller-manager distributed across zones.
**[Windows server containers](http://kubernetes.io/docs/getting-started-guides/windows/)** alpha provides initial support for Windows Server 2016 nodes and scheduling Windows Server Containers.&nbsp;
**[Windows server containers](/docs/getting-started-guides/windows/)** alpha provides initial support for Windows Server 2016 nodes and scheduling Windows Server Containers.&nbsp;
**[Container Runtime Interface](https://github.com/kubernetes/kubernetes/blob/release-1.5/docs/devel/container-runtime-interface.md)** (CRI) alpha introduces the v1 CRI API to allow pluggable container runtimes; an experimental docker-CRI integration is ready for testing and feedback.
[**Node conformance test**](http://kubernetes.io/docs/admin/node-conformance.md) beta is a containerized test framework that provides a system verification and functionality test for nodes. The test validates whether the node meets the minimum requirements for Kubernetes; a node that passes the tests is qualified to join a Kubernetes cluster. The node conformance test is available at gcr.io/google\_containers/node-test:0.2 for users to verify node setup.
[**Node conformance test**](/docs/admin/node-conformance.md) beta is a containerized test framework that provides a system verification and functionality test for nodes. The test validates whether the node meets the minimum requirements for Kubernetes; a node that passes the tests is qualified to join a Kubernetes cluster. The node conformance test is available at gcr.io/google\_containers/node-test:0.2 for users to verify node setup.
These are just some of the highlights in our last release for the year. For a complete list please visit the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#v151).&nbsp;
**Availability**
Kubernetes 1.5 is available for download [here](https://github.com/kubernetes/kubernetes/releases/tag/v1.5.1) on GitHub and via [get.k8s.io](http://get.k8s.io/). To get started with Kubernetes, try one of the [new interactive tutorials](http://kubernetes.io/docs/tutorials/kubernetes-basics/). Don't forget to take 1.5 for a spin before the holidays!&nbsp;
Kubernetes 1.5 is available for download [here](https://github.com/kubernetes/kubernetes/releases/tag/v1.5.1) on GitHub and via [get.k8s.io](http://get.k8s.io/). To get started with Kubernetes, try one of the [new interactive tutorials](/docs/tutorials/kubernetes-basics/). Don't forget to take 1.5 for a spin before the holidays!&nbsp;
**User Adoption**
It's been a year-and-a-half since GA, and the rate of [Kubernetes user adoption](http://kubernetes.io/case-studies/) continues to surpass estimates. Organizations running production workloads on Kubernetes include the world's largest companies, young startups, and everything in between. Since Kubernetes is open and runs anywhere, we've seen adoption on a diverse set of platforms: Pokémon Go (Google Cloud), Ticketmaster (AWS), SAP (OpenStack), Box (bare-metal), and hybrid environments that mix-and-match the above. Here are a few user highlights:


@ -6,11 +6,11 @@ url: /blog/2016/12/Statefulset-Run-Scale-Stateful-Applications-In-Kubernetes
---
_Editor's note: this post is part of a [series of in-depth articles](https://kubernetes.io/blog/2016/12/five-days-of-kubernetes-1.5) on what's new in Kubernetes 1.5_
In the latest release, [Kubernetes 1.5](https://kubernetes.io/blog/2016/12/kubernetes-1.5-supporting-production-workloads), weve moved the feature formerly known as PetSet into beta as [StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/). There were no major changes to the API Object, other than the community selected name, but we added the semantics of “at most one pod per index” for deployment of the Pods in the set. Along with ordered deployment, ordered termination, unique network names, and persistent stable storage, we think we have the right primitives to support many containerized stateful workloads. We dont claim that the feature is 100% complete (it is software after all), but we believe that it is useful in its current form, and that we can extend the API in a backwards-compatible way as we progress toward an eventual GA release.
In the latest release, [Kubernetes 1.5](https://kubernetes.io/blog/2016/12/kubernetes-1.5-supporting-production-workloads), weve moved the feature formerly known as PetSet into beta as [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/). There were no major changes to the API Object, other than the community selected name, but we added the semantics of “at most one pod per index” for deployment of the Pods in the set. Along with ordered deployment, ordered termination, unique network names, and persistent stable storage, we think we have the right primitives to support many containerized stateful workloads. We dont claim that the feature is 100% complete (it is software after all), but we believe that it is useful in its current form, and that we can extend the API in a backwards-compatible way as we progress toward an eventual GA release.
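For orientation, a minimal StatefulSet sketch is below; apps/v1beta1 was its API group in Kubernetes 1.5, and the names, image and sizes are placeholders.

```
apiVersion: apps/v1beta1                # StatefulSet's API group in Kubernetes 1.5
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db                       # headless Service providing stable network identities
  replicas: 3
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: example/db:1.0         # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/db
  volumeClaimTemplates:                 # each replica gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```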
**When is StatefulSet the Right Choice for my Storage Application?**
[Deployments](http://kubernetes.io/docs/user-guide/deployments/) and [ReplicaSets](http://kubernetes.io/docs/user-guide/replicasets/) are a great way to run stateless replicas of an application on Kubernetes, but their semantics arent really right for deploying stateful applications. The purpose of StatefulSet is to provide a controller with the correct semantics for deploying a wide range of stateful workloads. However, moving your storage application onto Kubernetes isnt always the correct choice. Before you go all in on converging your storage tier and your orchestration framework, you should ask yourself a few questions.
[Deployments](/docs/user-guide/deployments/) and [ReplicaSets](/docs/user-guide/replicasets/) are a great way to run stateless replicas of an application on Kubernetes, but their semantics arent really right for deploying stateful applications. The purpose of StatefulSet is to provide a controller with the correct semantics for deploying a wide range of stateful workloads. However, moving your storage application onto Kubernetes isnt always the correct choice. Before you go all in on converging your storage tier and your orchestration framework, you should ask yourself a few questions.
**Can your application run using remote storage or does it require local storage media?**
@ -34,10 +34,10 @@ If you run your storage application on high-end hardware or extra-large instance
**A Practical Example - ZooKeeper**
[ZooKeeper](https://zookeeper.apache.org/doc/current/) is an interesting use case for StatefulSet for two reasons. First, it demonstrates that StatefulSet can be used to run a distributed, strongly consistent storage application on Kubernetes. Second, it's a prerequisite for running workloads like [Apache Hadoop](http://hadoop.apache.org/) and [Apache Kafka](https://kafka.apache.org/) on Kubernetes. An [in-depth tutorial](http://kubernetes.io/docs/tutorials/stateful-application/zookeeper/) on deploying a ZooKeeper ensemble on Kubernetes is available in the Kubernetes documentation, and well outline a few of the key features below.
[ZooKeeper](https://zookeeper.apache.org/doc/current/) is an interesting use case for StatefulSet for two reasons. First, it demonstrates that StatefulSet can be used to run a distributed, strongly consistent storage application on Kubernetes. Second, it's a prerequisite for running workloads like [Apache Hadoop](http://hadoop.apache.org/) and [Apache Kafka](https://kafka.apache.org/) on Kubernetes. An [in-depth tutorial](/docs/tutorials/stateful-application/zookeeper/) on deploying a ZooKeeper ensemble on Kubernetes is available in the Kubernetes documentation, and well outline a few of the key features below.
**Creating a ZooKeeper Ensemble**
Creating an ensemble is as simple as using [kubectl create](http://kubernetes.io/docs/user-guide/kubectl/kubectl_create/) to generate the objects stored in the manifest.
Creating an ensemble is as simple as using [kubectl create](/docs/user-guide/kubectl/kubectl_create/) to generate the objects stored in the manifest.
```
@ -297,7 +297,7 @@ zk-0 0/1 Terminating 0 15m
You can use [kubectl apply](http://kubernetes.io/docs/user-guide/kubectl/kubectl_apply/) to recreate the zk StatefulSet and redeploy the ensemble.
You can use [kubectl apply](/docs/user-guide/kubectl/kubectl_apply/) to recreate the zk StatefulSet and redeploy the ensemble.
@ -355,7 +355,7 @@ You should always provision headroom capacity for critical processes in your clu
If the SLAs for your service preclude even brief outages due to a single node failure, you should use a [PodAntiAffinity](http://kubernetes.io/docs/user-guide/node-selection/) annotation. The manifest used to create the ensemble contains such an annotation, and it tells the Kubernetes Scheduler to not place more than one Pod from the zk StatefulSet on the same node.
If the SLAs for your service preclude even brief outages due to a single node failure, you should use a [PodAntiAffinity](/docs/user-guide/node-selection/) annotation. The manifest used to create the ensemble contains such an annotation, and it tells the Kubernetes Scheduler to not place more than one Pod from the zk StatefulSet on the same node.
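As a rough sketch of such a constraint (shown here with the field-based affinity syntax; the 1.5-era post actually expresses it through the alpha scheduler annotation, and the app: zk label is an assumption about how the StatefulSet labels its Pods, not the post's exact manifest):
```
# Pod template fragment: never co-locate two Pods labeled app=zk on one node.
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app            # assumed label on the zk StatefulSet's Pods
            operator: In
            values:
            - zk
        topologyKey: kubernetes.io/hostname   # "same node" is the scope to avoid
```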
@ -366,7 +366,7 @@ If the SLAs for your service preclude even brief outages due to a single node fa
The manifest used to create the ZooKeeper ensemble also creates a [PodDisruptionBudget](http://kubernetes.io/docs/admin/disruptions/), zk-budget. The zk-budget informs Kubernetes about the upper limit of disruptions (unhealthy Pods) that the service can tolerate.
The manifest used to create the ZooKeeper ensemble also creates a [PodDisruptionBudget](/docs/admin/disruptions/), zk-budget. The zk-budget informs Kubernetes about the upper limit of disruptions (unhealthy Pods) that the service can tolerate.
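The budget object itself is only a few lines. A sketch consistent with the description above (the name and selector are assumptions, not the post's exact manifest):
```
apiVersion: policy/v1beta1        # policy/v1 in current clusters
kind: PodDisruptionBudget
metadata:
  name: zk-budget
spec:
  minAvailable: 2                 # at least two ensemble members must stay up
  selector:
    matchLabels:
      app: zk                     # assumed label on the zk Pods
```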
@ -418,7 +418,7 @@ zk-budget 2 1 2h
zk-budget indicates that at least two members of the ensemble must be available at all times for the ensemble to be healthy. If you attempt to drain a node prior to taking it offline, and if draining it would terminate a Pod that violates the budget, the drain operation will fail. If you use [kubectl drain](http://kubernetes.io/docs/user-guide/kubectl/kubectl_drain/), in conjunction with PodDisruptionBudgets, to cordon your nodes and to evict all Pods prior to maintenance or decommissioning, you can ensure that the procedure wont be disruptive to your stateful applications.
zk-budget indicates that at least two members of the ensemble must be available at all times for the ensemble to be healthy. If you attempt to drain a node prior to taking it offline, and if draining it would terminate a Pod that violates the budget, the drain operation will fail. If you use [kubectl drain](/docs/user-guide/kubectl/kubectl_drain/), in conjunction with PodDisruptionBudgets, to cordon your nodes and to evict all Pods prior to maintenance or decommissioning, you can ensure that the procedure wont be disruptive to your stateful applications.

View File

@ -64,7 +64,7 @@ Support for Windows Server-based containers is in alpha release mode for Kuberne
- **Runtime Operations** - the SIG will play a key part in defining the monitoring interface of the Container Runtime Interface (CRI), leveraging it to provide deep insight and monitoring for Windows Server-based containers
**Get Started**
To get started with Kubernetes on Windows Server 2016, please visit the [GitHub guide](http://kubernetes.io/docs/getting-started-guides/windows/) for more details.
To get started with Kubernetes on Windows Server 2016, please visit the [GitHub guide](/docs/getting-started-guides/windows/) for more details.
If you want to help with Windows Server support, then please connect with the [Windows Server SIG](https://github.com/kubernetes/community/blob/master/sig-windows/README.md) or connect directly with Michael Michael, the SIG lead, on [GitHub](https://github.com/michmike).&nbsp;
_--[Michael Michael](https://twitter.com/michmike77), Senior Director of Product Management, Apprenda&nbsp;_

View File

@ -9,7 +9,7 @@ _Editor's note: Todays post is by Sandeep Dinesh, Developer Advocate, Google
Conventional wisdom says you cant run a database in a container. “Containers are stateless!” they say, and “databases are pointless without state!”
Of course, this is not true at all. At Google, everything runs in a container, including databases. You just need the right tools. [Kubernetes 1.5](https://kubernetes.io/blog/2016/12/kubernetes-1.5-supporting-production-workloads) includes the new [StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) API object (in previous versions, StatefulSet was known as PetSet). With StatefulSets, Kubernetes makes it much easier to run stateful workloads such as databases.
Of course, this is not true at all. At Google, everything runs in a container, including databases. You just need the right tools. [Kubernetes 1.5](https://kubernetes.io/blog/2016/12/kubernetes-1.5-supporting-production-workloads) includes the new [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/) API object (in previous versions, StatefulSet was known as PetSet). With StatefulSets, Kubernetes makes it much easier to run stateful workloads such as databases.
If youve followed my previous posts, you know how to create a [MEAN Stack app with Docker](http://blog.sandeepdinesh.com/2015/07/running-mean-web-application-in-docker.html), then [migrate it to Kubernetes](https://medium.com/google-cloud/running-a-mean-stack-on-google-cloud-platform-with-kubernetes-149ca81c2b5d) to provide easier management and reliability, and [create a MongoDB replica set](https://medium.com/google-cloud/mongodb-replica-sets-with-kubernetes-d96606bd9474) to provide redundancy and high availability.
@ -25,7 +25,7 @@ _Note: StatefulSets are currently a beta resource. The [sidecar container](https
Before we get started, youll need a Kubernetes 1.5+ cluster and the [Kubernetes command line tool](http://kubernetes.io/docs/user-guide/prereqs/). If you want to follow along with this tutorial and use Google Cloud Platform, you also need the [Google Cloud SDK](http://cloud.google.com/sdk).
Before we get started, youll need a Kubernetes 1.5+ cluster and the [Kubernetes command line tool](/docs/user-guide/prereqs/). If you want to follow along with this tutorial and use Google Cloud Platform, you also need the [Google Cloud SDK](http://cloud.google.com/sdk).
@ -62,7 +62,7 @@ gcloud container clusters get-credentials test-cluster
To set up the MongoDB replica set, you need three things: A [StorageClass](http://kubernetes.io/docs/user-guide/persistent-volumes/#storageclasses), a [Headless Service](http://kubernetes.io/docs/user-guide/services/#headless-services), and a [StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/).
To set up the MongoDB replica set, you need three things: A [StorageClass](/docs/user-guide/persistent-volumes/#storageclasses), a [Headless Service](/docs/user-guide/services/#headless-services), and a [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/).
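Of those three pieces, the Headless Service is the smallest: it is an ordinary Service with clusterIP set to None, so DNS resolves to the individual Pods rather than a virtual IP. A sketch (the name, port, and labels are illustrative, not the post's exact manifest):
```
apiVersion: v1
kind: Service
metadata:
  name: mongo                   # the StatefulSet's serviceName must match this
  labels:
    app: mongo
spec:
  clusterIP: None               # "headless": DNS returns the Pods themselves
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    app: mongo                  # assumed Pod label
```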
@ -108,7 +108,7 @@ Lets examine each piece in more detail.
The storage class tells Kubernetes what kind of storage to use for the database nodes. You can set up many different types of StorageClasses in a ton of different environments. For example, if you run Kubernetes in your own datacenter, you can use [GlusterFS](https://www.gluster.org/). On GCP, your [storage choices](https://cloud.google.com/compute/docs/disks/) are SSDs and hard disks. There are currently drivers for [AWS](http://kubernetes.io/docs/user-guide/persistent-volumes/#aws), [Azure](http://kubernetes.io/docs/user-guide/persistent-volumes/#azure-disk), [Google Cloud](http://kubernetes.io/docs/user-guide/persistent-volumes/#gce), [GlusterFS](http://kubernetes.io/docs/user-guide/persistent-volumes/#glusterfs), [OpenStack Cinder](http://kubernetes.io/docs/user-guide/persistent-volumes/#openstack-cinder), [vSphere](http://kubernetes.io/docs/user-guide/persistent-volumes/#vsphere), [Ceph RBD](http://kubernetes.io/docs/user-guide/persistent-volumes/#ceph-rbd), and [Quobyte](http://kubernetes.io/docs/user-guide/persistent-volumes/#quobyte).
The storage class tells Kubernetes what kind of storage to use for the database nodes. You can set up many different types of StorageClasses in a ton of different environments. For example, if you run Kubernetes in your own datacenter, you can use [GlusterFS](https://www.gluster.org/). On GCP, your [storage choices](https://cloud.google.com/compute/docs/disks/) are SSDs and hard disks. There are currently drivers for [AWS](/docs/user-guide/persistent-volumes/#aws), [Azure](/docs/user-guide/persistent-volumes/#azure-disk), [Google Cloud](/docs/user-guide/persistent-volumes/#gce), [GlusterFS](/docs/user-guide/persistent-volumes/#glusterfs), [OpenStack Cinder](/docs/user-guide/persistent-volumes/#openstack-cinder), [vSphere](/docs/user-guide/persistent-volumes/#vsphere), [Ceph RBD](/docs/user-guide/persistent-volumes/#ceph-rbd), and [Quobyte](/docs/user-guide/persistent-volumes/#quobyte).
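On GCP, for example, a class backed by SSD persistent disks is only a few lines. A sketch (the name is illustrative; storage.k8s.io/v1beta1 was the current API group when this post was written):
```
apiVersion: storage.k8s.io/v1beta1   # storage.k8s.io/v1 in later releases
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd    # GCE persistent-disk provisioner
parameters:
  type: pd-ssd                       # SSD rather than a standard disk
```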
@ -189,7 +189,7 @@ You can tell this is a Headless Service because the clusterIP is set to “None.
The pièce de résistance. The StatefulSet actually runs MongoDB and orchestrates everything together. StatefulSets differ from Kubernetes [ReplicaSets](http://kubernetes.io/docs/user-guide/replicasets/) (not to be confused with MongoDB replica sets!) in certain ways that make them more suited for stateful applications. Unlike Kubernetes ReplicaSets, pods created under a StatefulSet have a few unique attributes. The name of the pod is not random; instead, each pod gets an ordinal name. Combined with the Headless Service, this allows pods to have stable identification. In addition, pods are created one at a time instead of all at once, which can help when bootstrapping a stateful system. You can read more about StatefulSets in the [documentation](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/).
The pièce de résistance. The StatefulSet actually runs MongoDB and orchestrates everything together. StatefulSets differ from Kubernetes [ReplicaSets](/docs/user-guide/replicasets/) (not to be confused with MongoDB replica sets!) in certain ways that make them more suited for stateful applications. Unlike Kubernetes ReplicaSets, pods created under a StatefulSet have a few unique attributes. The name of the pod is not random; instead, each pod gets an ordinal name. Combined with the Headless Service, this allows pods to have stable identification. In addition, pods are created one at a time instead of all at once, which can help when bootstrapping a stateful system. You can read more about StatefulSets in the [documentation](/docs/concepts/abstractions/controllers/statefulsets/).
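To make that concrete, here is a trimmed-down sketch of the shape of such a StatefulSet (image, names, sizes, and mongod flags are placeholders rather than the post's exact manifest):
```
apiVersion: apps/v1beta1              # StatefulSet was beta in 1.5; apps/v1 today
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo                  # ties Pod DNS identity to the headless Service
  replicas: 3
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo:3.4              # placeholder image tag
        command: ["mongod", "--replSet", "rs0"]
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
  volumeClaimTemplates:               # one PVC per ordinal: mongo-0, mongo-1, ...
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"   # beta annotation form
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```
In this sketch the Pods come up one at a time as mongo-0, mongo-1, and mongo-2, and each keeps its own PersistentVolumeClaim across rescheduling.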

View File

@ -24,13 +24,13 @@ For companies deploying applications on Kubernetes, one of biggest questions is
**Kubernetes Networking**
Kubernetes provides a core set of platform services exposed through [APIs](http://kubernetes.io/docs/api/). The platform can be extended in several ways through the extensions API, plugins and labels. This has allowed a wide variety of integrations and tools to be developed for Kubernetes. Kubernetes recognizes that the network in each deployment is going to be unique. Instead of trying to make the core system handle all those use cases, Kubernetes chose to make the network pluggable.
Kubernetes provides a core set of platform services exposed through [APIs](/docs/api/). The platform can be extended in several ways through the extensions API, plugins and labels. This has allowed a wide variety of integrations and tools to be developed for Kubernetes. Kubernetes recognizes that the network in each deployment is going to be unique. Instead of trying to make the core system handle all those use cases, Kubernetes chose to make the network pluggable.
With [Nuage Networks](http://www.nuagenetworks.net/) we provide a scalable policy-based SDN platform. The platform is managed by a Network Policy Engine that abstracts away the complexity associated with configuring the system. There is a separate SDN Controller that comes with a very rich routing feature set and is designed to scale horizontally. Nuage uses the open source [Open vSwitch (OVS)](http://www.openvswitch.org/) for the data plane with some enhancements in the OVS user space. Just like Kubernetes, Nuage has embraced openness as a core tenet for its platform. Nuage provides open APIs that allow users to orchestrate their networks and integrate network services such as firewalls, load balancers, IPAM tools etc. Nuage is supported in a wide variety of cloud platforms like OpenStack and VMware as well as container platforms like Kubernetes and others.
The Nuage platform implements a Kubernetes [network plugin](http://kubernetes.io/docs/admin/network-plugins/) that creates VXLAN overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Each Pod is given an IP address from a network that belongs to a [Namespace](https://kubernetes.io/docs/user-guide/namespaces/) and is not tied to the Kubernetes node.
The Nuage platform implements a Kubernetes [network plugin](/docs/admin/network-plugins/) that creates VXLAN overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Each Pod is given an IP address from a network that belongs to a [Namespace](/docs/user-guide/namespaces/) and is not tied to the Kubernetes node.
As cloud applications are built using microservices, the ability to control traffic among these microservices is a fundamental requirement. It is important to point out that these network policies also need to control traffic that is going to/coming from external networks and services. Nuages policy abstraction model makes it easy to declare fine-grained ingress/egress policies for applications. Kubernetes has a beta [Network Policy API](http://kubernetes.io/docs/user-guide/networkpolicies/) implemented using the Kubernetes Extensions API. Nuage implements this Network Policy API to address a wide variety of policy use cases such as:
As cloud applications are built using microservices, the ability to control traffic among these microservices is a fundamental requirement. It is important to point out that these network policies also need to control traffic that is going to/coming from external networks and services. Nuages policy abstraction model makes it easy to declare fine-grained ingress/egress policies for applications. Kubernetes has a beta [Network Policy API](/docs/user-guide/networkpolicies/) implemented using the Kubernetes Extensions API. Nuage implements this Network Policy API to address a wide variety of policy use cases such as:
- Kubernetes Namespace isolation
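As a hedged illustration of the first use case in that list, a policy that admits traffic only from Pods in the same Namespace looks roughly like this (the Namespace and policy names are made up; extensions/v1beta1 was the API group when this post was written):
```
apiVersion: extensions/v1beta1    # networking.k8s.io/v1 in later releases
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: web                  # illustrative Namespace
spec:
  podSelector: {}                 # applies to every Pod in the Namespace...
  ingress:
  - from:
    - podSelector: {}             # ...and only admits traffic from Pods in it
```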

View File

@ -9,7 +9,7 @@ _Todays post is by Brendan Burns, Partner Architect, at Microsoft & Kubernete
Containers are revolutionizing the way that people build, package and deploy software. But what is often overlooked is how they are revolutionizing the way that people build the software that builds, packages and deploys software. (its ok if you have to read that sentence twice…) Today, and in a talk at [Container World](https://tmt.knect365.com/container-world/) tomorrow, Im taking a look at how container orchestrators like Kubernetes form the foundation for next generation platform as a service (PaaS). In particular, Im interested in how cloud container as a service (CaaS) platforms like [Azure Container Service](https://azure.microsoft.com/en-us/services/container-service/), [Google Container Engine](https://cloud.google.com/container-engine/) and [others](https://kubernetes.io/docs/getting-started-guides/#hosted-solutions) are becoming the new infrastructure layer that PaaS is built upon.
Containers are revolutionizing the way that people build, package and deploy software. But what is often overlooked is how they are revolutionizing the way that people build the software that builds, packages and deploys software. (its ok if you have to read that sentence twice…) Today, and in a talk at [Container World](https://tmt.knect365.com/container-world/) tomorrow, Im taking a look at how container orchestrators like Kubernetes form the foundation for next generation platform as a service (PaaS). In particular, Im interested in how cloud container as a service (CaaS) platforms like [Azure Container Service](https://azure.microsoft.com/en-us/services/container-service/), [Google Container Engine](https://cloud.google.com/container-engine/) and [others](/docs/getting-started-guides/#hosted-solutions) are becoming the new infrastructure layer that PaaS is built upon.
To see this, its important to consider the set of services that have traditionally been provided by PaaS platforms:
@ -18,7 +18,7 @@ To see this, its important to consider the set of services that have traditio
- Reliable, zero-downtime rollout of software versions
- Healing, auto-scaling, load balancing
When you look at this list, its clear that most of these traditional “PaaS” roles have now been taken over by containers. The container image and container image build tooling has become the way to package up your application. [Container registries](https://kubernetes.io/docs/user-guide/images/#using-a-private-registry) have become the way to distribute your application across the world. Reliable software rollout is achieved using orchestrator concepts like [Deployment](https://kubernetes.io/docs/user-guide/deployments/#what-is-a-deployment) in Kubernetes, and service healing, auto-scaling and load-balancing are all properties of an application deployed in Kubernetes using [ReplicaSets](https://kubernetes.io/docs/user-guide/replicasets/#what-is-a-replicaset) and [Services](https://kubernetes.io/docs/user-guide/services/).
When you look at this list, its clear that most of these traditional “PaaS” roles have now been taken over by containers. The container image and container image build tooling has become the way to package up your application. [Container registries](/docs/user-guide/images/#using-a-private-registry) have become the way to distribute your application across the world. Reliable software rollout is achieved using orchestrator concepts like [Deployment](/docs/user-guide/deployments/#what-is-a-deployment) in Kubernetes, and service healing, auto-scaling and load-balancing are all properties of an application deployed in Kubernetes using [ReplicaSets](/docs/user-guide/replicasets/#what-is-a-replicaset) and [Services](/docs/user-guide/services/).
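To make the rollout point concrete, here is a minimal sketch of a Deployment with the rolling-update knobs set explicitly (names, image, and counts are placeholders, not anything from the post itself):
```
apiVersion: apps/v1               # extensions/v1beta1 when this post was written
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1           # keep most replicas serving during the rollout
      maxSurge: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.21         # placeholder image
        ports:
        - containerPort: 80
```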
What then is left for PaaS? Is PaaS going to be replaced by container as a service? I think the answer is “no.” The piece that is left for PaaS is the part that was always the most important part of PaaS in the first place, and thats the opinionated developer experience. In addition to all of the generic parts of PaaS that I listed above, the most important part of a PaaS has always been the way in which the developer experience and application framework made developers more productive within the boundaries of the platform. PaaS enables developers to go from source code on their laptop to a world-wide scalable service in less than an hour. Thats hugely powerful.&nbsp;

View File

@ -38,7 +38,7 @@ $ KUBE\_GCE\_ZONE=europe-west1-b ./cluster/kube-up.sh
Now, we will add two additional pools of worker nodes, each of three nodes, in zones europe-west1-c and europe-west1-d (more details on adding pools of worker nodes can be found [here](http://kubernetes.io/docs/setup/multiple-zones/)):
Now, we will add two additional pools of worker nodes, each of three nodes, in zones europe-west1-c and europe-west1-d (more details on adding pools of worker nodes can be found [here](/docs/setup/multiple-zones/)):
```

View File

@ -6,7 +6,7 @@ url: /blog/2017/02/Postgresql-Clusters-Kubernetes-Statefulsets
---
_Editors note: Todays guest post is by Jeff McCormick, a developer at Crunchy Data, showing how to build a PostgreSQL cluster using the new Kubernetes StatefulSet feature._
In an earlier [post](https://kubernetes.io/blog/2016/09/creating-postgresql-cluster-using-helm), I described how to deploy a PostgreSQL cluster using [Helm](https://github.com/kubernetes/helm), a Kubernetes package manager. The following example provides the steps for building a PostgreSQL cluster using the new Kubernetes [StatefulSets](https://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) feature.
In an earlier [post](https://kubernetes.io/blog/2016/09/creating-postgresql-cluster-using-helm), I described how to deploy a PostgreSQL cluster using [Helm](https://github.com/kubernetes/helm), a Kubernetes package manager. The following example provides the steps for building a PostgreSQL cluster using the new Kubernetes [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) feature.
**StatefulSets Example**
@ -14,7 +14,7 @@ In an earlier [post](https://kubernetes.io/blog/2016/09/creating-postgresql-clus
StatefulSets is a new feature implemented in [Kubernetes 1.5](https://kubernetes.io/blog/2016/12/kubernetes-1.5-supporting-production-workloads) (in prior versions it was known as PetSets). As a result, running this example will require an environment based on Kubernetes 1.5.0 or above.
The example in this blog deploys on CentOS 7 using [kubeadm](https://kubernetes.io/docs/admin/kubeadm/). Some instructions on what kubeadm provides and how to deploy a Kubernetes cluster are located [here](http://linoxide.com/containers/setup-kubernetes-kubeadm-centos).
The example in this blog deploys on CentOS 7 using [kubeadm](/docs/admin/kubeadm/). Some instructions on what kubeadm provides and how to deploy a Kubernetes cluster are located [here](http://linoxide.com/containers/setup-kubernetes-kubeadm-centos).
**Step 2** - Install NFS
@ -177,7 +177,7 @@ PostgreSQL replicas are configured to connect to the master database via a Servi
During the container initialization, a master container will use a [Service Account](https://kubernetes.io/docs/user-guide/service-accounts/) (pgset-sa) to change its container label value to match the master Service selector. Changing the label is important to enable traffic destined to the master database to reach the correct container within the Stateful Set. All other pods in the set assume the replica Service label by default.
During the container initialization, a master container will use a [Service Account](/docs/user-guide/service-accounts/) (pgset-sa) to change its container label value to match the master Service selector. Changing the label is important to enable traffic destined to the master database to reach the correct container within the Stateful Set. All other pods in the set assume the replica Service label by default.

View File

@ -139,7 +139,7 @@ Figure 2: Job A of three pods and Job B of one pod running on two nodes.
The entrypoint of each pod is [start.sh](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/usage/k8s/src/k8s_train/start.sh). It downloads data from a storage service, so that trainers can read quickly from the pod-local disk space. After downloading completes, it runs a Python script, [start\_paddle.py](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/usage/k8s/src/k8s_train/start_paddle.py), which starts a parameter server, waits until parameter servers of all pods are ready to serve, and then starts the trainer process in the pod.
This waiting is necessary because each trainer needs to talk to all parameter servers, as shown in Figure 1. The Kubernetes [API](http://kubernetes.io/docs/api-reference/v1/operations/#_list_or_watch_objects_of_kind_pod) enables trainers to check the status of pods, so the Python script could wait until the status of all parameter servers changes to "running" before it triggers the training process.
This waiting is necessary because each trainer needs to talk to all parameter servers, as shown in Figure 1. The Kubernetes [API](/docs/api-reference/v1/operations/#_list_or_watch_objects_of_kind_pod) enables trainers to check the status of pods, so the Python script could wait until the status of all parameter servers changes to "running" before it triggers the training process.
Currently, the mapping from data shards to pods/trainers is static. If we are going to run N trainers, we would need to partition the data into N shards, and statically assign each shard to a trainer. Again, we rely on the Kubernetes API to enlist pods in a job so we could index pods/trainers from 1 to N. The i-th trainer would read the i-th data shard.

View File

@ -6,13 +6,13 @@ url: /blog/2017/03/Advanced-Scheduling-In-Kubernetes
---
_Editors note: this post is part of a [series of in-depth articles](https://kubernetes.io/blog/2017/03/five-days-of-kubernetes-1.6) on what's new in Kubernetes 1.6_
The Kubernetes schedulers default behavior works well for most cases -- for example, it ensures that pods are only placed on nodes that have sufficient free resources, it tries to spread pods from the same set ([ReplicaSet](https://kubernetes.io/docs/user-guide/replicasets/), [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/), etc.) across nodes, it tries to balance out the resource utilization of nodes, etc.
The Kubernetes schedulers default behavior works well for most cases -- for example, it ensures that pods are only placed on nodes that have sufficient free resources, it tries to spread pods from the same set ([ReplicaSet](/docs/user-guide/replicasets/), [StatefulSet](/docs/concepts/workloads/controllers/statefulset/), etc.) across nodes, it tries to balance out the resource utilization of nodes, etc.
But sometimes you want to control how your pods are scheduled. For example, perhaps you want to ensure that certain pods only schedule on nodes with specialized hardware, or you want to co-locate services that communicate frequently, or you want to dedicate a set of nodes to a particular set of users. Ultimately, you know much more about how your applications should be scheduled and deployed than Kubernetes ever will. So **[Kubernetes 1.6](https://kubernetes.io/blog/2017/03/kubernetes-1.6-multi-user-multi-workloads-at-scale) offers four advanced scheduling features: node affinity/anti-affinity, taints and tolerations, pod affinity/anti-affinity, and custom schedulers**. Each of these features is now in _beta_ in Kubernetes 1.6.
**Node Affinity/Anti-Affinity**
[Node Affinity/Anti-Affinity](https://kubernetes.io/docs/user-guide/node-selection/#node-affinity-beta-feature) is one way to set rules on which nodes are selected by the scheduler. This feature is a generalization of the [nodeSelector](https://kubernetes.io/docs/user-guide/node-selection/#nodeselector) feature which has been in Kubernetes since version 1.0. The rules are defined using the familiar concepts of custom labels on nodes and selectors specified in pods, and they can be either required or preferred, depending on how strictly you want the scheduler to enforce them.
[Node Affinity/Anti-Affinity](/docs/user-guide/node-selection/#node-affinity-beta-feature) is one way to set rules on which nodes are selected by the scheduler. This feature is a generalization of the [nodeSelector](/docs/user-guide/node-selection/#nodeselector) feature which has been in Kubernetes since version 1.0. The rules are defined using the familiar concepts of custom labels on nodes and selectors specified in pods, and they can be either required or preferred, depending on how strictly you want the scheduler to enforce them.
Required rules must be met for a pod to schedule on a particular node. If no node matches the criteria (plus all of the other normal criteria, such as having enough free resources for the pods resource request), then the pod wont be scheduled. Required rules are specified in the requiredDuringSchedulingIgnoredDuringExecution field of nodeAffinity.
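In a pod spec, that looks roughly like the following sketch (the zone values are illustrative; failure-domain.beta.kubernetes.io/zone was the built-in zone label at the time):
```
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: failure-domain.beta.kubernetes.io/zone
            operator: In
            values:
            - us-central1-a        # only schedule into these zones
            - us-central1-b
```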
@ -91,7 +91,7 @@ Additional use cases for this feature are to restrict scheduling based on nodes
**Taints and Tolerations**
A related feature is “[taints and tolerations](https://kubernetes.io/docs/user-guide/node-selection/#taints-and-toleations-beta-feature),” which allows you to mark (“taint”) a node so that no pods can schedule onto it unless a pod explicitly “tolerates” the taint. Marking nodes instead of pods (as in node affinity/anti-affinity) is particularly useful for situations where most pods in the cluster should avoid scheduling onto the node. For example, you might want to mark your master node as schedulable only by Kubernetes system components, or dedicate a set of nodes to a particular group of users, or keep regular pods away from nodes that have special hardware so as to leave room for pods that need the special hardware.
A related feature is “[taints and tolerations](/docs/user-guide/node-selection/#taints-and-toleations-beta-feature),” which allows you to mark (“taint”) a node so that no pods can schedule onto it unless a pod explicitly “tolerates” the taint. Marking nodes instead of pods (as in node affinity/anti-affinity) is particularly useful for situations where most pods in the cluster should avoid scheduling onto the node. For example, you might want to mark your master node as schedulable only by Kubernetes system components, or dedicate a set of nodes to a particular group of users, or keep regular pods away from nodes that have special hardware so as to leave room for pods that need the special hardware.
The kubectl command allows you to set taints on nodes, for example:
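A representative sketch of the pattern (the key, value, and node name are made up, not the post's own example) pairs a taint applied with kubectl with a matching toleration in the pod spec:
```
# Applied with something like:
#   kubectl taint nodes node1 dedicated=experimental:NoSchedule
# Pods that should still land on node1 carry a matching toleration:
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "experimental"
  effect: "NoSchedule"
```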
@ -120,7 +120,7 @@ tolerations:
In addition to moving taints and tolerations to _beta_ in Kubernetes 1.6, we have introduced an _alpha_ feature that uses taints and tolerations to allow you to customize how long a pod stays bound to a node when the node experiences a problem like a network partition instead of using the default five minutes. See [this section](https://kubernetes.io/docs/user-guide/node-selection/#per-pod-configurable-eviction-behavior-when-there-are-node-problems-alpha-feature) of the documentation for more details.
In addition to moving taints and tolerations to _beta_ in Kubernetes 1.6, we have introduced an _alpha_ feature that uses taints and tolerations to allow you to customize how long a pod stays bound to a node when the node experiences a problem like a network partition instead of using the default five minutes. See [this section](/docs/user-guide/node-selection/#per-pod-configurable-eviction-behavior-when-there-are-node-problems-alpha-feature) of the documentation for more details.
@ -128,7 +128,7 @@ In addition to moving taints and tolerations to _beta_ in Kubernetes 1.6, we hav
Node affinity/anti-affinity allows you to constrain which nodes a pod can run on based on the nodes labels. But what if you want to specify rules about how pods should be placed relative to one another, for example to spread or pack pods within a service or relative to pods in other services? For that you can use [pod affinity/anti-affinity](https://kubernetes.io/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature), which is also _beta_ in Kubernetes 1.6.
Node affinity/anti-affinity allows you to constrain which nodes a pod can run on based on the nodes labels. But what if you want to specify rules about how pods should be placed relative to one another, for example to spread or pack pods within a service or relative to pods in other services? For that you can use [pod affinity/anti-affinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature), which is also _beta_ in Kubernetes 1.6.
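For instance, a soft preference to land near Pods of a cache service in the same zone can be sketched like this (the label and topology key are assumptions, not the post's example):
```
spec:
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - redis-cache              # illustrative label on the cache Pods
          topologyKey: failure-domain.beta.kubernetes.io/zone
```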
@ -163,7 +163,7 @@ Pod affinity/anti-affinity is very flexible. Imagine you have profiled the perfo
**Custom Schedulers**
If the Kubernetes schedulers various features dont give you enough control over the scheduling of your workloads, you can delegate responsibility for scheduling arbitrary subsets of pods to your own custom scheduler(s) that run(s) alongside, or instead of, the default Kubernetes scheduler. [Multiple schedulers](https://kubernetes.io/docs/admin/multiple-schedulers/) is _beta_ in Kubernetes 1.6.
If the Kubernetes schedulers various features dont give you enough control over the scheduling of your workloads, you can delegate responsibility for scheduling arbitrary subsets of pods to your own custom scheduler(s) that run(s) alongside, or instead of, the default Kubernetes scheduler. [Multiple schedulers](/docs/admin/multiple-schedulers/) is _beta_ in Kubernetes 1.6.
Each new pod is normally scheduled by the default scheduler. But if you provide the name of your own custom scheduler, the default scheduler will ignore that Pod and allow your scheduler to schedule the Pod to a node. Lets look at an example.
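The essential step is simply naming the scheduler in the pod spec; a sketch (the scheduler and pod names are made up):
```
apiVersion: v1
kind: Pod
metadata:
  name: nginx-custom-scheduled      # illustrative name
spec:
  schedulerName: my-scheduler       # omit this field to use the default scheduler
  containers:
  - name: nginx
    image: nginx:1.21               # placeholder image
```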

View File

@ -9,9 +9,9 @@ _Editors note: this post is part of a [series of in-depth articles](https://k
Storage is a critical part of running stateful containers, and Kubernetes offers powerful primitives for managing it. Dynamic volume provisioning, a feature unique to Kubernetes, allows storage volumes to be created on-demand. Before dynamic provisioning, cluster administrators had to manually make calls to their cloud or storage provider to provision new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes. With dynamic provisioning, these two steps are automated, eliminating the need for cluster administrators to pre-provision storage. Instead, the storage resources can be dynamically provisioned using the provisioner specified by the StorageClass object (see [user-guide](https://kubernetes.io/docs/user-guide/persistent-volumes/index#storageclasses)). StorageClasses are essentially blueprints that abstract away the underlying storage provider, as well as other parameters, like disk type (e.g., solid-state vs standard disks).
Storage is a critical part of running stateful containers, and Kubernetes offers powerful primitives for managing it. Dynamic volume provisioning, a feature unique to Kubernetes, allows storage volumes to be created on-demand. Before dynamic provisioning, cluster administrators had to manually make calls to their cloud or storage provider to provision new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes. With dynamic provisioning, these two steps are automated, eliminating the need for cluster administrators to pre-provision storage. Instead, the storage resources can be dynamically provisioned using the provisioner specified by the StorageClass object (see [user-guide](/docs/user-guide/persistent-volumes/index#storageclasses)). StorageClasses are essentially blueprints that abstract away the underlying storage provider, as well as other parameters, like disk type (e.g., solid-state vs standard disks).
StorageClasses use provisioners that are specific to the storage platform or cloud provider to give Kubernetes access to the physical media being used. Several storage provisioners are provided in-tree (see [user-guide](https://kubernetes.io/docs/user-guide/persistent-volumes/index#provisioner)), but additionally out-of-tree provisioners are now supported (see [kubernetes-incubator](https://github.com/kubernetes-incubator/external-storage)).
StorageClasses use provisioners that are specific to the storage platform or cloud provider to give Kubernetes access to the physical media being used. Several storage provisioners are provided in-tree (see [user-guide](/docs/user-guide/persistent-volumes/index#provisioner)), but additionally out-of-tree provisioners are now supported (see [kubernetes-incubator](https://github.com/kubernetes-incubator/external-storage)).
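From the user's side, consuming a class is just a PersistentVolumeClaim that names it. A sketch (names and sizes are illustrative; the beta annotation shown was how pre-1.6 clusters referenced the class, with the storageClassName field arriving later):
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  annotations:
    volume.beta.kubernetes.io/storage-class: "fast"   # pre-1.6 annotation form
spec:
  # storageClassName: fast          # equivalent field form in 1.6 and later
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
```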
In the [Kubernetes 1.6 release](https://kubernetes.io/blog/2017/03/kubernetes-1.6-multi-user-multi-workloads-at-scale), **dynamic provisioning has been promoted to stable** (having entered beta in 1.4). This is a big step forward in completing the Kubernetes storage automation vision, allowing cluster administrators to control how resources are provisioned and giving users the ability to focus more on their application. With all of these benefits, **there are a few important user-facing changes (discussed below) that are important to understand before using Kubernetes 1.6**.
@ -78,7 +78,7 @@ The following table provides more detail on default storage classes pre-installe
| VMware vSphere | thin | vsphere-volume |
While these pre-installed default storage classes are chosen to be “reasonable” for most storage users, [this guide](https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class) provides instructions on how to specify your own default.
While these pre-installed default storage classes are chosen to be “reasonable” for most storage users, [this guide](/docs/tasks/administer-cluster/change-default-storage-class) provides instructions on how to specify your own default.
@ -86,13 +86,13 @@ While these pre-installed default storage classes are chosen to be “reasonable
All PVs have a reclaim policy associated with them that dictates what happens to a PV once it becomes released from a claim (see [user-guide](https://kubernetes.io/docs/user-guide/persistent-volumes/#reclaiming)). Since the goal of dynamic provisioning is to completely automate the lifecycle of storage resources, the default reclaim policy for dynamically provisioned volumes is “delete”. This means that when a PersistentVolumeClaim (PVC) is released, the dynamically provisioned volume is de-provisioned (deleted) on the storage provider and the data is likely irretrievable. If this is not the desired behavior, the user must change the reclaim policy on the corresponding PersistentVolume (PV) object after the volume is provisioned.
All PVs have a reclaim policy associated with them that dictates what happens to a PV once it becomes released from a claim (see [user-guide](/docs/user-guide/persistent-volumes/#reclaiming)). Since the goal of dynamic provisioning is to completely automate the lifecycle of storage resources, the default reclaim policy for dynamically provisioned volumes is “delete”. This means that when a PersistentVolumeClaim (PVC) is released, the dynamically provisioned volume is de-provisioned (deleted) on the storage provider and the data is likely irretrievable. If this is not the desired behavior, the user must change the reclaim policy on the corresponding PersistentVolume (PV) object after the volume is provisioned.
**How do I change the reclaim policy on a dynamically provisioned volume?**
You can change the reclaim policy by editing the PV object and changing the “persistentVolumeReclaimPolicy” field to the desired value. For more information on various reclaim policies see [user-guide](https://kubernetes.io/docs/user-guide/persistent-volumes/#reclaim-policy).
You can change the reclaim policy by editing the PV object and changing the “persistentVolumeReclaimPolicy” field to the desired value. For more information on various reclaim policies see [user-guide](/docs/user-guide/persistent-volumes/#reclaim-policy).
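Concretely, that is a one-field change on the PV object, for example via kubectl edit; the relevant fragment ends up looking like this sketch:
```
# After `kubectl edit pv <pv-name>` (or an equivalent kubectl patch), set:
spec:
  persistentVolumeReclaimPolicy: Retain   # instead of Delete, the dynamic-provisioning default
```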

View File

@ -6,60 +6,60 @@ url: /blog/2017/03/Kubernetes-1.6-Multi-User-Multi-Workloads-At-Scale
---
Today were announcing the release of Kubernetes 1.6.
In this release the communitys focus is on scale and automation, to help you deploy multiple workloads to multiple users on a cluster. We are announcing that 5,000 node clusters are supported. We moved dynamic storage provisioning to _stable_. Role-based access control ([RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)), [kubefed](https://kubernetes.io/docs/tutorials/federation/set-up-cluster-federation-kubefed/), [kubeadm](https://kubernetes.io/docs/getting-started-guides/kubeadm/), and several scheduling features are moving to _beta_. We have also added intelligent defaults throughout to enable greater automation out of the box.
In this release the communitys focus is on scale and automation, to help you deploy multiple workloads to multiple users on a cluster. We are announcing that 5,000 node clusters are supported. We moved dynamic storage provisioning to _stable_. Role-based access control ([RBAC](/docs/reference/access-authn-authz/rbac/)), [kubefed](/docs/tutorials/federation/set-up-cluster-federation-kubefed/), [kubeadm](/docs/getting-started-guides/kubeadm/), and several scheduling features are moving to _beta_. We have also added intelligent defaults throughout to enable greater automation out of the box.
**Whats New**
**Scale and Federation** : Large enterprise users looking for proof of at-scale performance will be pleased to know that Kubernetes stringent scalability [SLO](https://kubernetes.io/blog/2016/03/1000-nodes-and-beyond-updates-to-Kubernetes-performance-and-scalability-in-12) now supports 5,000 node (150,000 pod) clusters. This 150% increase in total cluster size, powered by a new version of [etcd v3](https://coreos.com/blog/etcd3-a-new-etcd.html) by CoreOS, is great news if you are deploying applications such as search or games which can grow to consume larger clusters.
For users who want to scale beyond 5,000 nodes or spread across multiple regions or clouds, [federation](https://kubernetes.io/docs/concepts/cluster-administration/federation/) lets you combine multiple Kubernetes clusters and address them through a single API endpoint. In this release, the [kubefed](https://kubernetes.io//docs/tutorials/federation/set-up-cluster-federation-kubefed) command line utility graduated to _beta_ - with improved support for on-premise clusters. kubefed now [automatically configures](https://kubernetes.io//docs/tutorials/federation/set-up-cluster-federation-kubefed.md#kube-dns-configuration) kube-dns on joining clusters and can pass arguments to federated components.
For users who want to scale beyond 5,000 nodes or spread across multiple regions or clouds, [federation](/docs/concepts/cluster-administration/federation/) lets you combine multiple Kubernetes clusters and address them through a single API endpoint. In this release, the [kubefed](https://kubernetes.io//docs/tutorials/federation/set-up-cluster-federation-kubefed) command line utility graduated to _beta_ - with improved support for on-premise clusters. kubefed now [automatically configures](https://kubernetes.io//docs/tutorials/federation/set-up-cluster-federation-kubefed.md#kube-dns-configuration) kube-dns on joining clusters and can pass arguments to federated components.
**Security and Setup** : Users concerned with security will find that [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac), now _beta_, adds a significant security benefit through more tightly scoped default roles for system components. The default RBAC policies in 1.6 grant scoped permissions to control-plane components, nodes, and controllers. RBAC allows cluster administrators to selectively grant particular users or service accounts fine-grained access to specific resources on a per-namespace basis. RBAC users upgrading from 1.5 to 1.6 should view the guidance [here](https://kubernetes.io/docs/reference/access-authn-authz/rbac#upgrading-from-1-5).&nbsp;
**Security and Setup** : Users concerned with security will find that [RBAC](/docs/reference/access-authn-authz/rbac), now _beta_, adds a significant security benefit through more tightly scoped default roles for system components. The default RBAC policies in 1.6 grant scoped permissions to control-plane components, nodes, and controllers. RBAC allows cluster administrators to selectively grant particular users or service accounts fine-grained access to specific resources on a per-namespace basis. RBAC users upgrading from 1.5 to 1.6 should view the guidance [here](/docs/reference/access-authn-authz/rbac#upgrading-from-1-5).&nbsp;
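As a sketch of what one of those per-namespace grants looks like (the namespace, names, and user are illustrative; rbac.authorization.k8s.io/v1beta1 was the API group in 1.6):
```
apiVersion: rbac.authorization.k8s.io/v1beta1   # v1 in later releases
kind: Role
metadata:
  namespace: team-a
  name: pod-reader
rules:
- apiGroups: [""]                 # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  namespace: team-a
  name: read-pods
subjects:
- kind: User
  name: jane                      # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```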
Users looking for an easy way to provision a secure cluster on physical or cloud servers can use [kubeadm](https://kubernetes.io/docs/getting-started-guides/kubeadm/), which is now _beta_. kubeadm has been enhanced with a set of command line flags and a base feature set that includes RBAC setup, use of the [Bootstrap Token system](http://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/) and an enhanced [Certificates API](https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/).
Users looking for an easy way to provision a secure cluster on physical or cloud servers can use [kubeadm](/docs/getting-started-guides/kubeadm/), which is now _beta_. kubeadm has been enhanced with a set of command line flags and a base feature set that includes RBAC setup, use of the [Bootstrap Token system](/docs/reference/access-authn-authz/bootstrap-tokens/) and an enhanced [Certificates API](/docs/tasks/tls/managing-tls-in-a-cluster/).
**Advanced Scheduling** : This release adds a set of [powerful and versatile scheduling constructs](https://kubernetes.io/docs/user-guide/node-selection/) to give you greater control over how pods are scheduled, including rules to restrict pods to particular nodes in heterogeneous clusters, and rules to spread or pack pods across failure domains such as nodes, racks, and zones.
**Advanced Scheduling** : This release adds a set of [powerful and versatile scheduling constructs](/docs/user-guide/node-selection/) to give you greater control over how pods are scheduled, including rules to restrict pods to particular nodes in heterogeneous clusters, and rules to spread or pack pods across failure domains such as nodes, racks, and zones.
[Node affinity/anti-affinity](https://kubernetes.io/docs/user-guide/node-selection/#node-affinity-beta-feature), now in _beta_, allows you to restrict pods to schedule only on certain nodes based on node labels. Use built-in or custom node labels to select specific zones, hostnames, hardware architecture, operating system version, specialized hardware, etc. The scheduling rules can be required or preferred, depending on how strictly you want the scheduler to enforce them.
[Node affinity/anti-affinity](/docs/user-guide/node-selection/#node-affinity-beta-feature), now in _beta_, allows you to restrict pods to schedule only on certain nodes based on node labels. Use built-in or custom node labels to select specific zones, hostnames, hardware architecture, operating system version, specialized hardware, etc. The scheduling rules can be required or preferred, depending on how strictly you want the scheduler to enforce them.
A related feature, called [taints and tolerations](https://kubernetes.io/docs/user-guide/node-selection/#taints-and-tolerations-beta-feature), makes it possible to compactly represent rules for excluding pods from particular nodes. The feature, also now in _beta_, makes it easy, for example, to dedicate sets of nodes to particular sets of users, or to keep nodes that have special hardware available for pods that need the special hardware by excluding pods that dont need it.
A related feature, called [taints and tolerations](/docs/user-guide/node-selection/#taints-and-tolerations-beta-feature), makes it possible to compactly represent rules for excluding pods from particular nodes. The feature, also now in _beta_, makes it easy, for example, to dedicate sets of nodes to particular sets of users, or to keep nodes that have special hardware available for pods that need the special hardware by excluding pods that dont need it.
Sometimes you want to co-schedule services, or pods within a service, near each other topologically, for example to optimize North-South or East-West communication. Or you want to spread pods of a service for failure tolerance, or keep antagonistic pods separated, or ensure sole tenancy of nodes. [Pod affinity and anti-affinity](https://kubernetes.io/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature), now in _beta_, enables such use cases by letting you set hard or soft requirements for spreading and packing pods relative to one another within arbitrary topologies (node, zone, etc.).
Sometimes you want to co-schedule services, or pods within a service, near each other topologically, for example to optimize North-South or East-West communication. Or you want to spread pods of a service for failure tolerance, or keep antagonistic pods separated, or ensure sole tenancy of nodes. [Pod affinity and anti-affinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature), now in _beta_, enables such use cases by letting you set hard or soft requirements for spreading and packing pods relative to one another within arbitrary topologies (node, zone, etc.).
Lastly, for the ultimate in scheduling flexibility, you can run your own custom scheduler(s) alongside, or instead of, the default Kubernetes scheduler. Each scheduler is responsible for different sets of pods. [Multiple schedulers](https://kubernetes.io/docs/admin/multiple-schedulers/) is _beta_ in this release.&nbsp;
Lastly, for the ultimate in scheduling flexibility, you can run your own custom scheduler(s) alongside, or instead of, the default Kubernetes scheduler. Each scheduler is responsible for different sets of pods. [Multiple schedulers](/docs/admin/multiple-schedulers/) is _beta_ in this release.&nbsp;
**Dynamic Storage Provisioning** : Users deploying stateful applications will benefit from the extensive storage automation capabilities in this release of Kubernetes.
Since its early days, Kubernetes has been able to automatically attach and detach storage, format disk, mount and unmount volumes per the pod spec, and do so seamlessly as pods move between nodes. In addition, the PersistentVolumeClaim (PVC) and PersistentVolume (PV) objects decouple the request for storage from the specific storage implementation, making the pod spec portable across a range of cloud and on-premise environments. In this release [StorageClass](https://kubernetes.io/docs/user-guide/persistent-volumes/#storageclasses) and [dynamic volume provisioning](https://kubernetes.io/docs/user-guide/persistent-volumes/#dynamic) are promoted to _stable_, completing the automation story by creating and deleting storage on demand, eliminating the need to pre-provision.
Since its early days, Kubernetes has been able to automatically attach and detach storage, format disk, mount and unmount volumes per the pod spec, and do so seamlessly as pods move between nodes. In addition, the PersistentVolumeClaim (PVC) and PersistentVolume (PV) objects decouple the request for storage from the specific storage implementation, making the pod spec portable across a range of cloud and on-premise environments. In this release [StorageClass](/docs/user-guide/persistent-volumes/#storageclasses) and [dynamic volume provisioning](/docs/user-guide/persistent-volumes/#dynamic) are promoted to _stable_, completing the automation story by creating and deleting storage on demand, eliminating the need to pre-provision.
The design allows cluster administrators to define and expose multiple flavors of storage within a cluster, each with a custom set of parameters. End users can stop worrying about the complexity and nuances of how storage is provisioned, while still selecting from multiple storage options.
In 1.6, Kubernetes comes with a set of built-in defaults to completely automate the storage provisioning lifecycle, freeing you to work on your applications. Specifically, Kubernetes now pre-installs system-defined StorageClass objects for AWS, Azure, GCP, OpenStack and VMware vSphere [by default](https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class). This gives Kubernetes users on these providers the benefits of dynamic storage provisioning without having to manually set up StorageClass objects. This is a [change in the default behavior](https://kubernetes.io/docs/user-guide/persistent-volumes/index#class-1) of PVC objects on these clouds. Note that the default behavior is that dynamically provisioned volumes are created with the “delete” [reclaim policy](https://kubernetes.io/docs/user-guide/persistent-volumes#reclaim-policy). That means once the PVC is deleted, the dynamically provisioned volume is automatically deleted so users do not have the extra step of cleaning up.
In 1.6, Kubernetes comes with a set of built-in defaults to completely automate the storage provisioning lifecycle, freeing you to work on your applications. Specifically, Kubernetes now pre-installs system-defined StorageClass objects for AWS, Azure, GCP, OpenStack and VMware vSphere [by default](/docs/tasks/administer-cluster/change-default-storage-class). This gives Kubernetes users on these providers the benefits of dynamic storage provisioning without having to manually set up StorageClass objects. This is a [change in the default behavior](/docs/user-guide/persistent-volumes/index#class-1) of PVC objects on these clouds. Note that the default behavior is that dynamically provisioned volumes are created with the “delete” [reclaim policy](/docs/user-guide/persistent-volumes#reclaim-policy). That means once the PVC is deleted, the dynamically provisioned volume is automatically deleted so users do not have the extra step of cleaning up.
In addition, we have expanded the range of storage supported overall including:
- ScaleIO Kubernetes [Volume Plugin](https://kubernetes.io/docs/user-guide/persistent-volumes/index#scaleio) enabling pods to seamlessly access and use data stored on ScaleIO volumes.
- Portworx Kubernetes [Volume Plugin](https://kubernetes.io/docs/user-guide/persistent-volumes/index#portworx-volume) adding the capability to use Portworx as a storage provider for Kubernetes clusters. Portworx pools your server capacity and turns your servers or cloud instances into converged, highly available compute and storage nodes.
- ScaleIO Kubernetes [Volume Plugin](/docs/user-guide/persistent-volumes/index#scaleio) enabling pods to seamlessly access and use data stored on ScaleIO volumes.
- Portworx Kubernetes [Volume Plugin](/docs/user-guide/persistent-volumes/index#portworx-volume) adding the capability to use Portworx as a storage provider for Kubernetes clusters. Portworx pools your server capacity and turns your servers or cloud instances into converged, highly available compute and storage nodes.
- Support for NFSv3, NFSv4, and GlusterFS on clusters using the [COS node image](https://cloud.google.com/container-engine/docs/node-image-migration)&nbsp;
- Support for user-written/run dynamic PV provisioners. A golang library and examples can be found [here](http://github.com/kubernetes-incubator/external-storage).
- _Beta_ support for [mount options](https://kubernetes.io/docs/user-guide/persistent-volumes/index.md#mount-options) in persistent volumes
- _Beta_ support for [mount options](/docs/user-guide/persistent-volumes/index.md#mount-options) in persistent volumes
**Container Runtime Interface, etcd v3 and Daemon set updates** : while users may not directly interact with the container runtime or the API server datastore, they are foundational components for user facing functionality in Kubernetes. As such the community invests in expanding the capabilities of these and other system components.
- The Docker-CRI implementation is _beta_ and is enabled by default in kubelet. _Alpha_ support for other runtimes, [cri-o](https://github.com/kubernetes-incubator/cri-o/releases/tag/v0.1), [frakti](https://github.com/kubernetes/frakti/releases/tag/v0.1), [rkt](https://github.com/coreos/rkt/issues?q=is%3Aopen+is%3Aissue+label%3Aarea%2Fcri), has also been implemented.
- The default backend storage for the API server has been [upgraded](https://kubernetes.io/docs/admin/upgrade-1-6/) to use [etcd v3](https://coreos.com/blog/etcd3-a-new-etcd.html) by default for new clusters. If you are upgrading from a 1.5 cluster, care should be taken to ensure continuity by planning a data migration window.&nbsp;
- The default backend storage for the API server has been [upgraded](/docs/admin/upgrade-1-6/) to use [etcd v3](https://coreos.com/blog/etcd3-a-new-etcd.html) by default for new clusters. If you are upgrading from a 1.5 cluster, care should be taken to ensure continuity by planning a data migration window.&nbsp;
- Node reliability is improved as Kubelet exposes an admin configurable [Node Allocatable](https://kubernetes.io//docs/admin/node-allocatable.md#node-allocatable) feature to reserve compute resources for system daemons.
- [Daemon set updates](https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set) lets you perform rolling updates on a daemon set
- [Daemon set updates](/docs/tasks/manage-daemon/update-daemon-set) lets you perform rolling updates on a daemon set
**Alpha features** : this release was mostly focused on maturing functionality, however, a few alpha features were added to support the roadmap
- [Out-of-tree cloud provider](https://kubernetes.io/docs/concepts/overview/components#cloud-controller-manager) support adds a new cloud-controller-manager binary that may be used for testing the new out-of-core cloud provider flow
- [Per-pod-eviction](https://kubernetes.io/docs/user-guide/node-selection/#per-pod-configurable-eviction-behavior-when-there-are-node-problems-alpha-feature) in case of node problems combined with tolerationSeconds, lets users tune the duration a pod stays bound to a node that is experiencing problems
- [Pod Injection Policy](https://kubernetes.io/docs/user-guide/pod-preset/) adds a new API resource PodPreset to inject information such as secrets, volumes, volume mounts, and environment variables into pods at creation time.
- [Custom metrics](https://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/#support-for-custom-metrics) support in the Horizontal Pod Autoscaler changed to use&nbsp;
- [Out-of-tree cloud provider](/docs/concepts/overview/components#cloud-controller-manager) support adds a new cloud-controller-manager binary that may be used for testing the new out-of-core cloud provider flow
- [Per-pod-eviction](/docs/user-guide/node-selection/#per-pod-configurable-eviction-behavior-when-there-are-node-problems-alpha-feature) in case of node problems combined with tolerationSeconds, lets users tune the duration a pod stays bound to a node that is experiencing problems
- [Pod Injection Policy](/docs/user-guide/pod-preset/) adds a new API resource PodPreset to inject information such as secrets, volumes, volume mounts, and environment variables into pods at creation time.
- [Custom metrics](/docs/user-guide/horizontal-pod-autoscaling/#support-for-custom-metrics) support in the Horizontal Pod Autoscaler changed to use&nbsp;
- Multiple Nvidia [GPU support](https://vishh.github.io/docs/user-guide/gpus/) is introduced with the Docker runtime only
@@ -89,7 +89,7 @@ We're continuing to see rapid adoption of Kubernetes in all sectors and sizes
- Share your Kubernetes use case story with the community [here](https://docs.google.com/a/google.com/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform).
**Availability**
Kubernetes 1.6 is available for download [here](https://github.com/kubernetes/kubernetes/releases/tag/v1.6.0) on GitHub and via [get.k8s.io](http://get.k8s.io/). To get started with Kubernetes, try one of these [interactive tutorials](http://kubernetes.io/docs/tutorials/kubernetes-basics/).&nbsp;
Kubernetes 1.6 is available for download [here](https://github.com/kubernetes/kubernetes/releases/tag/v1.6.0) on GitHub and via [get.k8s.io](http://get.k8s.io/). To get started with Kubernetes, try one of these [interactive tutorials](/docs/tutorials/kubernetes-basics/).&nbsp;
**Get Involved**

View File

@@ -69,7 +69,7 @@ We want to emphasize that the optimization work we have done during the last few
**What's next?**
People frequently ask how far we are going to go in improving Kubernetes scalability. Currently, we do not have plans to increase scalability beyond 5000-node clusters (within our SLOs) in the next few releases. If you need clusters larger than 5000 nodes, we recommend using [federation](https://kubernetes.io/docs/concepts/cluster-administration/federation/) to aggregate multiple Kubernetes clusters.
People frequently ask how far we are going to go in improving Kubernetes scalability. Currently, we do not have plans to increase scalability beyond 5000-node clusters (within our SLOs) in the next few releases. If you need clusters larger than 5000 nodes, we recommend using [federation](/docs/concepts/cluster-administration/federation/) to aggregate multiple Kubernetes clusters.
However, that doesn't mean we are going to stop working on scalability and performance. As we mentioned at the beginning of this post, our top priority is to refine our two existing SLOs and introduce new ones that will cover more parts of the system, e.g. networking. This effort has already started within the Scalability SIG. We have made significant progress on how we would like to define performance SLOs, and this work should be finished in the coming month.

View File

@@ -6,7 +6,7 @@ url: /blog/2017/04/Configuring-Private-Dns-Zones-Upstream-Nameservers-Kubernetes
---
_Editor's note: this post is part of a [series of in-depth articles](https://kubernetes.io/blog/2017/03/five-days-of-kubernetes-1.6) on what's new in Kubernetes 1.6_
Many users have existing domain name zones that they would like to integrate into their Kubernetes DNS namespace. For example, hybrid-cloud users may want to resolve their internal “.corp” domain addresses within the cluster. Other users may have a zone populated by a non-Kubernetes service discovery system (like Consul). We're pleased to announce that, in [Kubernetes 1.6](https://kubernetes.io/blog/2017/03/kubernetes-1.6-multi-user-multi-workloads-at-scale), [kube-dns](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) adds support for configurable private DNS zones (often called “stub domains”) and external upstream DNS nameservers. In this blog post, we describe how to configure and use this feature.
Many users have existing domain name zones that they would like to integrate into their Kubernetes DNS namespace. For example, hybrid-cloud users may want to resolve their internal “.corp” domain addresses within the cluster. Other users may have a zone populated by a non-Kubernetes service discovery system (like Consul). We're pleased to announce that, in [Kubernetes 1.6](https://kubernetes.io/blog/2017/03/kubernetes-1.6-multi-user-multi-workloads-at-scale), [kube-dns](/docs/concepts/services-networking/dns-pod-service/) adds support for configurable private DNS zones (often called “stub domains”) and external upstream DNS nameservers. In this blog post, we describe how to configure and use this feature.
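As a sketch of what this looks like in practice, the feature is driven by a ConfigMap in the kube-system namespace; the “acme.local” stub domain and the nameserver IPs below are placeholders:
```
# Sketch of the kube-dns ConfigMap; the stub domain and IPs are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"acme.local": ["1.2.3.4"]}
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]
```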
**Default lookup flow**

View File

@@ -98,7 +98,7 @@ spec:
In front of the Kubernetes services, load balancing the different canary versions of the service, lives a small cluster of HAProxy pods that get their haproxy.conf from the Kubernetes [ConfigMaps](https://kubernetes.io/docs/tasks/configure-pod-container/configmap/) that look something like this:
In front of the Kubernetes services, load balancing the different canary versions of the service, lives a small cluster of HAProxy pods that get their haproxy.conf from the Kubernetes [ConfigMaps](/docs/tasks/configure-pod-container/configmap/) that look something like this:

View File

@@ -13,7 +13,7 @@ The focus of this post is to highlight some of the interesting new capabilities
**RBAC vs ABAC**
Currently there are several [authorization mechanisms](https://kubernetes.io/docs/reference/access-authn-authz/authorization/) available for use with Kubernetes. Authorizers are the mechanisms that decide who is permitted to make what changes to the cluster using the Kubernetes API. This affects things like kubectl, system components, and also certain applications that run in the cluster and manipulate the state of the cluster, like Jenkins with the Kubernetes plugin, or [Helm](https://github.com/kubernetes/helm) that runs in the cluster and uses the Kubernetes API to install applications in the cluster. Out of the available authorization mechanisms, ABAC and RBAC are the mechanisms local to a Kubernetes cluster that allow configurable permissions policies.
Currently there are several [authorization mechanisms](/docs/reference/access-authn-authz/authorization/) available for use with Kubernetes. Authorizers are the mechanisms that decide who is permitted to make what changes to the cluster using the Kubernetes API. This affects things like kubectl, system components, and also certain applications that run in the cluster and manipulate the state of the cluster, like Jenkins with the Kubernetes plugin, or [Helm](https://github.com/kubernetes/helm) that runs in the cluster and uses the Kubernetes API to install applications in the cluster. Out of the available authorization mechanisms, ABAC and RBAC are the mechanisms local to a Kubernetes cluster that allow configurable permissions policies.
ABAC, Attribute Based Access Control, is a powerful concept. However, as implemented in Kubernetes, ABAC is difficult to manage and understand. It requires ssh and root filesystem access on the master VM of the cluster to make authorization policy changes. For permission changes to take effect the cluster API server must be restarted.
@@ -23,7 +23,7 @@ Based on where the Kubernetes community is focusing their development efforts, g
**Basic Concepts**
There are a few basic ideas behind RBAC that are foundational in understanding it. At its core, RBAC is a way of granting users granular access to [Kubernetes API resources](https://kubernetes.io/docs/api-reference/v1.6/).
There are a few basic ideas behind RBAC that are foundational in understanding it. At its core, RBAC is a way of granting users granular access to [Kubernetes API resources](/docs/api-reference/v1.6/).
[![](https://1.bp.blogspot.com/-v6KLs1tT_xI/WOa0anGP4sI/AAAAAAAABBo/KIgYfp8PjusuykUVTfgu9-2uKj_wXo4lwCLcB/s400/rbac1.png)](https://1.bp.blogspot.com/-v6KLs1tT_xI/WOa0anGP4sI/AAAAAAAABBo/KIgYfp8PjusuykUVTfgu9-2uKj_wXo4lwCLcB/s1600/rbac1.png)
@@ -42,11 +42,11 @@ A RoleBinding maps a Role to a user or set of users, granting that Role's permis
[![](https://1.bp.blogspot.com/-ixDe91-cnqw/WOa0auxC0mI/AAAAAAAABBs/4LxVsr6shEgTYqUapt5QPISUeuTuztVwwCEw/s640/rbac2.png)](https://1.bp.blogspot.com/-ixDe91-cnqw/WOa0auxC0mI/AAAAAAAABBs/4LxVsr6shEgTYqUapt5QPISUeuTuztVwwCEw/s1600/rbac2.png)
Additionally there are cluster roles and cluster role bindings to consider. Cluster roles and cluster role bindings function like roles and role bindings except they have wider scope. The exact differences and how cluster roles and cluster role bindings interact with roles and role bindings are covered in the [Kubernetes documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding).
Additionally there are cluster roles and cluster role bindings to consider. Cluster roles and cluster role bindings function like roles and role bindings except they have wider scope. The exact differences and how cluster roles and cluster role bindings interact with roles and role bindings are covered in the [Kubernetes documentation](/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding).
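As a minimal sketch of these two objects (the namespace, role name, and user below are hypothetical, using the rbac.authorization.k8s.io/v1beta1 API of that era), a Role and its RoleBinding might look like:
```
# Hypothetical Role granting read-only access to pods in one namespace,
# and a RoleBinding granting that Role to the user "jane".
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```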
**RBAC in Kubernetes**
RBAC is now deeply integrated into Kubernetes and used by the system components to grant the permissions necessary for them to function. [System roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings) are typically prefixed with system: so they can be easily recognized.
RBAC is now deeply integrated into Kubernetes and used by the system components to grant the permissions necessary for them to function. [System roles](/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings) are typically prefixed with system: so they can be easily recognized.
```
@@ -76,7 +76,7 @@ system:controller:certificate-controller ClusterRole.v1beta1.rbac.authorization.
The RBAC system roles have been expanded to cover the necessary permissions for running a Kubernetes cluster with RBAC only.
During the permission translation from ABAC to RBAC, some of the permissions that were enabled by default in many deployments of ABAC authorized clusters were identified as unnecessarily broad and were [scoped down](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#upgrading-from-1-5) in RBAC. The area most likely to impact workloads on a cluster is the permissions available to service accounts. With the permissive ABAC configuration, requests from a pod using the pod mounted token to authenticate to the API server have broad authorization. As a concrete example, the curl command at the end of this sequence will return a JSON formatted result when ABAC is enabled and an error when only RBAC is enabled.
During the permission translation from ABAC to RBAC, some of the permissions that were enabled by default in many deployments of ABAC authorized clusters were identified as unnecessarily broad and were [scoped down](/docs/reference/access-authn-authz/rbac/#upgrading-from-1-5) in RBAC. The area most likely to impact workloads on a cluster is the permissions available to service accounts. With the permissive ABAC configuration, requests from a pod using the pod mounted token to authenticate to the API server have broad authorization. As a concrete example, the curl command at the end of this sequence will return a JSON formatted result when ABAC is enabled and an error when only RBAC is enabled.
```
@@ -96,13 +96,13 @@ During the permission translation from ABAC to RBAC, some of the permissions tha
Any applications you run in your Kubernetes cluster that interact with the Kubernetes API have the potential to be affected by the permissions changes when transitioning from ABAC to RBAC.
To smooth the transition from ABAC to RBAC, you can create Kubernetes 1.6 clusters with both [ABAC and RBAC authorizers](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#parallel-authorizers) enabled. When both ABAC and RBAC are enabled, authorization for a resource is granted if either authorization policy grants access. However, under that configuration the most permissive authorizer is used and it will not be possible to use RBAC to fully control permissions.
To smooth the transition from ABAC to RBAC, you can create Kubernetes 1.6 clusters with both [ABAC and RBAC authorizers](/docs/reference/access-authn-authz/rbac/#parallel-authorizers) enabled. When both ABAC and RBAC are enabled, authorization for a resource is granted if either authorization policy grants access. However, under that configuration the most permissive authorizer is used and it will not be possible to use RBAC to fully control permissions.
At this point, RBAC is complete enough that ABAC support should be considered deprecated going forward. It will still remain in Kubernetes for the foreseeable future but development attention is focused on RBAC.
Two different talks at the Google Cloud Next conference touched on RBAC-related changes in Kubernetes 1.6; jump to the relevant parts [here](https://www.youtube.com/watch?v=Cd4JU7qzYbE#t=8m01s) and [here](https://www.youtube.com/watch?v=18P7cFc6nTU#t=41m06s). For more detailed information about using RBAC in Kubernetes 1.6, read the full [RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/).
Two different talks at the Google Cloud Next conference touched on RBAC-related changes in Kubernetes 1.6; jump to the relevant parts [here](https://www.youtube.com/watch?v=Cd4JU7qzYbE#t=8m01s) and [here](https://www.youtube.com/watch?v=18P7cFc6nTU#t=41m06s). For more detailed information about using RBAC in Kubernetes 1.6, read the full [RBAC documentation](/docs/reference/access-authn-authz/rbac/).
**Get Involved**

View File

@@ -7,16 +7,16 @@ url: /blog/2017/05/Kubernetes-Monitoring-Guide
_Today's post is by Jean-Mathieu Saponaro, Research & Analytics Engineer at Datadog, discussing what Kubernetes changes for monitoring, and how you can prepare to properly monitor a containerized infrastructure orchestrated by Kubernetes._
Container technologies are taking the infrastructure world by storm. While containers solve or simplify infrastructure management processes, they also introduce significant complexity in terms of orchestration. That's where Kubernetes comes to our rescue. Just like a conductor directs an orchestra, [Kubernetes](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/) oversees our ensemble of containers—starting, stopping, creating, and destroying them automatically to keep our applications humming along.
Container technologies are taking the infrastructure world by storm. While containers solve or simplify infrastructure management processes, they also introduce significant complexity in terms of orchestration. That's where Kubernetes comes to our rescue. Just like a conductor directs an orchestra, [Kubernetes](/docs/concepts/overview/what-is-kubernetes/) oversees our ensemble of containers—starting, stopping, creating, and destroying them automatically to keep our applications humming along.
Kubernetes makes managing a containerized infrastructure much easier by creating levels of abstraction such as [pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/) and [services](https://kubernetes.io/docs/concepts/services-networking/service/). We no longer have to worry about where applications are running or if they have enough resources to work properly. But that doesn't change the fact that, in order to ensure good performance, we need to monitor our applications, the containers running them, and Kubernetes itself.
Kubernetes makes managing a containerized infrastructure much easier by creating levels of abstraction such as [pods](/docs/concepts/workloads/pods/pod/) and [services](/docs/concepts/services-networking/service/). We no longer have to worry about where applications are running or if they have enough resources to work properly. But that doesn't change the fact that, in order to ensure good performance, we need to monitor our applications, the containers running them, and Kubernetes itself.
**Rethinking monitoring for the Kubernetes era**
Just as containers have completely transformed how we think about running services on virtual machines, Kubernetes has changed the way we interact with containers. The good news is that with proper monitoring, the abstraction levels inherent to Kubernetes provide a comprehensive view of your infrastructure, even if the containers and applications are constantly moving. But Kubernetes monitoring requires us to rethink and reorient our strategies, since it differs from monitoring traditional hosts such as VMs or physical machines in several ways.
**Tags and labels become essential**
With containers and their orchestration completely managed by Kubernetes, labels are now the only way we have to interact with pods and containers. That's why they are absolutely crucial for monitoring since all metrics and events will be sliced and diced using [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) across the different layers of your infrastructure. Defining your labels with a logical and easy-to-understand schema is essential so your metrics will be as useful as possible.
With containers and their orchestration completely managed by Kubernetes, labels are now the only way we have to interact with pods and containers. That's why they are absolutely crucial for monitoring since all metrics and events will be sliced and diced using [labels](/docs/concepts/overview/working-with-objects/labels/) across the different layers of your infrastructure. Defining your labels with a logical and easy-to-understand schema is essential so your metrics will be as useful as possible.
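For example, a Pod labeled with a consistent schema lets every metric it emits be sliced by application, tier, and environment; the keys, values, and image below are purely illustrative:
```
# Illustrative labeling schema only; names, labels, and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: checkout-frontend
  labels:
    app: checkout
    tier: frontend
    environment: production
    team: payments
spec:
  containers:
  - name: checkout
    image: example.com/checkout:1.4.2
```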
**There are now more components to monitor**
@@ -29,7 +29,7 @@ Kubernetes schedules applications dynamically based on scheduling policy, so you
**Be prepared for distributed clusters**
Kubernetes has the [ability](https://kubernetes.io/docs/tasks/federation/federation-service-discovery/#hybrid-cloud-capabilities) to distribute containerized applications across multiple data centers and potentially different cloud providers. That means metrics must be collected and aggregated among all these different sources.&nbsp;
Kubernetes has the [ability](/docs/tasks/federation/federation-service-discovery/#hybrid-cloud-capabilities) to distribute containerized applications across multiple data centers and potentially different cloud providers. That means metrics must be collected and aggregated among all these different sources.&nbsp;
&nbsp;

View File

@@ -10,7 +10,7 @@ _Today's guest post is by Rob Hirschfeld, co-founder of open infrastructure au
Making Kubernetes operationally strong is a widely held priority and I track many deployment efforts around the project. The [incubated Kubespray project](https://github.com/kubernetes-incubator/kubespray) is of particular interest for me because it uses the popular Ansible toolset to build robust, upgradable clusters on both cloud and physical targets. I believe using tools familiar to operators grows our community.
We're excited to see the breadth of platforms enabled by Kubespray and how well it handles a wide range of options like integrating Ceph for [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) persistence and Helm for easier application uploads. Those additions have allowed us to fully integrate the [OpenStack Helm charts](https://github.com/att-comdev/openstack-helm) ([demo video](https://www.youtube.com/watch?v=wZ0vMrdx4a4&list=PLXPBeIrpXjfjabMbwYyDULOX3kZmlxEXK&index=2)).
We're excited to see the breadth of platforms enabled by Kubespray and how well it handles a wide range of options like integrating Ceph for [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) persistence and Helm for easier application uploads. Those additions have allowed us to fully integrate the [OpenStack Helm charts](https://github.com/att-comdev/openstack-helm) ([demo video](https://www.youtube.com/watch?v=wZ0vMrdx4a4&list=PLXPBeIrpXjfjabMbwYyDULOX3kZmlxEXK&index=2)).
By working with the upstream source instead of creating different install scripts, we get the benefits of a larger community. This requires some extra development effort; however, we believe helping share operational practices makes the whole community stronger. That was also the motivation behind the [SIG-Cluster Ops](https://github.com/kubernetes/community/tree/master/sig-cluster-ops).

View File

@@ -13,7 +13,7 @@ With the adoption of microservices, however, new problems emerge due to the shee
**Kubernetes and Services**
Kubernetes supports a microservices architecture through the [Service](https://kubernetes.io/docs/concepts/services-networking/service/) construct. It allows developers to abstract away the functionality of a set of [Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/), and expose it to other developers through a well-defined API. It allows adding a name to this level of abstraction and performing rudimentary L4 load balancing. But it doesn't help with higher-level problems, such as L7 metrics, traffic splitting, rate limiting, circuit breaking, etc.
Kubernetes supports a microservices architecture through the [Service](/docs/concepts/services-networking/service/) construct. It allows developers to abstract away the functionality of a set of [Pods](/docs/concepts/workloads/pods/pod/), and expose it to other developers through a well-defined API. It allows adding a name to this level of abstraction and performing rudimentary L4 load balancing. But it doesn't help with higher-level problems, such as L7 metrics, traffic splitting, rate limiting, circuit breaking, etc.
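A minimal Service of the kind described above might look like the following sketch; the name, selector, and port are placeholders:
```
# Hypothetical Service abstracting a set of Pods behind one name and
# performing L4 load balancing across them.
apiVersion: v1
kind: Service
metadata:
  name: reviews
spec:
  selector:
    app: reviews
  ports:
  - port: 9080
    targetPort: 9080
```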
[Istio](https://istio.io/), announced last week at GlueCon 2017, addresses these problems in a fundamental way through a service mesh framework. With Istio, developers can implement the core logic for the microservices, and let the framework take care of the rest: traffic management, discovery, service identity and security, and policy enforcement. Better yet, this can also be done for existing microservices without rewriting or recompiling any of their parts. Istio uses [Envoy](https://lyft.github.io/envoy/) as its runtime proxy component and provides an [extensible intermediation layer](https://istio.io/docs/concepts/policy-and-control/mixer.html) which allows global cross-cutting policy enforcement and telemetry collection.

View File

@@ -15,34 +15,34 @@ Also, for power users, API aggregation in this release allows user-provided apis
**What's New**
Security:
- [The Network Policy API](https://kubernetes.io/docs/concepts/services-networking/network-policies/) is promoted to stable. Network policy, implemented through a network plug-in, allows users to set and enforce rules governing which pods can communicate with each other.&nbsp;
- [Node authorizer](https://kubernetes.io/docs/reference/access-authn-authz/node/) and admission control plugin are new additions that restrict kubelet's access to secrets, pods and other objects based on its node.
- [Encryption for Secrets](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/), and other resources in etcd, is now available as alpha.&nbsp;
- [Kubelet TLS bootstrapping](https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/) now supports client and server certificate rotation.
- [Audit logs](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/) stored by the API server are now more customizable and extensible with support for event filtering and webhooks. They also provide richer data for system audit.
- [The Network Policy API](/docs/concepts/services-networking/network-policies/) is promoted to stable. Network policy, implemented through a network plug-in, allows users to set and enforce rules governing which pods can communicate with each other.&nbsp;
- [Node authorizer](/docs/reference/access-authn-authz/node/) and admission control plugin are new additions that restrict kubelet's access to secrets, pods and other objects based on its node.
- [Encryption for Secrets](/docs/tasks/administer-cluster/encrypt-data/), and other resources in etcd, is now available as alpha.&nbsp;
- [Kubelet TLS bootstrapping](/docs/admin/kubelet-tls-bootstrapping/) now supports client and server certificate rotation.
- [Audit logs](/docs/tasks/debug-application-cluster/audit/) stored by the API server are now more customizable and extensible with support for event filtering and webhooks. They also provide richer data for system audit.
Stateful workloads:
- [StatefulSet Updates](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets) is a new beta feature in 1.7, allowing automated updates of stateful applications such as Kafka, Zookeeper and etcd, using a range of update strategies including rolling updates.
- StatefulSets also now support faster scaling and startup for applications that do not require ordering through [Pod Management Policy](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies). This can be a major performance improvement.&nbsp;
- [Local Storage](https://kubernetes.io/docs/concepts/storage/volumes/#local) (alpha) was one of most frequently requested features for stateful applications. Users can now access local storage volumes through the standard PVC/PV interface and via StorageClasses in StatefulSets.
- DaemonSets, which create one pod per node, already have an update feature, and in 1.7 have added smart [rollback and history](https://kubernetes.io/docs/tasks/manage-daemon/rollback-daemon-set/) capability.
- A new [StorageOS Volume plugin](https://kubernetes.io/docs/concepts/storage/volumes/#storageos) provides highly-available cluster-wide persistent volumes from local or attached node storage.
- [StatefulSet Updates](/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets) is a new beta feature in 1.7, allowing automated updates of stateful applications such as Kafka, Zookeeper and etcd, using a range of update strategies including rolling updates.
- StatefulSets also now support faster scaling and startup for applications that do not require ordering through [Pod Management Policy](/docs/concepts/workloads/controllers/statefulset/#pod-management-policies). This can be a major performance improvement.&nbsp;
- [Local Storage](/docs/concepts/storage/volumes/#local) (alpha) was one of most frequently requested features for stateful applications. Users can now access local storage volumes through the standard PVC/PV interface and via StorageClasses in StatefulSets.
- DaemonSets, which create one pod per node, already have an update feature, and in 1.7 have added smart [rollback and history](/docs/tasks/manage-daemon/rollback-daemon-set/) capability.
- A new [StorageOS Volume plugin](/docs/concepts/storage/volumes/#storageos) provides highly-available cluster-wide persistent volumes from local or attached node storage.
Extensibility:
- [API aggregation](https://kubernetes.io/docs/concepts/api-extension/apiserver-aggregation/) at runtime is the most powerful extensibility feature in this release, allowing power users to add Kubernetes-style pre-built, 3rd party or user-created APIs to their cluster.
- [API aggregation](/docs/concepts/api-extension/apiserver-aggregation/) at runtime is the most powerful extensibility feature in this release, allowing power users to add Kubernetes-style pre-built, 3rd party or user-created APIs to their cluster.
- [Container Runtime Interface](https://github.com/kubernetes/community/blob/master/contributors/devel/container-runtime-interface.md) (CRI) has been enhanced with New RPC calls to retrieve container metrics from the runtime. [Validation tests for the CRI](https://github.com/kubernetes/community/blob/master/contributors/devel/cri-validation.md) have been published and Alpha integration with [containerd](http://containerd.io/), which supports basic pod lifecycle and image management is now available. Read our previous [in-depth post introducing CRI](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes).
Additional Features:
- Alpha support for [external admission controllers](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) is introduced, providing two options for adding custom business logic to the API server for modifying objects as they are created and validating policy.&nbsp;
- [Policy-based Federated Resource Placement](https://kubernetes.io/docs/tasks/federation/set-up-placement-policies-federation/) is introduced as Alpha providing placement policies for the federated clusters, based on custom requirements such as regulation, pricing or performance.
- Alpha support for [external admission controllers](/docs/reference/access-authn-authz/extensible-admission-controllers/) is introduced, providing two options for adding custom business logic to the API server for modifying objects as they are created and validating policy.&nbsp;
- [Policy-based Federated Resource Placement](/docs/tasks/federation/set-up-placement-policies-federation/) is introduced as Alpha providing placement policies for the federated clusters, based on custom requirements such as regulation, pricing or performance.
Deprecation:&nbsp;
- Third Party Resource (TPR) has been replaced with Custom Resource Definitions (CRD) which provides a cleaner API, and resolves issues and corner cases that were raised during the beta period of TPR. If you use the TPR beta feature, you are encouraged to [migrate](https://kubernetes.io/docs/tasks/access-kubernetes-api/migrate-third-party-resource/), as it is slated for removal by the community in Kubernetes 1.8.
- Third Party Resource (TPR) has been replaced with Custom Resource Definitions (CRD) which provides a cleaner API, and resolves issues and corner cases that were raised during the beta period of TPR. If you use the TPR beta feature, you are encouraged to [migrate](/docs/tasks/access-kubernetes-api/migrate-third-party-resource/), as it is slated for removal by the community in Kubernetes 1.8.
The above are a subset of the feature highlights in Kubernetes 1.7. For a complete list please visit the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#v170).
@@ -60,7 +60,7 @@ Kubernetes adoption has been coming from every sector across the world. Recent u
Huge kudos and thanks go out to the Kubernetes 1.7 [release team](https://github.com/kubernetes/features/blob/master/release-1.7/release_team.md), led by Dawn Chen of Google.&nbsp;
**Availability**
Kubernetes 1.7 is available for [download on GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.7.0). To get started with Kubernetes, try one of these [interactive tutorials](http://kubernetes.io/docs/tutorials/kubernetes-basics/).&nbsp;
Kubernetes 1.7 is available for [download on GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.7.0). To get started with Kubernetes, try one of these [interactive tutorials](/docs/tutorials/kubernetes-basics/).&nbsp;
**Get Involved**
Join the community at [CloudNativeCon + KubeCon](http://events.linuxfoundation.org/events/cloudnativecon-and-kubecon-north-america) in Austin Dec. 6-8 for the largest Kubernetes gathering ever. [Speaking submissions](http://events.linuxfoundation.org/events/cloudnativecon-and-kubecon-north-america/program/cfp) are open till August 21 and [discounted registration](https://www.regonline.com/registration/Checkin.aspx?EventID=1903774&_ga=2.224109086.464556664.1498490094-1623727562.1496428006) ends October 6.

View File

@@ -4,7 +4,7 @@ date: 2017-07-28
slug: happy-second-birthday-kubernetes
url: /blog/2017/07/Happy-Second-Birthday-Kubernetes
---
As we do every July, we're excited to celebrate Kubernetes' 2nd birthday! In the two years since GA 1.0 launched as an open source project, [Kubernetes](http://kubernetes.io/docs/whatisk8s/) (abbreviated as K8s) has grown to become the highest velocity cloud-related project. With more than 2,611 diverse contributors, from independents to leading global companies, the project has had 50,685 commits in the last 12 months. Of the 54 million projects on GitHub, Kubernetes is in the top 5 for number of unique developers contributing code. It also has [more pull requests and issue comments](https://www.cncf.io/blog/2017/02/27/measuring-popularity-kubernetes-using-bigquery/) than any other project on GitHub. &nbsp;
As we do every July, we're excited to celebrate Kubernetes' 2nd birthday! In the two years since GA 1.0 launched as an open source project, [Kubernetes](/docs/whatisk8s/) (abbreviated as K8s) has grown to become the highest velocity cloud-related project. With more than 2,611 diverse contributors, from independents to leading global companies, the project has had 50,685 commits in the last 12 months. Of the 54 million projects on GitHub, Kubernetes is in the top 5 for number of unique developers contributing code. It also has [more pull requests and issue comments](https://www.cncf.io/blog/2017/02/27/measuring-popularity-kubernetes-using-bigquery/) than any other project on GitHub. &nbsp;
![Screen Shot 2017-07-18 at 9.39.42 AM.png](https://lh3.googleusercontent.com/ldb4PfuqammWmcPiFpMa48ALxD0kGrSre0WGMpuXKqAqnKhyWEmIcJXnQcAK2sdVCiE5cvw0H2FXtLt_dVihAk4b-XTA2HIQba3A0irnRaIHup4bhFUwPLSSFmw3zFk9ZOt61TKc)

View File

@@ -47,7 +47,7 @@ Our clusters consist of one or more physical or virtual machines, also known as
A user makes a request to Kubernetes to deploy the containers, specifying the number of replicas required for high availability. The Kubernetes scheduler decides where the [pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/) (groups of one or more containers) will be scheduled and which worker nodes they will be deployed on, storing this information internally in Kubernetes and [etcd](https://github.com/coreos/etcd#etcd). The deployment of pods in worker nodes is updated based on load at runtime, optimizing the placement of pods in the cluster.
A user makes a request to Kubernetes to deploy the containers, specifying the number of replicas required for high availability. The Kubernetes scheduler decides where the [pods](/docs/concepts/workloads/pods/pod/) (groups of one or more containers) will be scheduled and which worker nodes they will be deployed on, storing this information internally in Kubernetes and [etcd](https://github.com/coreos/etcd#etcd). The deployment of pods in worker nodes is updated based on load at runtime, optimizing the placement of pods in the cluster.
@@ -111,7 +111,7 @@ Deploying the application in IBM Cloud Kubernetes Service:
Provision a cluster in IBM Cloud Kubernetes Service with \<x\> worker nodes. Create Kubernetes controllers for deploying the containers in worker nodes; the IBM Cloud Kubernetes Service infrastructure pulls the Docker images from IBM Cloud Container Registry to create containers. We tried deploying an application container and running a logmet agent (see Reading and displaying logs using logmet container, below) inside the containers that forwards the application logs to an IBM Cloud logging service. As part of the process, YAML files are used to create a controller resource for the UrbanCode Deploy (UCD). The UCD agent is deployed as a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) controller, which is used to connect to the UCD server. The whole process of application deployment happens in UCD. To support the application for public access, we created a service resource to interact between pods and access container services. For storage support, we created persistent volume claims and mounted the volume for the containers.
Provision a cluster in IBM Cloud Kubernetes Service with \<x\> worker nodes. Create Kubernetes controllers for deploying the containers in worker nodes; the IBM Cloud Kubernetes Service infrastructure pulls the Docker images from IBM Cloud Container Registry to create containers. We tried deploying an application container and running a logmet agent (see Reading and displaying logs using logmet container, below) inside the containers that forwards the application logs to an IBM Cloud logging service. As part of the process, YAML files are used to create a controller resource for the UrbanCode Deploy (UCD). The UCD agent is deployed as a [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) controller, which is used to connect to the UCD server. The whole process of application deployment happens in UCD. To support the application for public access, we created a service resource to interact between pods and access container services. For storage support, we created persistent volume claims and mounted the volume for the containers.
@@ -138,7 +138,7 @@ Exposing services with Ingress:
[Ingress controllers](https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers) are reverse proxies that expose services outside the cluster through URLs. They act as an external HTTP load balancer that uses a unique public entry point to route requests to the application.
[Ingress controllers](/docs/concepts/services-networking/ingress/#ingress-controllers) are reverse proxies that expose services outside the cluster through URLs. They act as an external HTTP load balancer that uses a unique public entry point to route requests to the application.
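For illustration, an Ingress resource of that era routes a public host to a backing Service roughly as sketched below; the host and service names are placeholders:
```
# Hypothetical Ingress routing external HTTP traffic to a cluster Service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web-service
          servicePort: 80
```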

View File

@@ -15,7 +15,7 @@ When it comes to networking, however, EC2 has some limits that hinder performanc
## Traditional VPC Networking Performance Roadblocks
A Kubernetes pod network is separate from an Amazon Virtual Private Cloud (VPC) instance network; consequently, off-instance pod traffic needs a route to the destination pods. Fortunately, VPCs support setting these routes. When building a cluster network with the [kubenet](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#kubenet) plugin, whenever new nodes are added, the AWS cloud provider will automatically add a VPC route to the pods running on that node.
A Kubernetes pod network is separate from an Amazon Virtual Private Cloud (VPC) instance network; consequently, off-instance pod traffic needs a route to the destination pods. Fortunately, VPCs support setting these routes. When building a cluster network with the [kubenet](/docs/concepts/cluster-administration/network-plugins/#kubenet) plugin, whenever new nodes are added, the AWS cloud provider will automatically add a VPC route to the pods running on that node.
Using kubenet to set routes provides native VPC network performance and visibility. However, since kubenet does not support more advanced network functions like network policy for pod traffic isolation, many users choose to run a Container Network Interface (CNI) provider on the back end.

View File

@@ -12,7 +12,7 @@ In this post, I discuss some of the challenges of running HPC workloads with Kub
## HPC workloads unique challenges
In Kubernetes, the base unit of scheduling is a Pod: one or more Docker containers scheduled to a cluster host. Kubernetes assumes that workloads are containers. While Kubernetes has the notion of [Cron Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/) and [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/) that run to completion, applications deployed on Kubernetes are typically long-running services, like web servers, load balancers or data stores and while they are highly dynamic with pods coming and going, they differ greatly from HPC application patterns.
In Kubernetes, the base unit of scheduling is a Pod: one or more Docker containers scheduled to a cluster host. Kubernetes assumes that workloads are containers. While Kubernetes has the notion of [Cron Jobs](/docs/concepts/workloads/controllers/cron-jobs/) and [Jobs](/docs/concepts/workloads/controllers/jobs-run-to-completion/) that run to completion, applications deployed on Kubernetes are typically long-running services, like web servers, load balancers or data stores and while they are highly dynamic with pods coming and going, they differ greatly from HPC application patterns.
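For contrast with long-running services, a minimal run-to-completion Job looks roughly like the sketch below; the image and command follow the stock pi-calculation example and are shown only as an illustration:
```
# A run-to-completion Job: Kubernetes retries the Pod until it succeeds once.
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```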
Traditional HPC applications often exhibit different characteristics:
@@ -44,7 +44,7 @@ For sites running traditional HPC workloads, another approach is to use existing
- Use native job scheduling features in Kubernetes
Sites less invested in existing HPC applications can use existing scheduling facilities in Kubernetes for [jobs that run to completion](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/). While this is an option, it may be impractical for many HPC users. HPC applications are often either optimized towards massive throughput or large scale parallelism. In both cases startup and teardown latencies have a discriminating impact. Latencies that appear to be acceptable for containerized microservices today would render such applications unable to scale to the required levels.
Sites less invested in existing HPC applications can use existing scheduling facilities in Kubernetes for [jobs that run to completion](/docs/concepts/workloads/controllers/jobs-run-to-completion/). While this is an option, it may be impractical for many HPC users. HPC applications are often either optimized towards massive throughput or large scale parallelism. In both cases startup and teardown latencies have a discriminating impact. Latencies that appear to be acceptable for containerized microservices today would render such applications unable to scale to the required levels.
All of these solutions involve tradeoffs. The first option doesnt allow resources to be shared (increasing costs) and the second and third options require customers to pick a single scheduler, constraining future flexibility.

View File

@@ -12,10 +12,10 @@ We're pleased to announce the delivery of Kubernetes 1.8, our third release th
## Spotlight on security
Kubernetes 1.8 graduates support for [role based access control](https://en.wikipedia.org/wiki/Role-based_access_control) (RBAC) to stable. RBAC allows cluster administrators to [dynamically define roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) to enforce access policies through the Kubernetes API. Beta support for filtering outbound traffic through [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) augments existing support for filtering inbound traffic to a pod. RBAC and Network Policies are two powerful tools for enforcing organizational and regulatory security requirements within Kubernetes.
Kubernetes 1.8 graduates support for [role based access control](https://en.wikipedia.org/wiki/Role-based_access_control) (RBAC) to stable. RBAC allows cluster administrators to [dynamically define roles](/docs/reference/access-authn-authz/rbac/) to enforce access policies through the Kubernetes API. Beta support for filtering outbound traffic through [network policies](/docs/concepts/services-networking/network-policies/) augments existing support for filtering inbound traffic to a pod. RBAC and Network Policies are two powerful tools for enforcing organizational and regulatory security requirements within Kubernetes.
Transport Layer Security (TLS) [certificate rotation](https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/) for the Kubelet graduates to beta. Automatic certificate rotation eases secure cluster operation.
Transport Layer Security (TLS) [certificate rotation](/docs/admin/kubelet-tls-bootstrapping/) for the Kubelet graduates to beta. Automatic certificate rotation eases secure cluster operation.
## Spotlight on workload support
@@ -24,9 +24,9 @@ Kubernetes 1.8 promotes the core Workload APIs to beta with the apps/v1beta2 gro
For those considering running Big Data workloads on Kubernetes, the Workloads API now enables [native Kubernetes support](https://apache-spark-on-k8s.github.io/userdocs/) in Apache Spark.
Batch workloads, such as nightly ETL jobs, will benefit from the graduation of [CronJobs](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/) to beta.
Batch workloads, such as nightly ETL jobs, will benefit from the graduation of [CronJobs](/docs/concepts/workloads/controllers/cron-jobs/) to beta.
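A nightly ETL job of the kind mentioned here could be expressed roughly as follows; the schedule, names, and image are placeholders:
```
# Hypothetical CronJob using the batch/v1beta1 API that graduated in 1.8.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-etl
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: etl
            image: example.com/etl:1.0
```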
[Custom Resource Definitions](https://kubernetes.io/docs/concepts/api-extension/custom-resources/) (CRDs) remain in beta for Kubernetes 1.8. A CRD provides a powerful mechanism to extend Kubernetes with user-defined API objects. One use case for CRDs is the automation of complex stateful applications such as [key-value stores](https://github.com/coreos/etcd-operator), databases and [storage engines](https://rook.io/) through the Operator Pattern. Expect continued enhancements to CRDs such as [validation](https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation) as stabilization continues.
[Custom Resource Definitions](/docs/concepts/api-extension/custom-resources/) (CRDs) remain in beta for Kubernetes 1.8. A CRD provides a powerful mechanism to extend Kubernetes with user-defined API objects. One use case for CRDs is the automation of complex stateful applications such as [key-value stores](https://github.com/coreos/etcd-operator), databases and [storage engines](https://rook.io/) through the Operator Pattern. Expect continued enhancements to CRDs such as [validation](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation) as stabilization continues.
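As a rough sketch, a CRD registers a new API type like this; the group and kind below are made up for illustration:
```
# Hypothetical CustomResourceDefinition adding a namespaced CronTab resource.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
```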
## Spoilers ahead
@@ -40,7 +40,7 @@ Each Special Interest Group (SIG) in the community continues to deliver the most
#### Availability
Kubernetes 1.8 is available for [download on GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.8.0). To get started with Kubernetes, check out these [interactive tutorials](https://kubernetes.io/docs/tutorials/kubernetes-basics/).
Kubernetes 1.8 is available for [download on GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.8.0). To get started with Kubernetes, check out these [interactive tutorials](/docs/tutorials/kubernetes-basics/).
## Release team

View File

@@ -8,15 +8,15 @@ url: /blog/2017/09/Kubernetes-Statefulsets-Daemonsets
Editor's note: today's post is by Janet Kuo and Kenneth Owens, Software Engineers at Google.
This post talks about recent updates to the [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) and [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) API objects for Kubernetes. We explore these features using [Apache ZooKeeper](https://zookeeper.apache.org/) and [Apache Kafka](https://kafka.apache.org/) StatefulSets and a [Prometheus node exporter](https://github.com/prometheus/node_exporter) DaemonSet.
This post talks about recent updates to the [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) and [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) API objects for Kubernetes. We explore these features using [Apache ZooKeeper](https://zookeeper.apache.org/) and [Apache Kafka](https://kafka.apache.org/) StatefulSets and a [Prometheus node exporter](https://github.com/prometheus/node_exporter) DaemonSet.
In Kubernetes 1.6, we added the [RollingUpdate](https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/) update strategy to the DaemonSet API Object. Configuring your DaemonSets with the RollingUpdate strategy causes the DaemonSet controller to perform automated rolling updates to the Pods in your DaemonSets when their spec.template are updated.
In Kubernetes 1.6, we added the [RollingUpdate](/docs/tasks/manage-daemon/update-daemon-set/) update strategy to the DaemonSet API Object. Configuring your DaemonSets with the RollingUpdate strategy causes the DaemonSet controller to perform automated rolling updates to the Pods in your DaemonSets when their spec.template are updated.
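Enabling the strategy is a one-stanza change in the DaemonSet spec. The sketch below uses a placeholder node-exporter image and the apps/v1beta2 API group of this period (earlier releases used extensions/v1beta1); it is an illustration, not the manifest from the post:
```
# Minimal DaemonSet with the RollingUpdate strategy; the image tag is illustrative.
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  name: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
      - name: node-exporter
        image: prom/node-exporter:v0.15.0
```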
In Kubernetes 1.7, we enhanced the DaemonSet controller to track a history of revisions to the PodTemplateSpecs of DaemonSets. This allows the DaemonSet controller to roll back an update. We also added the [RollingUpdate](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies) strategy to the [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) API Object, and implemented revision history tracking for the StatefulSet controller. Additionally, we added the [Parallel](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#parallel-pod-management) pod management policy to support stateful applications that require Pods with unique identities but not ordered Pod creation and termination.
In Kubernetes 1.7, we enhanced the DaemonSet controller to track a history of revisions to the PodTemplateSpecs of DaemonSets. This allows the DaemonSet controller to roll back an update. We also added the [RollingUpdate](/docs/concepts/workloads/controllers/statefulset/#update-strategies) strategy to the [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) API Object, and implemented revision history tracking for the StatefulSet controller. Additionally, we added the [Parallel](/docs/concepts/workloads/controllers/statefulset/#parallel-pod-management) pod management policy to support stateful applications that require Pods with unique identities but not ordered Pod creation and termination.
# StatefulSet rolling update and Pod management policy
@@ -59,7 +59,7 @@ The manifest creates an ensemble of three ZooKeeper servers using a StatefulSet,
If you use kubectl get to watch Pod creation in another terminal you will see that, in contrast to the [OrderedReady](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#orderedready-pod-management) strategy (the default policy that implements the full version of the StatefulSet guarantees), all of the Pods in the zk StatefulSet are created in parallel.
If you use kubectl get to watch Pod creation in another terminal you will see that, in contrast to the [OrderedReady](/docs/concepts/workloads/controllers/statefulset/#orderedready-pod-management) strategy (the default policy that implements the full version of the StatefulSet guarantees), all of the Pods in the zk StatefulSet are created in parallel.
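The policy is set with a single field on the StatefulSet. A trimmed sketch follows; the names and image are placeholders rather than the ZooKeeper manifest used in the post:
```
# Hypothetical StatefulSet fragment: Parallel pod management plus the
# RollingUpdate strategy added for StatefulSets in 1.7.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk-headless
  replicas: 3
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: zk
    spec:
      containers:
      - name: zookeeper
        image: example.com/zookeeper:3.4.10
```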
@@ -627,7 +627,7 @@ By design, the StatefulSet controller does not delete any persistent volume clai
# DaemonSet rolling update, history, and rollback
In this section, we're going to show you how to perform a rolling update on a DaemonSet, look at its history, and then perform a rollback after a bad rollout. We will use a DaemonSet to deploy a [Prometheus node exporter](https://github.com/prometheus/node_exporter) on each Kubernetes node in the cluster. These node exporters export node metrics to the Prometheus monitoring system. For the sake of simplicity, we've omitted the installation of the [Prometheus server](https://github.com/prometheus/prometheus) and the service for [communication with DaemonSet pods](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#communicating-with-daemon-pods) from this blogpost.
In this section, we're going to show you how to perform a rolling update on a DaemonSet, look at its history, and then perform a rollback after a bad rollout. We will use a DaemonSet to deploy a [Prometheus node exporter](https://github.com/prometheus/node_exporter) on each Kubernetes node in the cluster. These node exporters export node metrics to the Prometheus monitoring system. For the sake of simplicity, we've omitted the installation of the [Prometheus server](https://github.com/prometheus/prometheus) and the service for [communication with DaemonSet pods](/docs/concepts/workloads/controllers/daemonset/#communicating-with-daemon-pods) from this blogpost.
## Prerequisites

View File

@@ -13,7 +13,7 @@ _**Editor's note: today's post is by Jason Messer, Principal PM Manager at Micro
## Tightly-Coupled Communication
These improvements enable tightly-coupled communication between multiple Windows Server containers (without Hyper-V isolation) within a single "[Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/)". Think of Pods as the scheduling unit for the Kubernetes cluster, inside of which one or more application containers are co-located and able to share storage and networking resources. All containers within a Pod share the same IP address and port range and are able to communicate with each other using localhost. This enables applications to easily leverage "helper" programs for tasks such as monitoring, configuration updates, log management, and proxies. Another way to think of a Pod is as a compute host with the app containers representing processes.
These improvements enable tightly-coupled communication between multiple Windows Server containers (without Hyper-V isolation) within a single "[Pod](/docs/concepts/workloads/pods/pod/)". Think of Pods as the scheduling unit for the Kubernetes cluster, inside of which one or more application containers are co-located and able to share storage and networking resources. All containers within a Pod share the same IP address and port range and are able to communicate with each other using localhost. This enables applications to easily leverage "helper" programs for tasks such as monitoring, configuration updates, log management, and proxies. Another way to think of a Pod is as a compute host with the app containers representing processes.
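The pattern described here is the familiar multi-container Pod; a generic sketch (images are placeholders) shows an app container and a logging helper sharing localhost:
```
# Illustrative two-container Pod; both containers share the Pod's IP and ports.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-helper
spec:
  containers:
  - name: web
    image: example.com/web:1.0
    ports:
    - containerPort: 80
  - name: log-forwarder
    image: example.com/log-forwarder:1.0
```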
## Simplified Network Topology
We also simplified the network topology on Windows nodes in a Kubernetes cluster by reducing the number of endpoints required per container (or more generally, per pod) to one. Previously, Windows containers (pods) running in a Kubernetes cluster required two endpoints - one for external (internet) communication and a second for intra-cluster communication between other nodes or pods in the cluster. This was due to the fact that external communication from containers attached to a host network with local scope (i.e. not publicly routable) required a NAT operation which could only be provided through the Windows NAT (WinNAT) component on the host. Intra-cluster communication required containers to be attached to a separate network with "global" (cluster-level) scope through a second endpoint. Recent platform improvements now enable NAT'ing to occur directly on a container endpoint which is implemented with the Microsoft Virtual Filtering Platform (VFP) Hyper-V switch extension. Now, both external and intra-cluster traffic can flow through a single endpoint.

View File

@ -8,7 +8,7 @@ _**Editor's note: this post is part of a [series of in-depth articles](https://k
Kubernetes now offers functionality to enforce rules about which pods can communicate with each other using [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/). This feature became stable in Kubernetes 1.7 and is ready to use with supported networking plugins. The Kubernetes 1.8 release has added better capabilities to this feature.
Kubernetes now offers functionality to enforce rules about which pods can communicate with each other using [network policies](/docs/concepts/services-networking/network-policies/). This feature became stable in Kubernetes 1.7 and is ready to use with supported networking plugins. The Kubernetes 1.8 release has added better capabilities to this feature.
## Network policy: What does it mean?
@ -19,7 +19,7 @@ If youre running multiple applications in a Kubernetes cluster or sharing a c
## How do I add Network Policy to my cluster?
Network Policies are implemented by networking plugins. These plugins typically install an overlay network in your cluster to enforce the Network Policies configured. A number of networking plugins, including [Calico](https://kubernetes.io/docs/tasks/configure-pod-container/calico-network-policy/), [Romana](https://kubernetes.io/docs/tasks/configure-pod-container/romana-network-policy/) and [Weave Net](https://kubernetes.io/docs/tasks/configure-pod-container/weave-network-policy/), support using Network Policies.
Network Policies are implemented by networking plugins. These plugins typically install an overlay network in your cluster to enforce the Network Policies configured. A number of networking plugins, including [Calico](/docs/tasks/configure-pod-container/calico-network-policy/), [Romana](/docs/tasks/configure-pod-container/romana-network-policy/) and [Weave Net](/docs/tasks/configure-pod-container/weave-network-policy/), support using Network Policies.
Google Container Engine (GKE) also provides beta support for [Network Policies](https://cloud.google.com/container-engine/docs/network-policy) using the Calico networking plugin when you create clusters with the following command:
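The exact command is elided by the diff at this point; with the gcloud CLI it looks roughly like the following sketch, in which the cluster name and zone are placeholders:

```
gcloud beta container clusters create my-cluster \
    --zone us-central1-a \
    --enable-network-policy
```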
@ -71,7 +71,7 @@ spec:
```
Once you apply this configuration, only pods with label **app: foo** can talk to the pods with the label **app: nginx**. For a more detailed tutorial, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/).
Once you apply this configuration, only pods with label **app: foo** can talk to the pods with the label **app: nginx**. For a more detailed tutorial, see the [Kubernetes documentation](/docs/tasks/administer-cluster/declare-network-policy/).
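The manifest itself is elided by the hunk above; a minimal sketch of a policy with the described effect (the policy name is illustrative) would look something like:

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-foo-to-nginx   # illustrative name
spec:
  podSelector:
    matchLabels:
      app: nginx             # the pods being protected
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: foo           # the only pods allowed to connect
```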
## Example: restricting traffic between all pods by default
@ -111,7 +111,7 @@ In addition to the previous examples, you can make the Network Policy API enforc
## Learn more
- Read more: [Networking Policy documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
- Read more: [Networking Policy documentation](/docs/concepts/services-networking/network-policies/)
- Read more: [Unofficial Network Policy Guide](https://ahmet.im/blog/kubernetes-network-policy/)
- Hands-on: [Declare a Network Policy](https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/)
- Hands-on: [Declare a Network Policy](/docs/tasks/administer-cluster/declare-network-policy/)
- Try: [Network Policy Recipes](https://github.com/ahmetb/kubernetes-networkpolicy-tutorial)

View File

@ -6,7 +6,7 @@ url: /blog/2017/10/Using-Rbac-Generally-Available-18
---
**_Editor's note: this post is part of a [series of in-depth articles](https://kubernetes.io/blog/2017/10/five-days-of-kubernetes-18) on what's new in Kubernetes 1.8. Todays post comes from Eric Chiang, software engineer, CoreOS, and SIG-Auth co-lead._**
Kubernetes 1.8 represents a significant milestone for the [role-based access control (RBAC) authorizer](https://kubernetes.io/docs/reference/access-authn-authz/rbac/), which was promoted to GA in this release. RBAC is a mechanism for controlling access to the Kubernetes API, and since its [beta in 1.6](https://kubernetes.io/blog/2017/04/rbac-support-in-kubernetes), many Kubernetes clusters and provisioning strategies have enabled it by default.
Kubernetes 1.8 represents a significant milestone for the [role-based access control (RBAC) authorizer](/docs/reference/access-authn-authz/rbac/), which was promoted to GA in this release. RBAC is a mechanism for controlling access to the Kubernetes API, and since its [beta in 1.6](https://kubernetes.io/blog/2017/04/rbac-support-in-kubernetes), many Kubernetes clusters and provisioning strategies have enabled it by default.
Going forward, we expect to see RBAC become a fundamental building block for securing Kubernetes clusters. This post explores using RBAC to manage user and application access to the Kubernetes API.
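As a minimal illustration of the kind of objects the post goes on to discuss — every name below is made up for the sketch — read-only access to Pods in a namespace is granted by pairing a Role with a RoleBinding:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader                  # illustrative name
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods                   # illustrative name
subjects:
- kind: User
  name: jane                        # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```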

View File

@ -107,7 +107,7 @@ We are focused on stability and usability improvements as our next steps.
- Usability:
- Improve the user experience of [_crictl_](https://github.com/kubernetes-incubator/cri-tools/blob/master/docs/crictl.md). Crictl is a portable command line tool for all CRI container runtimes. The goal here is to make it easy to use for debugging and development scenarios.
- Integrate cri-containerd with [_kube-up.sh_](https://kubernetes.io/docs/getting-started-guides/gce/), to help users bring up a production quality Kubernetes cluster using cri-containerd and containerd.
- Integrate cri-containerd with [_kube-up.sh_](/docs/getting-started-guides/gce/), to help users bring up a production quality Kubernetes cluster using cri-containerd and containerd.
- Improve our documentation for users and admins alike.
We plan to release our v1.0.0-beta.0 by the end of 2017.

View File

@ -22,14 +22,14 @@ Worse, these deployments are so tied to the clusters they have been deployed to
To address these concerns, we're announcing the creation of the Kubeflow project, a new open source GitHub repo dedicated to making the use of ML stacks on Kubernetes easy, fast and extensible. This repository contains:
- JupyterHub to create & manage interactive Jupyter notebooks
- A Tensorflow [Custom Resource](https://kubernetes.io/docs/concepts/api-extension/custom-resources/) (CRD) that can be configured to use CPUs or GPUs, and adjusted to the size of a cluster with a single setting
- A Tensorflow [Custom Resource](/docs/concepts/api-extension/custom-resources/) (CRD) that can be configured to use CPUs or GPUs, and adjusted to the size of a cluster with a single setting
- A TF Serving container
Because this solution relies on Kubernetes, it runs wherever Kubernetes runs. Just spin up a cluster and go!
## Using Kubeflow
Let's suppose you are working with two different Kubernetes clusters: a local [minikube](https://github.com/kubernetes/minikube) cluster and a [GKE cluster with GPUs](https://docs.google.com/forms/d/1JNnoUe1_3xZvAogAi16DwH6AjF2eu08ggED24OGO7Xc/viewform?edit_requested=true), and that you have two [kubectl contexts](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#define-clusters-users-and-contexts) defined, named minikube and gke.
Let's suppose you are working with two different Kubernetes clusters: a local [minikube](https://github.com/kubernetes/minikube) cluster and a [GKE cluster with GPUs](https://docs.google.com/forms/d/1JNnoUe1_3xZvAogAi16DwH6AjF2eu08ggED24OGO7Xc/viewform?edit_requested=true), and that you have two [kubectl contexts](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#define-clusters-users-and-contexts) defined, named minikube and gke.
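Switching between the two clusters is then just a matter of switching contexts, for example:

```
# see which contexts are defined and which one is active
kubectl config get-contexts

# point kubectl at the local cluster, then at the GPU-backed one
kubectl config use-context minikube
kubectl config use-context gke
```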

View File

@ -11,13 +11,13 @@ Today's release continues the evolution of an increasingly rich feature set, m
## Workloads API GA
We're excited to announce General Availability (GA) of the [apps/v1 Workloads API](https://kubernetes.io/docs/reference/workloads-18-19/), which is now enabled by default. The Apps Workloads API groups the DaemonSet, Deployment, ReplicaSet, and StatefulSet APIs together to form the foundation for long-running stateless and stateful workloads in Kubernetes. Note that the Batch Workloads API (Job and CronJob) is not part of this effort and will have a separate path to GA stability.
We're excited to announce General Availability (GA) of the [apps/v1 Workloads API](/docs/reference/workloads-18-19/), which is now enabled by default. The Apps Workloads API groups the DaemonSet, Deployment, ReplicaSet, and StatefulSet APIs together to form the foundation for long-running stateless and stateful workloads in Kubernetes. Note that the Batch Workloads API (Job and CronJob) is not part of this effort and will have a separate path to GA stability.
Deployment and ReplicaSet, two of the most commonly used objects in Kubernetes, are now stabilized after more than a year of real-world use and feedback. [SIG Apps](https://github.com/kubernetes/community/tree/master/sig-apps) has applied the lessons from this process to all four resource kinds over the last several release cycles, enabling DaemonSet and StatefulSet to join this graduation. The v1 (GA) designation indicates production hardening and readiness, and comes with the guarantee of long-term backwards compatibility.
## Windows Support (beta)
Kubernetes was originally developed for Linux systems, but as our users are realizing the benefits of container orchestration at scale, we are seeing demand for Kubernetes to run Windows workloads. Work to support Windows Server in Kubernetes began in earnest about 12 months ago. [SIG-Windows](https://github.com/kubernetes/community/tree/master/sig-windows) has now promoted this feature to beta status, which means that we can evaluate it for [usage](https://kubernetes.io/docs/getting-started-guides/windows/).
Kubernetes was originally developed for Linux systems, but as our users are realizing the benefits of container orchestration at scale, we are seeing demand for Kubernetes to run Windows workloads. Work to support Windows Server in Kubernetes began in earnest about 12 months ago. [SIG-Windows](https://github.com/kubernetes/community/tree/master/sig-windows) has now promoted this feature to beta status, which means that we can evaluate it for [usage](/docs/getting-started-guides/windows/).
@ -46,7 +46,7 @@ Each Special Interest Group (SIG) in the community continues to deliver the most
## Availability
Kubernetes 1.9 is available for [download on GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.9.0). To get started with Kubernetes, check out these [interactive tutorials](https://kubernetes.io/docs/tutorials/kubernetes-basics/).&nbsp;
Kubernetes 1.9 is available for [download on GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.9.0). To get started with Kubernetes, check out these [interactive tutorials](/docs/tutorials/kubernetes-basics/).&nbsp;

View File

@ -67,7 +67,7 @@ The original implementation of seccomp was highly restrictive. Once applied, if
[seccomp-bpf](https://blog.yadutaf.fr/2014/05/29/introduction-to-seccomp-bpf-linux-syscall-filter/) enables more complex filters and a wider range of actions. Seccomp-bpf, also known as seccomp mode 2, allows for applying custom filters in the form of BPF programs. When the BPF program is loaded, the filter is applied to each syscall and the appropriate action is taken (Allow, Kill, Trap, etc.).
seccomp-bpf is widely used in Kubernetes tools and exposed in Kubernetes itself. For example, seccomp-bpf is used in Docker to apply custom [seccomp security profiles](https://docs.docker.com/engine/security/seccomp/), in rkt to apply [seccomp isolators](https://github.com/rkt/rkt/blob/5fadf0f1f444cdfde40d57e1d199b6dd6371594c/Documentation/seccomp-guide.md), and in Kubernetes itself in its [Security Context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/).
seccomp-bpf is widely used in Kubernetes tools and exposed in Kubernetes itself. For example, seccomp-bpf is used in Docker to apply custom [seccomp security profiles](https://docs.docker.com/engine/security/seccomp/), in rkt to apply [seccomp isolators](https://github.com/rkt/rkt/blob/5fadf0f1f444cdfde40d57e1d199b6dd6371594c/Documentation/seccomp-guide.md), and in Kubernetes itself in its [Security Context](/docs/tasks/configure-pod-container/security-context/).
But in all of these cases the use of BPF is hidden behind [libseccomp](https://github.com/seccomp/libseccomp). Behind the scenes, libseccomp generates BPF code from rules provided to it. Once generated, the BPF program is loaded and the rules applied.
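At the Kubernetes level this surfaces, at the time of this post, as a seccomp profile attached to a Pod via an annotation — a minimal sketch in which the pod name and image are placeholders:

```
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo                                    # illustrative name
  annotations:
    # apply the container runtime's default seccomp profile to the whole pod
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
spec:
  containers:
  - name: app
    image: nginx                                        # any image works for the demo
```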
@ -117,7 +117,7 @@ To achieve the best possible isolation, each function call would have to happen
By using Landlock, we could isolate function calls from each other within the same container, making a temporary file created by one function call inaccessible to the next function call, for example. Integration between Landlock and technologies like Kubernetes-based serverless frameworks would be a ripe area for further exploration.
## Auditing kubectl-exec with eBPF
In Kubernetes 1.7 the [audit proposal](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/) started making its way in. It's currently pre-stable with plans to be stable in the 1.10 release. As the name implies, it allows administrators to log and audit events that take place in a Kubernetes cluster.
In Kubernetes 1.7 the [audit proposal](/docs/tasks/debug-application-cluster/audit/) started making its way in. It's currently pre-stable with plans to be stable in the 1.10 release. As the name implies, it allows administrators to log and audit events that take place in a Kubernetes cluster.
While these audit events record Kubernetes API activity, they don't currently provide the level of visibility that some may require. For example, while we can see that someone has used `kubectl exec` to enter a container, we are not able to see what commands were executed in that session. With eBPF one can attach a BPF program that would record any commands executed in the `kubectl exec` session and pass those commands to a user-space program that logs those events. We could then play that session back and know the exact sequence of events that took place.
## Learn more about eBPF

View File

@ -11,15 +11,15 @@ url: /blog/2018/01/Core-Workloads-Api-Ga
## In the Beginning …
There were [Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/), tightly coupled containers that share resource requirements, networking, storage, and a lifecycle. Pods were useful, but, as it turns out, users wanted to seamlessly, reproducibly, and automatically create many identical replicas of the same Pod, so we created [ReplicationController](https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/).
There were [Pods](/docs/concepts/workloads/pods/pod-overview/), tightly coupled containers that share resource requirements, networking, storage, and a lifecycle. Pods were useful, but, as it turns out, users wanted to seamlessly, reproducibly, and automatically create many identical replicas of the same Pod, so we created [ReplicationController](/docs/concepts/workloads/controllers/replicationcontroller/).
Replication was a step forward, but what users really needed was higher level orchestration of their replicated Pods. They wanted rolling updates, roll backs, and roll overs. So the OpenShift team created [DeploymentConfig](https://docs.openshift.org/latest/architecture/core_concepts/deployments.html#deployments-and-deployment-configurations). DeploymentConfigs were also useful, and OpenShift users were happy. In order to allow all OSS Kubernetes users to share in the elation, and to take advantage of [set-based label selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors), [ReplicaSet](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/) and [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) were added to the extensions/v1beta1 group version, providing rolling updates, roll backs, and roll overs for all Kubernetes users.
Replication was a step forward, but what users really needed was higher level orchestration of their replicated Pods. They wanted rolling updates, roll backs, and roll overs. So the OpenShift team created [DeploymentConfig](https://docs.openshift.org/latest/architecture/core_concepts/deployments.html#deployments-and-deployment-configurations). DeploymentConfigs were also useful, and OpenShift users were happy. In order to allow all OSS Kubernetes users to share in the elation, and to take advantage of [set-based label selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors), [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) and [Deployment](/docs/concepts/workloads/controllers/deployment/) were added to the extensions/v1beta1 group version, providing rolling updates, roll backs, and roll overs for all Kubernetes users.
That mostly solved the problem of orchestrating containerized 12 factor apps on Kubernetes, so the community turned its attention to a different problem. Replicating a Pod \<n\> times isn't the right hammer for every nail in your cluster. Sometimes, you need to run a Pod on every Node, or on a subset of Nodes (for example, shared side cars like log shippers and metrics collectors, Kubernetes add-ons, and Distributed File Systems). The state of the art was Pods combined with NodeSelectors, or static Pods, but this is unwieldy. After having grown used to the ease of automation provided by Deployments, users demanded the same features for this category of application, so [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) was added to extensions/v1beta1 as well.
That mostly solved the problem of orchestrating containerized 12 factor apps on Kubernetes, so the community turned its attention to a different problem. Replicating a Pod \<n\> times isn't the right hammer for every nail in your cluster. Sometimes, you need to run a Pod on every Node, or on a subset of Nodes (for example, shared side cars like log shippers and metrics collectors, Kubernetes add-ons, and Distributed File Systems). The state of the art was Pods combined with NodeSelectors, or static Pods, but this is unwieldy. After having grown used to the ease of automation provided by Deployments, users demanded the same features for this category of application, so [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) was added to extensions/v1beta1 as well.
For a time, users were content, until they decided that Kubernetes needed to be able to orchestrate more than just 12 factor apps and cluster infrastructure. Whether your architecture is N-tier, service oriented, or micro-service oriented, your 12 factor apps depend on stateful workloads (for example, RDBMSs, distributed key value stores, and messaging queues) to provide services to end users and other applications. These stateful workloads can have availability and durability requirements that can only be achieved by distributed systems, and users were ready to use Kubernetes to orchestrate the entire stack.
While Deployments are great for stateless workloads, they don't provide the right guarantees for the orchestration of distributed systems. These applications can require stable network identities, ordered, sequential deployment, updates, and deletion, and stable, durable storage. [PetSet](https://kubernetes.io/docs/tasks/run-application/upgrade-pet-set-to-stateful-set/) was added to the apps/v1beta1 group version to address this category of application. Unfortunately, [we were less than thoughtful with its naming](https://github.com/kubernetes/kubernetes/issues/27430), and, as we always strive to be an inclusive community, we renamed the kind to [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/).
While Deployments are great for stateless workloads, they don't provide the right guarantees for the orchestration of distributed systems. These applications can require stable network identities, ordered, sequential deployment, updates, and deletion, and stable, durable storage. [PetSet](/docs/tasks/run-application/upgrade-pet-set-to-stateful-set/) was added to the apps/v1beta1 group version to address this category of application. Unfortunately, [we were less than thoughtful with its naming](https://github.com/kubernetes/kubernetes/issues/27430), and, as we always strive to be an inclusive community, we renamed the kind to [StatefulSet](/docs/concepts/workloads/controllers/statefulset/).
Finally, we were done.
@ -44,7 +44,7 @@ This was completely incompatible with strategic merge patch and kubectl apply. M
### Immutable Selectors
Selector mutation, while allowing for some use cases like promotable Deployment canaries, is not handled gracefully by our workload controllers, and we have always [strongly cautioned users against it](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#label-selector-updates). To provide a consistent, usable, and stable API, selectors were made immutable for all kinds in the workloads API.
Selector mutation, while allowing for some use cases like promotable Deployment canaries, is not handled gracefully by our workload controllers, and we have always [strongly cautioned users against it](/docs/concepts/workloads/controllers/deployment/#label-selector-updates). To provide a consistent, usable, and stable API, selectors were made immutable for all kinds in the workloads API.
We believe that there are better ways to support features like promotable canaries and orchestrated Pod relabeling, but, if restricted selector mutation is a necessary feature for our users, we can relax immutability in the future without breaking backward compatibility.
@ -83,9 +83,9 @@ We originally added a scale subresource to the apps group. This was the wrong di
## Migration and Deprecation
The question most of you are probably asking now is, “What's my migration path onto apps/v1, and how soon should I plan on migrating?” All of the group versions prior to apps/v1 are deprecated as of Kubernetes 1.9, and all new code should be developed against apps/v1, but, as discussed above, many of our users treat extensions/v1beta1 as if it were GA. We realize this, and the minimum support timelines in our [deprecation policy](https://kubernetes.io/docs/reference/deprecation-policy/) are just that: minimums.
The question most of you are probably asking now is, “What's my migration path onto apps/v1, and how soon should I plan on migrating?” All of the group versions prior to apps/v1 are deprecated as of Kubernetes 1.9, and all new code should be developed against apps/v1, but, as discussed above, many of our users treat extensions/v1beta1 as if it were GA. We realize this, and the minimum support timelines in our [deprecation policy](/docs/reference/deprecation-policy/) are just that: minimums.
In future releases, before completely removing any of the group versions, we will disable them by default in the API Server. At this point, you will still be able to use the group version, but you will have to explicitly enable it. We will also provide utilities to upgrade the storage version of the API objects to apps/v1. Remember, all of the versions of the core workloads kinds are bidirectionally convertible. If you want to manually update your core workloads API objects now, you can use [kubectl convert](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#convert) to convert manifests between group versions.
In future releases, before completely removing any of the group versions, we will disable them by default in the API Server. At this point, you will still be able to use the group version, but you will have to explicitly enable it. We will also provide utilities to upgrade the storage version of the API objects to apps/v1. Remember, all of the versions of the core workloads kinds are bidirectionally convertible. If you want to manually update your core workloads API objects now, you can use [kubectl convert](/docs/reference/generated/kubectl/kubectl-commands#convert) to convert manifests between group versions.
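For example, converting an existing manifest in place is a one-liner (the file name here is a placeholder):

```
# rewrite a manifest against the apps/v1 group version
kubectl convert -f ./my-deployment.yaml --output-version apps/v1
```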

View File

@ -11,7 +11,7 @@ The admission stage of API server processing is one of the most powerful tools f
## What is Admission?
[Admission](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#what-are-they) is the phase of [handling an API server request](https://blog.openshift.com/kubernetes-deep-dive-api-server-part-1/) that happens before a resource is persisted, but after authorization. Admission gets access to the same information as authorization (user, URL, etc) and the complete body of an API request (for most requests).
[Admission](/docs/reference/access-authn-authz/admission-controllers/#what-are-they) is the phase of [handling an API server request](https://blog.openshift.com/kubernetes-deep-dive-api-server-part-1/) that happens before a resource is persisted, but after authorization. Admission gets access to the same information as authorization (user, URL, etc) and the complete body of an API request (for most requests).
[![](https://2.bp.blogspot.com/-p8WGg2BATsY/WlfywbD_tAI/AAAAAAAAAJw/mDqZV0dB4_Y0gXXQp_1tQ7CtMRSd6lHVwCK4BGAYYCw/s640/Screen%2BShot%2B2018-01-11%2Bat%2B3.22.07%2BPM.png)](https://2.bp.blogspot.com/-p8WGg2BATsY/WlfywbD_tAI/AAAAAAAAAJw/mDqZV0dB4_Y0gXXQp_1tQ7CtMRSd6lHVwCK4BGAYYCw/s1600/Screen%2BShot%2B2018-01-11%2Bat%2B3.22.07%2BPM.png)

View File

@ -16,13 +16,13 @@ This blog post is one of a number of efforts to make client-go more accessible t
The following API group promotions are part of Kubernetes 1.9:
- Workload objects (Deployments, DaemonSets, ReplicaSets, and StatefulSets) have been [promoted to the apps/v1 API group in Kubernetes 1.9](https://kubernetes.io/docs/reference/workloads-18-19/). client-go follows this transition and allows developers to use the latest version by importing the k8s.io/api/apps/v1 package instead of k8s.io/api/apps/v1beta1 and by using Clientset.AppsV1().
- Admission Webhook Registration has been promoted to the admissionregistration.k8s.io/v1beta1 API group in Kubernetes 1.9. The former ExternalAdmissionHookConfiguration type has been replaced by the incompatible ValidatingWebhookConfiguration and MutatingWebhookConfiguration types. Moreover, the webhook admission payload type AdmissionReview in admission.k8s.io has been promoted to v1beta1. Note that versioned objects are now passed to webhooks. Refer to the admission webhook [documentation](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) for details.
- Workload objects (Deployments, DaemonSets, ReplicaSets, and StatefulSets) have been [promoted to the apps/v1 API group in Kubernetes 1.9](/docs/reference/workloads-18-19/). client-go follows this transition and allows developers to use the latest version by importing the k8s.io/api/apps/v1 package instead of k8s.io/api/apps/v1beta1 and by using Clientset.AppsV1().
- Admission Webhook Registration has been promoted to the admissionregistration.k8s.io/v1beta1 API group in Kubernetes 1.9. The former ExternalAdmissionHookConfiguration type has been replaced by the incompatible ValidatingWebhookConfiguration and MutatingWebhookConfiguration types. Moreover, the webhook admission payload type AdmissionReview in admission.k8s.io has been promoted to v1beta1. Note that versioned objects are now passed to webhooks. Refer to the admission webhook [documentation](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) for details.
## Validation for CustomResources
In Kubernetes 1.8 we introduced CustomResourceDefinitions (CRD) [pre-persistence schema validation](https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation) as an alpha feature. With 1.9, the feature got promoted to beta and will be enabled by default. As a client-go user, you will find the API types at k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1.
In Kubernetes 1.8 we introduced CustomResourceDefinitions (CRD) [pre-persistence schema validation](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation) as an alpha feature. With 1.9, the feature got promoted to beta and will be enabled by default. As a client-go user, you will find the API types at k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1.
The [OpenAPI v3 schema](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.0.md#schemaObject) can be defined in the CRD spec as:
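The post's actual schema is elided by the diff at this point; a sketch consistent with the validation error quoted in the next hunk header — the CRD name and every field other than the `spec.version` enum are illustrative — might look like:

```
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com       # illustrative CRD
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    kind: CronTab
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            version:
              type: string
              enum: ["v1.0.0", "v1.0.1"]  # anything else is rejected, as in the error below
```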
@ -84,12 +84,12 @@ spec.version in body should be one of [v1.0.0 v1.0.1]
Note that with [Admission Webhooks](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks), Kubernetes 1.9 provides another beta feature to validate objects before they are created or updated. Starting with 1.9, these webhooks also allow mutation of objects (for example, to set defaults or to inject values). Of course, webhooks work with CRDs as well. Moreover, webhooks can be used to implement validations that are not easily expressible with CRD validation. Note that webhooks are harder to implement than CRD validation, so for many purposes, CRD validation is the right tool.
Note that with [Admission Webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks), Kubernetes 1.9 provides another beta feature to validate objects before they are created or updated. Starting with 1.9, these webhooks also allow mutation of objects (for example, to set defaults or to inject values). Of course, webhooks work with CRDs as well. Moreover, webhooks can be used to implement validations that are not easily expressible with CRD validation. Note that webhooks are harder to implement than CRD validation, so for many purposes, CRD validation is the right tool.
## Creating namespaced informers
Often a controller only needs to process objects in one namespace, or only objects carrying certain labels. Informers [now allow](https://github.com/kubernetes/kubernetes/pull/54660) you to tweak the ListOptions used to query the API server to list and watch objects. Uninitialized objects (for consumption by [initializers](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-initializers)) can be made visible by setting IncludeUninitialized to true. All this can be done using the new NewFilteredSharedInformerFactory constructor for shared informers:
Often a controller only needs to process objects in one namespace, or only objects carrying certain labels. Informers [now allow](https://github.com/kubernetes/kubernetes/pull/54660) you to tweak the ListOptions used to query the API server to list and watch objects. Uninitialized objects (for consumption by [initializers](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-initializers)) can be made visible by setting IncludeUninitialized to true. All this can be done using the new NewFilteredSharedInformerFactory constructor for shared informers:
```
@ -251,7 +251,7 @@ It's finally possible to have dots in Go package names. In this section's ex
## Example projects
Kubernetes 1.9 includes a number of example projects which can serve as a blueprint for your own projects:
- [k8s.io/sample-apiserver](https://github.com/kubernetes/sample-apiserver) is a simple user-provided API server that is integrated into a cluster via [API aggregation](https://kubernetes.io/docs/concepts/api-extension/apiserver-aggregation/).
- [k8s.io/sample-apiserver](https://github.com/kubernetes/sample-apiserver) is a simple user-provided API server that is integrated into a cluster via [API aggregation](/docs/concepts/api-extension/apiserver-aggregation/).
- [k8s.io/sample-controller](https://github.com/kubernetes/sample-controller) is a full-featured [controller](https://github.com/kubernetes/community/blob/master/contributors/devel/controllers.md) (also called an operator) with shared informers and a workqueue to process created, changed or deleted objects. It is based on CustomResourceDefinitions and uses [k8s.io/code-generator](https://github.com/kubernetes/code-generator) to generate deepcopy functions, typed clientsets, informers, and listers.

View File

@ -5,7 +5,7 @@ slug: introducing-container-storage-interface
url: /blog/2018/01/Introducing-Container-Storage-Interface
---
One of the key differentiators for Kubernetes has been a powerful [volume plugin system](https://kubernetes.io/docs/concepts/storage/volumes/) that enables many different types of storage systems to:
One of the key differentiators for Kubernetes has been a powerful [volume plugin system](/docs/concepts/storage/volumes/) that enables many different types of storage systems to:
1. Automatically create storage when required.
2. Make storage available to containers wherever they're scheduled.

View File

@ -29,7 +29,7 @@ The crown jewels of Kubernetes 1.9 Windows support, however, are the networking
1. Upstream L3 Routing - IP routes configured in upstream ToR
2. Host-Gateway - IP routes configured on each host
3. Open vSwitch (OVS) & Open Virtual Network (OVN) with Overlay - Supports STT and Geneve tunneling types
You can read more about each of their [configuration, setup, and runtime capabilities](https://kubernetes.io/docs/getting-started-guides/windows/) to make an informed selection for your networking stack in Kubernetes.
You can read more about each of their [configuration, setup, and runtime capabilities](/docs/getting-started-guides/windows/) to make an informed selection for your networking stack in Kubernetes.
Even though you have to continue running the Kubernetes Control Plane and Master Components in Linux, you are now able to introduce Windows Server as a Node in Kubernetes. As a community, this is a huge milestone and achievement. We will now start seeing .NET, .NET Core, ASP.NET, IIS, Windows Services, Windows executables and many more Windows-based applications in Kubernetes.
@ -53,7 +53,7 @@ Even though we have not committed to a timeline for GA, SIG-Windows estimates a
### Get Involved
As we continue to make progress towards General Availability of this feature in Kubernetes, we welcome you to get involved, contribute code, provide feedback, deploy Windows Server containers to your Kubernetes cluster, or simply join our community.
- If you want to get started on deploying Windows Server containers in Kubernetes, read our getting started guide at [https://kubernetes.io/docs/getting-started-guides/windows/](https://kubernetes.io/docs/getting-started-guides/windows/)
- If you want to get started on deploying Windows Server containers in Kubernetes, read our getting started guide at [/docs/getting-started-guides/windows/](/docs/getting-started-guides/windows/)
- We meet every other Tuesday at 12:30 Eastern Standard Time (EST) at [https://zoom.us/my/sigwindows](https://zoom.us/my/sigwindows). All our meetings are recorded on YouTube and referenced at [https://www.youtube.com/playlist?list=PL69nYSiGNLP2OH9InCcNkWNu2bl-gmIU4](https://www.youtube.com/playlist?list=PL69nYSiGNLP2OH9InCcNkWNu2bl-gmIU4)
- Chat with us on Slack at [https://kubernetes.slack.com/messages/sig-windows](https://kubernetes.slack.com/messages/sig-windows)
- Find us on GitHub at [https://github.com/kubernetes/community/tree/master/sig-windows](https://github.com/kubernetes/community/tree/master/sig-windows)

View File

@ -15,7 +15,7 @@ One use case of Box's control plane is [public key infrastructure](https://en.
| Figure 1: Block Diagram of the PKI flow |
If an application needs a new certificate, the application owner explicitly adds a [Custom Resource Definition](https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/) (CRD) to the application's Kubernetes config [1]. This CRD specifies parameters for the SSL certificate: _name, common name, and others_. A microservice in the control plane watches CRDs and triggers some processing for SSL certificate generation [2]. Once the certificate is ready, the same control plane service sends it to the API server in a Kubernetes [Secret](https://kubernetes.io/docs/concepts/configuration/secret/) [3]. After that, the application containers access their certificates using Kubernetes [Secret VolumeMounts](https://kubernetes.io/docs/concepts/storage/volumes/#secret) [4]. You can see a working demo of this system in our [example application](https://github.com/box/error-reporting-with-kubernetes-events) on GitHub.
If an application needs a new certificate, the application owner explicitly adds a [Custom Resource Definition](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/) (CRD) to the application's Kubernetes config [1]. This CRD specifies parameters for the SSL certificate: _name, common name, and others_. A microservice in the control plane watches CRDs and triggers some processing for SSL certificate generation [2]. Once the certificate is ready, the same control plane service sends it to the API server in a Kubernetes [Secret](/docs/concepts/configuration/secret/) [3]. After that, the application containers access their certificates using Kubernetes [Secret VolumeMounts](/docs/concepts/storage/volumes/#secret) [4]. You can see a working demo of this system in our [example application](https://github.com/box/error-reporting-with-kubernetes-events) on GitHub.
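As a purely hypothetical sketch of such an object — Box's real group, kind, and field names live in the linked example application, not here — the application owner's config might contain something like:

```
apiVersion: certificates.box.example/v1   # hypothetical API group/version
kind: CertificateRequest                  # hypothetical kind
metadata:
  name: my-service-cert
spec:
  commonName: my-service.example.com      # the certificate's common name
```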
The rest of this post covers the error scenarios in this “triggered” processing in the control plane. In particular, we are especially concerned with user input errors. Because the SSL certificate parameters come from the application's config file in a CRD format, what should happen if there is an error in that CRD specification? Even a typo results in a failure of the SSL certificate creation. The error information is available in the control plane even though the root cause is most probably inside the application's config file. The application owner does not have access to the control plane's state or logs.

View File

@ -61,14 +61,14 @@ providers, vendors, and other platform developers can now release binary
plugins to handle authentication for specific cloud-provider IAM services, or
that integrate with in-house authentication systems that aren't supported
in-tree, such as Active Directory. This complements the [Cloud Controller
Manager](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/)
Manager](/docs/tasks/administer-cluster/running-cloud-controller/)
feature added in 1.9.
## Networking - CoreDNS as a DNS provider (beta)
The ability to [switch the DNS
service](https://github.com/kubernetes/website/pull/7638) to CoreDNS at
[install time](https://kubernetes.io/docs/tasks/administer-cluster/coredns/)
[install time](/docs/tasks/administer-cluster/coredns/)
is now in beta. CoreDNS has fewer moving parts: it's a single executable and a
single process, and supports additional use cases.
@ -83,7 +83,7 @@ notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#11
Kubernetes 1.10 is available for [download on
GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.10.0). To get
started with Kubernetes, check out these [interactive
tutorials](https://kubernetes.io/docs/tutorials/).
tutorials](/docs/tutorials/).
## 2 Day Features Blog Series

View File

@ -4,7 +4,7 @@ date: 2018-04-04
slug: fixing-subpath-volume-vulnerability
---
On March 12, 2018, the Kubernetes Product Security team disclosed [CVE-2017-1002101](https://issue.k8s.io/60813), which allowed containers using [subpath](https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath) volume mounts to access files outside of the volume. This means that a container could access any file available on the host, including volumes for other containers that it should not have access to.
On March 12, 2018, the Kubernetes Product Security team disclosed [CVE-2017-1002101](https://issue.k8s.io/60813), which allowed containers using [subpath](/docs/concepts/storage/volumes/#using-subpath) volume mounts to access files outside of the volume. This means that a container could access any file available on the host, including volumes for other containers that it should not have access to.
The vulnerability has been fixed and released in the latest Kubernetes patch releases. We recommend that all users upgrade to get the fix. For more details on the impact and how to get the fix, please see the [announcement](https://groups.google.com/forum/#!topic/kubernetes-announce/6sNHO_jyBzE). (Note, some functional regressions were found after the initial fix and are being tracked in [issue #61563](https://github.com/kubernetes/kubernetes/issues/61563)).

View File

@ -24,7 +24,7 @@ With the promotion to beta CSI is now enabled by default on standard Kubernetes
The move of the Kubernetes implementation of CSI to beta also means:
* Kubernetes is compatible with [v0.2](https://github.com/container-storage-interface/spec/releases/tag/v0.2.0) of the CSI spec (instead of [v0.1](https://github.com/container-storage-interface/spec/releases/tag/v0.1.0))
* There were breaking changes between the CSI spec v0.1 and v0.2, so existing CSI drivers must be updated to be 0.2 compatible before use with Kubernetes 1.10.0+.
* [Mount propagation](https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation), a feature that allows bidirectional mounts between containers and host (a requirement for containerized CSI drivers), has also moved to beta.
* [Mount propagation](/docs/concepts/storage/volumes/#mount-propagation), a feature that allows bidirectional mounts between containers and host (a requirement for containerized CSI drivers), has also moved to beta.
* The Kubernetes `VolumeAttachment` object, introduced in v1.9 in the storage v1alpha1 group, has been added to the storage v1beta1 group.
* The Kubernetes `CSIPersistentVolumeSource` object has been promoted to beta.
A `VolumeAttributes` field was added to Kubernetes `CSIPersistentVolumeSource` object (in alpha this was passed around via annotations).
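To make the shape of that object concrete, a PersistentVolume backed by a CSI driver looks roughly like the sketch below, in which the driver name, volume handle, and attributes are placeholders understood only by the driver:

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-example-pv                # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  csi:
    driver: com.example.csi-driver    # placeholder CSI driver name
    volumeHandle: existing-volume-id  # ID the driver uses to locate the volume
    volumeAttributes:
      foo: bar                        # opaque, driver-specific attributes
```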
@ -161,7 +161,7 @@ As part of the suggested deployment process, the Kubernetes team provides the fo
* [driver-registrar](https://github.com/kubernetes-csi/driver-registrar)
* registers the CSI driver with kubelet (in the future) and adds the driver's custom `NodeId` (retrieved via `GetNodeID` call against the CSI endpoint) to an annotation on the Kubernetes Node API Object
* [livenessprobe](https://github.com/kubernetes-csi/livenessprobe)
* can be included in a CSI plugin pod to enable the [Kubernetes Liveness Probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) mechanism
* can be included in a CSI plugin pod to enable the [Kubernetes Liveness Probe](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) mechanism
Storage vendors can build Kubernetes deployments for their plugins using these components, while leaving their CSI driver completely unaware of Kubernetes.

View File

@ -4,7 +4,7 @@ date: 2018-04-13
slug: local-persistent-volumes-beta
---
The [Local Persistent Volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local) beta feature in Kubernetes 1.10 makes it possible to leverage local disks in your StatefulSets. You can specify directly-attached local disks as PersistentVolumes, and use them in StatefulSets with the same PersistentVolumeClaim objects that previously only supported remote volume types.
The [Local Persistent Volumes](/docs/concepts/storage/volumes/#local) beta feature in Kubernetes 1.10 makes it possible to leverage local disks in your StatefulSets. You can specify directly-attached local disks as PersistentVolumes, and use them in StatefulSets with the same PersistentVolumeClaim objects that previously only supported remote volume types.
Persistent storage is important for running stateful applications, and Kubernetes has supported these workloads with StatefulSets, PersistentVolumeClaims and PersistentVolumes. These primitives have supported remote volume types well, where the volumes can be accessed from any node in the cluster, but did not support local volumes, where the volumes can only be accessed from a specific node. The demand for using local, fast SSDs in replicated, stateful workloads has increased with demand to run more workloads in Kubernetes.
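To make that concrete, a directly-attached disk is published to the cluster as a PersistentVolume with a node affinity constraint — a sketch in which the storage class name, disk path, and node name are assumptions:

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv            # illustrative name
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage   # assumed StorageClass
  local:
    path: /mnt/disks/ssd1           # assumed path of the local disk
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node                 # the node the disk is attached to
```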
@ -129,7 +129,7 @@ kind: StatefulSet
## Documentation
The Kubernetes website provides full documentation for [local persistent volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local).
The Kubernetes website provides full documentation for [local persistent volumes](/docs/concepts/storage/volumes/#local).
## Future enhancements
@ -140,11 +140,11 @@ The local persistent volume beta feature is not complete by far. Some notable en
## Complementary features
[Pod priority and preemption](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/) is another Kubernetes feature that is complementary to local persistent volumes. When your application uses local storage, it must be scheduled to the specific node where the local volume resides. You can give your local storage workload high priority so if that node ran out of room to run your workload, Kubernetes can preempt lower priority workloads to make room for it.
[Pod priority and preemption](/docs/concepts/configuration/pod-priority-preemption/) is another Kubernetes feature that is complementary to local persistent volumes. When your application uses local storage, it must be scheduled to the specific node where the local volume resides. You can give your local storage workload high priority so if that node ran out of room to run your workload, Kubernetes can preempt lower priority workloads to make room for it.
[Pod disruption budget](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/) is also very important for those workloads that must maintain quorum. Setting a disruption budget for your workload ensures that it does not drop below quorum due to voluntary disruption events, such as node drains during upgrade.
[Pod disruption budget](/docs/concepts/workloads/pods/disruptions/) is also very important for those workloads that must maintain quorum. Setting a disruption budget for your workload ensures that it does not drop below quorum due to voluntary disruption events, such as node drains during upgrade.
[Pod affinity and anti-affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature) ensures that your workloads stay either co-located or spread out across failure domains. If you have multiple local persistent volumes available on a single node, it may be preferable to specify a pod anti-affinity policy to spread your workload across nodes. Note that if you want multiple pods to share the same local persistent volume, you do not need to specify a pod affinity policy. The scheduler understands the locality constraints of the local persistent volume and schedules your pod to the correct node.
[Pod affinity and anti-affinity](/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature) ensures that your workloads stay either co-located or spread out across failure domains. If you have multiple local persistent volumes available on a single node, it may be preferable to specify a pod anti-affinity policy to spread your workload across nodes. Note that if you want multiple pods to share the same local persistent volume, you do not need to specify a pod affinity policy. The scheduler understands the locality constraints of the local persistent volume and schedules your pod to the correct node.
## Getting involved

View File

@ -6,7 +6,7 @@ slug: open-source-charts-2017
2017 was a huge year for Kubernetes, and GitHub's latest [Octoverse report](https://octoverse.github.com) illustrates just how much attention this project has been getting.
Kubernetes, an [open source platform for running application containers](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/), provides a consistent interface that enables developers and ops teams to automate the deployment, management, and scaling of a wide variety of applications on just about any infrastructure.
Kubernetes, an [open source platform for running application containers](/docs/concepts/overview/what-is-kubernetes/), provides a consistent interface that enables developers and ops teams to automate the deployment, management, and scaling of a wide variety of applications on just about any infrastructure.
Solving these shared challenges by leveraging a wide community of expertise and industrial experience, as Kubernetes does, helps engineers focus on building their own products at the top of the stack, rather than needlessly duplicating work that now exists as a standard part of the “cloud native” toolkit.

View File

@ -21,7 +21,7 @@ As a developer you want to think about where the Kubernetes cluster you're dev
A number of tools support pure offline development, including Minikube, Docker for Mac/Windows, Minishift, and the ones we discuss in detail below. Sometimes, for example, in a microservices setup where certain microservices already run in the cluster, a proxied setup (forwarding traffic into and from the cluster) is preferable, and Telepresence is an example tool in this category. The live mode essentially means you're building and/or deploying against a remote cluster and, finally, the pure online mode means both your development environment and the cluster are remote, as is the case with, for example, [Eclipse Che](https://www.eclipse.org/che/docs/kubernetes-single-user.html) or [Cloud 9](https://github.com/errordeveloper/k9c). Let's now have a closer look at the basics of offline development: running Kubernetes locally.
[Minikube](https://kubernetes.io/docs/getting-started-guides/minikube/) is a popular choice for those who prefer to run Kubernetes in a local VM. More recently Docker for [Mac](https://docs.docker.com/docker-for-mac/kubernetes/) and [Windows](https://docs.docker.com/docker-for-windows/kubernetes/) started shipping Kubernetes as an experimental package (in the “edge” channel). Some reasons why you may want to prefer using Minikube over the Docker desktop option are:
[Minikube](/docs/getting-started-guides/minikube/) is a popular choice for those who prefer to run Kubernetes in a local VM. More recently Docker for [Mac](https://docs.docker.com/docker-for-mac/kubernetes/) and [Windows](https://docs.docker.com/docker-for-windows/kubernetes/) started shipping Kubernetes as an experimental package (in the “edge” channel). Some reasons why you may want to prefer using Minikube over the Docker desktop option are:
* You already have Minikube installed and running
* You prefer to wait until Docker ships a stable package

View File

@ -150,7 +150,7 @@ Another important topic we are focusing on is disaster recovery. When a seed clu
In order to enable a more independent evolution of the Botanists, which contain the infrastructure-provider-specific parts of the implementation, we plan to describe well-defined interfaces and factor out the Botanists into their own components. This is similar to what Kubernetes is currently doing with the cloud-controller-manager. Currently, all the cloud specifics are part of the core Gardener repository, presenting a soft barrier to extending or supporting new cloud providers.
When taking a look at how the shoots are actually provisioned, we need to gain more experience with how really large clusters with thousands of nodes and pods (or more) behave. Potentially, we will have to deploy e.g. the API server and other components in a scaled-out fashion for large clusters to spread the load. Fortunately, horizontal pod autoscaling based on custom metrics from Prometheus will make this relatively easy with our setup. Additionally, the feedback from teams who run production workloads on our clusters is that Gardener should support prearranged Kubernetes [QoS](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/). Needless to say, our aspiration is to integrate with and contribute to the vision of [Kubernetes Autopilot](https://speakerdeck.com/thockin/a-few-things-to-know-about-resource-scheduling).
When taking a look at how the shoots are actually provisioned, we need to gain more experience with how really large clusters with thousands of nodes and pods (or more) behave. Potentially, we will have to deploy e.g. the API server and other components in a scaled-out fashion for large clusters to spread the load. Fortunately, horizontal pod autoscaling based on custom metrics from Prometheus will make this relatively easy with our setup. Additionally, the feedback from teams who run production workloads on our clusters is that Gardener should support prearranged Kubernetes [QoS](/docs/tasks/configure-pod-container/quality-service-pod/). Needless to say, our aspiration is to integrate with and contribute to the vision of [Kubernetes Autopilot](https://speakerdeck.com/thockin/a-few-things-to-know-about-resource-scheduling).
[4] Prototypes already validated CTyun & Aliyun.

View File

@ -9,7 +9,7 @@ Once you've become accustomed to running Linux container workloads on Kubernetes
These sorts of workloads are often well-suited to running in virtual machines (VMs), and [KubeVirt](http://www.kubevirt.io/), a virtual machine management add-on for Kubernetes, is aimed at allowing users to run VMs right alongside containers in their Kubernetes or OpenShift clusters.
KubeVirt extends Kubernetes by adding resource types for VMs and sets of VMs through Kubernetes' [Custom Resource Definitions API](https://kubernetes.io/docs/concepts/api-extension/custom-resources/#customresourcedefinitions) (CRD). KubeVirt VMs run within regular Kubernetes pods, where they have access to standard pod networking and storage, and can be managed using standard Kubernetes tools such as kubectl.
KubeVirt extends Kubernetes by adding resource types for VMs and sets of VMs through Kubernetes' [Custom Resource Definitions API](/docs/concepts/api-extension/custom-resources/#customresourcedefinitions) (CRD). KubeVirt VMs run within regular Kubernetes pods, where they have access to standard pod networking and storage, and can be managed using standard Kubernetes tools such as kubectl.
Running VMs with Kubernetes involves a bit of an adjustment compared to using something like oVirt or OpenStack, and understanding the basic architecture of KubeVirt is a good place to begin.
@ -21,7 +21,7 @@ In this post, we'll talk about some of the components that are involved in Kub
## Custom Resource Definitions
Kubernetes resources are endpoints in the Kubernetes API that store collections of related API objects. For instance, the built-in pods resource contains a collection of Pod objects. The Kubernetes [Custom Resource Definition](https://kubernetes.io/docs/concepts/api-extension/custom-resources/#customresourcedefinitions) API allows users to extend Kubernetes with additional resources by defining new objects with a given name and schema. Once you've applied a custom resource to your cluster, the Kubernetes API server serves and handles the storage of your custom resource.
Kubernetes resources are endpoints in the Kubernetes API that store collections of related API objects. For instance, the built-in pods resource contains a collection of Pod objects. The Kubernetes [Custom Resource Definition](/docs/concepts/api-extension/custom-resources/#customresourcedefinitions) API allows users to extend Kubernetes with additional resources by defining new objects with a given name and schema. Once you've applied a custom resource to your cluster, the Kubernetes API server serves and handles the storage of your custom resource.
KubeVirt's primary CRD is the VirtualMachine (VM) resource, which contains a collection of VM objects inside the Kubernetes API server. The VM resource defines all the properties of the virtual machine itself, such as the machine and CPU type, the amount of RAM and vCPUs, and the number and type of NICs available in the VM.

View File

@ -60,7 +60,7 @@ _crictl_ is a tool providing a similar experience to the Docker CLI for Kubernet
The scope of _crictl_ is limited to troubleshooting; it is not a replacement for docker or kubectl. Docker's CLI provides a rich set of commands, making it a very useful development tool. But it is not the best fit for troubleshooting on Kubernetes nodes. Some Docker commands are not useful to Kubernetes, such as _docker network_ and _docker build_; and some may even break the system, such as _docker rename_. _crictl_ provides just enough commands for node troubleshooting, which is arguably safer to use on production nodes.
## Kubernetes Oriented
_crictl_ offers a more kubernetes-friendly view of containers. Docker CLI lacks core Kubernetes concepts, e.g. _pod_ and _[namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)_, so it can't provide a clear view of containers and pods. One example is that _docker ps_ shows somewhat obscure, long Docker container names, and shows pause containers and application containers together:
_crictl_ offers a more kubernetes-friendly view of containers. Docker CLI lacks core Kubernetes concepts, e.g. _pod_ and _[namespace](/docs/concepts/overview/working-with-objects/namespaces/)_, so it can't provide a clear view of containers and pods. One example is that _docker ps_ shows somewhat obscure, long Docker container names, and shows pause containers and application containers together:
<img src="/images/blog/2018-05-24-kubernetes-containerd-integration-goes-ga/docker-ps.png" width="100%" alt="docker ps" />

View File

@ -25,7 +25,7 @@ In this release, [IPVS-based in-cluster service load balancing](https://github.c
## Dynamic Kubelet Configuration Moves to Beta
This feature makes it possible for new Kubelet configurations to be rolled out in a live cluster. Currently, Kubelets are configured via command-line flags, which makes it difficult to update Kubelet configurations in a running cluster. With this beta feature, [users can configure Kubelets in a live cluster](https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/) via the API server.
This feature makes it possible for new Kubelet configurations to be rolled out in a live cluster. Currently, Kubelets are configured via command-line flags, which makes it difficult to update Kubelet configurations in a running cluster. With this beta feature, [users can configure Kubelets in a live cluster](/docs/tasks/administer-cluster/reconfigure-kubelet/) via the API server.
## Custom Resource Definitions Can Now Define Multiple Versions
@ -49,9 +49,9 @@ Each Special Interest Group (SIG) within the community continues to deliver the
## Availability
Kubernetes 1.11 is available for [download on GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.11.0). To get started with Kubernetes, check out these [interactive tutorials](https://kubernetes.io/docs/tutorials/).
Kubernetes 1.11 is available for [download on GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.11.0). To get started with Kubernetes, check out these [interactive tutorials](/docs/tutorials/).
You can also install 1.11 using Kubeadm. Version 1.11.0 will be available as Deb and RPM packages, installable using the [Kubeadm cluster installer](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/) sometime on June 28th.
You can also install 1.11 using Kubeadm. Version 1.11.0 will be available as Deb and RPM packages, installable using the [Kubeadm cluster installer](/docs/setup/independent/create-cluster-kubeadm/) sometime on June 28th.
## 4 Day Features Blog Series

View File

@ -42,7 +42,7 @@ primary configuration file that CoreDNS uses for configuration of all of its fea
Kubernetes related.
When upgrading from kube-dns to CoreDNS using `kubeadm`, your existing ConfigMap will be used to generate the
customized Corefile for you, including all of the configuration for stub domains, federation, and upstream nameservers. See [Using CoreDNS for Service Discovery](https://kubernetes.io/docs/tasks/administer-cluster/coredns/) for more details.
customized Corefile for you, including all of the configuration for stub domains, federation, and upstream nameservers. See [Using CoreDNS for Service Discovery](/docs/tasks/administer-cluster/coredns/) for more details.
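To give a rough idea of the result (a hedged sketch; the exact plugin set depends on your CoreDNS version), a kube-dns stub domain might be translated into an extra server block in the generated coredns ConfigMap:

```yaml
# Hypothetical sketch of a generated Corefile with one stub domain;
# plugin names and options vary by CoreDNS version.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
    example.local:53 {
        errors
        cache 30
        proxy . 10.150.0.1
    }
```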
## Bug fixes and enhancements

View File

@ -18,7 +18,7 @@ Dynamic Kubelet configuration gives cluster administrators and service providers
## What is Dynamic Kubelet Configuration?
Kubernetes v1.10 made it possible to configure the Kubelet via a beta [config file](https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/) API. Kubernetes already provides the ConfigMap abstraction for storing arbitrary file data in the API server.
Kubernetes v1.10 made it possible to configure the Kubelet via a beta [config file](/docs/tasks/administer-cluster/kubelet-config-file/) API. Kubernetes already provides the ConfigMap abstraction for storing arbitrary file data in the API server.
Dynamic Kubelet configuration extends the Node object so that a Node can refer to a ConfigMap that contains the same type of config file. When a Node is updated to refer to a new ConfigMap, the associated Kubelet will attempt to use the new configuration.
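As a rough sketch of what this looks like on the wire (names are hypothetical; field names follow the beta API and may differ slightly in your version), a ConfigMap holding a KubeletConfiguration and a Node referencing it could be:

```yaml
# Sketch only: a ConfigMap holding a KubeletConfiguration...
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-node-config        # hypothetical name
  namespace: kube-system
data:
  kubelet: |
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    maxPods: 150
---
# ...and a Node pointing at it via spec.configSource.
apiVersion: v1
kind: Node
metadata:
  name: worker-1              # hypothetical node name
spec:
  configSource:
    configMap:
      name: my-node-config
      namespace: kube-system
      kubeletConfigKey: kubelet
```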
@ -45,4 +45,4 @@ See the following diagram for a high-level overview of a configuration update fo
## How can I learn more?
Please see the official tutorial at https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/, which contains more in-depth details on user workflow, how a configuration becomes "last-known-good," how the Kubelet "checkpoints" config, and possible failure modes.
Please see the official tutorial at /docs/tasks/administer-cluster/reconfigure-kubelet/, which contains more in-depth details on user workflow, how a configuration becomes "last-known-good," how the Kubelet "checkpoints" config, and possible failure modes.

View File

@ -94,7 +94,7 @@ JOSH BERKUS: That goes into release notes. I mean, keep in mind that one of the
However, stuff happens, and we do occasionally have to do those. And so far, our main way to identify that to people actually is in the release notes. If you look at [the current release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#no-really-you-must-do-this-before-you-upgrade), there are actually two things in there right now that are sort of breaking changes.
One of them is the bit with [priority and preemption](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/) in that preemption being on by default now allows badly behaved users of the system to cause trouble in new ways. I'd actually have to look at the release notes to see what the second one was...
One of them is the bit with [priority and preemption](/docs/concepts/configuration/pod-priority-preemption/) in that preemption being on by default now allows badly behaved users of the system to cause trouble in new ways. I'd actually have to look at the release notes to see what the second one was...
TIM PEPPER: The [JSON capitalization case sensitivity](https://github.com/kubernetes/kubernetes/issues/64612).

View File

@ -34,7 +34,7 @@ The control plane is Kubernetes' brain. It has an overall view of every cont
> Note that some components and installation methods may enable local ports over HTTP and administrators should familiarize themselves with the settings of each component to identify potentially unsecured traffic.
[Source](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/#use-transport-level-security-tls-for-all-api-traffic)
[Source](/docs/tasks/administer-cluster/securing-a-cluster/#use-transport-level-security-tls-for-all-api-traffic)
This network diagram by [Lucas Käldström](https://docs.google.com/presentation/d/1Gp-2blk5WExI_QR59EUZdwfO2BWLJqa626mK2ej-huo/edit#slide=id.g1e639c415b_0_56) demonstrates some of the places TLS should ideally be applied: between every component on the master, and between the Kubelet and API server. [Kelsey Hightower](https://twitter.com/kelseyhightower/)'s canonical [Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/1.9.0/docs/04-certificate-authority.md) provides detailed manual instructions, as does [etcd's security model](https://coreos.com/etcd/docs/latest/op-guide/security.html) documentation.
@ -62,11 +62,11 @@ Or use this flag to disable it in GKE:
--no-enable-legacy-authorization
```
There are plenty of [good examples](https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/) of [RBAC policies for cluster services](https://github.com/uruddarraju/kubernetes-rbac-policies), as well as [the docs](https://kubernetes.io/docs/admin/authorization/rbac/#role-binding-examples). And it doesn't have to stop there - fine-grained RBAC policies can be extracted from audit logs with [audit2rbac](https://github.com/liggitt/audit2rbac).
There are plenty of [good examples](https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/) of [RBAC policies for cluster services](https://github.com/uruddarraju/kubernetes-rbac-policies), as well as [the docs](/docs/admin/authorization/rbac/#role-binding-examples). And it doesn't have to stop there - fine-grained RBAC policies can be extracted from audit logs with [audit2rbac](https://github.com/liggitt/audit2rbac).
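For instance, a minimal, hypothetical read-only Role bound to a service account might look like this sketch:

```yaml
# Minimal read-only Role and RoleBinding; names are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: app-sa                # hypothetical service account
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```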
Incorrect or excessively permissive RBAC policies are a security threat in case of a compromised pod. Maintaining least privilege, and continuously reviewing and improving RBAC rules, should be considered part of the "technical debt hygiene" that teams build into their development lifecycle.
[Audit Logging](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/) (beta in 1.10) provides customisable API logging at the payload (e.g. request and response), and also metadata levels. Log levels can be tuned to your organisation's security policy - [GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/audit-logging#audit_policy) provides sane defaults to get you started.
[Audit Logging](/docs/tasks/debug-application-cluster/audit/) (beta in 1.10) provides customisable API logging at the payload (e.g. request and response), and also metadata levels. Log levels can be tuned to your organisation's security policy - [GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/audit-logging#audit_policy) provides sane defaults to get you started.
For read requests such as get, list, and watch, only the request object is saved in the audit logs; the response object is not. For requests involving sensitive data such as Secret and ConfigMap, only the metadata is exported. For all other requests, both request and response objects are saved in audit logs.
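That behaviour maps naturally onto an audit policy; a sketch of one such policy (levels and resources here are illustrative, not the GKE defaults) might be:

```yaml
# Sketch of an audit policy: metadata only for Secrets/ConfigMaps,
# request-only for reads, request and response for everything else.
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
- level: Request
  verbs: ["get", "list", "watch"]
- level: RequestResponse
```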
@ -94,9 +94,9 @@ etcd should be configured with [peer and client TLS certificates](https://github
**A security best practice is to regularly rotate encryption keys and certificates, in order to limit the "blast radius" of a key compromise.**
Kubernetes will [rotate some certificates automatically](https://kubernetes.io/docs/tasks/tls/certificate-rotation/) (notably, the kubelet client and server certs) by creating new CSRs as its existing credentials expire.
Kubernetes will [rotate some certificates automatically](/docs/tasks/tls/certificate-rotation/) (notably, the kubelet client and server certs) by creating new CSRs as its existing credentials expire.
However, the [symmetric encryption keys](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/) that the API server uses to encrypt etcd values are not automatically rotated - they must be [rotated manually](https://www.twistlock.com/2017/08/02/kubernetes-secrets-encryption/). Master access is required to do this, so managed services (such as GKE or AKS) abstract this problem from an operator.
However, the [symmetric encryption keys](/docs/tasks/administer-cluster/encrypt-data/) that the API server uses to encrypt etcd values are not automatically rotated - they must be [rotated manually](https://www.twistlock.com/2017/08/02/kubernetes-secrets-encryption/). Master access is required to do this, so managed services (such as GKE or AKS) abstract this problem from an operator.
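Rotation typically means adding a new key at the head of the provider list, restarting the API servers, and then re-writing the stored secrets; the configuration file involved looks roughly like the sketch below (the exact `apiVersion` and `kind` depend on your Kubernetes version, and the key material is a placeholder):

```yaml
# Sketch of an encryption-at-rest configuration for the API server;
# key values shown here are placeholders, not real secrets.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key2            # new key first: used for writes
        secret: <base64-encoded 32-byte key>
      - name: key1            # old key kept for reads until rotation completes
        secret: <base64-encoded 32-byte key>
  - identity: {}
```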
# Part Two: Workloads
@ -109,7 +109,7 @@ With minimum viable security on the control plane the cluster is able to operate
Tools like [bane](https://github.com/genuinetools/bane) can help to generate AppArmor profiles, and [docker-slim](https://github.com/docker-slim/docker-slim#quick-seccomp-example) for seccomp profiles, but beware - a comprehensive test suite is required to exercise all code paths in your application when verifying the side effects of applying these policies.
[PodSecurityPolicies](https://kubernetes.io/docs/concepts/policy/pod-security-policy/) can be used to mandate the use of security extensions and other Kubernetes security directives. They provide a minimum contract that a pod must fulfil to be submitted to the API server - including security profiles, the privileged flag, and the sharing of host network, process, or IPC namespaces.
[PodSecurityPolicies](/docs/concepts/policy/pod-security-policy/) can be used to mandate the use of security extensions and other Kubernetes security directives. They provide a minimum contract that a pod must fulfil to be submitted to the API server - including security profiles, the privileged flag, and the sharing of host network, process, or IPC namespaces.
These directives are important, as they help to prevent containerised processes from escaping their isolation boundaries, and [Tim Allclair](https://twitter.com/tallclair)'s [example PodSecurityPolicy](https://gist.github.com/tallclair/11981031b6bfa829bb1fb9dcb7e026b0) is a comprehensive resource that you can customise to your use case.
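A heavily trimmed sketch of such a policy (not a substitute for the linked example) could be:

```yaml
# Trimmed PodSecurityPolicy sketch: no privileged pods, no host namespaces,
# containers must run as non-root. Name and volume list are examples.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  hostNetwork: false
  hostPID: false
  hostIPC: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
```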
@ -135,7 +135,7 @@ Static analysis of YAML configuration can be used to establish a baseline for ru
}, {
"selector": "containers[] .securityContext .capabilities .drop",
"reason": "Reducing kernel capabilities available to a container limits its attack surface",
"href": "https://kubernetes.io/docs/tasks/configure-pod-container/security-context/"
"href": "/docs/tasks/configure-pod-container/security-context/"
}]
}
}
@ -197,7 +197,7 @@ Having to run workloads as a non-root user is not going to change until user nam
## 9. Use Network Policies
**By default, Kubernetes networking allows all pod to pod traffic; this can be restricted using a** [**Network Policy**](https://kubernetes.io/docs/concepts/services-networking/network-policies/) **.**
**By default, Kubernetes networking allows all pod to pod traffic; this can be restricted using a** [**Network Policy**](/docs/concepts/services-networking/network-policies/) **.**
<img src="/images/blog/2018-06-05-11-ways-not-to-get-hacked/kubernetes-networking.png" width="800" />
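A common first step, shown here as a minimal sketch, is a per-namespace default-deny ingress policy, after which specific flows are explicitly allowed:

```yaml
# Default-deny ingress for all pods in a namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```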
@ -263,7 +263,7 @@ Cloud provider metadata APIs are a constant source of escalation (as the recent
**Web servers present an attack surface to the network they're attached to: scanning an image's installed files ensures the absence of known vulnerabilities that an attacker could exploit to gain remote access to the container. An IDS (Intrusion Detection System) detects them if they do.**
Kubernetes permits pods into the cluster through a series of [admission controller](https://kubernetes.io/docs/admin/admission-controllers/) gates, which are applied to pods and other resources like deployments. These gates can validate each pod for admission or change its contents, and they now support backend webhooks.
Kubernetes permits pods into the cluster through a series of [admission controller](/docs/admin/admission-controllers/) gates, which are applied to pods and other resources like deployments. These gates can validate each pod for admission or change its contents, and they now support backend webhooks.
<img src="/images/blog/2018-06-05-11-ways-not-to-get-hacked/admission-controllers.png" width="800" />

View File

@ -6,13 +6,13 @@ date: 2018-07-24
**Authors**: Balaji Subramaniam ([Intel](mailto:balaji.subramaniam@intel.com)), Connor Doyle ([Intel](mailto:connor.p.doyle@intel.com))
This blog post describes the [CPU Manager](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/), a beta feature in [Kubernetes](https://kubernetes.io/). The CPU manager feature enables better placement of workloads in the [Kubelet](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/), the Kubernetes node agent, by allocating exclusive CPUs to certain pod containers.
This blog post describes the [CPU Manager](/docs/tasks/administer-cluster/cpu-management-policies/), a beta feature in [Kubernetes](https://kubernetes.io/). The CPU manager feature enables better placement of workloads in the [Kubelet](/docs/reference/command-line-tools-reference/kubelet/), the Kubernetes node agent, by allocating exclusive CPUs to certain pod containers.
![cpu manager](/images/blog/2018-07-24-cpu-manager/cpu-manager.png)
## Sounds Good! But Does the CPU Manager Help Me?
It depends on your workload. A single compute node in a Kubernetes cluster can run many [pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/) and some of these pods could be running CPU-intensive workloads. In such a scenario, the pods might contend for the CPU resources available in that compute node. When this contention intensifies, the workload can move to different CPUs depending on whether the pod is throttled and the availability of CPUs at scheduling time. There might also be cases where the workload could be sensitive to context switches. In all the above scenarios, the performance of the workload might be affected.
It depends on your workload. A single compute node in a Kubernetes cluster can run many [pods](/docs/concepts/workloads/pods/pod/) and some of these pods could be running CPU-intensive workloads. In such a scenario, the pods might contend for the CPU resources available in that compute node. When this contention intensifies, the workload can move to different CPUs depending on whether the pod is throttled and the availability of CPUs at scheduling time. There might also be cases where the workload could be sensitive to context switches. In all the above scenarios, the performance of the workload might be affected.
If your workload is sensitive to such scenarios, then CPU Manager can be enabled to provide better performance isolation by allocating exclusive CPUs for your workload.
@ -27,7 +27,7 @@ CPU manager might help workloads with the following characteristics:
## Ok! How Do I use it?
Using the CPU manager is simple. First, [enable CPU manager with the Static policy](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#cpu-management-policies) in the Kubelet running on the compute nodes of your cluster. Then configure your pod to be in the [Guaranteed Quality of Service (QoS) class](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-guaranteed). Request whole numbers of CPU cores (e.g., `1000m`, `4000m`) for containers that need exclusive cores. Create your pod in the same way as before (e.g., `kubectl create -f pod.yaml`). And _voilà_, the CPU manager will assign exclusive CPUs to each container in the pod according to its CPU request.
Using the CPU manager is simple. First, [enable CPU manager with the Static policy](/docs/tasks/administer-cluster/cpu-management-policies/#cpu-management-policies) in the Kubelet running on the compute nodes of your cluster. Then configure your pod to be in the [Guaranteed Quality of Service (QoS) class](/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-guaranteed). Request whole numbers of CPU cores (e.g., `1000m`, `4000m`) for containers that need exclusive cores. Create your pod in the same way as before (e.g., `kubectl create -f pod.yaml`). And _voilà_, the CPU manager will assign exclusive CPUs to each container in the pod according to its CPU request.
```
apiVersion: v1
@ -54,7 +54,7 @@ _Pod specification requesting two exclusive CPUs._
For Kubernetes, and the purposes of this blog post, we will discuss three kinds of CPU resource controls available in most Linux distributions. The first two are CFS shares (what's my weighted fair share of CPU time on this system) and CFS quota (what's my hard cap of CPU time over a period). The CPU manager uses a third control called CPU affinity (on what logical CPUs am I allowed to execute).
By default, all the pods and the containers running on a compute node of your Kubernetes cluster can execute on any available cores in the system. The total amount of allocatable shares and quota are limited by the CPU resources explicitly [reserved for kubernetes and system daemons](https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/). However, limits on the CPU time being used can be specified using [CPU limits in the pod spec](https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#specify-a-cpu-request-and-a-cpu-limit). Kubernetes uses [CFS quota](https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt) to enforce CPU limits on pod containers.
By default, all the pods and the containers running on a compute node of your Kubernetes cluster can execute on any available cores in the system. The total amount of allocatable shares and quota are limited by the CPU resources explicitly [reserved for kubernetes and system daemons](/docs/tasks/administer-cluster/reserve-compute-resources/). However, limits on the CPU time being used can be specified using [CPU limits in the pod spec](/docs/tasks/configure-pod-container/assign-cpu-resource/#specify-a-cpu-request-and-a-cpu-limit). Kubernetes uses [CFS quota](https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt) to enforce CPU limits on pod containers.
When CPU manager is enabled with the "static" policy, it manages a shared pool of CPUs. Initially this shared pool contains all the CPUs in the compute node. When a container with integer CPU request in a Guaranteed pod is created by the Kubelet, CPUs for that container are removed from the shared pool and assigned exclusively for the lifetime of the container. Other containers are migrated off these exclusively allocated CPUs.

View File

@ -8,7 +8,7 @@ date: 2018-07-27
## What is KubeVirt?
[KubeVirt](https://github.com/kubevirt/kubevirt) is a Kubernetes addon that provides users the ability to schedule traditional virtual machine workloads side by side with container workloads. Through the use of [Custom Resource Definitions](https://Kubernetes.io/docs/concepts/extend-Kubernetes/api-extension/custom-resources/) (CRDs) and other Kubernetes features, KubeVirt seamlessly extends existing Kubernetes clusters to provide a set of virtualization APIs that can be used to manage virtual machines.
[KubeVirt](https://github.com/kubevirt/kubevirt) is a Kubernetes addon that provides users the ability to schedule traditional virtual machine workloads side by side with container workloads. Through the use of [Custom Resource Definitions](/docs/concepts/extend-Kubernetes/api-extension/custom-resources/) (CRDs) and other Kubernetes features, KubeVirt seamlessly extends existing Kubernetes clusters to provide a set of virtualization APIs that can be used to manage virtual machines.
## Why Use CRDs Over an Aggregated API Server?
@ -44,7 +44,7 @@ One of the responsibilities of the Kubernetes API server is to intercept and val
This validation occurs during a process called admission control. Until recently, it was not possible to extend the default Kubernetes admission controllers without altering code and compiling/deploying an entirely new Kubernetes API server. This meant that if we wanted to perform admission control on KubeVirt's CRD objects while they are posted to the cluster, we'd have to build our own version of the Kubernetes API server and convince our users to use that instead. That was not a viable solution for us.
Using the new [Dynamic Admission Control](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) feature that first landed in Kubernetes 1.9, we now have a path for performing custom validation on KubeVirt API through the use of a [ValidatingAdmissionWebhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#external-admission-webhooks). This feature allows KubeVirt to dynamically register an HTTPS webhook with Kubernetes at KubeVirt install time. After registering the custom webhook, all requests related to KubeVirt API objects are forwarded from the Kubernetes API server to our HTTPS endpoint for validation. If our endpoint rejects a request for any reason, the object will not be persisted into etcd and the client receives our response outlining the reason for the rejection.
Using the new [Dynamic Admission Control](/docs/reference/access-authn-authz/extensible-admission-controllers/) feature that first landed in Kubernetes 1.9, we now have a path for performing custom validation on KubeVirt API through the use of a [ValidatingAdmissionWebhook](/docs/reference/access-authn-authz/extensible-admission-controllers/#external-admission-webhooks). This feature allows KubeVirt to dynamically register an HTTPS webhook with Kubernetes at KubeVirt install time. After registering the custom webhook, all requests related to KubeVirt API objects are forwarded from the Kubernetes API server to our HTTPS endpoint for validation. If our endpoint rejects a request for any reason, the object will not be persisted into etcd and the client receives our response outlining the reason for the rejection.
For example, if someone posts a malformed VirtualMachine object, they'll receive an error indicating what the problem is.
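Under the hood, the registration is just another API object; a rough sketch (the service name, path, group, and versions below are assumptions, not KubeVirt's actual values) looks like:

```yaml
# Sketch of a validating webhook registration; names, namespace, path,
# and CA bundle are placeholders.
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: vm-validator.example.com
webhooks:
- name: vm-validator.example.com
  clientConfig:
    service:
      namespace: kubevirt
      name: virt-api-validator      # hypothetical Service backing the webhook
      path: /validate-vms
    caBundle: <base64-encoded CA certificate>
  rules:
  - apiGroups: ["kubevirt.io"]
    apiVersions: ["v1alpha2"]
    operations: ["CREATE", "UPDATE"]
    resources: ["virtualmachines"]
  failurePolicy: Fail
```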
@ -57,7 +57,7 @@ In the example output above, that error response is coming directly from KubeVir
## CRD OpenAPIv3 Validation
In addition to the validating webhook, KubeVirt also uses the ability to provide an [OpenAPIv3 validation schema](https://kubernetes.io/docs/tasks/access-kubernetes-API/extend-api-custom-resource-definitions/#advanced-topics) when registering a CRD with the cluster. While the OpenAPIv3 schema does not let us express some of the more advanced validation checks that the validation webhook provides, it does offer the ability to enforce simple validation checks involving things like required fields, max/min value lengths, and verifying that values are formatted in a way that matches a regular expression string.
In addition to the validating webhook, KubeVirt also uses the ability to provide an [OpenAPIv3 validation schema](/docs/tasks/access-kubernetes-API/extend-api-custom-resource-definitions/#advanced-topics) when registering a CRD with the cluster. While the OpenAPIv3 schema does not let us express some of the more advanced validation checks that the validation webhook provides, it does offer the ability to enforce simple validation checks involving things like required fields, max/min value lengths, and verifying that values are formatted in a way that matches a regular expression string.
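In the CRD manifest, that schema lives under a `validation` block; a simplified, hypothetical excerpt:

```yaml
# Hypothetical excerpt of a CRD manifest with an OpenAPIv3 schema:
# a required field, an integer minimum, and a regex-style string check.
spec:
  validation:
    openAPIV3Schema:
      properties:
        spec:
          required:
          - domain
          properties:
            cores:
              type: integer
              minimum: 1
            runStrategy:
              type: string
              pattern: "^(Always|Halted|Manual)$"
```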
## Dynamic Webhooks for “PodPreset Like” Behavior
@ -93,7 +93,7 @@ One thing worth noting is that in Kubernetes 1.10 a very basic form of CRD subre
## CRD Finalizers
A [CRD finalizer](https://kubernetes.io/docs/tasks/access-kubernetes-API/extend-api-custom-resource-definitions/#advanced-topics) is a feature that lets us provide a pre-delete hook in order to perform actions before allowing a CRD object to be removed from persistent storage. In KubeVirt, we use finalizers to guarantee a virtual machine has completely terminated before we allow the corresponding VMI object to be removed from etcd.
A [CRD finalizer](/docs/tasks/access-kubernetes-API/extend-api-custom-resource-definitions/#advanced-topics) is a feature that lets us provide a pre-delete hook in order to perform actions before allowing a CRD object to be removed from persistent storage. In KubeVirt, we use finalizers to guarantee a virtual machine has completely terminated before we allow the corresponding VMI object to be removed from etcd.
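Mechanically, a finalizer is just an entry in the object's `metadata.finalizers` list; deletion is held until the controller finishes its cleanup and removes that entry. A hypothetical sketch:

```yaml
# Illustrative: an object carrying a custom finalizer. The name and
# finalizer value are hypothetical; spec omitted for brevity.
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstance
metadata:
  name: testvm
  finalizers:
  - foregroundDeleteVirtualMachine
```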
## API Versioning for CRDs
@ -104,4 +104,3 @@ Prior to Kubernetes 1.11, CRDs did not have support for multiple versions. This
That strategy was not exactly a viable option for us.
Fortunately, thanks to some recent [work to rectify this issue in Kubernetes](https://github.com/kubernetes/features/issues/544), the latest Kubernetes v1.11 now supports [CRDs with multiple versions](https://github.com/kubernetes/kubernetes/pull/63830). Note, however, that this initial multi-version support is limited. While a CRD can now have multiple versions, the feature does not currently contain a path for performing conversions between versions. In KubeVirt, the lack of conversion makes it difficult for us to evolve our API as we progress through versions. Luckily, support for conversions between versions is underway, and we look forward to taking advantage of that feature once it lands in a future Kubernetes release.

View File

@ -48,7 +48,7 @@ Risks to these components include hardware failures, software bugs, bad updates,
The API Server uses multiple instances behind a load balancer to achieve scale and availability. The load balancer is a critical component for purposes of high availability. Multiple DNS API Server A records might be an alternative if you don't have a load balancer.
The kube-scheduler and kube-controller-manager engage in a leader election process, rather than utilizing a load balancer. Since a [cloud-controller-manager](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/) is used for selected types of hosting infrastructure, and these have implementation variations, they will not be discussed, beyond indicating that they are a control plane component.
The kube-scheduler and kube-controller-manager engage in a leader election process, rather than utilizing a load balancer. Since a [cloud-controller-manager](/docs/tasks/administer-cluster/running-cloud-controller/) is used for selected types of hosting infrastructure, and these have implementation variations, they will not be discussed, beyond indicating that they are a control plane component.
Pods running on Kubernetes worker nodes are managed by the kubelet agent. Each worker instance runs the kubelet agent and a [CRI-compatible](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/) container runtime. Kubernetes itself is designed to monitor and recover from worker node outages. But for critical workloads, hypervisor resource management, workload isolation and availability features can be used to enhance availability and make performance more predictable.
@ -102,15 +102,15 @@ Running a hypervisor layer offers operational advantages and better workload iso
## Kubernetes configuration settings
Master and Worker nodes should be protected from overload and resource exhaustion. Hypervisor features can be used to isolate critical components and reserve resources. There are also Kubernetes configuration settings that can throttle things like API call rates and pods per node. Some install suites and commercial distributions take care of this, but if you are performing a custom Kubernetes deployment, you may find that the defaults are not appropriate, particularly if your resources are small or your cluster is large.
Resource consumption by the control plane will correlate with the number of pods and the pod churn rate. Very large and very small clusters will benefit from non-default [settings](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/) of kube-apiserver request throttling and memory. Having these too high can lead to request limit exceeded and out of memory errors.
Resource consumption by the control plane will correlate with the number of pods and the pod churn rate. Very large and very small clusters will benefit from non-default [settings](/docs/reference/command-line-tools-reference/kube-apiserver/) of kube-apiserver request throttling and memory. Having these too high can lead to request limit exceeded and out of memory errors.
On worker nodes, [Node Allocatable](https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/) should be configured based on a reasonable supportable workload density at each node. Namespaces can be created to subdivide the worker node cluster into multiple virtual clusters with resource CPU and memory [quotas](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/). Kubelet handling of [out of resource](https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/) conditions can be configured.
On worker nodes, [Node Allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/) should be configured based on a reasonable supportable workload density at each node. Namespaces can be created to subdivide the worker node cluster into multiple virtual clusters with resource CPU and memory [quotas](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/). Kubelet handling of [out of resource](/docs/tasks/administer-cluster/out-of-resource/) conditions can be configured.
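As one hedged example (the namespace and values are arbitrary), a per-namespace quota might look like:

```yaml
# Sketch: per-namespace CPU and memory quota; values are arbitrary examples.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "20"
    requests.memory: 64Gi
    limits.cpu: "40"
    limits.memory: 128Gi
    pods: "100"
```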
## Security
Every Kubernetes cluster has a cluster root Certificate Authority (CA). The Controller Manager, API Server, Scheduler, kubelet client, kube-proxy and administrator certificates need to be generated and installed. If you use an install tool or a distribution this may be handled for you. A manual process is described [here](https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md). You should be prepared to reinstall certificates in the event of node replacements or expansions.
As Kubernetes is entirely API driven, controlling and limiting who can access the cluster and what actions they are allowed to perform is essential. Encryption and authentication options are addressed in this [documentation](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/).
As Kubernetes is entirely API driven, controlling and limiting who can access the cluster and what actions they are allowed to perform is essential. Encryption and authentication options are addressed in this [documentation](/docs/tasks/administer-cluster/securing-a-cluster/).
Kubernetes application workloads are based on container images. You want the source and content of these images to be trustworthy. This will almost always mean that you will host a local container image repository. Pulling images from the public Internet can present both reliability and security issues. You should choose a repository that supports image signing, security scanning, access controls on pushing and pulling images, and logging of activity.
@ -119,13 +119,13 @@ Processes must be in place to support applying updates for host firmware, hyperv
Recommendations:
* Tighten security settings on the control plane components beyond defaults (e.g., [locking down worker nodes](http://blog.kontena.io/locking-down-kubernetes-workers/))
* Utilize [Pod Security Policies](https://kubernetes.io/docs/concepts/policy/pod-security-policy/)
* Consider the [NetworkPolicy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) integration available with your networking solution, including how you will accomplish tracing, monitoring and troubleshooting.
* Utilize [Pod Security Policies](/docs/concepts/policy/pod-security-policy/)
* Consider the [NetworkPolicy](/docs/concepts/services-networking/network-policies/) integration available with your networking solution, including how you will accomplish tracing, monitoring and troubleshooting.
* Use RBAC to drive authorization decisions and enforcement.
* Consider physical security, especially when deploying to edge or remote office locations that may be unattended. Include storage encryption to limit exposure from stolen devices and protection from attachment of malicious devices like USB keys.
* Protect Kubernetes plain-text cloud provider credentials (access keys, tokens, passwords, etc.)
Kubernetes [secret](https://kubernetes.io/docs/concepts/configuration/secret/) objects are appropriate for holding small amounts of sensitive data. These are retained within etcd. These can be readily used to hold credentials for the Kubernetes API but there are times when a workload or an extension of the cluster itself needs a more full-featured solution. The HashiCorp Vault project is a popular solution if you need more than the built-in secret objects can provide.
Kubernetes [secret](/docs/concepts/configuration/secret/) objects are appropriate for holding small amounts of sensitive data. These are retained within etcd. These can be readily used to hold credentials for the Kubernetes API but there are times when a workload or an extension of the cluster itself needs a more full-featured solution. The HashiCorp Vault project is a popular solution if you need more than the built-in secret objects can provide.
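For reference, a minimal sketch of such a secret object (the key name and value are placeholders):

```yaml
# Sketch of a generic Secret; the value is a placeholder, base64-encoded.
apiVersion: v1
kind: Secret
metadata:
  name: cloud-credentials
type: Opaque
data:
  access-key: PHJlcGxhY2UtbWU+    # base64 of "<replace-me>"
```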
## Disaster Recovery and Backup
@ -166,7 +166,7 @@ Some critical state is held outside etcd. Certificates, container images, and ot
* Cloud provider specific account and configuration data
## Considerations for your production workloads
Anti-affinity specifications can be used to split clustered services across backing hosts, but at this time the settings are used only when the pod is scheduled. This means that Kubernetes can restart a failed node of your clustered application, but does not have a native mechanism to rebalance after a fail back. This is a topic worthy of a separate blog, but supplemental logic might be useful to achieve optimal workload placements after host or worker node recoveries or expansions. The [Pod Priority and Preemption feature](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/) can be used to specify a preferred triage in the event of resource shortages caused by failures or bursting workloads.
Anti-affinity specifications can be used to split clustered services across backing hosts, but at this time the settings are used only when the pod is scheduled. This means that Kubernetes can restart a failed node of your clustered application, but does not have a native mechanism to rebalance after a fail back. This is a topic worthy of a separate blog, but supplemental logic might be useful to achieve optimal workload placements after host or worker node recoveries or expansions. The [Pod Priority and Preemption feature](/docs/concepts/configuration/pod-priority-preemption/) can be used to specify a preferred triage in the event of resource shortages caused by failures or bursting workloads.
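As a sketch of the pod-template fragment involved (the label and topology key are examples), spreading replicas across hosts uses a pod anti-affinity term like this:

```yaml
# Sketch: require that replicas of the same app not be co-scheduled
# on the same node.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: my-clustered-service     # hypothetical label
      topologyKey: kubernetes.io/hostname
```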
For stateful services, external attached volume mounts are the standard Kubernetes recommendation for a non-clustered service (e.g., a typical SQL database). At this time, Kubernetes-managed snapshots of these external volumes are in the category of a [roadmap feature request](https://docs.google.com/presentation/d/1dgxfnroRAu0aF67s-_bmeWpkM1h2LCxe6lB1l1oS0EQ/edit#slide=id.g3ca07c98c2_0_47), likely to align with the Container Storage Interface (CSI) integration. Thus, performing backups of such a service would involve application-specific, in-pod activity that is beyond the scope of this document. While awaiting better Kubernetes support for a snapshot and backup workflow, running your database service in a VM rather than a container, and exposing it to your Kubernetes workload, may be worth considering.
@ -174,9 +174,9 @@ Cluster-distributed stateful services (e.g., Cassandra) can benefit from splitti
## Other considerations
[Logs](https://kubernetes.io/docs/concepts/cluster-administration/logging/) and [metrics](https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/) (if collected and persistently retained) are valuable for diagnosing outages, but given the variety of technologies available they will not be addressed in this blog. If Internet connectivity is available, it may be desirable to retain logs and metrics externally at a central location.
[Logs](/docs/concepts/cluster-administration/logging/) and [metrics](/docs/tasks/debug-application-cluster/resource-usage-monitoring/) (if collected and persistently retained) are valuable for diagnosing outages, but given the variety of technologies available they will not be addressed in this blog. If Internet connectivity is available, it may be desirable to retain logs and metrics externally at a central location.
Your production deployment should utilize an automated installation, configuration and update tool (e.g., [Ansible](https://github.com/kubernetes-incubator/kubespray), [BOSH](https://github.com/cloudfoundry-incubator/kubo-deployment), [Chef](https://github.com/chef-cookbooks/kubernetes), [Juju](https://kubernetes.io/docs/getting-started-guides/ubuntu/installation/), [kubeadm](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/), [Puppet](https://forge.puppet.com/puppetlabs/kubernetes), etc.). A manual process will have repeatability issues, be labor intensive, error prone, and difficult to scale. [Certified distributions](https://www.cncf.io/certification/software-conformance/#logos) are likely to include a facility for retaining configuration settings across updates, but if you implement your own install and config toolchain, then retention, backup and recovery of the configuration artifacts is essential. Consider keeping your deployment components and settings under a version control system such as Git.
Your production deployment should utilize an automated installation, configuration and update tool (e.g., [Ansible](https://github.com/kubernetes-incubator/kubespray), [BOSH](https://github.com/cloudfoundry-incubator/kubo-deployment), [Chef](https://github.com/chef-cookbooks/kubernetes), [Juju](/docs/getting-started-guides/ubuntu/installation/), [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), [Puppet](https://forge.puppet.com/puppetlabs/kubernetes), etc.). A manual process will have repeatability issues, be labor intensive, error prone, and difficult to scale. [Certified distributions](https://www.cncf.io/certification/software-conformance/#logos) are likely to include a facility for retaining configuration settings across updates, but if you implement your own install and config toolchain, then retention, backup and recovery of the configuration artifacts is essential. Consider keeping your deployment components and settings under a version control system such as Git.
## Outage recovery

View File

@ -52,7 +52,7 @@ Improvements that will allow the [Horizontal Pod Autoscaler to reach proper size
## Availability
Kubernetes 1.12 is available for [download on GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.12.0). To get started with Kubernetes, check out these [interactive tutorials](https://kubernetes.io/docs/tutorials/). You can also install 1.12 using [Kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/).
Kubernetes 1.12 is available for [download on GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.12.0). To get started with Kubernetes, check out these [interactive tutorials](/docs/tutorials/). You can also install 1.12 using [Kubeadm](/docs/setup/independent/create-cluster-kubeadm/).
## 5 Day Features Blog Series

View File

@ -93,5 +93,5 @@ feedback](https://github.com/grpc-ecosystem/grpc-health-probe/).
## Further reading
- Protocol: [GRPC Health Checking Protocol](https://github.com/grpc/grpc/blob/v1.15.0/doc/health-checking.md) ([health.proto](https://github.com/grpc/grpc/blob/v1.15.0/src/proto/grpc/health/v1/health.proto))
- Documentation: [Kubernetes liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/)
- Documentation: [Kubernetes liveness and readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/)
- Article: [Advanced Kubernetes Health Check Patterns](https://ahmet.im/blog/advanced-kubernetes-health-checks/)

View File

@ -52,7 +52,7 @@ Kubernetes clusters are built on top of disks created in Azure. In a typical con
![](/images/blog/2018-10-08-support-for-azure-vmss/cloud-provider-components.png)
The Kubernetes cloud provider interface provides interactions with clouds for managing cloud-specific resources, e.g. public IPs and routes. A good overview of these components is given in [[2]](https://kubernetes.io/docs/concepts/architecture/cloud-controller/). In the case of an Azure Kubernetes cluster, the Kubernetes interactions go through the Azure cloud provider layer and contact the various services running in the cloud.
The Kubernetes cloud provider interface provides interactions with clouds for managing cloud-specific resources, e.g. public IPs and routes. A good overview of these components is given in [[2]](/docs/concepts/architecture/cloud-controller/). In the case of an Azure Kubernetes cluster, the Kubernetes interactions go through the Azure cloud provider layer and contact the various services running in the cloud.
The cloud provider implementation of K8s can be largely divided into the following component interfaces which we need to implement:
@ -142,7 +142,7 @@ In future there will be support for the following:
## Cluster Autoscaler
A Kubernetes cluster consists of nodes. These nodes can be virtual machines, bare metal servers, or even virtual nodes (virtual kubelet). To avoid getting lost in the permutations and combinations of the Kubernetes ecosystem ;-), let's consider that the cluster we are discussing consists of virtual machines, which are hosted in a cloud (e.g., Azure, Google or AWS). What this effectively means is that you have access to virtual machines which run Kubernetes agents and a master node which runs k8s services like the API server. A detailed version of the k8s architecture can be found here [[11]](https://kubernetes.io/docs/concepts/architecture/).
A Kubernetes cluster consists of nodes. These nodes can be virtual machines, bare metal servers, or even virtual nodes (virtual kubelet). To avoid getting lost in the permutations and combinations of the Kubernetes ecosystem ;-), let's consider that the cluster we are discussing consists of virtual machines, which are hosted in a cloud (e.g., Azure, Google or AWS). What this effectively means is that you have access to virtual machines which run Kubernetes agents and a master node which runs k8s services like the API server. A detailed version of the k8s architecture can be found here [[11]](/docs/concepts/architecture/).
The number of nodes required in a cluster depends on the workload on the cluster. When the load goes up there is a need to add nodes, and when it subsides, there is a need to remove nodes and clean up the resources which are no longer in use. One way this can be taken care of is to manually scale up the nodes which are part of the Kubernetes cluster and manually scale down when the demand reduces. But shouldn't this be done automatically? The answer to this question is the Cluster Autoscaler (CA).
@ -309,7 +309,7 @@ For the acs-engine (the unmanaged variety) on Azure docs can be found here: [[9]
1) https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview
2) https://kubernetes.io/docs/concepts/architecture/cloud-controller/
2) /docs/concepts/architecture/cloud-controller/
3) https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/azure/azure_vmss.go
@ -327,7 +327,7 @@ For the acs-engine (the unmanaged variety) on Azure docs can be found here: [[9]
10) https://github.com/kubernetes/client-go
11) https://kubernetes.io/docs/concepts/architecture/
11) /docs/concepts/architecture/
12) https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview

View File

@ -55,7 +55,7 @@ Similar to the API for managing Kubernetes Persistent Volumes, Kubernetes Volume
* `VolumeSnapshotClass`
* Created by cluster administrators to describe how snapshots should be created, including the driver information, the secrets to access the snapshot, etc.
It is important to note that unlike the core Kubernetes Persistent Volume objects, these Snapshot objects are defined as [CustomResourceDefinitions (CRDs)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions). The Kubernetes project is moving away from having resource types pre-defined in the API server, and is moving towards a model where the API server is independent of the API objects. This allows the API server to be reused for projects other than Kubernetes, and consumers (like Kubernetes) can simply install the resource types they require as CRDs.
It is important to note that unlike the core Kubernetes Persistent Volume objects, these Snapshot objects are defined as [CustomResourceDefinitions (CRDs)](/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions). The Kubernetes project is moving away from having resource types pre-defined in the API server, and is moving towards a model where the API server is independent of the API objects. This allows the API server to be reused for projects other than Kubernetes, and consumers (like Kubernetes) can simply install the resource types they require as CRDs.
[CSI Drivers](https://kubernetes-csi.github.io/docs/Drivers.html) that support snapshots will automatically install the required CRDs. Kubernetes end users only need to verify that a CSI driver that supports snapshots is deployed on their Kubernetes cluster.
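A rough sketch of the alpha objects (the driver, class, and PVC names are hypothetical, and field names may change as the API matures):

```yaml
# Sketch: a snapshot class for a CSI driver, and a snapshot of an existing PVC.
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
snapshotter: csi.example.com            # hypothetical CSI driver name
---
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: data-snapshot-1
spec:
  snapshotClassName: csi-snapclass
  source:
    name: my-data-pvc                   # hypothetical PVC to snapshot
    kind: PersistentVolumeClaim
```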

View File

@ -27,7 +27,7 @@ Why is RuntimeClass a pod level concept? The Kubernetes resource model expects c
## What's next?
The RuntimeClass resource is an important foundation for surfacing runtime properties to the control plane. For example, to implement scheduler support for clusters with heterogeneous nodes supporting different runtimes, we might add [NodeAffinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) terms to the RuntimeClass definition. Another area to address is managing the variable resource requirements to run pods of different runtimes. The [Pod Overhead proposal](https://docs.google.com/document/d/1EJKT4gyl58-kzt2bnwkv08MIUZ6lkDpXcxkHqCvvAp4/preview) was an early take on this that aligns nicely with the RuntimeClass design, and may be pursued further.
The RuntimeClass resource is an important foundation for surfacing runtime properties to the control plane. For example, to implement scheduler support for clusters with heterogeneous nodes supporting different runtimes, we might add [NodeAffinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) terms to the RuntimeClass definition. Another area to address is managing the variable resource requirements to run pods of different runtimes. The [Pod Overhead proposal](https://docs.google.com/document/d/1EJKT4gyl58-kzt2bnwkv08MIUZ6lkDpXcxkHqCvvAp4/preview) was an early take on this that aligns nicely with the RuntimeClass design, and may be pursued further.
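For context, the alpha resource itself is small; a hedged sketch of a RuntimeClass and a pod selecting it (the handler and image names are assumptions tied to your CRI configuration):

```yaml
# Sketch: an alpha RuntimeClass pointing at a CRI handler, and a pod using it.
apiVersion: node.k8s.io/v1alpha1
kind: RuntimeClass
metadata:
  name: sandboxed                 # hypothetical class name
spec:
  runtimeHandler: kata            # hypothetical handler configured in the CRI runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: sandboxed
  containers:
  - name: app
    image: nginx                  # placeholder image
```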
Many other RuntimeClass extensions have also been proposed, and will be revisited as the feature continues to develop and mature. A few more extensions that are being considered include:
@ -41,7 +41,7 @@ RuntimeClass will be under active development at least through 2019, and we're
## Learn More
- Take it for a spin! As an alpha feature, there are some additional setup steps to use RuntimeClass. Refer to the [RuntimeClass documentation](https://kubernetes.io/docs/concepts/containers/runtime-class/#runtime-class) for how to get it running.
- Take it for a spin! As an alpha feature, there are some additional setup steps to use RuntimeClass. Refer to the [RuntimeClass documentation](/docs/concepts/containers/runtime-class/#runtime-class) for how to get it running.
- Check out the [RuntimeClass Kubernetes Enhancement Proposal](https://github.com/kubernetes/community/blob/master/keps/sig-node/0014-runtime-class.md) for more nitty-gritty design details.
- The [Sandbox Isolation Level Decision](https://docs.google.com/document/d/1fe7lQUjYKR0cijRmSbH_y0_l3CYPkwtQa5ViywuNo8Q/preview) documents the thought process that initially went into making RuntimeClass a pod-level choice.
- Join the discussions and help shape the future of RuntimeClass with the [SIG-Node community](https://github.com/kubernetes/community/tree/master/sig-node)

View File

@ -140,7 +140,7 @@ logs-web-1 us-central1-a
## How can I learn more?
Official documentation on the topology-aware dynamic provisioning feature is available here:
https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode
/docs/concepts/storage/storage-classes/#volume-binding-mode
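The key setting is `volumeBindingMode: WaitForFirstConsumer`, which delays volume binding until a pod is scheduled so that topology constraints are honored; a sketch of such a StorageClass (the provisioner and zones are examples):

```yaml
# Sketch: delay volume binding until pod scheduling so topology is honored.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware-standard
provisioner: kubernetes.io/gce-pd       # example provisioner
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: pd-standard
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - us-central1-a
    - us-central1-b
```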
Documentation for CSI drivers is available at https://kubernetes-csi.github.io/docs/

View File

@ -85,9 +85,9 @@ application would have to watch the Kubernetes API and keep itself up to date
with the pods.
Alternatively, in Kubernetes, we could deploy our app as [headless
services](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services).
services](/docs/concepts/services-networking/service/#headless-services).
In this case, Kubernetes [will create multiple A
records](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services)
records](/docs/concepts/services-networking/service/#headless-services)
in the DNS entry for the service. If our gRPC client is sufficiently advanced,
it can automatically maintain the load balancing pool from those DNS entries.
But this approach restricts us to certain gRPC clients, and it's rarely
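Picking up the headless-service idea above, a minimal sketch (names are placeholders) simply sets `clusterIP: None` so that DNS returns one A record per backing pod:

```yaml
# Sketch of a headless Service: no virtual IP, DNS returns the pod IPs directly.
apiVersion: v1
kind: Service
metadata:
  name: my-grpc-backend
spec:
  clusterIP: None
  selector:
    app: my-grpc-backend       # hypothetical pod label
  ports:
  - name: grpc
    port: 50051
```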

View File

@ -51,7 +51,7 @@ We're looking forward to the [doc sprint in Shanghai](https://kccncchina2018engl
We're excited to continue supporting the Japanese and Korean l10n teams, who are making excellent progress.
If you're interested in localizing Kubernetes for your own language or region, check out our [guide to localizing Kubernetes docs](https://kubernetes.io/docs/contribute/localization/) and reach out to a [SIG Docs chair](https://github.com/kubernetes/community/tree/master/sig-docs#leadership) for support.
If you're interested in localizing Kubernetes for your own language or region, check out our [guide to localizing Kubernetes docs](/docs/contribute/localization/) and reach out to a [SIG Docs chair](https://github.com/kubernetes/community/tree/master/sig-docs#leadership) for support.
### Get involved with SIG Docs

View File

@ -17,7 +17,7 @@ Let's dive into the key features of this release:
## Simplified Kubernetes Cluster Management with kubeadm in GA
Most people who have gotten hands-on with Kubernetes have at some point been hands-on with kubeadm. It's an essential tool for managing the cluster lifecycle, from creation to configuration to upgrade; and now kubeadm is officially GA. [kubeadm](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/) handles the bootstrapping of production clusters on existing hardware, configures the core Kubernetes components in a best-practice manner, provides a secure yet easy joining flow for new nodes, and supports easy upgrades. What's notable about this GA release are the now-graduated advanced features, specifically around pluggability and configurability. The scope of kubeadm is to be a toolbox for both admins and automated, higher-level systems, and this release is a significant step in that direction.
Most people who have gotten hands-on with Kubernetes have at some point been hands-on with kubeadm. It's an essential tool for managing the cluster lifecycle, from creation to configuration to upgrade; and now kubeadm is officially GA. [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) handles the bootstrapping of production clusters on existing hardware, configures the core Kubernetes components in a best-practice manner, provides a secure yet easy joining flow for new nodes, and supports easy upgrades. What's notable about this GA release are the now-graduated advanced features, specifically around pluggability and configurability. The scope of kubeadm is to be a toolbox for both admins and automated, higher-level systems, and this release is a significant step in that direction.
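One example of that configurability is the versioned config file API; a hedged sketch of a v1beta1 configuration that could be passed to `kubeadm init --config` (all values are examples):

```yaml
# Sketch: a kubeadm v1beta1 ClusterConfiguration; values are examples only.
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
controlPlaneEndpoint: "10.0.0.10:6443"   # hypothetical load balancer address
networking:
  podSubnet: "192.168.0.0/16"
  serviceSubnet: "10.96.0.0/12"
```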
## Container Storage Interface (CSI) Goes GA
@ -49,7 +49,7 @@ Each Special Interest Group (SIG) within the community continues to deliver the
## Availability
Kubernetes 1.13 is available for [download on GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.13.0). To get started with Kubernetes, check out these [interactive tutorials](https://kubernetes.io/docs/tutorials/). You can also easily install 1.13 using [kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/).
Kubernetes 1.13 is available for [download on GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.13.0). To get started with Kubernetes, check out these [interactive tutorials](/docs/tutorials/). You can also easily install 1.13 using [kubeadm](/docs/setup/independent/create-cluster-kubeadm/).
## Features Blog Series

Some files were not shown because too many files have changed in this diff