From 4e22f60956a3ec4e69e54a5abf02fd5f1e7bcf52 Mon Sep 17 00:00:00 2001
From: tehut
Date: Tue, 3 Apr 2018 14:19:43 -0700
Subject: [PATCH] Blog Migration: Continuation of WIP PR #7144 (#7247)
* WIP
* fixing layouts and assets
updating Gemfile.lock with bundle install (rvm use ruby-2.2.9)
Updated links in header and footer
switch to paginate v2 gem
comment sidebar logic
fixing css regression
retry paginate
updated local ruby version
reverting to original pagination gem
remove text, make heading roboto, next/prev as buttons
* reintroduce dir tree changes
format checking new posts
shrink fab-icons down to 33px
* next/prev buttons on posts and index
* Change "latest" to v1.10
* rename .md files to match old blog and add explicit permalinks for URLs
* replacing a link to old blog
---
Gemfile | 4 +-
Gemfile.lock | 51 +-
Makefile | 2 +-
_config.yml | 7 +
_includes/CommunityHangout/Apr3.html | 227 ++++
_includes/CommunityHangout/Mar27.html | 176 +++
_includes/footer.html | 4 +-
_includes/head.html | 1 +
_includes/header.html | 4 +-
_includes/youtubePlayer.html | 1 +
_layouts/blog.html | 193 ++++
.../2015-03-00-Kubernetes-Gathering-Videos.md | 12 +
...015-03-00-Paricipate-In-Kubernetes-User.md | 23 +
...-00-Weekly-Kubernetes-Community-Hangout.md | 69 ++
.../2015-03-00-Welcome-To-Kubernetes-Blog.md | 30 +
...15-04-00-Borg-Predecessor-To-Kubernetes.md | 49 +
.../2015-04-00-Faster-Than-Speeding-Latte.md | 10 +
...15-04-00-Introducing-Kubernetes-V1Beta3.md | 65 ++
...15-04-00-Kubernetes-And-Mesosphere-Dcos.md | 33 +
.../2015-04-00-Kubernetes-Release-0150.md | 86 ++
...-00-Weekly-Kubernetes-Community-Hangout.md | 111 ++
...-Weekly-Kubernetes-Community-Hangout_11.md | 114 ++
...-Weekly-Kubernetes-Community-Hangout_17.md | 173 +++
...-Weekly-Kubernetes-Community-Hangout_29.md | 62 +
...Appc-Support-For-Kubernetes-Through-Rkt.md | 28 +
...15-05-00-Docker-And-Kubernetes-And-Appc.md | 30 +
.../2015-05-00-Kubernetes-On-Openstack.md | 62 +
.../2015-05-00-Kubernetes-Release-0160.md | 72 ++
.../2015-05-00-Kubernetes-Release-0170.md | 621 ++++++++++
...00-Resource-Usage-Monitoring-Kubernetes.md | 104 ++
...-00-Weekly-Kubernetes-Community-Hangout.md | 48 +
...-Weekly-Kubernetes-Community-Hangout_18.md | 83 ++
...0-Cluster-Level-Logging-With-Kubernetes.md | 316 ++++++
...15-06-00-Slides-Cluster-Management-With.md | 11 +
...The-Distributed-System-Toolkit-Patterns.md | 52 +
...-00-Weekly-Kubernetes-Community-Hangout.md | 61 +
...-Announcing-First-Kubernetes-Enterprise.md | 20 +
...-How-Did-Quake-Demo-From-Dockercon-Work.md | 198 ++++
...-00-Kubernetes-10-Launch-Party-At-Oscon.md | 19 +
...-07-00-Strong-Simple-Ssl-For-Kubernetes.md | 276 +++++
...-07-00-The-Growing-Kubernetes-Ecosystem.md | 234 ++++
...-00-Weekly-Kubernetes-Community-Hangout.md | 64 ++
...-Weekly-Kubernetes-Community-Hangout_23.md | 136 +++
...0-Using-Kubernetes-Namespaces-To-Manage.md | 92 ++
...-00-Weekly-Kubernetes-Community-Hangout.md | 90 ++
...Kubernetes-Performance-Measurements-And.md | 163 +++
...-Things-You-Didnt-Know-About-Kubectl_28.md | 153 +++
...ing-Kubernetes-The-Shopping-List-Part-1.md | 58 +
...mproved-Tooling-And-A-Growing-Community.md | 55 +
...tes-As-Foundation-For-Cloud-Native-Paas.md | 83 ++
...11-00-Monitoring-Kubernetes-With-Sysdig.md | 236 ++++
...nd-Dynamic-Distributed-Systems-At-Scale.md | 38 +
...0-Creating-Raspberry-Pi-Cluster-Running.md | 201 ++++
...ent-Solution-For-Scope-Using-Kubernetes.md | 127 +++
...And-Replication-Controllers-With-Puppet.md | 76 ++
...1-00-Kubernetes-Community-Meeting-Notes.md | 103 ++
...0-Kubernetes-Community-Meeting-Notes_28.md | 67 ++
...-Simple-Leader-Election-With-Kubernetes.md | 142 +++
...00-Why-Kubernetes-Doesnt-Use-Libnetwork.md | 41 +
...Kubecon-Eu-2016-Kubernetes-Community-In.md | 39 +
...2-00-Kubernetes-Community-Meeting-Notes.md | 59 +
...rnetes-community-meeting-notes-20160128.md | 74 ++
...rnetes-community-meeting-notes-20160211.md | 62 +
...2-00-Sharethis-Kubernetes-In-Production.md | 95 ++
...0-State-Of-Container-World-January-2016.md | 62 +
...0-kubernetes-community-meeting-notes_23.md | 50 +
...netes-Performance-And-Scalability-In-12.md | 174 +++
...016-03-00-Appformix-Helping-Enterprises.md | 53 +
...ulti-Zone-Clusters-A.K.A-Ubernetes-Lite.md | 241 ++++
...00-Elasticbox-Introduces-Elastickube-To.md | 63 ++
.../2016-03-00-Five-Days-Of-Kubernetes-12.md | 55 +
...ner-Metadata-Changes-Your-Point-Of-View.md | 93 ++
...ifying-Advanced-Networking-With-Ingress.md | 117 ++
...-Application-Deployment-And-Management-.md | 82 ++
...3-00-Kubernetes-Community-Meeting-Notes.md | 51 +
...-Kubernetes-In-Enterprise-With-Fujitsus.md | 87 ++
...sing-Kubernetes-With-Tensorflow-Serving.md | 36 +
...-State-Of-Container-World-February-2016.md | 115 ++
...pelin-To-Process-Big-Data-On-Kubernetes.md | 138 +++
...dding-Support-For-Kubernetes-In-Rancher.md | 40 +
...-Awesome-User-Interfaces-For-Kubernetes.md | 34 +
...onfiguration-Management-With-Containers.md | 203 ++++
...-00-Container-Survey-Results-March-2016.md | 37 +
...00-Introducing-Kubernetes-Openstack-Sig.md | 67 ++
...16-04-00-Kubernetes-Network-Policy-APIs.md | 171 +++
.../_posts/2016-04-00-Kubernetes-On-Aws_15.md | 124 ++
...ty-And-Interoperability-Of-K8S-Clusters.md | 35 +
...016-04-00-Using-Deployment-Objects-With.md | 146 +++
...-00-Coreosfest2016-Kubernetes-Community.md | 45 +
...ecurity-And-Multi-Tenancy-In-Kubernetes.md | 203 ++++
...00-Bringing-End-To-End-Testing-To-Azure.md | 107 ++
.../2016-06-00-Container-Design-Patterns.md | 26 +
...lustrated-Childrens-Guide-To-Kubernetes.md | 20 +
...ion-Platform-At-Wercker-With-Kubernetes.md | 69 ++
.../2016-07-00-Autoscaling-In-Kubernetes.md | 412 +++++++
...nd-To-End-Kubernetes-Testing-To-Azure-2.md | 55 +
...6-07-00-Citrix-Netscaler-And-Kubernetes.md | 31 +
.../2016-07-00-Cross-Cluster-Services.md | 344 ++++++
...-Dashboard-Web-Interface-For-Kubernetes.md | 76 ++
.../2016-07-00-Five-Days-Of-Kubernetes-1.3.md | 61 +
...g-Cloud-Native-And-Enterprise-Workloads.md | 67 ++
...Kubernetes-In-Rancher-Further-Evolution.md | 197 ++++
...-Minikube-Easily-Run-Kubernetes-Locally.md | 135 +++
.../2016-07-00-Oh-The-Places-You-Will-Go.md | 38 +
...ings-Rkt-Container-Engine-To-Kubernetes.md | 86 ++
.../2016-07-00-The-Bet-On-Kubernetes.md | 48 +
...s-Of-Cassandra-Using-Kubernetes-Pet-Set.md | 377 +++++++
...ubernetes-For-Windows-Server-Containers.md | 108 ++
blog/_posts/2016-07-00-happy-k8sbday-1.md | 29 +
...-07-00-openstack-kubernetes-communities.md | 34 +
...l-applications-in-containers-kubernetes.md | 44 +
...ly-Managed-Onpremise-Kubernetes-Cluster.md | 61 +
...eate-Couchbase-Cluster-Using-Kubernetes.md | 427 +++++++
...ubernetes-Namespaces-Use-Cases-Insights.md | 148 +++
...ty-Best-Practices-Kubernetes-Deployment.md | 239 ++++
...-00-Sig-Apps-Running-Apps-In-Kubernetes.md | 44 +
...ul-Applications-Using-Kubernetes-Datera.md | 606 ++++++++++
...-00-Cloud-Native-Application-Interfaces.md | 61 +
...-Creating-Postgresql-Cluster-Using-Helm.md | 148 +++
...ploying-To-Multiple-Kubernetes-With-Kit.md | 68 ++
...Performance-Network-Policies-Kubernetes.md | 194 ++++
...-How-Qbox-Saved-50-Percent-On-Aws-Bills.md | 46 +
...-How-We-Made-Kubernetes-Easy-To-Install.md | 63 ++
...g-It-Easy-To-Run-On-Kuberentes-Anywhere.md | 88 ++
...-Provisioning-And-Storage-In-Kubernetes.md | 192 ++++
...-Services-Kubernetes-Cluster-Federation.md | 435 +++++++
...o-Package-And-Deploy-Apps-On-Kubernetes.md | 126 +++
...Kubernetes-And-Openstack-At-Yahoo-Japan.md | 197 ++++
...tes-Service-Technology-Partners-Program.md | 26 +
...ernetes-Dashboard-UI-1.4-improvements_3.md | 82 ++
.../2016-10-00-Tail-Kubernetes-With-Stern.md | 165 +++
...00-Bringing-Kubernetes-Support-To-Azure.md | 36 +
...ol-Go-From-Docker-Compose-To-Kubernetes.md | 193 ++++
...ng-And-Managed-Service-Provider-Program.md | 21 +
...ainers-Logging-Monitoring-With-Sematext.md | 236 ++++
...croservice-Architecture-With-Kubernetes.md | 177 +++
...Kubelet-Performance-With-Node-Dashboard.md | 124 ++
...00-Cluster-Federation-In-Kubernetes-1.5.md | 322 ++++++
...ner-Runtime-Interface-Cri-In-Kubernetes.md | 233 ++++
.../2016-12-00-Five-Days-Of-Kubernetes-1.5.md | 35 +
...m-Network-Policies-To-Security-Policies.md | 49 +
...tes-1.5-Supporting-Production-Workloads.md | 73 ++
.../2016-12-00-Kubernetes-Supports-Openapi.md | 198 ++++
...ale-Stateful-Applications-In-Kubernetes.md | 450 ++++++++
...12-00-Windows-Server-Support-Kubernetes.md | 79 ++
...ess-Functions-As-Service-For-Kubernetes.md | 134 +++
...un-Kubernetes-In-Kubernetes-Kubeception.md | 126 +++
...-01-00-Kubernetes-Ux-Survey-Infographic.md | 60 +
...Mongodb-On-Kubernetes-With-Statefulsets.md | 428 +++++++
...Deployments-With-Policy-Base-Networking.md | 60 +
...eating-And-Managing-Kubernetes-Clusters.md | 107 ++
...0-Caas-The-Foundation-For-Next-Gen-Paas.md | 52 +
...00-Highly-Available-Kubernetes-Clusters.md | 332 ++++++
...-Com-Shift-To-Kubernetes-From-Openstack.md | 135 +++
...gresql-Clusters-Kubernetes-Statefulsets.md | 314 ++++++
...earning-With-Paddlepaddle-On-Kubernetes.md | 172 +++
...03-00-Advanced-Scheduling-In-Kubernetes.md | 277 +++++
...isioning-And-Storage-Classes-Kubernetes.md | 216 ++++
.../2017-03-00-Five-Days-Of-Kubernetes-1.6.md | 31 +
...Sport-Engaging-The-Kubernetes-Community.md | 53 +
...1.6-Multi-User-Multi-Workloads-At-Scale.md | 114 ++
...0-Scalability-Updates-In-Kubernetes-1.6.md | 90 ++
...s-Zones-Upstream-Nameservers-Kubernetes.md | 155 +++
...nts-With-Kubernetes-In-The-Cloud-Onprem.md | 221 ++++
.../2017-04-00-Rbac-Support-In-Kubernetes.md | 134 +++
...-Draft-Kubernetes-Container-Development.md | 200 ++++
.../2017-05-00-Kubernetes-Monitoring-Guide.md | 90 ++
...0-Kubernetes-Security-Process-Explained.md | 39 +
...ay-Ansible-Collaborative-Kubernetes-Ops.md | 125 ++
...g-Microservices-With-Istio-Service-Mesh.md | 304 +++++
...teful-Application-Extensibility-Updates.md | 82 ++
...-07-00-Happy-Second-Birthday-Kubernetes.md | 166 +++
...7-07-00-How-Watson-Health-Cloud-Deploys.md | 164 +++
...00-High-Performance-Networking-With-Ec2.md | 82 ++
...00-Kompose-Helps-Developers-Move-Docker.md | 174 +++
...08-00-Kubernetes-Meets-High-Performance.md | 83 ++
...Introducing-Resource-Management-Working.md | 93 ++
...00-Kubernetes-18-Security-Workloads-And.md | 95 ++
...9-00-Kubernetes-Statefulsets-Daemonsets.md | 1003 +++++++++++++++++
...Windows-Networking-At-Parity-With-Linux.md | 60 +
...nforcing-Network-Policies-In-Kubernetes.md | 119 ++
.../2017-10-00-Five-Days-Of-Kubernetes-18.md | 29 +
...00-It-Takes-Village-To-Raise-Kubernetes.md | 39 +
.../_posts/2017-10-00-Kubeadm-V18-Released.md | 107 ++
...ity-Steering-Committee-Election-Results.md | 23 +
...0-Request-Routing-And-Policy-Management.md | 452 ++++++++
...0-00-Software-Conformance-Certification.md | 30 +
...10-00-Using-Rbac-Generally-Available-18.md | 269 +++++
.../2017-11-00-Autoscaling-In-Kubernetes.md | 15 +
...-11-00-Certified-Kubernetes-Conformance.md | 43 +
...rd-Container-Runtime-Options-Kubernetes.md | 132 +++
blog/_posts/2017-11-00-Kubernetes-Easy-Way.md | 231 ++++
...Kubernetes-Is-Still-Hard-For-Developers.md | 16 +
...-Securing-Software-Supply-Chain-Grafeas.md | 234 ++++
...7-12-00-Introducing-Kubeflow-Composable.md | 174 +++
...ernetes-19-Workloads-Expanded-Ecosystem.md | 104 ++
...00-Paddle-Paddle-Fluid-Elastic-Learning.md | 51 +
.../2017-12-00-Using-Ebpf-In-Kubernetes.md | 138 +++
.../2018-01-00-Core-Workloads-Api-Ga.md | 104 ++
...2018-01-00-Extensible-Admission-Is-Beta.md | 135 +++
.../2018-01-00-Five-Days-Of-Kubernetes-19.md | 32 +
...8-01-00-Introducing-Client-Go-Version-6.md | 423 +++++++
...Introducing-Container-Storage-Interface.md | 246 ++++
...-00-Kubernetes-V19-Beta-Windows-Support.md | 69 ++
...eporting-Errors-Using-Kubernetes-Events.md | 91 ++
...-Apache-Spark-23-With-Native-Kubernetes.md | 112 ++
...xpanding-User-Support-With-Office-Hours.md | 44 +
...0-First-Beta-Version-Of-Kubernetes-1-10.md | 104 ++
...How-To-Integrate-Rollingupdate-Strategy.md | 251 +++++
...3-00-Principles-Of-Container-App-Design.md | 50 +
blog/index.html | 28 +
css/blog.css | 493 ++++++++
.../kubernetes-basics/public/css/styles.css | 2 -
images/hpc-ec2-vpc-2.png | Bin 0 -> 35473 bytes
images/hpn-ec2-vpc.png | Bin 0 -> 44086 bytes
js/script.js | 3 +-
216 files changed, 26856 insertions(+), 32 deletions(-)
create mode 100644 _includes/CommunityHangout/Apr3.html
create mode 100644 _includes/CommunityHangout/Mar27.html
create mode 100644 _includes/youtubePlayer.html
create mode 100644 _layouts/blog.html
create mode 100644 blog/_posts/2015-03-00-Kubernetes-Gathering-Videos.md
create mode 100644 blog/_posts/2015-03-00-Paricipate-In-Kubernetes-User.md
create mode 100644 blog/_posts/2015-03-00-Weekly-Kubernetes-Community-Hangout.md
create mode 100644 blog/_posts/2015-03-00-Welcome-To-Kubernetes-Blog.md
create mode 100644 blog/_posts/2015-04-00-Borg-Predecessor-To-Kubernetes.md
create mode 100644 blog/_posts/2015-04-00-Faster-Than-Speeding-Latte.md
create mode 100644 blog/_posts/2015-04-00-Introducing-Kubernetes-V1Beta3.md
create mode 100644 blog/_posts/2015-04-00-Kubernetes-And-Mesosphere-Dcos.md
create mode 100644 blog/_posts/2015-04-00-Kubernetes-Release-0150.md
create mode 100644 blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout.md
create mode 100644 blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_11.md
create mode 100644 blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_17.md
create mode 100644 blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_29.md
create mode 100644 blog/_posts/2015-05-00-Appc-Support-For-Kubernetes-Through-Rkt.md
create mode 100644 blog/_posts/2015-05-00-Docker-And-Kubernetes-And-Appc.md
create mode 100644 blog/_posts/2015-05-00-Kubernetes-On-Openstack.md
create mode 100644 blog/_posts/2015-05-00-Kubernetes-Release-0160.md
create mode 100644 blog/_posts/2015-05-00-Kubernetes-Release-0170.md
create mode 100644 blog/_posts/2015-05-00-Resource-Usage-Monitoring-Kubernetes.md
create mode 100644 blog/_posts/2015-05-00-Weekly-Kubernetes-Community-Hangout.md
create mode 100644 blog/_posts/2015-05-00-Weekly-Kubernetes-Community-Hangout_18.md
create mode 100644 blog/_posts/2015-06-00-Cluster-Level-Logging-With-Kubernetes.md
create mode 100644 blog/_posts/2015-06-00-Slides-Cluster-Management-With.md
create mode 100644 blog/_posts/2015-06-00-The-Distributed-System-Toolkit-Patterns.md
create mode 100644 blog/_posts/2015-06-00-Weekly-Kubernetes-Community-Hangout.md
create mode 100644 blog/_posts/2015-07-00-Announcing-First-Kubernetes-Enterprise.md
create mode 100644 blog/_posts/2015-07-00-How-Did-Quake-Demo-From-Dockercon-Work.md
create mode 100644 blog/_posts/2015-07-00-Kubernetes-10-Launch-Party-At-Oscon.md
create mode 100644 blog/_posts/2015-07-00-Strong-Simple-Ssl-For-Kubernetes.md
create mode 100644 blog/_posts/2015-07-00-The-Growing-Kubernetes-Ecosystem.md
create mode 100644 blog/_posts/2015-07-00-Weekly-Kubernetes-Community-Hangout.md
create mode 100644 blog/_posts/2015-07-00-Weekly-Kubernetes-Community-Hangout_23.md
create mode 100644 blog/_posts/2015-08-00-Using-Kubernetes-Namespaces-To-Manage.md
create mode 100644 blog/_posts/2015-08-00-Weekly-Kubernetes-Community-Hangout.md
create mode 100644 blog/_posts/2015-09-00-Kubernetes-Performance-Measurements-And.md
create mode 100644 blog/_posts/2015-10-00-Some-Things-You-Didnt-Know-About-Kubectl_28.md
create mode 100644 blog/_posts/2015-11-00-Creating-A-Raspberry-Pi-Cluster-Running-Kubernetes-The-Shopping-List-Part-1.md
create mode 100644 blog/_posts/2015-11-00-Kubernetes-1-1-Performance-Upgrades-Improved-Tooling-And-A-Growing-Community.md
create mode 100644 blog/_posts/2015-11-00-Kubernetes-As-Foundation-For-Cloud-Native-Paas.md
create mode 100644 blog/_posts/2015-11-00-Monitoring-Kubernetes-With-Sysdig.md
create mode 100644 blog/_posts/2015-11-00-One-Million-Requests-Per-Second-Dependable-And-Dynamic-Distributed-Systems-At-Scale.md
create mode 100644 blog/_posts/2015-12-00-Creating-Raspberry-Pi-Cluster-Running.md
create mode 100644 blog/_posts/2015-12-00-How-Weave-Built-A-Multi-Deployment-Solution-For-Scope-Using-Kubernetes.md
create mode 100644 blog/_posts/2015-12-00-Managing-Kubernetes-Pods-Services-And-Replication-Controllers-With-Puppet.md
create mode 100644 blog/_posts/2016-01-00-Kubernetes-Community-Meeting-Notes.md
create mode 100644 blog/_posts/2016-01-00-Kubernetes-Community-Meeting-Notes_28.md
create mode 100644 blog/_posts/2016-01-00-Simple-Leader-Election-With-Kubernetes.md
create mode 100644 blog/_posts/2016-01-00-Why-Kubernetes-Doesnt-Use-Libnetwork.md
create mode 100644 blog/_posts/2016-02-00-Kubecon-Eu-2016-Kubernetes-Community-In.md
create mode 100644 blog/_posts/2016-02-00-Kubernetes-Community-Meeting-Notes.md
create mode 100644 blog/_posts/2016-02-00-Kubernetes-community-meeting-notes-20160128.md
create mode 100644 blog/_posts/2016-02-00-Kubernetes-community-meeting-notes-20160211.md
create mode 100644 blog/_posts/2016-02-00-Sharethis-Kubernetes-In-Production.md
create mode 100644 blog/_posts/2016-02-00-State-Of-Container-World-January-2016.md
create mode 100644 blog/_posts/2016-02-00-kubernetes-community-meeting-notes_23.md
create mode 100644 blog/_posts/2016-03-00-1000-Nodes-And-Beyond-Updates-To-Kubernetes-Performance-And-Scalability-In-12.md
create mode 100644 blog/_posts/2016-03-00-Appformix-Helping-Enterprises.md
create mode 100644 blog/_posts/2016-03-00-Building-Highly-Available-Applications-Using-Kubernetes-New-Multi-Zone-Clusters-A.K.A-Ubernetes-Lite.md
create mode 100644 blog/_posts/2016-03-00-Elasticbox-Introduces-Elastickube-To.md
create mode 100644 blog/_posts/2016-03-00-Five-Days-Of-Kubernetes-12.md
create mode 100644 blog/_posts/2016-03-00-How-Container-Metadata-Changes-Your-Point-Of-View.md
create mode 100644 blog/_posts/2016-03-00-Kubernetes-1.2-And-Simplifying-Advanced-Networking-With-Ingress.md
create mode 100644 blog/_posts/2016-03-00-Kubernetes-1.2-Even-More-Performance-Upgrades-Plus-Easier-Application-Deployment-And-Management-.md
create mode 100644 blog/_posts/2016-03-00-Kubernetes-Community-Meeting-Notes.md
create mode 100644 blog/_posts/2016-03-00-Kubernetes-In-Enterprise-With-Fujitsus.md
create mode 100644 blog/_posts/2016-03-00-Scaling-Neural-Network-Image-Classification-Using-Kubernetes-With-Tensorflow-Serving.md
create mode 100644 blog/_posts/2016-03-00-State-Of-Container-World-February-2016.md
create mode 100644 blog/_posts/2016-03-00-Using-Spark-And-Zeppelin-To-Process-Big-Data-On-Kubernetes.md
create mode 100644 blog/_posts/2016-04-00-Adding-Support-For-Kubernetes-In-Rancher.md
create mode 100644 blog/_posts/2016-04-00-Building-Awesome-User-Interfaces-For-Kubernetes.md
create mode 100644 blog/_posts/2016-04-00-Configuration-Management-With-Containers.md
create mode 100644 blog/_posts/2016-04-00-Container-Survey-Results-March-2016.md
create mode 100644 blog/_posts/2016-04-00-Introducing-Kubernetes-Openstack-Sig.md
create mode 100644 blog/_posts/2016-04-00-Kubernetes-Network-Policy-APIs.md
create mode 100644 blog/_posts/2016-04-00-Kubernetes-On-Aws_15.md
create mode 100644 blog/_posts/2016-04-00-Sig-Clusterops-Promote-Operability-And-Interoperability-Of-K8S-Clusters.md
create mode 100644 blog/_posts/2016-04-00-Using-Deployment-Objects-With.md
create mode 100644 blog/_posts/2016-05-00-Coreosfest2016-Kubernetes-Community.md
create mode 100644 blog/_posts/2016-05-00-Hypernetes-Security-And-Multi-Tenancy-In-Kubernetes.md
create mode 100644 blog/_posts/2016-06-00-Bringing-End-To-End-Testing-To-Azure.md
create mode 100644 blog/_posts/2016-06-00-Container-Design-Patterns.md
create mode 100644 blog/_posts/2016-06-00-Illustrated-Childrens-Guide-To-Kubernetes.md
create mode 100644 blog/_posts/2016-07-00-Automation-Platform-At-Wercker-With-Kubernetes.md
create mode 100644 blog/_posts/2016-07-00-Autoscaling-In-Kubernetes.md
create mode 100644 blog/_posts/2016-07-00-Bringing-End-To-End-Kubernetes-Testing-To-Azure-2.md
create mode 100644 blog/_posts/2016-07-00-Citrix-Netscaler-And-Kubernetes.md
create mode 100644 blog/_posts/2016-07-00-Cross-Cluster-Services.md
create mode 100644 blog/_posts/2016-07-00-Dashboard-Web-Interface-For-Kubernetes.md
create mode 100644 blog/_posts/2016-07-00-Five-Days-Of-Kubernetes-1.3.md
create mode 100644 blog/_posts/2016-07-00-Kubernetes-1.3-Bridging-Cloud-Native-And-Enterprise-Workloads.md
create mode 100644 blog/_posts/2016-07-00-Kubernetes-In-Rancher-Further-Evolution.md
create mode 100644 blog/_posts/2016-07-00-Minikube-Easily-Run-Kubernetes-Locally.md
create mode 100644 blog/_posts/2016-07-00-Oh-The-Places-You-Will-Go.md
create mode 100644 blog/_posts/2016-07-00-Rktnetes-Brings-Rkt-Container-Engine-To-Kubernetes.md
create mode 100644 blog/_posts/2016-07-00-The-Bet-On-Kubernetes.md
create mode 100644 blog/_posts/2016-07-00-Thousand-Instances-Of-Cassandra-Using-Kubernetes-Pet-Set.md
create mode 100644 blog/_posts/2016-07-00-Update-On-Kubernetes-For-Windows-Server-Containers.md
create mode 100644 blog/_posts/2016-07-00-happy-k8sbday-1.md
create mode 100644 blog/_posts/2016-07-00-openstack-kubernetes-communities.md
create mode 100644 blog/_posts/2016-07-00-stateful-applications-in-containers-kubernetes.md
create mode 100644 blog/_posts/2016-08-00-Challenges-Remotely-Managed-Onpremise-Kubernetes-Cluster.md
create mode 100644 blog/_posts/2016-08-00-Create-Couchbase-Cluster-Using-Kubernetes.md
create mode 100644 blog/_posts/2016-08-00-Kubernetes-Namespaces-Use-Cases-Insights.md
create mode 100644 blog/_posts/2016-08-00-Security-Best-Practices-Kubernetes-Deployment.md
create mode 100644 blog/_posts/2016-08-00-Sig-Apps-Running-Apps-In-Kubernetes.md
create mode 100644 blog/_posts/2016-08-00-Stateful-Applications-Using-Kubernetes-Datera.md
create mode 100644 blog/_posts/2016-09-00-Cloud-Native-Application-Interfaces.md
create mode 100644 blog/_posts/2016-09-00-Creating-Postgresql-Cluster-Using-Helm.md
create mode 100644 blog/_posts/2016-09-00-Deploying-To-Multiple-Kubernetes-With-Kit.md
create mode 100644 blog/_posts/2016-09-00-High-Performance-Network-Policies-Kubernetes.md
create mode 100644 blog/_posts/2016-09-00-How-Qbox-Saved-50-Percent-On-Aws-Bills.md
create mode 100644 blog/_posts/2016-09-00-How-We-Made-Kubernetes-Easy-To-Install.md
create mode 100644 blog/_posts/2016-09-00-Kubernetes-1.4-Making-It-Easy-To-Run-On-Kuberentes-Anywhere.md
create mode 100644 blog/_posts/2016-10-00-Dynamic-Provisioning-And-Storage-In-Kubernetes.md
create mode 100644 blog/_posts/2016-10-00-Globally-Distributed-Services-Kubernetes-Cluster-Federation.md
create mode 100644 blog/_posts/2016-10-00-Helm-Charts-Making-It-Simple-To-Package-And-Deploy-Apps-On-Kubernetes.md
create mode 100644 blog/_posts/2016-10-00-Kubernetes-And-Openstack-At-Yahoo-Japan.md
create mode 100644 blog/_posts/2016-10-00-Kubernetes-Service-Technology-Partners-Program.md
create mode 100644 blog/_posts/2016-10-00-Production-Kubernetes-Dashboard-UI-1.4-improvements_3.md
create mode 100644 blog/_posts/2016-10-00-Tail-Kubernetes-With-Stern.md
create mode 100644 blog/_posts/2016-11-00-Bringing-Kubernetes-Support-To-Azure.md
create mode 100644 blog/_posts/2016-11-00-Kompose-Tool-Go-From-Docker-Compose-To-Kubernetes.md
create mode 100644 blog/_posts/2016-11-00-Kubernetes-Certification-Training-And-Managed-Service-Provider-Program.md
create mode 100644 blog/_posts/2016-11-00-Kubernetes-Containers-Logging-Monitoring-With-Sematext.md
create mode 100644 blog/_posts/2016-11-00-Skytap-Modernizing-Microservice-Architecture-With-Kubernetes.md
create mode 100644 blog/_posts/2016-11-00-Visualize-Kubelet-Performance-With-Node-Dashboard.md
create mode 100644 blog/_posts/2016-12-00-Cluster-Federation-In-Kubernetes-1.5.md
create mode 100644 blog/_posts/2016-12-00-Container-Runtime-Interface-Cri-In-Kubernetes.md
create mode 100644 blog/_posts/2016-12-00-Five-Days-Of-Kubernetes-1.5.md
create mode 100644 blog/_posts/2016-12-00-From-Network-Policies-To-Security-Policies.md
create mode 100644 blog/_posts/2016-12-00-Kubernetes-1.5-Supporting-Production-Workloads.md
create mode 100644 blog/_posts/2016-12-00-Kubernetes-Supports-Openapi.md
create mode 100644 blog/_posts/2016-12-00-Statefulset-Run-Scale-Stateful-Applications-In-Kubernetes.md
create mode 100644 blog/_posts/2016-12-00-Windows-Server-Support-Kubernetes.md
create mode 100644 blog/_posts/2017-01-00-Fission-Serverless-Functions-As-Service-For-Kubernetes.md
create mode 100644 blog/_posts/2017-01-00-How-We-Run-Kubernetes-In-Kubernetes-Kubeception.md
create mode 100644 blog/_posts/2017-01-00-Kubernetes-Ux-Survey-Infographic.md
create mode 100644 blog/_posts/2017-01-00-Running-Mongodb-On-Kubernetes-With-Statefulsets.md
create mode 100644 blog/_posts/2017-01-00-Scaling-Kubernetes-Deployments-With-Policy-Base-Networking.md
create mode 100644 blog/_posts/2017-01-00-Stronger-Foundation-For-Creating-And-Managing-Kubernetes-Clusters.md
create mode 100644 blog/_posts/2017-02-00-Caas-The-Foundation-For-Next-Gen-Paas.md
create mode 100644 blog/_posts/2017-02-00-Highly-Available-Kubernetes-Clusters.md
create mode 100644 blog/_posts/2017-02-00-Inside-Jd-Com-Shift-To-Kubernetes-From-Openstack.md
create mode 100644 blog/_posts/2017-02-00-Postgresql-Clusters-Kubernetes-Statefulsets.md
create mode 100644 blog/_posts/2017-02-00-Run-Deep-Learning-With-Paddlepaddle-On-Kubernetes.md
create mode 100644 blog/_posts/2017-03-00-Advanced-Scheduling-In-Kubernetes.md
create mode 100644 blog/_posts/2017-03-00-Dynamic-Provisioning-And-Storage-Classes-Kubernetes.md
create mode 100644 blog/_posts/2017-03-00-Five-Days-Of-Kubernetes-1.6.md
create mode 100644 blog/_posts/2017-03-00-K8Sport-Engaging-The-Kubernetes-Community.md
create mode 100644 blog/_posts/2017-03-00-Kubernetes-1.6-Multi-User-Multi-Workloads-At-Scale.md
create mode 100644 blog/_posts/2017-03-00-Scalability-Updates-In-Kubernetes-1.6.md
create mode 100644 blog/_posts/2017-04-00-Configuring-Private-Dns-Zones-Upstream-Nameservers-Kubernetes.md
create mode 100644 blog/_posts/2017-04-00-Multi-Stage-Canary-Deployments-With-Kubernetes-In-The-Cloud-Onprem.md
create mode 100644 blog/_posts/2017-04-00-Rbac-Support-In-Kubernetes.md
create mode 100644 blog/_posts/2017-05-00-Draft-Kubernetes-Container-Development.md
create mode 100644 blog/_posts/2017-05-00-Kubernetes-Monitoring-Guide.md
create mode 100644 blog/_posts/2017-05-00-Kubernetes-Security-Process-Explained.md
create mode 100644 blog/_posts/2017-05-00-Kubespray-Ansible-Collaborative-Kubernetes-Ops.md
create mode 100644 blog/_posts/2017-05-00-Managing-Microservices-With-Istio-Service-Mesh.md
create mode 100644 blog/_posts/2017-06-00-Kubernetes-1.7-Security-Hardening-Stateful-Application-Extensibility-Updates.md
create mode 100644 blog/_posts/2017-07-00-Happy-Second-Birthday-Kubernetes.md
create mode 100644 blog/_posts/2017-07-00-How-Watson-Health-Cloud-Deploys.md
create mode 100644 blog/_posts/2017-08-00-High-Performance-Networking-With-Ec2.md
create mode 100644 blog/_posts/2017-08-00-Kompose-Helps-Developers-Move-Docker.md
create mode 100644 blog/_posts/2017-08-00-Kubernetes-Meets-High-Performance.md
create mode 100644 blog/_posts/2017-09-00-Introducing-Resource-Management-Working.md
create mode 100644 blog/_posts/2017-09-00-Kubernetes-18-Security-Workloads-And.md
create mode 100644 blog/_posts/2017-09-00-Kubernetes-Statefulsets-Daemonsets.md
create mode 100644 blog/_posts/2017-09-00-Windows-Networking-At-Parity-With-Linux.md
create mode 100644 blog/_posts/2017-10-00-Enforcing-Network-Policies-In-Kubernetes.md
create mode 100644 blog/_posts/2017-10-00-Five-Days-Of-Kubernetes-18.md
create mode 100644 blog/_posts/2017-10-00-It-Takes-Village-To-Raise-Kubernetes.md
create mode 100644 blog/_posts/2017-10-00-Kubeadm-V18-Released.md
create mode 100644 blog/_posts/2017-10-00-Kubernetes-Community-Steering-Committee-Election-Results.md
create mode 100644 blog/_posts/2017-10-00-Request-Routing-And-Policy-Management.md
create mode 100644 blog/_posts/2017-10-00-Software-Conformance-Certification.md
create mode 100644 blog/_posts/2017-10-00-Using-Rbac-Generally-Available-18.md
create mode 100644 blog/_posts/2017-11-00-Autoscaling-In-Kubernetes.md
create mode 100644 blog/_posts/2017-11-00-Certified-Kubernetes-Conformance.md
create mode 100644 blog/_posts/2017-11-00-Containerd-Container-Runtime-Options-Kubernetes.md
create mode 100644 blog/_posts/2017-11-00-Kubernetes-Easy-Way.md
create mode 100644 blog/_posts/2017-11-00-Kubernetes-Is-Still-Hard-For-Developers.md
create mode 100644 blog/_posts/2017-11-00-Securing-Software-Supply-Chain-Grafeas.md
create mode 100644 blog/_posts/2017-12-00-Introducing-Kubeflow-Composable.md
create mode 100644 blog/_posts/2017-12-00-Kubernetes-19-Workloads-Expanded-Ecosystem.md
create mode 100644 blog/_posts/2017-12-00-Paddle-Paddle-Fluid-Elastic-Learning.md
create mode 100644 blog/_posts/2017-12-00-Using-Ebpf-In-Kubernetes.md
create mode 100644 blog/_posts/2018-01-00-Core-Workloads-Api-Ga.md
create mode 100644 blog/_posts/2018-01-00-Extensible-Admission-Is-Beta.md
create mode 100644 blog/_posts/2018-01-00-Five-Days-Of-Kubernetes-19.md
create mode 100644 blog/_posts/2018-01-00-Introducing-Client-Go-Version-6.md
create mode 100644 blog/_posts/2018-01-00-Introducing-Container-Storage-Interface.md
create mode 100644 blog/_posts/2018-01-00-Kubernetes-V19-Beta-Windows-Support.md
create mode 100644 blog/_posts/2018-01-00-Reporting-Errors-Using-Kubernetes-Events.md
create mode 100644 blog/_posts/2018-03-00-Apache-Spark-23-With-Native-Kubernetes.md
create mode 100644 blog/_posts/2018-03-00-Expanding-User-Support-With-Office-Hours.md
create mode 100644 blog/_posts/2018-03-00-First-Beta-Version-Of-Kubernetes-1-10.md
create mode 100644 blog/_posts/2018-03-00-How-To-Integrate-Rollingupdate-Strategy.md
create mode 100644 blog/_posts/2018-03-00-Principles-Of-Container-App-Design.md
create mode 100644 blog/index.html
create mode 100644 css/blog.css
create mode 100644 images/hpc-ec2-vpc-2.png
create mode 100644 images/hpn-ec2-vpc.png
diff --git a/Gemfile b/Gemfile
index e07257c9127..638b5cf1e88 100644
--- a/Gemfile
+++ b/Gemfile
@@ -44,10 +44,10 @@ group :jekyll_plugins do
gem "jekyll-theme-tactile", "0.0.3"
gem "jekyll-theme-time-machine", "0.0.3"
gem "jekyll-titles-from-headings", "~> 0.1"
+ gem "jekyll-include-cache", "~> 0.1"
+ gem 'jekyll-youtube', '~> 1.0'
end
-gem "jekyll-include-cache", "~> 0.1"
-
gem "kramdown", "~> 1.11"
gem "rouge", "~> 2.0"
gem "pry"
diff --git a/Gemfile.lock b/Gemfile.lock
index 4af0fc40657..050410dfc11 100644
--- a/Gemfile.lock
+++ b/Gemfile.lock
@@ -12,9 +12,9 @@ GEM
ethon (0.11.0)
ffi (>= 1.3.0)
execjs (2.7.0)
- faraday (0.13.1)
+ faraday (0.14.0)
multipart-post (>= 1.2, < 3)
- ffi (1.9.18)
+ ffi (1.9.21)
forwardable-extended (2.6.0)
jekyll (3.6.0)
addressable (~> 2.4)
@@ -33,11 +33,11 @@ GEM
coffee-script (~> 2.2)
jekyll-default-layout (0.1.4)
jekyll (~> 3.0)
- jekyll-feed (0.9.2)
+ jekyll-feed (0.9.3)
jekyll (~> 3.3)
- jekyll-gist (1.4.1)
+ jekyll-gist (1.5.0)
octokit (~> 4.2)
- jekyll-github-metadata (2.9.3)
+ jekyll-github-metadata (2.9.4)
jekyll (~> 3.1)
octokit (~> 4.0, != 4.4.0)
jekyll-include-cache (0.1.0)
@@ -49,13 +49,13 @@ GEM
jekyll (~> 3.0)
jekyll-redirect-from (0.13.0)
jekyll (~> 3.3)
- jekyll-relative-links (0.5.1)
+ jekyll-relative-links (0.5.2)
jekyll (~> 3.3)
- jekyll-sass-converter (1.5.0)
+ jekyll-sass-converter (1.5.2)
sass (~> 3.4)
- jekyll-seo-tag (2.3.0)
+ jekyll-seo-tag (2.4.0)
jekyll (~> 3.3)
- jekyll-sitemap (1.1.1)
+ jekyll-sitemap (1.2.0)
jekyll (~> 3.3)
jekyll-swiss (0.4.0)
jekyll-theme-architect (0.0.3)
@@ -86,35 +86,41 @@ GEM
jekyll (~> 3.3)
jekyll-theme-time-machine (0.0.3)
jekyll (~> 3.3)
- jekyll-titles-from-headings (0.5.0)
+ jekyll-titles-from-headings (0.5.1)
jekyll (~> 3.3)
- jekyll-watch (1.5.0)
- listen (~> 3.0, < 3.1)
+ jekyll-watch (1.5.1)
+ listen (~> 3.0)
+ jekyll-youtube (1.0.0)
+ jekyll
json (1.8.6)
- kramdown (1.15.0)
+ kramdown (1.16.2)
liquid (4.0.0)
- listen (3.0.8)
+ listen (3.1.5)
rb-fsevent (~> 0.9, >= 0.9.4)
rb-inotify (~> 0.9, >= 0.9.7)
+ ruby_dep (~> 1.2)
mercenary (0.3.6)
method_source (0.9.0)
- minima (2.1.1)
- jekyll (~> 3.3)
+ minima (2.3.0)
+ jekyll (~> 3.5)
+ jekyll-feed (~> 0.9)
+ jekyll-seo-tag (~> 2.1)
multipart-post (2.0.0)
- octokit (4.7.0)
+ octokit (4.8.0)
sawyer (~> 0.8.0, >= 0.5.3)
- pathutil (0.16.0)
+ pathutil (0.16.1)
forwardable-extended (~> 2.6)
- pry (0.11.2)
+ pry (0.11.3)
coderay (~> 1.1.0)
method_source (~> 0.9.0)
- public_suffix (3.0.0)
+ public_suffix (3.0.2)
rb-fsevent (0.10.2)
rb-inotify (0.9.10)
ffi (>= 0.5.0, < 2)
rouge (2.2.1)
+ ruby_dep (1.5.0)
safe_yaml (1.0.4)
- sass (3.5.3)
+ sass (3.5.5)
sass-listen (~> 4.0.0)
sass-listen (4.0.0)
rb-fsevent (~> 0.9, >= 0.9.4)
@@ -164,6 +170,7 @@ DEPENDENCIES
jekyll-theme-tactile (= 0.0.3)
jekyll-theme-time-machine (= 0.0.3)
jekyll-titles-from-headings (~> 0.1)
+ jekyll-youtube (~> 1.0)
json (~> 1.7, >= 1.7.7)
kramdown (~> 1.11)
minima (~> 2.0)
@@ -173,4 +180,4 @@ DEPENDENCIES
unicode-display_width (~> 1.1)
BUNDLED WITH
- 1.15.4
+ 1.16.1
diff --git a/Makefile b/Makefile
index 999feab316e..85ea30f6471 100644
--- a/Makefile
+++ b/Makefile
@@ -9,7 +9,7 @@ build: ## Build site with production settings and put deliverables in _site.
bundle exec jekyll build
build-preview: ## Build site with drafts and future posts enabled.
- bundle exec jekyll build --drafts --future
+ bundle exec jekyll build --drafts --future --trace
serve: ## Boot the development server.
bundle exec jekyll serve
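
The added `--trace` flag makes Jekyll print a full Ruby backtrace on failure instead of a one-line error, which helps when a draft post or plugin breaks the preview build:

    $ make build-preview
    bundle exec jekyll build --drafts --future --trace
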
diff --git a/_config.yml b/_config.yml
index d0decddd540..2fe87ddc327 100644
--- a/_config.yml
+++ b/_config.yml
@@ -14,6 +14,10 @@ safe: false
lsi: false
latest: "v1.10"
+
+paginate: 7
+paginate_path: "/blog/page:num/"
+
defaults:
-
scope:
@@ -66,6 +70,8 @@ plugins:
- jekyll-sitemap
- jekyll-seo-tag
- jekyll-include-cache
+ - jekyll-paginate
+ - jekyll-youtube
# disabled gems
# - jekyll-redirect-from
@@ -90,3 +96,4 @@ tocs:
- samples
- search
- imported
+ - blog
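
Together, `paginate: 7` and `paginate_path: "/blog/page:num/"` tell jekyll-paginate to split the post list into pages of seven and expose a `paginator` object to `blog/index.html`. A minimal sketch of an index loop with the next/prev buttons the commit message mentions (markup and class names are illustrative assumptions, not copied from the real `blog/index.html`):

    {% for post in paginator.posts %}
      <article>
        <h2><a href="{{ post.url }}">{{ post.title }}</a></h2>
      </article>
    {% endfor %}
    {% if paginator.previous_page %}
      <a class="button" href="{{ paginator.previous_page_path }}">Prev</a>
    {% endif %}
    {% if paginator.next_page %}
      <a class="button" href="{{ paginator.next_page_path }}">Next</a>
    {% endif %}
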
diff --git a/_includes/CommunityHangout/Apr3.html b/_includes/CommunityHangout/Apr3.html
new file mode 100644
index 00000000000..e5637e0c520
--- /dev/null
+++ b/_includes/CommunityHangout/Apr3.html
@@ -0,0 +1,227 @@
+
+
+
+Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
+
+
+
+Agenda:
+
+
+
+Notes from meeting:
+
+
+Quinton - Cluster federation
+
+
+
+Ideas floating around after meetup in SF
+
+
+
+
+Please read and comment
+
+
+Not 1.0, but put a doc together to show roadmap
+
+
+Can be built outside of Kubernetes
+
+
+API to control things across multiple clusters, include some logic
+
+
+
+Auth(n)(z)
+
+
+Scheduling Policies
+
+
+…
+
+
+
+Different reasons for cluster federation
+
+
+
+Zone (un)availability: resilient to zone failures
+
+
+Hybrid cloud: some in cloud, some on-prem, for various reasons
+
+
+Avoid cloud provider lock-in. For various reasons
+
+
+“Cloudbursting” - automatic overflow into the cloud
+
+
+
+Hard problems
+
+
+
+Location affinity. How close do pods need to be?
+
+
+
+Workload coupling
+
+
+Absolute location (e.g. eu data needs to be in eu)
+
+
+
+Cross cluster service discovery
+
+
+
+How does service/DNS work across clusters
+
+
+
+Cross cluster workload migration
+
+
+
+How do you move an application piece by piece across clusters?
+
+
+
+Cross cluster scheduling
+
+
+
+How do we know enough about clusters to know where to schedule
+
+
+Possibly use a cost function to achieve affinities with minimal complexity
+
+
+Can also use cost to determine where to schedule (under-used clusters are cheaper than over-used clusters)
+
+
+
+
+Implicit requirements
+
+
+
+Cross cluster integration shouldn’t create cross-cluster failure modes
+
+
+
+Independently usable in a disaster situation where Ubernetes dies.
+
+
+
+Unified visibility
+
+
+
+Want to have unified monitoring, alerting, logging, introspection, ux, etc.
+
+
+
+Unified quota and identity management
+
+
+
+Want to have user database and auth(n)/(z) in a single place
+
+
+
+
+Important to note, most causes of software failure are not the infrastructure
+
+
+
+Botched software upgrades
+
+
+Botched config upgrades
+
+
+Botched key distribution
+
+
+Overload
+
+
+Failed external dependencies
+
+
+
+Discussion:
+
+
+
+Where do you draw the “ubernetes” line
+
+
+
+Likely at the availability zone, but could be at the rack, or the region
+
+
+
+Important to not pigeon hole and prevent other users
+
+
+
+
+
+
+
+Satnam - Soak Test
+
+
+
+Want to measure things that run for a long time to make sure that the cluster is stable over time. Performance doesn’t degrade, no memory leaks, etc.
+
+
+github.com/GoogleCloudPlatform/kubernetes/test/soak/…
+
+
+Single binary, puts lots of pods on each node, and queries each pod to make sure that it is running.
+
+
+Pods are being created much, much more quickly (even in the past week) to speed things up.
+
+
+Once the pods are up running, we hit the pods via the proxy. Decision to hit the proxy was deliberate so that we test the kubernetes apiserver.
+
+
+Code is already checked in.
+
+
+Pin pods to each node, exercise every pod, make sure that you get a response for each node.
+
+
+Single binary, run forever.
+
+
+
+Brian - v1beta3 is enabled by default, v1beta1 and v1beta2 deprecated, turned off in June. Should still work with upgrading existing clusters, etc.
+
+
+
diff --git a/_includes/CommunityHangout/Mar27.html b/_includes/CommunityHangout/Mar27.html
new file mode 100644
index 00000000000..a3a892119c3
--- /dev/null
+++ b/_includes/CommunityHangout/Mar27.html
@@ -0,0 +1,176 @@
+
+
+
+
+
+ Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
+
+
+
+ Agenda:
+
+ - Andy - demo remote execution and port forwarding
+
+ - Quinton - Cluster federation - Postponed
+
+ - Clayton - UI code sharing and collaboration around Kubernetes
+
+
+
+ Notes from meeting:
+
+ 1. Andy from RedHat:
+
+
+ Demo remote execution
+
+
+
+ kubectl exec -p $POD -- $CMD
+
+
+ Makes a connection to the master as proxy, figures out which node the pod is on, and proxies the connection to the kubelet, which does the interesting bit via nsenter.
+
+
+ Multiplexed streaming over HTTP using SPDY
+
+
+ Also interactive mode:
+
+
+
+ Assumes first container. Can use -c $CONTAINER to pick a particular one.
+
+
+ If have gdb pre-installed in container, then can interactively attach it to running process
+
+
+
+ Can also with careful flag crafting run rsync over this or set up sshd inside container.
+
+
+ Some feedback via chat:
+
+
+
+
+ Andy also demoed port forwarding
+
+
+
+ nsenter vs. docker exec
+
+
+
+ want to inject a binary under control of the host, similar to pre-start hooks
+
+
+ socat, nsenter, whatever the pre-start hook needs
+
+
+
+ would be nice to blog post on this
+
+
+
+ version of nginx in wheezy is too old to support needed master-proxy functionality
+
+
+
+
+ 2. Clayton: where are we wrt a community organization for e.g. kubernetes UI components?
+
+
+ google-containers-ui IRC channel, mailing list.
+
+
+ Tim: google-containers prefix is historical, should just do “kubernetes-ui”
+
+
+ also want to put design resources in, and bower expects its own repo.
+
+
+ General agreement
+
+
+
+ 3. Brian Grant:
+
+
+ Testing v1beta3, getting that ready to go in.
+
+
+ Paul working on changes to commandline stuff.
+
+
+ Early to mid next week, try to enable v1beta3 by default?
+
+
+ For any other changes, file issue and CC thockin.
+
+
+
+ 4. General consensus that 30 minutes is better than 60
+
+
+ Shouldn’t artificially try to extend just to fill time.
+
+
diff --git a/_includes/footer.html b/_includes/footer.html
index 4d15180920e..edda25d1d10 100644
--- a/_includes/footer.html
+++ b/_includes/footer.html
@@ -1,9 +1,9 @@
-
+
Get Started
Documentation
- Blog
+ Blog
Partners
Community
Case Studies
diff --git a/_includes/head.html b/_includes/head.html
index 1c107cd2c15..5a666b8e0fa 100644
--- a/_includes/head.html
+++ b/_includes/head.html
@@ -29,4 +29,5 @@
{% if page.js %}{% assign jslist = page.js | split: ',' | compact %}{% for jsurl in jslist %}
{% endfor %}{% else %}{% endif %}
{% seo %}
+ {% feed_meta %}
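
The `{% feed_meta %}` tag comes from jekyll-feed and emits the Atom feed's `<link>` element into the page head. It typically expands to something like the following (the exact href and title depend on site configuration):

    <link type="application/atom+xml" rel="alternate"
          href="/feed.xml" title="Kubernetes" />
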
diff --git a/_includes/header.html b/_includes/header.html
index 16b22650ba1..a6ced83bb45 100644
--- a/_includes/header.html
+++ b/_includes/header.html
@@ -6,7 +6,7 @@
-
+
Read the latest news for Kubernetes and the containers space in general, and get technical how-tos hot off the presses.
diff --git a/_includes/youtubePlayer.html b/_includes/youtubePlayer.html
new file mode 100644
index 00000000000..34996aa1394
--- /dev/null
+++ b/_includes/youtubePlayer.html
@@ -0,0 +1 @@
+VIDEO
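
A hypothetical reconstruction of the player include, assuming it wraps the standard YouTube embed iframe and takes the video ID as an include parameter:

    <iframe width="560" height="315"
            src="https://www.youtube.com/embed/{{ include.id }}"
            frameborder="0" allowfullscreen></iframe>
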
diff --git a/_layouts/blog.html b/_layouts/blog.html
new file mode 100644
index 00000000000..096a5975a4f
--- /dev/null
+++ b/_layouts/blog.html
@@ -0,0 +1,193 @@
+---
+#empty front matter
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ {% if page.deprecated %} {% endif %}
+
+
+
+ {% if page.description %}
+
+ {% else %}
+
+ {% endif %}
+
+
+
+
+
+
+
+
+
+ {% feed_meta %}
+
+ {% seo %}
+
+
+
+
+
+
+
+
+
+
+
+
+
+Kubernetes Blog
+
+
+
+
+
+
+
+
+
+
+
+{{page.title}}
+ {{ content }}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ {% for post in site.posts %}
+ {% capture this_year %}{{ post.date | date: "%Y" }}{% endcapture %}
+ {% capture this_month %}{{ post.date | date: "%B" }}{% endcapture %}
+ {% capture next_year %}{{ post.previous.date | date: "%Y" }}{% endcapture %}
+ {% capture next_month %}{{ post.previous.date | date: "%B" }}{% endcapture %}
+ {% if forloop.first %}
+
+ {% else %}
+ {% if this_year != next_year %}
+
+
+
+
+
+
+
+
+
+{% include footer.html %}
+{% include footer-scripts.html %}
+
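
The sidebar loop above captures each post's year and month alongside those of `post.previous`, the standard Jekyll idiom for a date-grouped archive. A sketch of how such a loop typically closes, grouping by year (illustrative markup, not the layout's actual HTML):

    {% for post in site.posts %}
      {% capture this_year %}{{ post.date | date: "%Y" }}{% endcapture %}
      {% capture next_year %}{{ post.previous.date | date: "%Y" }}{% endcapture %}
      {% if forloop.first %}<h5>{{ this_year }}</h5><ul>{% endif %}
      <li><a href="{{ post.url }}">{{ post.title }}</a></li>
      {% if forloop.last %}</ul>
      {% elsif this_year != next_year %}</ul><h5>{{ next_year }}</h5><ul>{% endif %}
    {% endfor %}
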
diff --git a/blog/_posts/2015-03-00-Kubernetes-Gathering-Videos.md b/blog/_posts/2015-03-00-Kubernetes-Gathering-Videos.md
new file mode 100644
index 00000000000..daf58eef64c
--- /dev/null
+++ b/blog/_posts/2015-03-00-Kubernetes-Gathering-Videos.md
@@ -0,0 +1,12 @@
+---
+
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes Gathering Videos "
+date: Tuesday, March 23, 2015
+pagination:
+ enabled: true
+---
+If you missed the Kubernetes Gathering in SF last month, fear not! Here are the videos from the evening presentations, organized into a playlist on YouTube:
+
+[](https://www.youtube.com/playlist?list=PL69nYSiGNLP2FBVvSLHpJE8_6hRHW8Kxe)
diff --git a/blog/_posts/2015-03-00-Paricipate-In-Kubernetes-User.md b/blog/_posts/2015-03-00-Paricipate-In-Kubernetes-User.md
new file mode 100644
index 00000000000..ce32db78a70
--- /dev/null
+++ b/blog/_posts/2015-03-00-Paricipate-In-Kubernetes-User.md
@@ -0,0 +1,23 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Paricipate in a Kubernetes User Experience Study "
+date: Wednesday, March 31, 2015
+pagination:
+ enabled: true
+---
+We need your help in shaping the future of Kubernetes and Google Container Engine, and we'd love to have you participate in a remote UX research study to help us learn about your experiences! If you're interested in participating, we invite you to take [this brief survey](http://goo.gl/AXFFMs) to see if you qualify. If you’re selected to participate, we’ll follow up with you directly.
+
+
+- Length: 60 minute interview
+- Date: April 7th-15th
+- Location: Remote
+- Your gift: $100 Perks gift code\*
+- Study format: Interview with our researcher
+
+
+Interested in participating? Take [this brief survey](http://goo.gl/AXFFMs).
+
+
+
+\* Perks gift codes can be redeemed for gift certificates from VISA and used at a number of online retailers ([http://www.google.com/forms/perks/index1.html](http://www.google.com/forms/perks/index1.html)). Gift codes are only for participants who successfully complete the study session. You’ll be emailed the gift code after you complete the study session.
diff --git a/blog/_posts/2015-03-00-Weekly-Kubernetes-Community-Hangout.md b/blog/_posts/2015-03-00-Weekly-Kubernetes-Community-Hangout.md
new file mode 100644
index 00000000000..c946e7fe2cc
--- /dev/null
+++ b/blog/_posts/2015-03-00-Weekly-Kubernetes-Community-Hangout.md
@@ -0,0 +1,69 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Weekly Kubernetes Community Hangout Notes - March 27 2015 "
+date: Sunday, March 28, 2015
+pagination:
+ enabled: true
+---
+Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
+
+Agenda:
+
+\- Andy - demo remote execution and port forwarding
+
+\- Quinton - Cluster federation - Postponed
+
+\- Clayton - UI code sharing and collaboration around Kubernetes
+
+Notes from meeting:
+
+1\. Andy from RedHat:
+
+* Demo remote execution
+
+ * kubectl exec -p $POD -- $CMD
+
+  * Makes a connection to the master as proxy, figures out which node the pod is on, and proxies the connection to the kubelet, which does the interesting bit via nsenter.
+
+ * Multiplexed streaming over HTTP using SPDY
+
+ * Also interactive mode:
+
+ * Assumes first container. Can use -c $CONTAINER to pick a particular one.
+
+ * If have gdb pre-installed in container, then can interactively attach it to running process
+
+  * backtrace, symbol tables, print, etc. Most things you can do with gdb.
+
+ * Can also with careful flag crafting run rsync over this or set up sshd inside container.
+
+ * Some feedback via chat:
+* Andy also demoed port forwarding
+* nsenter vs. docker exec
+
+ * want to inject a binary under control of the host, similar to pre-start hooks
+
+ * socat, nsenter, whatever the pre-start hook needs
+* would be nice to blog post on this
+* version of nginx in wheezy is too old to support needed master-proxy functionality
+
+2\. Clayton: where are we wrt a community organization for e.g. kubernetes UI components?
+
+* google-containers-ui IRC channel, mailing list.
+* Tim: google-containers prefix is historical, should just do "kubernetes-ui"
+* also want to put design resources in, and bower expects its own repo.
+* General agreement
+
+3\. Brian Grant:
+
+* Testing v1beta3, getting that ready to go in.
+* Paul working on changes to commandline stuff.
+* Early to mid next week, try to enable v1beta3 by default?
+* For any other changes, file issue and CC thockin.
+
+4\. General consensus that 30 minutes is better than 60
+
+
+
+* Shouldn't artificially try to extend just to fill time.
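
For readers following the exec demo in section 1: a hypothetical interactive invocation combining the flags mentioned there (in 2015-era kubectl the `-p` flag named the pod; it was later replaced by a positional argument; pod and container names here are made up):

    # -p names the pod, -c picks a container, -i/-t make the session interactive
    kubectl exec -p mypod -c web -i -t -- /bin/sh
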
diff --git a/blog/_posts/2015-03-00-Welcome-To-Kubernetes-Blog.md b/blog/_posts/2015-03-00-Welcome-To-Kubernetes-Blog.md
new file mode 100644
index 00000000000..dbfcc0d24c8
--- /dev/null
+++ b/blog/_posts/2015-03-00-Welcome-To-Kubernetes-Blog.md
@@ -0,0 +1,30 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: Welcome to the Kubernetes Blog!
+date: Saturday, March 20, 2015
+---
+Welcome to the new Kubernetes blog. Follow this blog to learn about the Kubernetes Open Source project. We plan to post release notes, how-to articles, events, and maybe even some off topic fun here from time to time.
+
+
+If you are using Kubernetes or contributing to the project and would like to do a guest post, [please let me know](mailto:kitm@google.com).
+
+
+
+To start things off, here's a roundup of recent Kubernetes posts from other sites:
+
+- [Scaling MySQL in the cloud with Vitess and Kubernetes](http://googlecloudplatform.blogspot.com/2015/03/scaling-MySQL-in-the-cloud-with-Vitess-and-Kubernetes.html)
+- [Container Clusters on VMs](http://googlecloudplatform.blogspot.com/2015/02/container-clusters-on-vms.html)
+- [Everything you wanted to know about Kubernetes but were afraid to ask](http://googlecloudplatform.blogspot.com/2015/01/everything-you-wanted-to-know-about-Kubernetes-but-were-afraid-to-ask.html)
+- [What makes a container cluster?](http://googlecloudplatform.blogspot.com/2015/01/what-makes-a-container-cluster.html)
+- [Integrating OpenStack and Kubernetes with Murano](https://www.mirantis.com/blog/integrating-openstack-and-kubernetes-with-murano/)
+- [An introduction to containers, Kubernetes, and the trajectory of modern cloud computing](http://googlecloudplatform.blogspot.com/2015/01/in-coming-weeks-we-will-be-publishing.html)
+- [What is Kubernetes and how to use it?](http://www.centurylinklabs.com/what-is-kubernetes-and-how-to-use-it/)
+- [OpenShift V3, Docker and Kubernetes Strategy](https://blog.openshift.com/v3-docker-kubernetes-interview/)
+- [An Introduction to Kubernetes](https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes)
+
+
+Happy cloud computing!
+
+
+ - Kit Merker - Product Manager, Google Cloud Platform
diff --git a/blog/_posts/2015-04-00-Borg-Predecessor-To-Kubernetes.md b/blog/_posts/2015-04-00-Borg-Predecessor-To-Kubernetes.md
new file mode 100644
index 00000000000..38559bcf08d
--- /dev/null
+++ b/blog/_posts/2015-04-00-Borg-Predecessor-To-Kubernetes.md
@@ -0,0 +1,49 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Borg: The Predecessor to Kubernetes "
+date: Friday, April 23, 2015
+pagination:
+ enabled: true
+---
+Google has been running containerized workloads in production for more than a decade. Whether it's service jobs like web front-ends and stateful servers, infrastructure systems like [Bigtable](http://research.google.com/archive/bigtable.html) and [Spanner](http://research.google.com/archive/spanner.html), or batch frameworks like [MapReduce](http://research.google.com/archive/mapreduce.html) and [Millwheel](http://research.google.com/pubs/pub41378.html), virtually everything at Google runs as a container. Today, we took the wraps off of Borg, Google’s long-rumored internal container-oriented cluster-management system, publishing details at the academic computer systems conference [Eurosys](http://eurosys2015.labri.fr/). You can find the paper [here](https://research.google.com/pubs/pub43438.html).
+
+
+
+Kubernetes traces its lineage directly from Borg. Many of the developers at Google working on Kubernetes were formerly developers on the Borg project. We've incorporated the best ideas from Borg in Kubernetes, and have tried to address some pain points that users identified with Borg over the years.
+
+
+
+To give you a flavor, here are four Kubernetes features that came from our experiences with Borg:
+
+
+
+1) [Pods](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md). A pod is the unit of scheduling in Kubernetes. It is a resource envelope in which one or more containers run. Containers that are part of the same pod are guaranteed to be scheduled together onto the same machine, and can share state via local volumes.
+
+
+
+Borg has a similar abstraction, called an alloc (short for “resource allocation”). Popular uses of allocs in Borg include running a web server that generates logs alongside a lightweight log collection process that ships the log to a cluster filesystem (not unlike fluentd or logstash); running a web server that serves data from a disk directory that is populated by a process that reads data from a cluster filesystem and prepares/stages it for the web server (not unlike a Content Management System); and running user-defined processing functions alongside a storage shard. Pods not only support these use cases, but they also provide an environment similar to running multiple processes in a single VM -- Kubernetes users can deploy multiple co-located, cooperating processes in a pod without having to give up the simplicity of a one-application-per-container deployment model.
+
+
+
+2) [Services](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md). Although Borg’s primary role is to manage the lifecycles of tasks and machines, the applications that run on Borg benefit from many other cluster services, including naming and load balancing. Kubernetes supports naming and load balancing using the service abstraction: a service has a name and maps to a dynamic set of pods defined by a label selector (see next section). Any container in the cluster can connect to the service using the service name. Under the covers, Kubernetes automatically load-balances connections to the service among the pods that match the label selector, and keeps track of where the pods are running as they get rescheduled over time due to failures.
+
+
+
+3) [Labels](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/labels.md). A container in Borg is usually one replica in a collection of identical or nearly identical containers that correspond to one tier of an Internet service (e.g. the front-ends for Google Maps) or to the workers of a batch job (e.g. a MapReduce). The collection is called a Job, and each replica is called a Task. While the Job is a very useful abstraction, it can be limiting. For example, users often want to manage their entire service (composed of many Jobs) as a single entity, or to uniformly manage several related instances of their service, for example separate canary and stable release tracks. At the other end of the spectrum, users frequently want to reason about and control subsets of tasks within a Job -- the most common example is during rolling updates, when different subsets of the Job need to have different configurations.
+
+
+
+Kubernetes supports more flexible collections than Borg by organizing pods using labels, which are arbitrary key/value pairs that users attach to pods (and in fact to any object in the system). Users can create groupings equivalent to Borg Jobs by using a “job:\<jobname\>” label on their pods, but they can also use additional labels to tag the service name, service instance (production, staging, test), and in general, any subset of their pods. A label query (called a “label selector”) is used to select which set of pods an operation should be applied to. Taken together, labels and [replication controllers](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/replication-controller.md) allow for very flexible update semantics, as well as for operations that span the equivalent of Borg Jobs.
+
+
+
+4) IP-per-Pod. In Borg, all tasks on a machine use the IP address of that host, and thus share the host’s port space. While this means Borg can use a vanilla network, it imposes a number of burdens on infrastructure and application developers: Borg must schedule ports as a resource; tasks must pre-declare how many ports they need, and take as start-up arguments which ports to use; the Borglet (node agent) must enforce port isolation; and the naming and RPC systems must handle ports as well as IP addresses.
+
+
+
+Thanks to the advent of software-defined overlay networks such as [flannel](https://coreos.com/blog/introducing-rudder/) or those built into [public clouds](https://cloud.google.com/compute/docs/networking), Kubernetes is able to give every pod and service its own IP address. This removes the infrastructure complexity of managing ports, and allows developers to choose any ports they want rather than requiring their software to adapt to the ones chosen by the infrastructure. The latter point is crucial for making it easy to run off-the-shelf open-source applications on Kubernetes--pods can be treated much like VMs or physical hosts, with access to the full port space, oblivious to the fact that they may be sharing the same physical machine with other pods.
+
+
+
+With the growing popularity of container-based microservice architectures, the lessons Google has learned from running such systems internally have become of increasing interest to the external DevOps community. By revealing some of the inner workings of our cluster manager Borg, and building our next-generation cluster manager as both an open-source project (Kubernetes) and a publicly available hosted service ([Google Container Engine](http://cloud.google.com/container-engine)), we hope these lessons can benefit the broader community outside of Google and advance the state-of-the-art in container scheduling and cluster management.
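
As a concrete illustration of the label selectors described above (our example, not from the post): a query selecting one tier and one release track of a service might look like:

    # hypothetical labels; selects the canary front-end pods
    kubectl get pods -l 'tier=frontend,track=canary'
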
diff --git a/blog/_posts/2015-04-00-Faster-Than-Speeding-Latte.md b/blog/_posts/2015-04-00-Faster-Than-Speeding-Latte.md
new file mode 100644
index 00000000000..1287ac63041
--- /dev/null
+++ b/blog/_posts/2015-04-00-Faster-Than-Speeding-Latte.md
@@ -0,0 +1,10 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Faster than a speeding Latte "
+date: Tuesday, April 06, 2015
+pagination:
+ enabled: true
+---
+Check out Brendan Burns racing Kubernetes.
+[](https://www.youtube.com/watch?v=7vZ9dRKRMyc)
diff --git a/blog/_posts/2015-04-00-Introducing-Kubernetes-V1Beta3.md b/blog/_posts/2015-04-00-Introducing-Kubernetes-V1Beta3.md
new file mode 100644
index 00000000000..c1308a45d9e
--- /dev/null
+++ b/blog/_posts/2015-04-00-Introducing-Kubernetes-V1Beta3.md
@@ -0,0 +1,65 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Introducing Kubernetes API Version v1beta3 "
+date: Friday, April 16, 2015
+pagination:
+ enabled: true
+---
+We've been hard at work on cleaning up the API over the past several months (see [https://github.com/GoogleCloudPlatform/kubernetes/issues/1519](https://github.com/GoogleCloudPlatform/kubernetes/issues/1519) for details). The result is v1beta3, which is considered to be the release candidate for the v1 API.
+
+We would like you to move to this new API version as soon as possible. v1beta1 and v1beta2 are deprecated, and will be removed by the end of June, shortly after we introduce the v1 API.
+
+As of the latest release, v0.15.0, v1beta3 is the primary, default API. We have changed the default kubectl and client API versions as well as the default storage version (which means objects persisted in etcd will be converted from v1beta1 to v1beta3 as they are rewritten).
+
+You can take a look at v1beta3 examples such as:
+
+[https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook/v1beta3](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook/v1beta3)
+
+[https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/walkthrough/v1beta3](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/walkthrough/v1beta3)
+
+[https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/update-demo/v1beta3](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/update-demo/v1beta3)
+
+
+
+To aid the transition, we've also created a conversion [tool](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/cluster_management.md#switching-your-config-files-to-a-new-api-version) and put together a list of important [API changes](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/api.md#v1beta3-conversion-tips) (see the sketch after this list).
+
+
+- The resource `id` is now called `name`.
+- `name`, `labels`, `annotations`, and other metadata are now nested in a map called `metadata`
+- `desiredState` is now called `spec`, and `currentState` is now called `status`
+- `/minions` has been moved to `/nodes`, and the resource has kind `Node`
+- The namespace is required (for all namespaced resources) and has moved from a URL parameter to the path: `/api/v1beta3/namespaces/{namespace}/{resource_collection}/{resource_name}`
+- The names of all resource collections are now lower cased - instead of `replicationControllers`, use `replicationcontrollers`.
+- To watch for changes to a resource, open an HTTP or Websocket connection to the collection URL and provide the `?watch=true` URL parameter along with the desired `resourceVersion` parameter to watch from.
+- The container `entrypoint` has been renamed to `command`, and `command` has been renamed to `args`.
+- Container, volume, and node resources are expressed as nested maps (e.g., `resources{cpu:1}`) rather than as individual fields, and resource values support [scaling suffixes](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/resources.md#resource-quantities) rather than fixed scales (e.g., milli-cores).
+- Restart policy is represented simply as a string (e.g., "Always") rather than as a nested map ("always{}").
+- The volume `source` is inlined into `volume` rather than nested.
+- Host volumes have been changed from `hostDir` to `hostPath` to better reflect that they can be files or directories.
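+
+Putting several of these changes together, here is a rough sketch of a minimal pod in v1beta3 (illustrative only, not a complete spec):
+
+```yaml
+apiVersion: v1beta3
+kind: Pod
+metadata:                        # name, labels, annotations now nest here
+  name: nginx                    # formerly the resource "id"
+  namespace: default             # now appears in the URL path as well
+  labels:
+    app: nginx
+spec:                            # formerly "desiredState"
+  containers:
+    - name: nginx
+      image: nginx
+      command: ["nginx"]         # formerly "entrypoint"
+      args: ["-g", "daemon off;"]  # formerly "command"
+  restartPolicy: Always          # plain string, not a nested map
+```
+
+The server fills in `status` (formerly `currentState`) for you.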
+
+
+
+And the most recently generated Swagger specification of the API is here:
+
+[http://kubernetes.io/third\_party/swagger-ui/#!/v1beta3](http://kubernetes.io/third_party/swagger-ui/#!/v1beta3)
+
+
+
+More details about our approach to API versioning and the transition can be found here:
+
+[https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/api.md](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/api.md)
+
+
+
+Note also that with the change to the default API version in kubectl, commands that use "-o template" will break unless you specify "--api-version=v1beta1" or update to v1beta3 syntax. An example of such a change can be seen here:
+
+[https://github.com/GoogleCloudPlatform/kubernetes/pull/6377/files](https://github.com/GoogleCloudPlatform/kubernetes/pull/6377/files)
+
+
+
+If you use "-o template", I recommend always explicitly specifying the API version rather than relying upon the default. We may add this setting to kubeconfig in the future.
+
+
+
+Let us know if you have any questions. As always, we're available on IRC (#google-containers) and GitHub issues.
diff --git a/blog/_posts/2015-04-00-Kubernetes-And-Mesosphere-Dcos.md b/blog/_posts/2015-04-00-Kubernetes-And-Mesosphere-Dcos.md
new file mode 100644
index 00000000000..3f55591ac40
--- /dev/null
+++ b/blog/_posts/2015-04-00-Kubernetes-And-Mesosphere-Dcos.md
@@ -0,0 +1,33 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes and the Mesosphere DCOS "
+date: Wednesday, April 22, 2015
+pagination:
+ enabled: true
+---
+
+Today Mesosphere announced the addition of Kubernetes as a standard part of their [DCOS][1] offering. This is a great step forward in bringing cloud native application management to the world, and should lay to rest many questions we hear about 'Kubernetes or Mesos, which one should I use?'. Now you can have your cake and eat it too: use both. Today's announcement extends the reach of Kubernetes to a new class of users, and adds some exciting new capabilities for everyone.
+
+By way of background, Kubernetes is a cluster management framework that was started by Google nine months ago, inspired by the internal system known as Borg. You can learn a little more about Borg by checking out this [paper][2]. At the heart of it Kubernetes offers what has been dubbed 'cloud native' application management. To us, there are three things that together make something 'cloud native':
+
+
+
+* **Container oriented deployments** Package up your application components with all their dependencies and deploy them using technologies like Docker or Rocket. Containers radically simplify the deployment process, making rollouts repeatable and predictable.
+* **Dynamically managed** Rely on modern control systems to make moment-to-moment decisions around the health management and scheduling of applications to radically improve reliability and efficiency. There are some things that machines just do better than people, and actively running applications is one of those things.
+* **Micro-services oriented** Tease applications apart into small semi-autonomous services that can be consumed easily so that the resulting systems are easier to understand, extend and adapt.
+
+Kubernetes was designed from the start to make these capabilities available to everyone, and built by the same engineers that built the system internally known as Borg. For many users the promise of 'Google style app management' is interesting, but they want to run these new classes of applications on the same set of physical resources as their existing workloads like Hadoop, Spark, Kafka, etc. Now they will have access to a commercially supported offering that brings the two worlds together.
+
+Mesosphere, one of the earliest supporters of the Kubernetes project, has been working closely with the core Kubernetes team to create a natural experience for users looking to get the best of both worlds, adding Kubernetes to every Mesos deployment they instantiate, whether it be in the public cloud, private cloud, or in a hybrid deployment model. This is well aligned with the overall Kubernetes vision of creating a ubiquitous management framework that runs anywhere a container can. It will be interesting to see how you blend together the old world and the new on a commercially supported, versatile platform.
+
+Craig McLuckie
+
+Product Manager, Google and Kubernetes co-founder
+
+[1]: https://mesosphere.com/product/
+[2]: http://research.google.com/pubs/pub43438.html
diff --git a/blog/_posts/2015-04-00-Kubernetes-Release-0150.md b/blog/_posts/2015-04-00-Kubernetes-Release-0150.md
new file mode 100644
index 00000000000..a86a215f443
--- /dev/null
+++ b/blog/_posts/2015-04-00-Kubernetes-Release-0150.md
@@ -0,0 +1,86 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes Release: 0.15.0 "
+date: Thursday, April 16, 2015
+pagination:
+ enabled: true
+---
+
+Release Notes:
+
+
+
+* Enables v1beta3 API and sets it to the default API version ([#6098][1])
+* Added multi-port Services ([#6182][2]; sketch at the end of this post)
+ * New Getting Started Guides
+ * Multi-node local startup guide ([#6505][3])
+ * Mesos on Google Cloud Platform ([#5442][4])
+ * Ansible Setup instructions ([#6237][5])
+* Added a controller framework ([#5270][6], [#5473][7])
+* The Kubelet now listens on a secure HTTPS port ([#6380][8])
+* Made kubectl errors more user-friendly ([#6338][9])
+* The apiserver now supports client cert authentication ([#6190][10])
+* The apiserver now limits the number of concurrent requests it processes ([#6207][11])
+* Added rate limiting to pod deleting ([#6355][12])
+* Implement Balanced Resource Allocation algorithm as a PriorityFunction in scheduler package ([#6150][13])
+* Enabled log collection from master ([#6396][14])
+* Added an api endpoint to pull logs from Pods ([#6497][15])
+* Added latency metrics to scheduler ([#6368][16])
+* Added latency metrics to REST client ([#6409][17])
+* etcd now runs in a pod on the master ([#6221][18])
+* nginx now runs in a container on the master ([#6334][19])
+* Began creating Docker images for master components ([#6326][20])
+* Updated GCE provider to work with gcloud 0.9.54 ([#6270][21])
+* Updated AWS provider to fix Region vs Zone semantics ([#6011][22])
+* Record event when image GC fails ([#6091][23])
+* Add a QPS limiter to the kubernetes client ([#6203][24])
+* Decrease the time it takes to run make release ([#6196][25])
+* New volume support
+ * Added iscsi volume plugin ([#5506][26])
+ * Added glusterfs volume plugin ([#6174][27])
+ * AWS EBS volume support ([#5138][28])
+* Updated heapster to v0.10.0 ([#6331][29])
+* Updated to etcd 2.0.9 ([#6544][30])
+* Updated Kibana to v1.2 ([#6426][31])
+* Bug Fixes
+ * Kube-proxy now updates iptables rules if a service's public IPs change ([#6123][32])
+ * Retry kube-addons creation if the initial creation fails ([#6200][33])
+ * Make kube-proxy more resilient to running out of file descriptors ([#6727][34])
+
+To download, please visit https://github.com/GoogleCloudPlatform/kubernetes/releases/tag/v0.15.0
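+
+As a sketch of the new multi-port Services ([#6182][2] above), a service exposing more than one port gives each one a name; field names here are assumed from the v1beta3-era API:
+
+```yaml
+kind: Service
+apiVersion: v1beta3
+metadata:
+  name: frontend
+spec:
+  selector:
+    app: frontend
+  ports:
+    - name: http     # names disambiguate the ports
+      port: 80
+    - name: admin
+      port: 8080
+```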
+
+[1]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6098 "Enabling v1beta3 api version by default in master"
+[2]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6182 "Implement multi-port Services"
+[3]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6505 "Docker multi-node"
+[4]: https://github.com/GoogleCloudPlatform/kubernetes/pull/5442 "Getting started guide for Mesos on Google Cloud Platform"
+[5]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6237 "example ansible setup repo"
+[6]: https://github.com/GoogleCloudPlatform/kubernetes/pull/5270 "Controller framework"
+[7]: https://github.com/GoogleCloudPlatform/kubernetes/pull/5473 "Add DeltaFIFO (a controller framework piece)"
+[8]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6380 "Configure the kubelet to use HTTPS (take 2)"
+[9]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6338 "Return a typed error for config validation, and make errors simple"
+[10]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6190 "Add client cert authentication"
+[11]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6207 "Add a limit to the number of in-flight requests that a server processes."
+[12]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6355 "Added rate limiting to pod deleting"
+[13]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6150 "Implement Balanced Resource Allocation (BRA) algorithm as a PriorityFunction in scheduler package."
+[14]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6396 "Enable log collection from master."
+[15]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6497 "Pod log subresource"
+[16]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6368 "Add basic latency metrics to scheduler."
+[17]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6409 "Add latency metrics to REST client"
+[18]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6221 "Run etcd 2.0.5 in a pod"
+[19]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6334 "Add an nginx docker image for use on the master."
+[20]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6326 "Create Docker images for master components "
+[21]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6270 "Updates for gcloud 0.9.54"
+[22]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6011 "Fix AWS region vs zone"
+[23]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6091 "Record event when image GC fails."
+[24]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6203 "Add a QPS limiter to the kubernetes client."
+[25]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6196 "Parallelize architectures in both the building and packaging phases of `make release`"
+[26]: https://github.com/GoogleCloudPlatform/kubernetes/pull/5506 "add iscsi volume plugin"
+[27]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6174 "implement glusterfs volume plugin"
+[28]: https://github.com/GoogleCloudPlatform/kubernetes/pull/5138 "AWS EBS volume support"
+[29]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6331 "Update heapster version to v0.10.0"
+[30]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6544 "Build etcd image (version 2.0.9), and upgrade kubernetes cluster to the new version"
+[31]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6426 "Update Kibana to v1.2 which paramaterizes location of Elasticsearch"
+[32]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6123 "Fix bug in kube-proxy of not updating iptables rules if a service's public IPs change"
+[33]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6200 "Retry kube-addons creation if kube-addons creation fails."
+[34]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6727 "pkg/proxy: panic if run out of fd"
diff --git a/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout.md b/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout.md
new file mode 100644
index 00000000000..d2909491c7c
--- /dev/null
+++ b/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout.md
@@ -0,0 +1,111 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Weekly Kubernetes Community Hangout Notes - April 3 2015 "
+date: Saturday, April 04, 2015
+pagination:
+ enabled: true
+---
+Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
+
+Agenda:
+
+* Quinton - Cluster federation
+* Satnam - Performance benchmarking update
+
+*Notes from meeting:*
+
+1. Quinton - Cluster federation
+* Ideas floating around after meetup in SF
+  * Please read and comment
+* Not 1.0, but put a doc together to show roadmap
+* Can be built outside of Kubernetes
+* API to control things across multiple clusters, include some logic
+
+1. Auth(n)(z)
+
+2. Scheduling Policies
+
+3. …
+
+* Different reasons for cluster federation
+
+1. Zone (un)availability: Resilient to zone failures
+
+2. Hybrid cloud: some in cloud, some on prem. for various reasons
+
+3. Avoid cloud provider lock-in. For various reasons
+
+4. "Cloudbursting" - automatic overflow into the cloud
+* Hard problems
+
+1. Location affinity. How close do pods need to be?
+
+ 1. Workload coupling
+
+ 2. Absolute location (e.g. eu data needs to be in eu)
+
+2. Cross cluster service discovery
+
+ 1. How does service/DNS work across clusters
+
+3. Cross cluster workload migration
+
+ 1. How do you move an application piece by piece across clusters?
+
+4. Cross cluster scheduling
+
+ 1. How do you know enough about clusters to know where to schedule
+
+ 2. Possibly use a cost function to achieve affinities with minimal complexity
+
+ 3. Can also use cost to determine where to schedule (under-used clusters are cheaper than over-used clusters)
+
+* Implicit requirements
+
+1. Cross cluster integration shouldn't create cross-cluster failure modes
+
+ 1. Independently usable in a disaster situation where Ubernetes dies.
+
+2. Unified visibility
+
+ 1. Want to have unified monitoring, alerting, logging, introspection, ux, etc.
+
+3. Unified quota and identity management
+
+ 1. Want to have user database and auth(n)/(z) in a single place
+
+* Important to note, most causes of software failure are not the infrastructure
+
+1. Botched software upgrades
+
+2. Botched config upgrades
+
+3. Botched key distribution
+
+4. Overload
+
+5. Failed external dependencies
+
+* Discussion:
+
+1. Where do you draw the "ubernetes" line
+
+ 1. Likely at the availability zone, but could be at the rack, or the region
+
+2. Important to not pigeonhole and prevent other users
+
+
+
+2. Satnam - Soak Test
+* Want to measure things that run for a long time to make sure that the cluster is stable over time. Performance doesn't degrade, no memory leaks, etc.
+* github.com/GoogleCloudPlatform/kubernetes/test/soak/…
+* Single binary, puts lots of pods on each node, and queries each pod to make sure that it is running.
+* Pods are being created much, much more quickly (even in the past week), which makes the tests go faster.
+* Once the pods are up and running, we hit the pods via the proxy. The decision to hit the proxy was deliberate so that we test the kubernetes apiserver.
+* Code is already checked in.
+* Pin pods to each node, exercise every pod, make sure that you get a response for each node.
+* Single binary, run forever.
+* Brian - v1beta3 is enabled by default, v1beta1 and v1beta2 deprecated, turned off in June. Should still work with upgrading existing clusters, etc.
diff --git a/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_11.md b/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_11.md
new file mode 100644
index 00000000000..9d83349dce4
--- /dev/null
+++ b/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_11.md
@@ -0,0 +1,114 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Weekly Kubernetes Community Hangout Notes - April 10 2015 "
+date: Saturday, April 11, 2015
+pagination:
+ enabled: true
+---
+Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
+
+Agenda:
+
+* kubectl tooling, rolling update, deployments, imperative commands
+* Downward API / env. substitution, and maybe preconditions/dependencies
+
+
+**Notes from meeting:**
+
+1\. kubectl improvements
+
+* make it simpler to use, finish rolling update, higher-level deployment concepts
+* rolling update
+
+ * today
+ * can replace one rc by another rc specified by a file
+
+ * no explicit support for rollback, can sort of do it by doing rolling update to old version
+
+ * we keep annotations on rcs to keep track of desired # instances; won't work for rollback case b/c not symmetric
+
+ * need immutable image ids; currently no uuid that corresponds to image,version so if someone pushes on top you'll re-pull that; in API server we should translate images into uuids (as close to edge as possible)
+
+ * would be nice to auto-gen new rc instead of having user update it (e.g. when change image tag for container, etc.; currently need to change rc name and label value; could automate generating new rc)
+
+ * treating rcs as pets vs. cattle
+
+ * "roll me from v1 to v2" (or v2 to v1) - good enough for most people. don't care about record of what happened in the past.
+
+ * we're providing the module ansible can call to make something happen.
+
+ * how do you keep track of multiple templates; today we use multiple RCs
+
+ * if we had a deployment controller; deployment config spawns a pod that runs the rolling update; trigger is level-based update of image repository
+
+ * alternative short-term proposal: create new rc as clone of old one, futz with counts so new one is old one and vv, bring prev-named one (pet) down to zero and bring it back up with new template (this is very similar to how Borg does job updates)
+ * is it worthwhile if we want to have the deployments anyway? yes b/c we have lots of concepts already; need to simplify
+
+ * deployment controller keeps track of multiple templates which is what you need for rolling updates and canaries
+
+ * only reason for new thing is to move the process into the server instead of the client?
+
+ * may not need to make it an API object; should provide experience where it's not an API object and is just something client side
+
+ * need an experience now so need to do it in client because object won't land before 1.0
+
+ * having simplified experience for people who only want to engage w/ RCs
+
+ * how does rollback work: ctrl-c, rollout v2 v1. rollback pattern can be in person's head. 2 kinds of rollback: i'm at steady state and want to go back, and i've got canary deployment and hit ctrl-c how do i get rid of the canary deployment (e.g. new is failing). ctrl-c might not work. delete canary controller and its pods. wish there was a command to also delete pods (there is -- kubectl stop). argument for not reusing name: when you move fwd you can stop the new thing and you're ok, vs. if you replace the old one and you've created a copy if you hit ctrl-c you don't have anything you can stop. but you could wait to flip the name until the end, use naming convention so can figure out what is going on, etc.
+
+ * two different experiences: (1) i'm using version control, have version history of last week rollout this week, rolling update with two files -> create v2, ??? v1, don't have a pet - moved into world of version control where have cumulative history; and (2) imperative kubectl v1 v2 where sys takes care of details, that's where we use the snapshot pattern
+
+* other imperative commands
+
+ * run-container (or just run): spec command on command line which makes it more similar to docker run; but not multi-container pods.
+
+ * \--forever vs. not (one shot exec via simple command)
+
+ * would like it to go interactive - run -it and runs in cluster but you have interactive terminal to your process.
+
+ * how do command line args work. could say --image multiple times. will cobra support? in openshift we have clever syntax for grouping arguments together. doesn't work for real structured parameters.
+
+ * alternative: create pod; add container add container ...; run pod -- build and don't run object until 'run pod'
+
+ * \-- to separate container args
+
+ * create a pod, mutate it before you run it - like initializer pattern
+* kind discovery
+
+ * if we have run and sometimes it creates an rc and sometimes it doesn't, how does user know what to delete if they want to delete whatever they created with run
+
+ * bburns has proposal for don't specify kind if you do command like stop, delete; let kubectl figure it out
+
+ * alternative: allow you to define alias from name to set of resource types, e.g. delete all which would follow that alias (all could mean everything in some namespace, or unscoped, etc.) - someone explicitly added something to a set vs. accidentally showed up like nodes
+
+ * would like to see extended to allow tools to specify their own aliases (not just users); e.g. resize can say i can handle RCs, delete can say I can handle everything, etc. so we can automatically do these things w/o users having to specify stuff. but need the right mechanism.
+
+ * resourcebuilder has concept of doing that kind of expansion depending on how we fit in targeted commands. for instance if you want to add a volume to pods and rcs, you need something to go find the pod template and change it. there's the search part of it (delete nginx -> you have to figure out what object they are referring to) and then command can say i got a pod i know what to do with a pod.
+
+ * alternative heuristic: what if default target of all commands was deployments. kubectl run -> deployment. too much work, easier to clean up existing CLI. leave door open for that. macro objects OK but a lot more work to make that work. eventually will want index to make these efficient. could rely more on swagger to tell us types.
+
+2\. paul/downward api: env substitution (see the sketch at the end of these notes)
+
+ * create ad-hoc env var like strings, e.g. k8s_pod_name that would get sub'd by system in objects
+ * allow people to create env vars that refer to fields of k8s objects w/o query api from inside their container; in some cases enables query api from their container (e.g. pass obj names, namespaces); e.g. sidecar containers need this for pulling things from api server
+ * another proposal similar: instead of env var like names, have JSON-path-like syntax for referring to object field names; e.g. `$.metadata.name` to refer to name of current object, maybe have some syntax for referring to related objects like node that a pod is on. advantage of JSON path-like syntax is that it's less ad hoc. disadvantage is that you can only refer to things that are fields of objects.
+ * for both, if you populate env vars then you have drawback that fields only set when container is created. but least degree of coupling -- off the shelf containers, containers don't need to know how to talk to k8s API. keeps the k8s concepts in the control plane.
+ * we were converging on JSON path like approach. but need prototype or at least deeper proposal to demo.
+ * paul: one variant is for env vars in addition to value field have different sources which is where you would plug in e.g. syntax you use to describe a field of an object; another source would be a source that described info about the host. have partial prototype. clean separation between what's in image vs. control plane. could use source idea for volume plugin.
+ * use case: provide info for sidecar container to contact API server
+ * use case: pass down unique identifiers or things like using UID as unique identifier
+ * clayton: for rocket or gce metadata service being available for every pod for more sophisticated things; most containers want to find endpoint of service,
+
+3\. preconditions/dependencies
+
+* when you create pods that talk to services, the service env vars only get populated if you create the objs in the right order. if you use dns it's less of a problem but some apps are fragile. may crash if svc they depend on is not there, may take a long time to restart. proposal to have preconds that block starting pods until objs they depend on exist.
+* infer automatically if we ask people to declare which env vars they wanted, or have dep mech at pod or rc or obj level to say this obj doesn't become active until this other thing exists.
+* can use event hook? only app owner knows their dependency or when service is ready to serve.
+* one proposal is to use pre-start hook. another is precondition probe - pre-start hook could do a probe: does anything respond when i hit this svc address or ip? if not, the probe fails. could be implemented in pre-start hook. more useful than post-start. is part of rkt spec. has stages 0, 1, 2. hard to do in docker today, easy in rocket.
+* pre-start hook in container: how will it affect the readiness probe, since the container might have a lock until some arbitrary condition is met if you implement with prestart hook. there has to be some compensation on when kubelet runs readiness/liveness probes if you have a hook. systemd has timeouts around the stages of process lifecycle.
+* if we go to black box model of container pre-start makes sense; if container spec becomes more descriptive of process model like systemd, then does kubelet need to know more about process model to do the right thing
+* ideally msg from inside the container to say i've done all of my pre-start actions. sdnotify for systemd does this. you tell systemd that you're done, it will communicate to other deps that you're alive.
+* but... someone could just implement preconds inside their container. makes it easier to adapt an app w/o having to change their image. alternative is just have a pattern how they do it themselves but we don't do it for them.
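+
+As a sketch of the env-substitution idea from item 2 above, expressed in the "sources" style paul described (all field names here are illustrative; the proposal was still in flux at the time of these notes):
+
+```yaml
+apiVersion: v1beta3
+kind: Pod
+metadata:
+  name: sidecar-demo
+spec:
+  containers:
+    - name: sidecar
+      image: example/sidecar     # hypothetical image
+      env:
+        - name: POD_NAME
+          valueFrom:             # a source other than a literal value
+            fieldRef:
+              fieldPath: metadata.name   # the "$.metadata.name" reference from the notes
+```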
+
diff --git a/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_17.md b/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_17.md
new file mode 100644
index 00000000000..4605f7ceddf
--- /dev/null
+++ b/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_17.md
@@ -0,0 +1,173 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Weekly Kubernetes Community Hangout Notes - April 17 2015 "
+date: Friday, April 17, 2015
+pagination:
+ enabled: true
+---
+Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
+
+Agenda
+
+* Mesos Integration
+* High Availability (HA)
+* Adding performance and profiling details to e2e to track regressions
+* Versioned clients
+
+Notes
+
+
+* Mesos integration
+
+ * Mesos integration proposal:
+
+ * No blockers to integration.
+
+ * Documentation needs to be updated.
+* HA
+
+ * Proposal should land today.
+
+ * Etcd cluster.
+
+ * Load-balance apiserver.
+
+ * Cold standby for controller manager and other master components.
+* Adding performance and profiling details to e2e to track regression
+
+ * Want red light for performance regression
+
+ * Need a public DB to post the data
+
+ * Justin working on multi-platform e2e dashboard
+* Versioned clients
+
+ * Client library currently uses internal API objects.
+
+ * Nobody reported that frequent changes to types.go have been painful, but we are worried about it.
+
+ * Structured types are useful in the client. Versioned structs would be ok.
+
+ * If start with json/yaml (kubectl), shouldn’t convert to structured types. Use swagger.
+* Security context
+
+ * Administrators can restrict who can run privileged containers or require specific unix uids
+
+ * Kubelet will be able to get pull credentials from apiserver
+
+ * Policy proposal coming in the next week or so
+* Discussing upstreaming of users, etc. into Kubernetes, at least as optional
+* 1.0 Roadmap
+
+ * Focus is performance, stability, cluster upgrades
+
+ * TJ has been making some edits to [roadmap.md][4] but hasn’t sent out a PR yet
+* Kubernetes UI
+
+ * Dependencies broken out into third-party
+
+ * @lavalamp is reviewer
+
+
+[4]: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/roadmap.md
diff --git a/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_29.md b/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_29.md
new file mode 100644
index 00000000000..9f1422fb8fc
--- /dev/null
+++ b/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_29.md
@@ -0,0 +1,62 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Weekly Kubernetes Community Hangout Notes - April 24 2015 "
+date: Thursday, April 30, 2015
+pagination:
+ enabled: true
+---
+Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
+
+
+Agenda:
+
+* Flocker and Kubernetes integration demo
+
+Notes:
+
+* flocker and kubernetes integration demo
+  * Flocker Q/A
+
+ * Does the file still exist on node1 after migration?
+
+ * Brendan: Any plan to make this a volume? So we don't need powerstrip?
+
+ * Luke: Need to figure out interest to decide if we want to make it a first-class persistent disk provider in kube.
+
+ * Brendan: Removing need for powerstrip would make it simple to use. Totally go for it.
+
+ * Tim: Should take no more than 45 minutes to add it to kubernetes:)
+
+ * Derek: Contrast this with persistent volumes and claims?
+
+ * Luke: Not much difference, except for the novel ZFS based backend. Makes workloads really portable.
+
+ * Tim: very different than network-based volumes. It's interesting that it is the only offering that allows upgrading media.
+
+ * Brendan: claims, how does it look for replicated claims? e.g. Cassandra wants to have replicated data underneath. It would be efficient to scale up and down. Create storage on the fly based on load dynamically. It's a step beyond taking snapshots - programmatically creating replicas with preallocation.
+
+ * Tim: helps with auto-provisioning.
+
+ * Brian: Does flocker require any other component?
+
+ * Kai: Flocker control service co-located with the master. (diagram on blog post). Powerstrip + Powerstrip Flocker. Very interested in persisting state in etcd. It keeps metadata about each volume.
+
+ * Brendan: In future, flocker can be a plugin and we'll take care of persistence. Post v1.0.
+
+ * Brian: Interested in adding generic plugin for services like flocker.
+
+ * Luke: ZFS can become really valuable when scaling to lots of containers on a single node.
+
+ * Alex: Can the flocker service be run as a pod?
+
+ * Kai: Yes, only requirement is the flocker control service should be able to talk to zfs agent. zfs agent needs to be installed on the host and zfs binaries need to be accessible.
+
+ * Brendan: In theory, all zfs bits can be put into a container with devices.
+
+ * Luke: Yes, still working through cross-container mounting issue.
+
+ * Tim: pmorie is working through it to make kubelet work in a container. Possible re-use.
+
+ * Kai: Cinder support is coming. Few days away.
+* Bob: What's the process of pushing kube to GKE? Need more visibility for confidence.
diff --git a/blog/_posts/2015-05-00-Appc-Support-For-Kubernetes-Through-Rkt.md b/blog/_posts/2015-05-00-Appc-Support-For-Kubernetes-Through-Rkt.md
new file mode 100644
index 00000000000..06a90ffd94c
--- /dev/null
+++ b/blog/_posts/2015-05-00-Appc-Support-For-Kubernetes-Through-Rkt.md
@@ -0,0 +1,28 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " AppC Support for Kubernetes through RKT "
+date: Monday, May 04, 2015
+pagination:
+ enabled: true
+---
+We very recently accepted a pull request to the Kubernetes project to add appc support for the Kubernetes community. Appc is a new open container specification that was initiated by CoreOS, and is supported through CoreOS's rkt container runtime.
+
+
+
+This is an important step forward for the Kubernetes project and for the broader containers community. It adds flexibility and choice to the container-verse and brings the promise of compelling new security and performance capabilities to the Kubernetes developer.
+
+
+
+Container-based runtimes (like Docker or rkt), when paired with smart orchestration technologies (like Kubernetes and/or Apache Mesos), are a legitimate disruption to the way that developers build and run their applications. While the supporting technologies are relatively nascent, they do offer the promise of some very powerful new ways to assemble, deploy, update, debug and extend solutions. I believe that the world has not yet felt the full potential of containers and the next few years are going to be particularly exciting! With that in mind it makes sense for several projects to emerge with different properties and different purposes. It also makes sense to be able to plug together different pieces (whether it be the container runtime or the orchestrator) based on the specific needs of a given application.
+
+
+
+Docker has done an amazing job of democratizing container technologies and making them accessible to the outside world, and we expect Kubernetes to support Docker indefinitely. CoreOS has also started to do interesting work with rkt to create an elegant, clean, simple and open platform that offers some really interesting properties. It looks poised to deliver a secure and performant operating environment for containers. The Kubernetes team has been working with the appc team at CoreOS for a while and in many ways they built rkt with Kubernetes in mind as a simple pluggable runtime component.
+
+
+
+The really nice thing is that with Kubernetes you can now pick the container runtime that works best for you based on your workloads’ needs, change runtimes without having to replace your cluster environment, or even mix together applications where different parts are running in different container runtimes in the same cluster. Additional choices can’t help but ultimately benefit the end developer.
+
+-- Craig McLuckie
+Google Product Manager and Kubernetes co-founder
diff --git a/blog/_posts/2015-05-00-Docker-And-Kubernetes-And-Appc.md b/blog/_posts/2015-05-00-Docker-And-Kubernetes-And-Appc.md
new file mode 100644
index 00000000000..72fd4cca993
--- /dev/null
+++ b/blog/_posts/2015-05-00-Docker-And-Kubernetes-And-Appc.md
@@ -0,0 +1,30 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Docker and Kubernetes and AppC "
+date: Monday, May 18, 2015
+pagination:
+ enabled: true
+---
+Recently we announced the intent in Kubernetes, our open source cluster manager, to support AppC and RKT, an alternative container format that has been driven by CoreOS with input from many companies (including Google). This announcement has generated a surprising amount of buzz and has been construed as a move from Google to support Appc over Docker. Many have taken it as signal that Google is moving away from supporting Docker. I would like to take a moment to clarify Google’s position in this.
+
+
+Google has consistently supported the Docker initiative and has invested heavily in Docker. In the early days of containers, we decided to de-emphasize our own open source offering (LMCTFY) and to instead focus on Docker. As a result of that we have two engineers that are active maintainers of LibContainer, a critical piece of the Docker ecosystem, and are working closely with Docker to add many additional features and capabilities. Docker is currently the only supported runtime in GKE (Google Container Engine), our commercial containers product, and in GAE (Google App Engine), our Platform-as-a-Service product.
+
+
+While we may introduce AppC support at some point in the future to GKE based on our customers' demand, we intend to continue to support the Docker project and product, and Docker the company indefinitely. To date Docker is by far the most mature and widely used container offering in the market, with over 400 million downloads. It has been production ready for almost a year and has seen widespread use in industry, and also here inside Google.
+
+
+Beyond the obvious traction Docker has in the market, we are heartened by many of Docker’s recent initiatives to open the project and support ‘batteries included, but swappable’ options across the stack, and recognize that it offers a great developer experience for engineers new to the containers world. We are encouraged, for example, by the separation of the Docker Machine and Swarm projects from the core runtime, and are glad to see support for Docker Machine emerging for Google Compute Engine.
+
+
+Our intent with our announcement for AppC and RKT support was to establish Kubernetes (our open source project) as a neutral ground in the world of containers. Customers should be able to pick their container runtime and format based solely on its technical merits, and we do see AppC as offering some legitimate potential merits as the technology matures. Somehow this was misconstrued as an ‘a vs b’ selection which is simply untrue. The world is almost always better for having choice, and it is perfectly natural that different tools should be available for different purposes.
+
+
+Stepping back a little, one must recognize that Docker has done remarkable work in democratizing container technologies and making them accessible to everyone. We believe that Docker will continue to drive great experiences for developers looking to use containers and plan to support this technology and its burgeoning community indefinitely. We, for one, are looking forward to the upcoming Dockercon where Brendan Burns (a Kubernetes co-founder) will be talking about the role of Docker in modern distributed systems design.
+
+
+
+-- Craig McLuckie
+
+Google Group Product Manager, and Kubernetes Project Co-Founder
diff --git a/blog/_posts/2015-05-00-Kubernetes-On-Openstack.md b/blog/_posts/2015-05-00-Kubernetes-On-Openstack.md
new file mode 100644
index 00000000000..e99c9c745d2
--- /dev/null
+++ b/blog/_posts/2015-05-00-Kubernetes-On-Openstack.md
@@ -0,0 +1,62 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes on OpenStack "
+date: Tuesday, May 19, 2015
+pagination:
+ enabled: true
+---
+
+
+
+
+
+
+Today, the [OpenStack foundation](https://www.openstack.org/foundation/) made it even easier for you to deploy and manage clusters of Docker containers on OpenStack clouds by including Kubernetes in its [Community App Catalog](http://apps.openstack.org/). At a keynote today at the OpenStack Summit in Vancouver, Mark Collier, COO of the OpenStack Foundation, and Craig Peters, [Mirantis](https://www.mirantis.com/) product line manager, demonstrated the Community App Catalog workflow by launching a Kubernetes cluster in a matter of seconds by leveraging the compute, storage, networking and identity systems already present in an OpenStack cloud.
+
+
+
+The entries in the catalog include not just the ability to [start a Kubernetes cluster](http://apps.openstack.org/#tab=murano-apps&asset=Kubernetes%20Cluster), but also a range of applications deployed in Docker containers managed by Kubernetes. These applications include:
+
+
+
+- Apache web server
+- Nginx web server
+- Crate - The Distributed Database for Docker
+- GlassFish - Java EE 7 Application Server
+- Tomcat - An open-source web server and servlet container
+- InfluxDB - An open-source, distributed, time series database
+- Grafana - Metrics dashboard for InfluxDB
+- Jenkins - An extensible open source continuous integration server
+- MariaDB database
+- MySQL database
+- Redis - Key-value cache and store
+- PostgreSQL database
+- MongoDB NoSQL database
+- Zend Server - The Complete PHP Application Platform
+
+
+
+This list will grow, and is curated [here](https://github.com/openstack/murano-apps/tree/master/Docker/Kubernetes). You can examine (and contribute to) the YAML file that tells Murano how to install and start the Kubernetes cluster [here](https://github.com/openstack/murano-apps/blob/master/Docker/Kubernetes/KubernetesCluster/package/Classes/KubernetesCluster.yaml).
+
+
+
+[The Kubernetes open source project](https://github.com/GoogleCloudPlatform/kubernetes) has continued to see fantastic community adoption and increasing momentum, with over 11,000 commits and 7,648 stars on GitHub. With supporters ranging from Red Hat and Intel to CoreOS and Box.net, it has come to represent a range of customer interests ranging from enterprise IT to cutting edge startups. We encourage you to give it a try, give us your feedback, and get involved in our growing community.
+
+
+- Martin Buhr, Product Manager, Kubernetes Open Source Project
diff --git a/blog/_posts/2015-05-00-Kubernetes-Release-0160.md b/blog/_posts/2015-05-00-Kubernetes-Release-0160.md
new file mode 100644
index 00000000000..4c9f8732944
--- /dev/null
+++ b/blog/_posts/2015-05-00-Kubernetes-Release-0160.md
@@ -0,0 +1,72 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes Release: 0.16.0 "
+date: Monday, May 11, 2015
+pagination:
+ enabled: true
+---
+Release Notes:
+
+- Bring up a kubernetes cluster using coreos image as worker nodes [#7445](https://github.com/GoogleCloudPlatform/kubernetes/pull/7445) (dchen1107)
+- Cloning v1beta3 as v1 and exposing it in the apiserver [#7454](https://github.com/GoogleCloudPlatform/kubernetes/pull/7454) (nikhiljindal)
+- API Conventions for Late-initializers [#7366](https://github.com/GoogleCloudPlatform/kubernetes/pull/7366) (erictune)
+- Upgrade Elasticsearch to 1.5.2 for cluster logging [#7455](https://github.com/GoogleCloudPlatform/kubernetes/pull/7455) (satnam6502)
+- Make delete actually stop resources by default. [#7210](https://github.com/GoogleCloudPlatform/kubernetes/pull/7210) (brendandburns)
+- Change kube2sky to use token-system-dns secret, point at https endpoint ... [#7154](https://github.com/GoogleCloudPlatform/kubernetes/pull/7154) (cjcullen)
+- Updated CoreOS bare metal docs for 0.15.0 [#7364](https://github.com/GoogleCloudPlatform/kubernetes/pull/7364) (hvolkmer)
+- Print named ports in 'describe service' [#7424](https://github.com/GoogleCloudPlatform/kubernetes/pull/7424) (thockin)
+- AWS
+  - Return public & private addresses in GetNodeAddresses [#7040](https://github.com/GoogleCloudPlatform/kubernetes/pull/7040) (justinsb)
+  - Improving getting existing VPC and subnet [#6606](https://github.com/GoogleCloudPlatform/kubernetes/pull/6606) (gust1n)
+  - Set hostname\_override for minions, back to fully-qualified name [#7182](https://github.com/GoogleCloudPlatform/kubernetes/pull/7182) (justinsb)
+- Conversion to v1beta3
+  - Convert node level logging agents to v1beta3 [#7274](https://github.com/GoogleCloudPlatform/kubernetes/pull/7274) (satnam6502)
+  - Removing more references to v1beta1 from pkg/ [#7128](https://github.com/GoogleCloudPlatform/kubernetes/pull/7128) (nikhiljindal)
+  - update examples/cassandra to api v1beta3 [#7258](https://github.com/GoogleCloudPlatform/kubernetes/pull/7258) (caesarxuchao)
+  - Convert Elasticsearch logging to v1beta3 and de-salt [#7246](https://github.com/GoogleCloudPlatform/kubernetes/pull/7246) (satnam6502)
+  - Update examples/storm for v1beta3 [#7231](https://github.com/GoogleCloudPlatform/kubernetes/pull/7231) (bcbroussard)
+  - Update examples/spark for v1beta3 [#7230](https://github.com/GoogleCloudPlatform/kubernetes/pull/7230) (bcbroussard)
+  - Update Kibana RC and service to v1beta3 [#7240](https://github.com/GoogleCloudPlatform/kubernetes/pull/7240) (satnam6502)
+  - Updating the guestbook example to v1beta3 [#7194](https://github.com/GoogleCloudPlatform/kubernetes/pull/7194) (nikhiljindal)
+  - Update Phabricator to v1beta3 example [#7232](https://github.com/GoogleCloudPlatform/kubernetes/pull/7232) (bcbroussard)
+  - Update Kibana pod to speak to Elasticsearch using v1beta3 [#7206](https://github.com/GoogleCloudPlatform/kubernetes/pull/7206) (satnam6502)
+- Validate Node IPs; clean up validation code [#7180](https://github.com/GoogleCloudPlatform/kubernetes/pull/7180) (ddysher)
+- Add PortForward to runtime API. [#7391](https://github.com/GoogleCloudPlatform/kubernetes/pull/7391) (vmarmol)
+- kube-proxy uses token to access port 443 of apiserver [#7303](https://github.com/GoogleCloudPlatform/kubernetes/pull/7303) (erictune)
+- Move the logging-related directories to where I think they belong [#7014](https://github.com/GoogleCloudPlatform/kubernetes/pull/7014) (a-robinson)
+- Make client service requests use the default timeout now that external load balancers are created asynchronously [#6870](https://github.com/GoogleCloudPlatform/kubernetes/pull/6870) (a-robinson)
+- Fix bug in kube-proxy of not updating iptables rules if a service's public IPs change [#6123](https://github.com/GoogleCloudPlatform/kubernetes/pull/6123) (a-robinson)
+- PersistentVolumeClaimBinder [#6105](https://github.com/GoogleCloudPlatform/kubernetes/pull/6105) (markturansky)
+- Fixed validation message when trying to submit incorrect secret [#7356](https://github.com/GoogleCloudPlatform/kubernetes/pull/7356) (soltysh)
+- First step to supporting multiple k8s clusters [#6006](https://github.com/GoogleCloudPlatform/kubernetes/pull/6006) (justinsb)
+- Parity for namespace handling in secrets E2E [#7361](https://github.com/GoogleCloudPlatform/kubernetes/pull/7361) (pmorie)
+- Add cleanup policy to RollingUpdater [#6996](https://github.com/GoogleCloudPlatform/kubernetes/pull/6996) (ironcladlou)
+- Use narrowly scoped interfaces for client access [#6871](https://github.com/GoogleCloudPlatform/kubernetes/pull/6871) (ironcladlou)
+- Warning about Critical bug in the GlusterFS Volume Plugin [#7319](https://github.com/GoogleCloudPlatform/kubernetes/pull/7319) (wattsteve)
+- Rolling update
+  - First part of improved rolling update, allow dynamic next replication controller generation. [#7268](https://github.com/GoogleCloudPlatform/kubernetes/pull/7268) (brendandburns)
+  - Further implementation of rolling-update, add rename [#7279](https://github.com/GoogleCloudPlatform/kubernetes/pull/7279) (brendandburns)
+- Added basic apiserver authz tests. [#7293](https://github.com/GoogleCloudPlatform/kubernetes/pull/7293) (ashcrow)
+- Retry pod update on version conflict error in e2e test. [#7297](https://github.com/GoogleCloudPlatform/kubernetes/pull/7297) (quinton-hoole)
+- Add "kubectl validate" command to do a cluster health check. [#6597](https://github.com/GoogleCloudPlatform/kubernetes/pull/6597) (fabioy)
+- coreos/azure: Weave version bump, various other enhancements [#7224](https://github.com/GoogleCloudPlatform/kubernetes/pull/7224) (errordeveloper)
+- Azure: Wait for salt completion on cluster initialization [#6576](https://github.com/GoogleCloudPlatform/kubernetes/pull/6576) (jeffmendoza)
+- Tighten label parsing [#6674](https://github.com/GoogleCloudPlatform/kubernetes/pull/6674) (kargakis)
+- fix watch of single object [#7263](https://github.com/GoogleCloudPlatform/kubernetes/pull/7263) (lavalamp)
+- Upgrade go-dockerclient dependency to support CgroupParent [#7247](https://github.com/GoogleCloudPlatform/kubernetes/pull/7247) (guenter)
+- Make secret volume plugin idempotent [#7166](https://github.com/GoogleCloudPlatform/kubernetes/pull/7166) (pmorie)
+- Salt reconfiguration to get rid of nginx on GCE [#6618](https://github.com/GoogleCloudPlatform/kubernetes/pull/6618) (roberthbailey)
+- Revert "Change kube2sky to use token-system-dns secret, point at https e... [#7207](https://github.com/GoogleCloudPlatform/kubernetes/pull/7207) (fabioy)
+- Pod templates as their own type [#5012](https://github.com/GoogleCloudPlatform/kubernetes/pull/5012) (smarterclayton)
+- iscsi Test: Add explicit check for attach and detach calls. [#7110](https://github.com/GoogleCloudPlatform/kubernetes/pull/7110) (swagiaal)
+- Added field selector for listing pods [#7067](https://github.com/GoogleCloudPlatform/kubernetes/pull/7067) (ravigadde)
+- Record an event on node schedulable changes [#7138](https://github.com/GoogleCloudPlatform/kubernetes/pull/7138) (pravisankar)
+- Resolve [#6812](https://github.com/GoogleCloudPlatform/kubernetes/issues/6812), limit length of load balancer names [#7145](https://github.com/GoogleCloudPlatform/kubernetes/pull/7145) (caesarxuchao)
+- Convert error strings to proper validation errors. [#7131](https://github.com/GoogleCloudPlatform/kubernetes/pull/7131) (rjnagal)
+- ResourceQuota add object count support for secret and volume claims [#6593](https://github.com/GoogleCloudPlatform/kubernetes/pull/6593) (derekwaynecarr)
+- Use Pod.Spec.Host instead of Pod.Status.HostIP for pod subresources [#6985](https://github.com/GoogleCloudPlatform/kubernetes/pull/6985) (csrwng)
+- Prioritize deleting the non-running pods when reducing replicas [#6992](https://github.com/GoogleCloudPlatform/kubernetes/pull/6992) (yujuhong)
+- Kubernetes UI with Dashboard component [#7056](https://github.com/GoogleCloudPlatform/kubernetes/pull/7056) (preillyme)
+
+To download, please visit https://github.com/GoogleCloudPlatform/kubernetes/releases/tag/v0.16.0
diff --git a/blog/_posts/2015-05-00-Kubernetes-Release-0170.md b/blog/_posts/2015-05-00-Kubernetes-Release-0170.md
new file mode 100644
index 00000000000..f1def26fd04
--- /dev/null
+++ b/blog/_posts/2015-05-00-Kubernetes-Release-0170.md
@@ -0,0 +1,621 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes Release: 0.17.0 "
+date: Friday, May 15, 2015
+pagination:
+ enabled: true
+---
+Release Notes:
+
+* Cleanups
+
+ * Remove old salt configs [#8065][4] (roberthbailey)
+ * Kubelet: minor cleanups [#8069][5] (yujuhong)
+* v1beta3
+
+ * update example/walkthrough to v1beta3 [#7940][6] (caesarxuchao)
+ * update example/rethinkdb to v1beta3 [#7946][7] (caesarxuchao)
+ * verify the v1beta3 yaml files all work; merge the yaml files [#7917][8] (caesarxuchao)
+ * update examples/cassandra to api v1beta3 [#7258][9] (caesarxuchao)
+ * update service.json in persistent-volume example to v1beta3 [#7899][10] (caesarxuchao)
+ * update mysql-wordpress example to use v1beta3 API [#7864][11] (caesarxuchao)
+ * Update examples/meteor to use API v1beta3 [#7848][12] (caesarxuchao)
+ * update node-selector example to API v1beta3 [#7872][13] (caesarxuchao)
+ * update logging-demo to use API v1beta3; modify the way to access Elasticsearch and Kibana services [#7824][14] (caesarxuchao)
+ * Convert the skydns rc to use v1beta3 and add a health check to it [#7619][15] (a-robinson)
+ * update the hazelcast example to API version v1beta3 [#7728][16] (caesarxuchao)
+ * Fix YAML parsing for v1beta3 objects in the kubelet for file/http [#7515][17] (brendandburns)
+ * Updated kubectl cluster-info to show v1beta3 addresses [#7502][18] (piosz)
+* Kubelet
+
+ * kubelet: Fix racy kubelet tests. [#7980][19] (yifan-gu)
+ * kubelet/container: Move prober.ContainerCommandRunner to container. [#8079][20] (yifan-gu)
+ * Kubelet: set host field in the pending pod status [#6127][21] (yujuhong)
+ * Fix the kubelet node watch [#6442][22] (yujuhong)
+ * Kubelet: recreate mirror pod if the static pod changes [#6607][23] (yujuhong)
+ * Kubelet: record the timestamp correctly in the runtime cache [#7749][24] (yujuhong)
+ * Kubelet: wait until container runtime is up [#7729][25] (yujuhong)
+ * Kubelet: replace DockerManager with the Runtime interface [#7674][26] (yujuhong)
+ * Kubelet: filter out terminated pods in SyncPods [#7301][27] (yujuhong)
+ * Kubelet: parallelize cleaning up containers in unwanted pods [#7048][28] (yujuhong)
+ * kubelet: Add container runtime option for rkt. [#7952][29] (yifan-gu)
+ * kubelet/rkt: Remove build label. [#7916][30] (yifan-gu)
+ * kubelet/metrics: Move instrumented_docker.go to dockertools. [#7327][31] (yifan-gu)
+ * kubelet/rkt: Add GetPods() for rkt. [#7599][32] (yifan-gu)
+ * kubelet/rkt: Add KillPod() and GetPodStatus() for rkt. [#7605][33] (yifan-gu)
+ * pkg/kubelet: Fix logging. [#4755][34] (yifan-gu)
+ * kubelet: Refactor RunInContainer/ExecInContainer/PortForward. [#6491][35] (yifan-gu)
+ * kubelet/DockerManager: Fix returning empty error from GetPodStatus(). [#6609][36] (yifan-gu)
+ * kubelet: Move pod infra container image setting to dockertools. [#6634][37] (yifan-gu)
+ * kubelet/fake_docker_client: Use self's PID instead of 42 in testing. [#6653][38] (yifan-gu)
+ * kubelet/dockertool: Move Getpods() to DockerManager. [#6778][39] (yifan-gu)
+ * kubelet/dockertools: Add puller interfaces in the containerManager. [#6776][40] (yifan-gu)
+ * kubelet: Introduce PodInfraContainerChanged(). [#6608][41] (yifan-gu)
+ * kubelet/container: Replace DockerCache with RuntimeCache. [#6795][42] (yifan-gu)
+ * kubelet: Clean up computePodContainerChanges. [#6844][43] (yifan-gu)
+ * kubelet: Refactor prober. [#7009][44] (yifan-gu)
+ * kubelet/container: Update the runtime interface. [#7466][45] (yifan-gu)
+ * kubelet: Refactor isPodRunning() in runonce.go [#7477][46] (yifan-gu)
+ * kubelet/rkt: Add basic rkt runtime routines. [#7465][47] (yifan-gu)
+ * kubelet/rkt: Add podInfo. [#7555][48] (yifan-gu)
+ * kubelet/container: Add GetContainerLogs to runtime interface. [#7488][49] (yifan-gu)
+ * kubelet/rkt: Add routines for converting kubelet pod to rkt pod. [#7543][50] (yifan-gu)
+ * kubelet/rkt: Add RunPod() for rkt. [#7589][51] (yifan-gu)
+ * kubelet/rkt: Add RunInContainer()/ExecInContainer()/PortForward(). [#7553][52] (yifan-gu)
+ * kubelet/container: Move ShouldContainerBeRestarted() to runtime. [#7613][53] (yifan-gu)
+ * kubelet/rkt: Add SyncPod() to rkt. [#7611][54] (yifan-gu)
+ * Kubelet: persist restart count of a container [#6794][55] (yujuhong)
+ * kubelet/container: Move pty*.go to container runtime package. [#7951][56] (yifan-gu)
+ * kubelet: Add container runtime option for rkt. [#7900][57] (yifan-gu)
+ * kubelet/rkt: Add docker prefix to image string. [#7803][58] (yifan-gu)
+ * kubelet/rkt: Inject dependencies to rkt. [#7849][59] (yifan-gu)
+ * kubelet/rkt: Remove dependencies on rkt.store [#7859][60] (yifan-gu)
+ * Kubelet talks securely to apiserver [#2387][61] (erictune)
+ * Rename EnvVarSource.FieldPath -> FieldRef and add example [#7592][62] (pmorie)
+ * Add containerized option to kubelet binary [#7741][63] (pmorie)
+ * Ease building kubelet image [#7948][64] (pmorie)
+ * Remove unnecessary bind-mount from dockerized kubelet run [#7854][65] (pmorie)
+ * Add ability to dockerize kubelet in local cluster [#7798][66] (pmorie)
+ * Create docker image for kubelet [#7797][67] (pmorie)
+ * Security context - types, kubelet, admission [#7343][68] (pweil-)
+ * Kubelet: Add rkt as a runtime option [#7743][69] (vmarmol)
+ * Fix kubelet's docker RunInContainer implementation [#7746][70] (vishh)
+* AWS
+
+ * AWS: Don't try to copy gce_keys in jenkins e2e job [#8018][71] (justinsb)
+ * AWS: Copy some new properties from config-default => config.test [#7992][72] (justinsb)
+ * AWS: make it possible to disable minion public ip assignment [#7928][73] (manolitto)
+ * update AWS CloudFormation template and cloud-configs [#7667][74] (antoineco)
+ * AWS: Fix variable naming that meant not all tokens were written [#7736][75] (justinsb)
+ * AWS: Change apiserver to listen on 443 directly, not through nginx [#7678][76] (justinsb)
+ * AWS: Improving getting existing VPC and subnet [#6606][77] (gust1n)
+ * AWS EBS volume support [#5138][78] (justinsb)
+* Introduce an 'svc' segment for DNS search [#8089][79] (thockin)
+* Adds ability to define a prefix for etcd paths [#5707][80] (kbeecher)
+* Add kubectl log --previous support to view last terminated container log [#7973][81] (dchen1107)
+* Add a flag to disable legacy APIs [#8083][82] (brendandburns)
+* make the dockerkeyring handle multiple matching credentials [#7971][83] (deads2k)
+* Convert Fluentd to Cloud Logging pod specs to YAML [#8078][84] (satnam6502)
+* Use etcd to allocate PortalIPs instead of in-mem [#7704][85] (smarterclayton)
+* eliminate auth-path [#8064][86] (deads2k)
+* Record failure reasons for image pulling [#7981][87] (yujuhong)
+* Rate limit replica creation [#7869][88] (bprashanth)
+* Upgrade to Kibana 4 for cluster logging [#7995][89] (satnam6502)
+* Added name to kube-dns service [#8049][90] (piosz)
+* Fix validation by moving it into the resource builder. [#7919][91] (brendandburns)
+* Add cache with multiple shards to decrease lock contention [#8050][92] (fgrzadkowski)
+* Delete status from displayable resources [#8039][93] (nak3)
+* Refactor volume interfaces to receive pod instead of ObjectReference [#8044][94] (pmorie)
+* fix kube-down for provider gke [#7565][95] (jlowdermilk)
+* Service port names are required for multi-port [#7786][96] (thockin)
+* Increase disk size for kubernetes master. [#8051][97] (fgrzadkowski)
+* expose: Load input object for increased safety [#7774][98] (kargakis)
+* Improvements to conversion methods generator [#7896][99] (wojtek-t)
+* Added displaying external IPs to kubectl cluster-info [#7557][100] (piosz)
+* Add missing Errorf formatting directives [#8037][101] (shawnps)
+* Add startup code to apiserver to migrate etcd keys [#7567][102] (kbeecher)
+* Use error type from docker go-client instead of string [#8021][103] (ddysher)
+* Accurately get hardware cpu count in Vagrantfile. [#8024][104] (BenTheElder)
+* Stop setting a GKE specific version of the kubeconfig file [#7921][105] (roberthbailey)
+* Make the API server deal with HEAD requests via the service proxy [#7950][106] (satnam6502)
+* GlusterFS Critical Bug Resolved - Removing warning in README [#7983][107] (wattsteve)
+* Don't use the first token `uname -n` as the hostname [#7967][108] (yujuhong)
+* Call kube-down in test-teardown for vagrant. [#7982][109] (BenTheElder)
+* defaults_tests: verify defaults when converting to an API object [#6235][110] (yujuhong)
+* Use the full hostname for mirror pod name. [#7910][111] (yujuhong)
+* Removes RunPod in the Runtime interface [#7657][112] (yujuhong)
+* Clean up dockertools/manager.go and add more unit tests [#7533][113] (yujuhong)
+* Adapt pod killing and cleanup for generic container runtime [#7525][114] (yujuhong)
+* Fix pod filtering in replication controller [#7198][115] (yujuhong)
+* Print container statuses in `kubectl get pods` [#7116][116] (yujuhong)
+* Prioritize deleting the non-running pods when reducing replicas [#6992][117] (yujuhong)
+* Fix locking issue in pod manager [#6872][118] (yujuhong)
+* Limit the number of concurrent tests in integration.go [#6655][119] (yujuhong)
+* Fix typos in different config comments [#7931][120] (pmorie)
+* Update cAdvisor dependency. [#7929][121] (vmarmol)
+* Ubuntu-distro: deprecate & merge ubuntu single node work to ubuntu cluster node stuff [#5498][122] (resouer)
+* Add control variables to Jenkins E2E script [#7935][123] (saad-ali)
+* Check node status as part of validate-cluster.sh. [#7932][124] (fabioy)
+* Add old endpoint cleanup function [#7821][125] (lavalamp)
+* Support recovery from in the middle of a rename. [#7620][126] (brendandburns)
+* Update Exec and Portforward client to use pod subresource [#7715][127] (csrwng)
+* Added NFS to PV structs [#7564][128] (markturansky)
+* Fix environment variable error in Vagrant docs [#7904][129] (posita)
+* Adds a simple release-note builder that scrapes the Github API for recent PRs [#7616][130] (brendandburns)
+* Scheduler ignores nodes that are in a bad state [#7668][131] (bprashanth)
+* Set GOMAXPROCS for etcd [#7863][132] (fgrzadkowski)
+* Auto-generated conversion methods calling one another [#7556][133] (wojtek-t)
+* Bring up a kubernetes cluster using coreos image as worker nodes [#7445][134] (dchen1107)
+* Godep: Add godep for rkt. [#7410][135] (yifan-gu)
+* Add volumeGetter to rkt. [#7870][136] (yifan-gu)
+* Update cAdvisor dependency. [#7897][137] (vmarmol)
+* DNS: expose 53/TCP [#7822][138] (thockin)
+* Set NodeReady=False when docker is dead [#7763][139] (wojtek-t)
+* Ignore latency metrics for events [#7857][140] (fgrzadkowski)
+* SecurityContext admission clean up [#7792][141] (pweil-)
+* Support manually-created and generated conversion functions [#7832][142] (wojtek-t)
+* Add latency metrics for etcd operations [#7833][143] (fgrzadkowski)
+* Update errors_test.go [#7885][144] (hurf)
+* Change signature of container runtime PullImage to allow pull w/ secret [#7861][145] (pmorie)
+* Fix bug in Service documentation: incorrect location of "selector" in JSON [#7873][146] (bkeroackdsc)
+* Fix controller-manager manifest for providers that don't specify CLUSTER_IP_RANGE [#7876][147] (cjcullen)
+* Fix controller unittests [#7867][148] (bprashanth)
+* Enable GCM and GCL instead of InfluxDB on GCE [#7751][149] (saad-ali)
+* Remove restriction that cluster-cidr be a class-b [#7862][150] (cjcullen)
+* Fix OpenShift example [#7591][151] (derekwaynecarr)
+* API Server - pass path name in context of create request for subresource [#7718][152] (csrwng)
+* Rolling Updates: Add support for --rollback. [#7575][153] (brendandburns)
+* Update to container-vm-v20150505 (Also updates GCE to Docker 1.6) [#7820][154] (zmerlynn)
+* Fix metric label [#7830][155] (rhcarvalho)
+* Fix v1beta1 typos in v1beta2 conversions [#7838][156] (pmorie)
+* skydns: use the etcd-2.x native syntax, enable IANA attributed ports. [#7764][157] (AntonioMeireles)
+* Added port 6443 to kube-proxy default IP address for api-server [#7794][158] (markllama)
+* Added client header info for authentication doc. [#7834][159] (ashcrow)
+* Clean up safe_format_and_mount spam in the startup logs [#7827][160] (zmerlynn)
+* Set allocate_node_cidrs to be blank by default. [#7829][161] (roberthbailey)
+* Fix sync problems in [#5246][162] [#7799][163] (cjcullen)
+* Fix event doc link [#7823][164] (saad-ali)
+* Cobra update and bash completions fix [#7776][165] (eparis)
+* Fix kube2sky flakes. Fix tools.GetEtcdVersion to work with etcd > 2.0.7 [#7675][166] (cjcullen)
+* Change kube2sky to use token-system-dns secret, point at https endpoint ... [#7154][167] (cjcullen)
+* replica: serialize created-by reference [#7468][168] (simon3z)
+* Inject mounter into volume plugins [#7702][169] (pmorie)
+* bringing CoreOS cloud-configs up-to-date (against 0.15.x and latest OS' alpha) [#6973][170] (AntonioMeireles)
+* Update kubeconfig-file doc. [#7787][171] (jlowdermilk)
+* Throw an API error when deleting namespace in termination [#7780][172] (derekwaynecarr)
+* Fix command field PodExecOptions [#7773][173] (csrwng)
+* Start ImageManager housekeeping in Run(). [#7785][174] (vmarmol)
+* fix DeepCopy to properly support runtime.EmbeddedObject [#7769][175] (deads2k)
+* fix master service endpoint system for multiple masters [#7273][176] (lavalamp)
+* Add genbashcomp to KUBE_TEST_TARGETS [#7757][177] (nak3)
+* Change the cloud provider TCPLoadBalancerExists function to GetTCPLoadBalancer...[#7669][178] (a-robinson)
+* Add containerized option to kubelet binary [#7772][179] (pmorie)
+* Fix swagger spec [#7779][180] (pmorie)
+* FIX: Issue [#7750][181] - Hyperkube docker image needs certificates to connect to cloud-providers [#7755][182] (viklas)
+* Add build labels to rkt [#7752][183] (vmarmol)
+* Check license boilerplate for python files [#7672][184] (eparis)
+* Reliable updates in rollingupdate [#7705][185] (bprashanth)
+* Don't exit abruptly if there aren't yet any minions right after the cluster is created. [#7650][186] (roberthbailey)
+* Make changes suggested in [#7675][166] [#7742][187] (cjcullen)
+* A guide to set up kubernetes multiple nodes cluster with flannel on fedora [#7357][188] (aveshagarwal)
+* Setup generators in factory [#7760][189] (kargakis)
+* Reduce usage of time.After [#7737][190] (lavalamp)
+* Remove node status from "componentstatuses" call. [#7735][191] (fabioy)
+* React to failure by growing the remaining clusters [#7614][192] (tamsky)
+* Fix typo in runtime_cache.go [#7725][193] (pmorie)
+* Update non-GCE Salt distros to 1.6.0, fallback to ContainerVM Docker version on GCE [#7740][194] (zmerlynn)
+* Skip SaltStack install if it's already installed [#7744][195] (zmerlynn)
+* Expose pod name as a label on containers. [#7712][196] (rjnagal)
+* Log which SSH key is used in e2e SSH test [#7732][197] (mbforbes)
+* Add a central simple getting started guide with kubernetes guide. [#7649][198] (brendandburns)
+* Explicitly state the lack of support for 'Requests' for the purposes of scheduling [#7443][199] (vishh)
+* Select IPv4-only from host interfaces [#7721][200] (smarterclayton)
+* Metrics tests can't run on Mac [#7723][201] (smarterclayton)
+* Add step to API changes doc for swagger regen [#7727][202] (pmorie)
+* Add NsenterMounter mount implementation [#7703][203] (pmorie)
+* add StringSet.HasAny [#7509][204] (deads2k)
+* Add an integration test that checks for the metrics we expect to be exported from the master [#6941][205] (a-robinson)
+* Minor bash update found by shellcheck.net [#7722][206] (eparis)
+* Add --hostport to run-container. [#7536][207] (rjnagal)
+* Have rkt implement the container Runtime interface [#7659][208] (vmarmol)
+* Change the order the different versions of API are registered [#7629][209] (caesarxuchao)
+* expose: Create objects in a generic way [#7699][210] (kargakis)
+* Requeue rc if a single get/put retry on status.Replicas fails [#7643][211] (bprashanth)
+* logs for master components [#7316][212] (ArtfulCoder)
+* cloudproviders: add ovirt getting started guide [#7522][213] (simon3z)
+* Make rkt-install a oneshot. [#7671][214] (vmarmol)
+* Provide container_runtime flag to Kubelet in CoreOS. [#7665][215] (vmarmol)
+* Boilerplate speedup [#7654][216] (eparis)
+* Log host for failed pod in Density test [#7700][217] (wojtek-t)
+* Removes spurious quotation mark [#7655][218] (alindeman)
+* Add kubectl_label to custom functions in bash completion [#7694][219] (nak3)
+* Enable profiling in kube-controller [#7696][220] (wojtek-t)
+* Set vagrant test cluster default NUM_MINIONS=2 [#7690][221] (BenTheElder)
+* Add metrics to measure cache hit ratio [#7695][222] (fgrzadkowski)
+* Change IP to IP(S) in service columns for kubectl get [#7662][223] (jlowdermilk)
+* annotate required flags for bash_completions [#7076][224] (eparis)
+* (minor) Add pgrep debugging to etcd error [#7685][225] (jayunit100)
+* Fixed nil pointer issue in describe when volume is unbound [#7676][226] (markturansky)
+* Removed unnecessary closing bracket [#7691][227] (piosz)
+* Added TerminationGracePeriod field to PodSpec and grace-period flag to kubectl stop [#7432][228] (piosz)
+* Fix boilerplate in test/e2e/scale.go [#7689][229] (wojtek-t)
+* Update expiration timeout based on observed latencies [#7628][230] (bprashanth)
+* Output generated conversion functions/names [#7644][231] (liggitt)
+* Moved the Scale tests into a scale file. [#7645][232] [#7646][233] (rrati)
+* Truncate GCE load balancer names to 63 chars [#7609][234] (brendandburns)
+* Add SyncPod() and remove Kill/Run InContainer(). [#7603][235] (vmarmol)
+* Merge release 0.16 to master [#7663][236] (brendandburns)
+* Update license boilerplate for examples/rethinkdb [#7637][237] (eparis)
+* First part of improved rolling update, allow dynamic next replication controller generation. [#7268][238] (brendandburns)
+* Add license boilerplate to examples/phabricator [#7638][239] (eparis)
+* Use generic copyright holder name in license boilerplate [#7597][240] (eparis)
+* Retry incrementing quota if there is a conflict [#7633][241] (derekwaynecarr)
+* Remove GetContainers from Runtime interface [#7568][242] (yujuhong)
+* Add image-related methods to DockerManager [#7578][243] (yujuhong)
+* Remove more docker references in kubelet [#7586][244] (yujuhong)
+* Add KillContainerInPod in DockerManager [#7601][245] (yujuhong)
+* Kubelet: Add container runtime option. [#7652][246] (vmarmol)
+* bump heapster to v0.11.0 and grafana to v0.7.0 [#7626][247] (idosh)
+* Build github.com/onsi/ginkgo/ginkgo as a part of the release [#7593][248] (ixdy)
+* Do not automatically decode runtime.RawExtension [#7490][249] (smarterclayton)
+* Update changelog. [#7500][250] (brendandburns)
+* Add SyncPod() to DockerManager and use it in Kubelet [#7610][251] (vmarmol)
+* Build: Push .md5 and .sha1 files for every file we push to GCS [#7602][252] (zmerlynn)
+* Fix rolling update --image [#7540][253] (bprashanth)
+* Update license boilerplate for docs/man/md2man-all.sh [#7636][254] (eparis)
+* Include shell license boilerplate in examples/k8petstore [#7632][255] (eparis)
+* Add --cgroup_parent flag to Kubelet to set the parent cgroup for pods [#7277][256] (guenter)
+* change the current dir to the config dir [#7209][257] (you-n-g)
+* Set Weave To 0.9.0 And Update Etcd Configuration For Azure [#7158][258] (idosh)
+* Augment describe to search for matching things if it doesn't match the original resource. [#7467][259] (brendandburns)
+* Add a simple cache for objects stored in etcd. [#7559][260] (fgrzadkowski)
+* Rkt gc [#7549][261] (yifan-gu)
+* Rkt pull [#7550][262] (yifan-gu)
+* Implement Mount interface using mount(8) and umount(8) [#6400][263] (ddysher)
+* Trim Fluentd tag for Cloud Logging [#7588][264] (satnam6502)
+* GCE CoreOS cluster - set master name based on variable [#7569][265] (bakins)
+* Capitalization of KubeProxyVersion wrong in JSON [#7535][266] (smarterclayton)
+* Make nodes report their external IP rather than the master's. [#7530][267] (mbforbes)
+* Trim cluster log tags to pod name and container name [#7539][268] (satnam6502)
+* Handle conversion of boolean query parameters with a value of "false" [#7541][269] (csrwng)
+* Add image-related methods to Runtime interface. [#7532][270] (vmarmol)
+* Test whether auto-generated conversions weren't manually edited [#7560][271] (wojtek-t)
+* Mention :latest behavior for image version tag [#7484][272] (colemickens)
+* readinessProbe calls livenessProbe.Exec.Command which causes "invalid memory address or nil pointer dereference". [#7487][273] (njuicsgz)
+* Add RuntimeHooks to abstract Kubelet logic [#7520][274] (vmarmol)
+* Expose URL() on Request to allow building URLs [#7546][275] (smarterclayton)
+* Add a simple cache for objects stored in etcd [#7288][276] (fgrzadkowski)
+* Prepare for chaining autogenerated conversion methods [#7431][277] (wojtek-t)
+* Increase maxIdleConnection limit when creating etcd client in apiserver. [#7353][278] (wojtek-t)
+* Improvements to generator of conversion methods. [#7354][279] (wojtek-t)
+* Code to automatically generate conversion methods [#7107][280] (wojtek-t)
+* Support recovery for anonymous roll outs [#7407][281] (brendandburns)
+* Bump kube2sky to 1.2. Point it at https endpoint (3rd try). [#7527][282] (cjcullen)
+* cluster/gce/coreos: Add metadata-service in node.yaml [#7526][283] (yifan-gu)
+* Move ComputePodChanges to the Docker runtime [#7480][284] (vmarmol)
+* Cobra rebase [#7510][285] (eparis)
+* Adding system oom events from kubelet [#6718][286] (vishh)
+* Move Prober to its own subpackage [#7479][287] (vmarmol)
+* Fix parallel-e2e.sh to work on my macbook (bash v3.2) [#7513][288] (cjcullen)
+* Move network plugin TearDown to DockerManager [#7449][289] (vmarmol)
+* Fixes [#7498][290] - CoreOS Getting Started Guide had invalid cloud config [#7499][291] (elsonrodriguez)
+* Fix invalid character '"' after object key:value pair [#7504][292] (resouer)
+* Fixed kubelet deleting data from volumes on stop ([#7317][293]). [#7503][294] (jsafrane)
+* Fixing hooks/description to catch API fields without description tags [#7482][295] (nikhiljindal)
+* cadvisor is obsoleted so kubelet service does not require it. [#7457][296] (aveshagarwal)
+* Set the default namespace for events to be "default" [#7408][297] (vishh)
+* Fix typo in namespace conversion [#7446][298] (liggitt)
+* Convert Secret registry to use update/create strategy, allow filtering by Type [#7419][299] (liggitt)
+* Use pod namespace when looking for its GlusterFS endpoints. [#7102][300] (jsafrane)
+* Fixed name of kube-proxy path in deployment scripts. [#7427][301] (jsafrane)
+
+To download, please visit https://github.com/GoogleCloudPlatform/kubernetes/releases/tag/v0.17.0
+
+[4]: https://github.com/GoogleCloudPlatform/kubernetes/pull/8065 "Remove old salt configs"
+[5]: https://github.com/GoogleCloudPlatform/kubernetes/pull/8069 "Kubelet: minor cleanups"
+[6]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7940 "update example/walkthrough to v1beta3"
+[7]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7946 "update example/rethinkdb to v1beta3"
+[8]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7917 "verify the v1beta3 yaml files all work; merge the yaml files"
+[9]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7258 "update examples/cassandra to api v1beta3"
+[10]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7899 "update service.json in persistent-volume example to v1beta3"
+[11]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7864 "update mysql-wordpress example to use v1beta3 API"
+[12]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7848 "Update examples/meteor to use API v1beta3"
+[13]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7872 "update node-selector example to API v1beta3"
+[14]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7824 "update logging-demo to use API v1beta3; modify the way to access Elasticsearch and Kibana services"
+[15]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7619 "Convert the skydns rc to use v1beta3 and add a health check to it"
+[16]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7728 "update the hazelcast example to API version v1beta3"
+[17]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7515 "Fix YAML parsing for v1beta3 objects in the kubelet for file/http"
+[18]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7502 "Updated kubectl cluster-info to show v1beta3 addresses"
+[19]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7980 "kubelet: Fix racy kubelet tests."
+[20]: https://github.com/GoogleCloudPlatform/kubernetes/pull/8079 "kubelet/container: Move prober.ContainerCommandRunner to container."
+[21]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6127 "Kubelet: set host field in the pending pod status"
+[22]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6442 "Fix the kubelet node watch"
+[23]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6607 "Kubelet: recreate mirror pod if the static pod changes"
+[24]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7749 "Kubelet: record the timestamp correctly in the runtime cache"
+[25]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7729 "Kubelet: wait until container runtime is up"
+[26]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7674 "Kubelet: replace DockerManager with the Runtime interface"
+[27]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7301 "Kubelet: filter out terminated pods in SyncPods"
+[28]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7048 "Kubelet: parallelize cleaning up containers in unwanted pods"
+[29]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7952 "kubelet: Add container runtime option for rkt."
+[30]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7916 "kubelet/rkt: Remove build label."
+[31]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7327 "kubelet/metrics: Move instrumented_docker.go to dockertools."
+[32]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7599 "kubelet/rkt: Add GetPods() for rkt."
+[33]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7605 "kubelet/rkt: Add KillPod() and GetPodStatus() for rkt."
+[34]: https://github.com/GoogleCloudPlatform/kubernetes/pull/4755 "pkg/kubelet: Fix logging."
+[35]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6491 "kubelet: Refactor RunInContainer/ExecInContainer/PortForward."
+[36]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6609 "kubelet/DockerManager: Fix returning empty error from GetPodStatus()."
+[37]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6634 "kubelet: Move pod infra container image setting to dockertools."
+[38]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6653 "kubelet/fake_docker_client: Use self's PID instead of 42 in testing."
+[39]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6778 "kubelet/dockertool: Move Getpods() to DockerManager."
+[40]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6776 "kubelet/dockertools: Add puller interfaces in the containerManager."
+[41]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6608 "kubelet: Introduce PodInfraContainerChanged()."
+[42]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6795 "kubelet/container: Replace DockerCache with RuntimeCache."
+[43]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6844 "kubelet: Clean up computePodContainerChanges."
+[44]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7009 "kubelet: Refactor prober."
+[45]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7466 "kubelet/container: Update the runtime interface."
+[46]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7477 "kubelet: Refactor isPodRunning() in runonce.go"
+[47]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7465 "kubelet/rkt: Add basic rkt runtime routines."
+[48]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7555 "kubelet/rkt: Add podInfo."
+[49]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7488 "kubelet/container: Add GetContainerLogs to runtime interface."
+[50]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7543 "kubelet/rkt: Add routines for converting kubelet pod to rkt pod."
+[51]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7589 "kubelet/rkt: Add RunPod() for rkt."
+[52]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7553 "kubelet/rkt: Add RunInContainer()/ExecInContainer()/PortForward()."
+[53]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7613 "kubelet/container: Move ShouldContainerBeRestarted() to runtime."
+[54]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7611 "kubelet/rkt: Add SyncPod() to rkt."
+[55]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6794 "Kubelet: persist restart count of a container"
+[56]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7951 "kubelet/container: Move pty*.go to container runtime package."
+[57]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7900 "kubelet: Add container runtime option for rkt."
+[58]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7803 "kubelet/rkt: Add docker prefix to image string."
+[59]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7849 "kubelet/rkt: Inject dependencies to rkt."
+[60]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7859 "kubelet/rkt: Remove dependencies on rkt.store"
+[61]: https://github.com/GoogleCloudPlatform/kubernetes/pull/2387 "Kubelet talks securely to apiserver"
+[62]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7592 "Rename EnvVarSource.FieldPath -> FieldRef and add example"
+[63]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7741 "Add containerized option to kubelet binary"
+[64]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7948 "Ease building kubelet image"
+[65]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7854 "Remove unnecessary bind-mount from dockerized kubelet run"
+[66]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7798 "Add ability to dockerize kubelet in local cluster"
+[67]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7797 "Create docker image for kubelet"
+[68]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7343 "Security context - types, kubelet, admission"
+[69]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7743 "Kubelet: Add rkt as a runtime option"
+[70]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7746 "Fix kubelet's docker RunInContainer implementation "
+[71]: https://github.com/GoogleCloudPlatform/kubernetes/pull/8018 "AWS: Don't try to copy gce_keys in jenkins e2e job"
+[72]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7992 "AWS: Copy some new properties from config-default => config.test"
+[73]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7928 "AWS: make it possible to disable minion public ip assignment"
+[74]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7667 "update AWS CloudFormation template and cloud-configs"
+[75]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7736 "AWS: Fix variable naming that meant not all tokens were written"
+[76]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7678 "AWS: Change apiserver to listen on 443 directly, not through nginx"
+[77]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6606 "AWS: Improving getting existing VPC and subnet"
+[78]: https://github.com/GoogleCloudPlatform/kubernetes/pull/5138 "AWS EBS volume support"
+[79]: https://github.com/GoogleCloudPlatform/kubernetes/pull/8089 "Introduce an 'svc' segment for DNS search"
+[80]: https://github.com/GoogleCloudPlatform/kubernetes/pull/5707 "Adds ability to define a prefix for etcd paths"
+[81]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7973 "Add kubectl log --previous support to view last terminated container log"
+[82]: https://github.com/GoogleCloudPlatform/kubernetes/pull/8083 "Add a flag to disable legacy APIs"
+[83]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7971 "make the dockerkeyring handle multiple matching credentials"
+[84]: https://github.com/GoogleCloudPlatform/kubernetes/pull/8078 "Convert Fluentd to Cloud Logging pod specs to YAML"
+[85]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7704 "Use etcd to allocate PortalIPs instead of in-mem"
+[86]: https://github.com/GoogleCloudPlatform/kubernetes/pull/8064 "eliminate auth-path"
+[87]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7981 "Record failure reasons for image pulling"
+[88]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7869 "Rate limit replica creation"
+[89]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7995 "Upgrade to Kibana 4 for cluster logging"
+[90]: https://github.com/GoogleCloudPlatform/kubernetes/pull/8049 "Added name to kube-dns service"
+[91]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7919 "Fix validation by moving it into the resource builder."
+[92]: https://github.com/GoogleCloudPlatform/kubernetes/pull/8050 "Add cache with multiple shards to decrease lock contention"
+[93]: https://github.com/GoogleCloudPlatform/kubernetes/pull/8039 "Delete status from displayable resources"
+[94]: https://github.com/GoogleCloudPlatform/kubernetes/pull/8044 "Refactor volume interfaces to receive pod instead of ObjectReference"
+[95]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7565 "fix kube-down for provider gke"
+[96]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7786 "Service port names are required for multi-port"
+[97]: https://github.com/GoogleCloudPlatform/kubernetes/pull/8051 "Increase disk size for kubernetes master."
+[98]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7774 "expose: Load input object for increased safety"
+[99]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7896 "Improvements to conversion methods generator"
+[100]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7557 "Added displaying external IPs to kubectl cluster-info"
+[101]: https://github.com/GoogleCloudPlatform/kubernetes/pull/8037 "Add missing Errorf formatting directives"
+[102]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7567 "WIP: Add startup code to apiserver to migrate etcd keys"
+[103]: https://github.com/GoogleCloudPlatform/kubernetes/pull/8021 "Use error type from docker go-client instead of string"
+[104]: https://github.com/GoogleCloudPlatform/kubernetes/pull/8024 "Accurately get hardware cpu count in Vagrantfile."
+[105]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7921 "Stop setting a GKE specific version of the kubeconfig file"
+[106]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7950 "Make the API server deal with HEAD requests via the service proxy"
+[107]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7983 "GlusterFS Critical Bug Resolved - Removing warning in README"
+[108]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7967 "Don't use the first token `uname -n` as the hostname"
+[109]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7982 "Call kube-down in test-teardown for vagrant."
+[110]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6235 "defaults_tests: verify defaults when converting to an API object"
+[111]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7910 "Use the full hostname for mirror pod name."
+[112]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7657 "Removes RunPod in the Runtime interface"
+[113]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7533 "Clean up dockertools/manager.go and add more unit tests"
+[114]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7525 "Adapt pod killing and cleanup for generic container runtime"
+[115]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7198 "Fix pod filtering in replication controller"
+[116]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7116 "Print container statuses in `kubectl get pods`"
+[117]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6992 "Prioritize deleting the non-running pods when reducing replicas"
+[118]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6872 "Fix locking issue in pod manager"
+[119]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6655 "Limit the number of concurrent tests in integration.go"
+[120]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7931 "Fix typos in different config comments"
+[121]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7929 "Update cAdvisor dependency."
+[122]: https://github.com/GoogleCloudPlatform/kubernetes/pull/5498 "Ubuntu-distro: deprecate & merge ubuntu single node work to ubuntu cluster node stuff"
+[123]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7935 "Add control variables to Jenkins E2E script"
+[124]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7932 "Check node status as part of validate-cluster.sh."
+[125]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7821 "Add old endpoint cleanup function"
+[126]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7620 "Support recovery from in the middle of a rename."
+[127]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7715 "Update Exec and Portforward client to use pod subresource"
+[128]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7564 "Added NFS to PV structs"
+[129]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7904 "Fix environment variable error in Vagrant docs"
+[130]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7616 "Adds a simple release-note builder that scrapes the Github API for recent PRs"
+[131]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7668 "Scheduler ignores nodes that are in a bad state"
+[132]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7863 "Set GOMAXPROCS for etcd"
+[133]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7556 "Auto-generated conversion methods calling one another"
+[134]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7445 "Bring up a kubernetes cluster using coreos image as worker nodes"
+[135]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7410 "Godep: Add godep for rkt."
+[136]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7870 "Add volumeGetter to rkt."
+[137]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7897 "Update cAdvisor dependency."
+[138]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7822 "DNS: expose 53/TCP"
+[139]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7763 "Set NodeReady=False when docker is dead"
+[140]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7857 "Ignore latency metrics for events"
+[141]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7792 "SecurityContext admission clean up"
+[142]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7832 "Support manually-created and generated conversion functions"
+[143]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7833 "Add latency metrics for etcd operations"
+[144]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7885 "Update errors_test.go"
+[145]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7861 "Change signature of container runtime PullImage to allow pull w/ secret"
+[146]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7873 "Fix bug in Service documentation: incorrect location of 'selector' in JSON"
+[147]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7876 "Fix controller-manager manifest for providers that don't specify CLUSTER_IP_RANGE"
+[148]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7867 "Fix controller unittests"
+[149]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7751 "Enable GCM and GCL instead of InfluxDB on GCE"
+[150]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7862 "Remove restriction that cluster-cidr be a class-b"
+[151]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7591 "Fix OpenShift example"
+[152]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7718 "API Server - pass path name in context of create request for subresource"
+[153]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7575 "Rolling Updates: Add support for --rollback."
+[154]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7820 "Update to container-vm-v20150505 (Also updates GCE to Docker 1.6)"
+[155]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7830 "Fix metric label"
+[156]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7838 "Fix v1beta1 typos in v1beta2 conversions"
+[157]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7764 "skydns: use the etcd-2.x native syntax, enable IANA attributed ports."
+[158]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7794 "Added port 6443 to kube-proxy default IP address for api-server"
+[159]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7834 "Added client header info for authentication doc."
+[160]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7827 "Clean up safe_format_and_mount spam in the startup logs"
+[161]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7829 "Set allocate_node_cidrs to be blank by default."
+[162]: https://github.com/GoogleCloudPlatform/kubernetes/pull/5246 "Make nodecontroller configure nodes' pod IP ranges"
+[163]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7799 "Fix sync problems in #5246"
+[164]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7823 "Fix event doc link"
+[165]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7776 "Cobra update and bash completions fix"
+[166]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7675 "Fix kube2sky flakes. Fix tools.GetEtcdVersion to work with etcd > 2.0.7"
+[167]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7154 "Change kube2sky to use token-system-dns secret, point at https endpoint ..."
+[168]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7468 "replica: serialize created-by reference"
+[169]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7702 "Inject mounter into volume plugins"
+[170]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6973 "bringing CoreOS cloud-configs up-to-date (against 0.15.x and latest OS' alpha) "
+[171]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7787 "Update kubeconfig-file doc."
+[172]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7780 "Throw an API error when deleting namespace in termination"
+[173]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7773 "Fix command field PodExecOptions"
+[174]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7785 "Start ImageManager housekeeping in Run()."
+[175]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7769 "fix DeepCopy to properly support runtime.EmbeddedObject"
+[176]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7273 "fix master service endpoint system for multiple masters"
+[177]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7757 "Add genbashcomp to KUBE_TEST_TARGETS"
+[178]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7669 "Change the cloud provider TCPLoadBalancerExists function to GetTCPLoadBalancer..."
+[179]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7772 "Add containerized option to kubelet binary"
+[180]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7779 "Fix swagger spec"
+[181]: https://github.com/GoogleCloudPlatform/kubernetes/issues/7750 "Hyperkube image requires root certificates to work with cloud-providers (at least AWS)"
+[182]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7755 "FIX: Issue #7750 - Hyperkube docker image needs certificates to connect to cloud-providers"
+[183]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7752 "Add build labels to rkt"
+[184]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7672 "Check license boilerplate for python files"
+[185]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7705 "Reliable updates in rollingupdate"
+[186]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7650 "Don't exit abruptly if there aren't yet any minions right after the cluster is created."
+[187]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7742 "Make changes suggested in #7675"
+[188]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7357 "A guide to set up kubernetes multiple nodes cluster with flannel on fedora"
+[189]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7760 "Setup generators in factory"
+[190]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7737 "Reduce usage of time.After"
+[191]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7735 "Remove node status from 'componentstatuses' call."
+[192]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7614 "React to failure by growing the remaining clusters"
+[193]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7725 "Fix typo in runtime_cache.go"
+[194]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7740 "Update non-GCE Salt distros to 1.6.0, fallback to ContainerVM Docker version on GCE"
+[195]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7744 "Skip SaltStack install if it's already installed"
+[196]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7712 "Expose pod name as a label on containers."
+[197]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7732 "Log which SSH key is used in e2e SSH test"
+[198]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7649 "Add a central simple getting started guide with kubernetes guide."
+[199]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7443 "Explicitly state the lack of support for 'Requests' for the purposes of scheduling"
+[200]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7721 "Select IPv4-only from host interfaces"
+[201]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7723 "Metrics tests can't run on Mac"
+[202]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7727 "Add step to API changes doc for swagger regen"
+[203]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7703 "Add NsenterMounter mount implementation"
+[204]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7509 "add StringSet.HasAny"
+[205]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6941 "Add an integration test that checks for the metrics we expect to be exported from the master"
+[206]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7722 "Minor bash update found by shellcheck.net"
+[207]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7536 "Add --hostport to run-container."
+[208]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7659 "Have rkt implement the container Runtime interface"
+[209]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7629 "Change the order the different versions of API are registered "
+[210]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7699 "expose: Create objects in a generic way"
+[211]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7643 "Requeue rc if a single get/put retry on status.Replicas fails"
+[212]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7316 "logs for master components"
+[213]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7522 "cloudproviders: add ovirt getting started guide"
+[214]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7671 "Make rkt-install a oneshot."
+[215]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7665 "Provide container_runtime flag to Kubelet in CoreOS."
+[216]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7654 "Boilerplate speedup"
+[217]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7700 "Log host for failed pod in Density test"
+[218]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7655 "Removes spurious quotation mark"
+[219]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7694 "Add kubectl_label to custom functions in bash completion"
+[220]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7696 "Enable profiling in kube-controller"
+[221]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7690 "Set vagrant test cluster default NUM_MINIONS=2"
+[222]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7695 "Add metrics to measure cache hit ratio"
+[223]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7662 "Change IP to IP(S) in service columns for kubectl get"
+[224]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7076 "annotate required flags for bash_completions"
+[225]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7685 "(minor) Add pgrep debugging to etcd error"
+[226]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7676 "Fixed nil pointer issue in describe when volume is unbound"
+[227]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7691 "Removed unnecessary closing bracket"
+[228]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7432 "Added TerminationGracePeriod field to PodSpec and grace-period flag to kubectl stop"
+[229]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7689 "Fix boilerplate in test/e2e/scale.go"
+[230]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7628 "Update expiration timeout based on observed latencies"
+[231]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7644 "Output generated conversion functions/names"
+[232]: https://github.com/GoogleCloudPlatform/kubernetes/issues/7645 "Move the scale tests into a separate file"
+[233]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7646 "Moved the Scale tests into a scale file. #7645"
+[234]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7609 "Truncate GCE load balancer names to 63 chars"
+[235]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7603 "Add SyncPod() and remove Kill/Run InContainer()."
+[236]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7663 "Merge release 0.16 to master"
+[237]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7637 "Update license boilerplate for examples/rethinkdb"
+[238]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7268 "First part of improved rolling update, allow dynamic next replication controller generation."
+[239]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7638 "Add license boilerplate to examples/phabricator"
+[240]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7597 "Use generic copyright holder name in license boilerplate"
+[241]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7633 "Retry incrementing quota if there is a conflict"
+[242]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7568 "Remove GetContainers from Runtime interface"
+[243]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7578 "Add image-related methods to DockerManager"
+[244]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7586 "Remove more docker references in kubelet"
+[245]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7601 "Add KillContainerInPod in DockerManager"
+[246]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7652 "Kubelet: Add container runtime option."
+[247]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7626 "bump heapster to v0.11.0 and grafana to v0.7.0"
+[248]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7593 "Build github.com/onsi/ginkgo/ginkgo as a part of the release"
+[249]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7490 "Do not automatically decode runtime.RawExtension"
+[250]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7500 "Update changelog."
+[251]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7610 "Add SyncPod() to DockerManager and use it in Kubelet"
+[252]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7602 "Build: Push .md5 and .sha1 files for every file we push to GCS"
+[253]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7540 "Fix rolling update --image "
+[254]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7636 "Update license boilerplate for docs/man/md2man-all.sh"
+[255]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7632 "Include shell license boilerplate in examples/k8petstore"
+[256]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7277 "Add --cgroup_parent flag to Kubelet to set the parent cgroup for pods"
+[257]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7209 "change the current dir to the config dir"
+[258]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7158 "Set Weave To 0.9.0 And Update Etcd Configuration For Azure"
+[259]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7467 "Augment describe to search for matching things if it doesn't match the original resource."
+[260]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7559 "Add a simple cache for objects stored in etcd."
+[261]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7549 "Rkt gc"
+[262]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7550 "Rkt pull"
+[263]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6400 "Implement Mount interface using mount(8) and umount(8)"
+[264]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7588 "Trim Fluentd tag for Cloud Logging"
+[265]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7569 "GCE CoreOS cluster - set master name based on variable"
+[266]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7535 "Capitalization of KubeProxyVersion wrong in JSON"
+[267]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7530 "Make nodes report their external IP rather than the master's."
+[268]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7539 "Trim cluster log tags to pod name and container name"
+[269]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7541 "Handle conversion of boolean query parameters with a value of 'false'"
+[270]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7532 "Add image-related methods to Runtime interface."
+[271]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7560 "Test whether auto-generated conversions weren't manually edited"
+[272]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7484 "Mention :latest behavior for image version tag"
+[273]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7487 "readinessProbe calls livenessProbe.Exec.Command which causes 'invalid memory address or nil pointer dereference'."
+[274]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7520 "Add RuntimeHooks to abstract Kubelet logic"
+[275]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7546 "Expose URL() on Request to allow building URLs"
+[276]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7288 "Add a simple cache for objects stored in etcd"
+[277]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7431 "Prepare for chaining autogenerated conversion methods "
+[278]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7353 "Increase maxIdleConnection limit when creating etcd client in apiserver."
+[279]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7354 "Improvements to generator of conversion methods."
+[280]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7107 "Code to automatically generate conversion methods"
+[281]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7407 "Support recovery for anonymous roll outs"
+[282]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7527 "Bump kube2sky to 1.2. Point it at https endpoint (3rd try)."
+[283]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7526 "cluster/gce/coreos: Add metadata-service in node.yaml"
+[284]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7480 "Move ComputePodChanges to the Docker runtime"
+[285]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7510 "Cobra rebase"
+[286]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6718 "Adding system oom events from kubelet"
+[287]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7479 "Move Prober to its own subpackage"
+[288]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7513 "Fix parallel-e2e.sh to work on my macbook (bash v3.2)"
+[289]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7449 "Move network plugin TearDown to DockerManager"
+[290]: https://github.com/GoogleCloudPlatform/kubernetes/issues/7498 "CoreOS Getting Started Guide not working"
+[291]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7499 "Fixes #7498 - CoreOS Getting Started Guide had invalid cloud config"
+[292]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7504 "Fix invalid character '"' after object key:value pair"
+[293]: https://github.com/GoogleCloudPlatform/kubernetes/issues/7317 "GlusterFS Volume Plugin deletes the contents of the mounted volume upon Pod deletion"
+[294]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7503 "Fixed kubelet deleting data from volumes on stop (#7317)."
+[295]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7482 "Fixing hooks/description to catch API fields without description tags"
+[296]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7457 "cadvisor is obsoleted so kubelet service does not require it."
+[297]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7408 "Set the default namespace for events to be "default""
+[298]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7446 "Fix typo in namespace conversion"
+[299]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7419 "Convert Secret registry to use update/create strategy, allow filtering by Type"
+[300]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7102 "Use pod namespace when looking for its GlusterFS endpoints."
+[301]: https://github.com/GoogleCloudPlatform/kubernetes/pull/7427 "Fixed name of kube-proxy path in deployment scripts."
diff --git a/blog/_posts/2015-05-00-Resource-Usage-Monitoring-Kubernetes.md b/blog/_posts/2015-05-00-Resource-Usage-Monitoring-Kubernetes.md
new file mode 100644
index 00000000000..b048d234ccd
--- /dev/null
+++ b/blog/_posts/2015-05-00-Resource-Usage-Monitoring-Kubernetes.md
@@ -0,0 +1,104 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: Resource Usage Monitoring in Kubernetes
+date: Tuesday, May 12, 2015
+pagination:
+ enabled: true
+---
+
+Understanding how an application behaves when deployed is crucial to scaling the application and providing a reliable service. In a Kubernetes cluster, application performance can be examined at many different levels: containers, [pods](http://kubernetes.io/docs/user-guide/pods), [services](http://kubernetes.io/docs/user-guide/services), and whole clusters. As part of Kubernetes we want to provide users with detailed resource usage information about their running applications at all these levels. This will give users deep insights into how their applications are performing and where possible application bottlenecks may be found. In comes [Heapster](https://github.com/kubernetes/heapster), a project meant to provide a base monitoring platform on Kubernetes.
+
+
+**Overview**
+
+
+Heapster is a cluster-wide aggregator of monitoring and event data. It currently supports Kubernetes natively and works on all Kubernetes setups. Heapster runs as a pod in the cluster, similar to how any Kubernetes application would run. The Heapster pod discovers all nodes in the cluster and queries usage information from the nodes’ [Kubelets](https://github.com/kubernetes/kubernetes/blob/master/DESIGN.md#kubelet), the on-machine Kubernetes agent. The Kubelet itself fetches the data from [cAdvisor](https://github.com/google/cadvisor). Heapster groups the information by pod along with the relevant labels. This data is then pushed to a configurable backend for storage and visualization. Currently supported backends include [InfluxDB](http://influxdb.com/) (with [Grafana](http://grafana.org/) for visualization), [Google Cloud Monitoring](https://cloud.google.com/monitoring/) and many others, described in more detail here. The overall architecture of the service can be seen below:
+
+
+![Heapster monitoring architecture](https://2.bp.blogspot.com/-6Bu15356Zqk/V4mGINP8eOI/AAAAAAAAAmk/-RwvkJUt4rY2cmjqYFBmRo25FQQPRb27ACEw/s1600/monitoring-architecture.png)
+
+Let’s look at some of the other components in more detail.
+
+
+
+**cAdvisor**
+
+
+
+cAdvisor is an open source container resource usage and performance analysis agent. It is purpose-built for containers and supports Docker containers natively. In Kubernetes, cAdvisor is integrated into the Kubelet binary. cAdvisor auto-discovers all containers on the machine and collects CPU, memory, filesystem, and network usage statistics. cAdvisor also provides the overall machine usage by analyzing the ‘root’ container on the machine.
+
+
+
+On most Kubernetes clusters, cAdvisor exposes a simple UI for on-machine containers on port 4194. Here is a snapshot of part of cAdvisor’s UI that shows the overall machine usage:
+
+
+![cAdvisor UI showing overall machine usage](https://3.bp.blogspot.com/-V5KAfomW7Cg/V4mGH6OTKSI/AAAAAAAAAmo/EZHcG0afrs0606eTDMCryT6j6SoNzu3PgCEw/s1600/cadvisor.png)
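+
+Beyond the UI, cAdvisor serves a REST API on the same port. As a hedged sketch (the port and API version prefix vary by setup; 4194 and v1.3 match common deployments of this era):
+
+```
+# Machine specs and capacity as seen by cAdvisor on one node
+curl http://<node-ip>:4194/api/v1.3/machine
+
+# Recent usage samples for the containers on that node
+curl http://<node-ip>:4194/api/v1.3/containers/
+```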
+
+**Kubelet**
+
+The Kubelet acts as a bridge between the Kubernetes master and the nodes. It manages the pods and containers running on a machine. Kubelet translates each pod into its constituent containers and fetches individual container usage statistics from cAdvisor. It then exposes the aggregated pod resource usage statistics via a REST API.
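+
+As an illustration (not part of the original post), the Kubelet of this era served those aggregated stats on a read-only port, commonly 10255; exact paths and ports changed across releases:
+
+```
+# Hedged example: node-level and pod-level stats from the Kubelet
+curl http://<node-ip>:10255/stats/
+```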
+
+
+
+**Storage Backends**
+
+
+
+**InfluxDB and Grafana**
+
+
+
+A Grafana setup with InfluxDB is a very popular combination for monitoring in the open source world. InfluxDB exposes an easy-to-use API to write and fetch time series data. Heapster is set up to use this storage backend by default on most Kubernetes clusters. A detailed setup guide can be found [here](https://github.com/kubernetes/heapster/blob/master/docs/influxdb.md). InfluxDB and Grafana run in pods, and the InfluxDB pod exposes itself as a Kubernetes service, which is how Heapster discovers it.
+
+
+
+The Grafana container serves Grafana’s UI, which provides an easy-to-configure dashboard interface. The default Grafana setup for Kubernetes contains an example dashboard that monitors resource usage of the cluster and the pods inside of it. This dashboard can easily be customized and expanded. Take a look at the storage schema for InfluxDB [here](https://github.com/kubernetes/heapster/blob/master/docs/storage-schema.md#metrics).
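+
+You can also query InfluxDB directly. A hedged sketch using the InfluxDB 0.8-era query language (the series name below is hypothetical; real metric names come from the storage schema linked above and vary by Heapster version):
+
+```
+# List the series Heapster has written
+list series
+
+# Sample a few points from one (hypothetical) metric series
+select * from "memory/usage_bytes_gauge" limit 5
+```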
+
+
+
+Here is a video showing how to monitor a Kubernetes cluster using Heapster, InfluxDB and Grafana:
+
+
+[Watch the demo on YouTube](https://www.youtube.com/watch?v=SZgqjMrxo3g)
+
+
+
+
+Here is a snapshot of the default Kubernetes Grafana dashboard that shows the CPU and Memory usage of the entire cluster, individual pods and containers:
+
+
+
+![Default Kubernetes Grafana dashboard](https://1.bp.blogspot.com/-lHMeU_4UnAk/V4mGHyrWkBI/AAAAAAAAAms/SvnncgJ7ieAduBqQzpI86oaboIkAKEpEQCEw/s1600/influx.png)
+
+
+
+
+
+**Google Cloud Monitoring**
+
+
+
+Google Cloud Monitoring is a hosted monitoring service that allows you to visualize and alert on important metrics in your application. Heapster can be set up to automatically push all collected metrics to Google Cloud Monitoring. These metrics are then available in the [Cloud Monitoring Console](https://app.google.stackdriver.com/). This storage backend is the easiest to set up and maintain. The monitoring console allows you to easily create and customize dashboards using the exported data.
+
+
+
+Here is a video showing how to set up and run a Google Cloud Monitoring backed Heapster:
+
+[Watch the setup video on YouTube](https://www.youtube.com/watch?v=xSMNR2fcoLs)
+
+Here is a snapshot of a Google Cloud Monitoring dashboard showing cluster-wide resource usage.
+
+
+
+![Google Cloud Monitoring dashboard](https://2.bp.blogspot.com/-F2j3kYn3IoA/V4mGH3M-0gI/AAAAAAAAAmg/aoml93zPeKsKbTX1tN5sTtRRTw7dAKsxwCEw/s1600/gcm.png)
+
+
+
+**Try it out!**
+
+
+
+Now that you’ve learned a bit about Heapster, feel free to try it out on your own clusters! The [Heapster repository](https://github.com/kubernetes/heapster) is available on GitHub. It contains detailed instructions to setup Heapster and its storage backends. Heapster runs by default on most Kubernetes clusters, so you may already have it! Feedback is always welcome. Please let us know if you run into any issues via the troubleshooting channels.
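+
+A quick, hedged way to check (pod names and namespaces differ across setups):
+
+```
+# Look for Heapster among the running pods; check other namespaces too
+# if your cluster schedules monitoring pods outside the default one
+kubectl get pods | grep -i heapster
+```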
+
+
+
+_-- Vishnu Kannan and Victor Marmol, Google Software Engineers_
diff --git a/blog/_posts/2015-05-00-Weekly-Kubernetes-Community-Hangout.md b/blog/_posts/2015-05-00-Weekly-Kubernetes-Community-Hangout.md
new file mode 100644
index 00000000000..4115759896d
--- /dev/null
+++ b/blog/_posts/2015-05-00-Weekly-Kubernetes-Community-Hangout.md
@@ -0,0 +1,48 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Weekly Kubernetes Community Hangout Notes - May 1 2015 "
+date: Monday, May 11, 2015
+pagination:
+ enabled: true
+---
+Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
+
+* Simple rolling update - Brendan
+
+ * Rolling update = nice example of why RCs and Pods are good.
+
+ * ...pause… (Brendan needs demo recovery tips from Kelsey)
+
+ * Rolling update has recovery: Cancel update and restart, update continues from where it stopped.
+
+ * New controller gets name of old controller, so appearance is pure update.
+
+ * Can also name versions in update (won't do rename at the end).
+* Rocket demo - CoreOS folks
+
+  * Two major differences between Rocket & Docker: Rocket is daemonless & pod-centric.
+
+ * Rocket has AppContainer format as native, but also supports docker image format.
+
+ * Can run AppContainer and docker containers in same pod.
+
+ * Changes are close to merged.
+* demo service accounts and secrets being added to pods - Jordan
+
+ * Problem: It's hard to get a token to talk to the API.
+
+ * New API object: "ServiceAccount"
+
+ * ServiceAccount is namespaced, controller makes sure that at least 1 default service account exists in a namespace.
+
+ * Typed secret "ServiceAccountToken", controller makes sure there is at least 1 default token.
+
+ * DEMO
+
+    * Can create new service account with ServiceAccountToken. Controller will create token for it.
+
+  * Can create a pod with a service account; pods will have the service account secret mounted at /var/run/secrets/kubernetes.io/… (see the sketch below)
+* Kubelet running in a container - Paul
+
+ * Kubelet successfully ran pod w/ mounted secret.
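+
+To make the service account demo above concrete, here is a hedged sketch (names are illustrative, and the pod field for selecting an account has been spelled both serviceAccount and serviceAccountName across releases):
+
+```
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: build-robot        # illustrative name
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: uses-build-robot
+spec:
+  serviceAccountName: build-robot    # its token secret is mounted automatically
+  containers:
+  - name: main
+    image: busybox
+    command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount && sleep 3600"]
+```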
diff --git a/blog/_posts/2015-05-00-Weekly-Kubernetes-Community-Hangout_18.md b/blog/_posts/2015-05-00-Weekly-Kubernetes-Community-Hangout_18.md
new file mode 100644
index 00000000000..23ecc71f1ce
--- /dev/null
+++ b/blog/_posts/2015-05-00-Weekly-Kubernetes-Community-Hangout_18.md
@@ -0,0 +1,83 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Weekly Kubernetes Community Hangout Notes - May 15 2015 "
+date: Monday, May 18, 2015
+pagination:
+ enabled: true
+---
+Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
+
+
+* [v1 API][1] - what's in, what's out
+ * We're trying to fix critical issues we discover with v1beta3
+ * Would like to make a number of minor cleanups that will be expensive to do later
+ * defaulting replication controller spec default to 1
+ * deduplicating security context
+ * change id field to name
+ * rename host
+ * inconsistent times
+ * typo in container states terminated (termination vs. terminated)
+ * flatten structure (requested by heavy API user)
+ * pod templates - could be added after V1, field is not implemented, remove template ref field
+ * in general remove any fields not implemented (can be added later)
+ * if we want to change any of the identifier validation rules, should do it now
+ * recently changed label validation rules to be more precise
+ * Bigger changes
+ * generalized label selectors
+ * service - change the fields in a way that we can add features in a forward compatible manner if possible
+ * public IPs - what to do from a security perspective
+ * Support aci format - there is an image field - add properties to signify the image, or include it in a string
+ * inconsistent on object use / cross reference - needs design discussion
+ * Things to do later
+ * volume source cleanup
+ * multiple API prefixes
+ * watch changes - watch client is not notified of progress
+* A few other proposals
+ * swagger spec fixes - ongoing
+ * additional field selectors - additive, backward compatible
+ * additional status - additive, backward compatible
+ * elimination of phase - won't make it for v1
+* Service discussion - Public IPs
+ * with public ips as it exists we can't go to v1
+ * Tim has been developing a mitigation if we can't get Justin's overhaul in (but hopefully we will)
+ * Justin's fix will describe public IPs in a much better way
+ * The general problem is it's too flexible and you can do things that are scary, the mitigation is to restrict public ip usage to specific use cases -- validated public ips would be copied to status, which is what kube-proxy would use
+ * public ips used for -
+ * binding to nodes / node
+ * request a specific load balancer IP (GCE only)
+ * emulate multi-port services -- now we support multi-port services, so no longer necessary
+ * This is a large change, 70% code complete, Tim & Justin working together, parallel code review and updates, need to reconcile and test
+ * Do we want to allow people to request host ports - is there any value in letting people ask for a public port? or should we assign you one?
+ * Tim: we should assign one
+ * discussion of what to do with status - if users set to empty then probably their intention
+ * general answer to the pattern is binding
+ * post v1: if we can make portal ip a non-user settable field, then we need to figure out the transition plan. need to have a fixed ip for dns.
+ * we should be able to just randomly assign services a new port and everything should adjust, but this is not feasible for v1
+ * next iteration of the proposal: PR is being iterated on, testing over the weekend, so PR hopefully ready early next week - gonna be a doozie!
+* API transition
+  * actively removing all dependencies on v1beta1 and v1beta2, announced they're going away
+ * working on a script that will touch everything in the system and will force everything to flip to v1beta3
+ * a release with both APIs supported and with this script can make sure clusters are moved over and we can move the API
+ * Should be gone by 0.19
+ * Help is welcome, especially for trivial things and will try to get as much done as possible in next few weeks
+ * Release candidate targeting mid june
+ * The new kubectl will not work for old APIs, will be a problem for GKE for clusters pinned to old version. Will be a problem for k8s users as well if they update kubectl
+ * Since there's no way to upgrade a GKE cluster, users are going to have to tear down and upgrade their cluster
+ * we're going to stop testing v1beta1 very soon, trying to streamline the testing paths in our CI pipelines
+* Did we decide we are not going to do namespace autoprovisioning?
+ * Brian would like to turn it off - no objections
+  * Documentation should include creating namespaces
+ * Would like to impose a default CPU for the default namespace
+ * would cap the number of pods, would reduce the resource exhaustion issue
+ * would eliminate need to explicitly cap the number of pods on a node due to IP exhaustion
+ * could add resources as arguments to the porcelain commands
+ * kubectl run is a simplified command, but it could include some common things (image, command, ports). but could add resources
+* Kubernetes 1.0 Launch Event
+ * Save the date: July 21st in Portland, OR - a part of OSCON
+ * Blog posts, whitepapers, etc. welcome to be published
+ * Event will be live streamed, mostly demos & customer talks, keynote
+ * Big launch party in the evening
+ * Kit to send more info in next couple weeks
+
+[1]: https://github.com/GoogleCloudPlatform/kubernetes/issues/7018
diff --git a/blog/_posts/2015-06-00-Cluster-Level-Logging-With-Kubernetes.md b/blog/_posts/2015-06-00-Cluster-Level-Logging-With-Kubernetes.md
new file mode 100644
index 00000000000..685bb77f38d
--- /dev/null
+++ b/blog/_posts/2015-06-00-Cluster-Level-Logging-With-Kubernetes.md
@@ -0,0 +1,316 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Cluster Level Logging with Kubernetes "
+date: Thursday, June 11, 2015
+pagination:
+ enabled: true
+---
+
+
+A Kubernetes cluster will typically be humming along running many system and application pods. How does the system administrator collect, manage and query the logs of the system pods? How does a user query the logs of their application, which is composed of many pods that may be restarted or automatically generated by the Kubernetes system? These questions are addressed by the Kubernetes cluster level logging services.
+
+
+Cluster level logging for Kubernetes allows us to collect logs that persist beyond the lifetime of the pod’s containers, the pod itself, or even the cluster. In this article we assume that a Kubernetes cluster has been created with cluster level logging support for sending logs to [Google Cloud Logging](https://cloud.google.com/logging/docs/). This is an option when creating a [Google Container Engine](https://cloud.google.com/container-engine/) (GKE) cluster, and is enabled by default for the open source [Google Compute Engine](https://cloud.google.com/compute/) (GCE) Kubernetes distribution. After a cluster has been created you will have a collection of system [pods](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md) running that support monitoring, logging and DNS resolution for names of Kubernetes services:
+
+```
+$ kubectl get pods
+```
+```
+NAME                                           READY     REASON    RESTARTS   AGE
+fluentd-cloud-logging-kubernetes-minion-0f64   1/1       Running   0          32m
+fluentd-cloud-logging-kubernetes-minion-27gf   1/1       Running   0          32m
+fluentd-cloud-logging-kubernetes-minion-pk22   1/1       Running   0          31m
+fluentd-cloud-logging-kubernetes-minion-20ej   1/1       Running   0          31m
+kube-dns-v3-pk22                               3/3       Running   0          32m
+monitoring-heapster-v1-20ej                    0/1       Running   9          32m
+```
+Here is the same information in a picture which shows how the pods might be placed on specific nodes.
+
+
+![Cluster node and pod layout](https://1.bp.blogspot.com/-FSXnrHLDMJs/Vxfzx2rsreI/AAAAAAAAAbk/PaDTpksKEZk4e8YQff5-JhGPoEpgyWaHgCLcB/s1600/cloud-logging.png)
+
+
+
+
+Here is a close up of what is running on each node.
+
+
+![Node kubernetes-minion-0f64 detail](https://4.bp.blogspot.com/-T7kPtjq8O9A/Vxfz6k7XogI/AAAAAAAAAbo/-59dO6F58sERDOQGJ7872ex_KkEKFpArwCLcB/s1600/0f64.png)
+
+
+
+![Node kubernetes-minion-27gf detail](https://3.bp.blogspot.com/-5VRLexsSJwA/Vxf0F0ccVDI/AAAAAAAAAbs/rh4KGFc95-cIdrTxAujYH2LMrCQ8vrdzQCLcB/s1600/27gf.png)
+
+
+
+![Node kubernetes-minion-pk22 detail](https://4.bp.blogspot.com/-UXOxauNy8FQ/Vxf0SaGujNI/AAAAAAAAAb0/Pnf6e_iiUfoKkooGyrF3Gmd8wh0vPrteQCLcB/s1600/pk22.png)
+
+![Node kubernetes-minion-20ej detail](https://2.bp.blogspot.com/-UgpwCx4BNwQ/Vxf0Wc8-HwI/AAAAAAAAAb4/g3D1bE74FQA2k9uwc9ZbZuB1N7MTU7swgCLcB/s1600/20ej.png)
+
+
+
+
+The first diagram shows four nodes created on a GCE cluster with the name of each VM node on a purple background. The internal and public IPs of each node are shown on gray boxes and the pods running in each node are shown in green boxes. Each pod box shows the name of the pod and the namespace it runs in, the IP address of the pod and the images which are run as part of the pod’s execution. Here we see that every node is running a fluentd-cloud-logging pod which is collecting the log output of the containers running on the same node and sending them to Google Cloud Logging. A pod which provides a [cluster DNS service](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/dns.md) runs on one of the nodes and a pod which provides monitoring support runs on another node.
+
+
+To help explain how cluster level logging works, let’s start off with a synthetic log generator pod specification [counter-pod.yaml](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/blog-logging/counter-pod.yaml):
+
+
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: counter
+spec:
+  containers:
+  - name: count
+    image: ubuntu:14.04
+    args: [bash, -c,
+           'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
+```
+
+
+This pod specification has one container that runs a bash script when the container starts. The script simply writes out the value of a counter and the date once per second, and runs indefinitely. Let’s create the pod.
+
+```
+$ kubectl create -f counter-pod.yaml
+pods/counter
+```
+We can observe the running pod:
+```
+$ kubectl get pods
+```
+```
+NAME                                           READY     REASON    RESTARTS   AGE
+counter                                        1/1       Running   0          5m
+fluentd-cloud-logging-kubernetes-minion-0f64   1/1       Running   0          55m
+fluentd-cloud-logging-kubernetes-minion-27gf   1/1       Running   0          55m
+fluentd-cloud-logging-kubernetes-minion-pk22   1/1       Running   0          55m
+fluentd-cloud-logging-kubernetes-minion-20ej   1/1       Running   0          55m
+kube-dns-v3-pk22                               3/3       Running   0          55m
+monitoring-heapster-v1-20ej                    0/1       Running   9          56m
+```
+
+
+This step may take a few minutes while the ubuntu:14.04 image is downloaded; during that time the pod status will be shown as Pending.
+
+
+One of the nodes is now running the counter pod:
+
+
+![Node kubernetes-minion-27gf running the counter pod](https://4.bp.blogspot.com/-BI3zOVlrHwA/Vxf0KwcqtCI/AAAAAAAAAbw/vzv8X8vQrso9Iycx4qQHuOslE8So7smLgCLcB/s1600/27gf-counter.png)
+
+
+
+
+When the pod status changes to Running we can use the kubectl logs command to view the output of this counter pod.
+
+```
+$ kubectl logs counter
+0: Tue Jun 2 21:37:31 UTC 2015
+1: Tue Jun 2 21:37:32 UTC 2015
+2: Tue Jun 2 21:37:33 UTC 2015
+3: Tue Jun 2 21:37:34 UTC 2015
+4: Tue Jun 2 21:37:35 UTC 2015
+5: Tue Jun 2 21:37:36 UTC 2015
+```
+
+
+This command fetches the log text from the Docker log file for the image that is running in this container. We can connect to the running container and observe the running counter bash script.
+
+``` bash
+$ kubectl exec -i counter bash
+
+ps aux
+USER       PID %CPU %MEM    VSZ   RSS TTY   STAT START   TIME COMMAND
+root         1  0.0  0.0  17976  2888 ?     Ss   00:02   0:00 bash -c for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done
+root       468  0.0  0.0  17968  2904 ?     Ss   00:05   0:00 bash
+root       479  0.0  0.0   4348   812 ?     S    00:05   0:00 sleep 1
+root       480  0.0  0.0  15572  2212 ?     R    00:05   0:00 ps aux
+```
+
+What happens if for any reason the container in this pod is killed off and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines for the newly started container? Or will we lose the log lines from the original container’s execution and only see the log lines for the new container? Let’s find out. First let’s stop the currently running counter.
+
+```
+$ kubectl stop pod counter
+pods/counter
+```
+
+Now let’s restart the counter.
+
+```
+$ kubectl create -f counter-pod.yaml
+pods/counter
+```
+
+Let’s wait for the container to restart and get the log lines again.
+
+```
+$ kubectl logs counter
+0: Tue Jun 2 21:51:40 UTC 2015
+1: Tue Jun 2 21:51:41 UTC 2015
+2: Tue Jun 2 21:51:42 UTC 2015
+3: Tue Jun 2 21:51:43 UTC 2015
+4: Tue Jun 2 21:51:44 UTC 2015
+5: Tue Jun 2 21:51:45 UTC 2015
+6: Tue Jun 2 21:51:46 UTC 2015
+7: Tue Jun 2 21:51:47 UTC 2015
+8: Tue Jun 2 21:51:48 UTC 2015
+```
+Oh no! We’ve lost the log lines from the first invocation of the container in this pod! Ideally, we want to preserve all the log lines from each invocation of each container in the pod. Furthermore, even if the pod is restarted we would still like to preserve all the log lines that were ever emitted by the containers in the pod. But don’t fear: this is the functionality provided by cluster level logging in Kubernetes. When a cluster is created, the standard output and standard error output of each container can be ingested by a [Fluentd](http://www.fluentd.org/) agent running on each node into either [Google Cloud Logging](https://cloud.google.com/logging/docs/) or Elasticsearch, where they can be viewed with Kibana. This blog article focuses on Google Cloud Logging.
+
+
+
+When a Kubernetes cluster is created with logging to Google Cloud Logging enabled, the system creates a pod called fluentd-cloud-logging on each node of the cluster to collect Docker container logs. These pods were shown at the start of this blog article in the response to the first get pods command.
+
+
+This log collection pod has a specification which looks something like this [fluentd-gcp.yaml](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml):
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: fluentd-cloud-logging
+spec:
+  containers:
+  - name: fluentd-cloud-logging
+    image: gcr.io/google_containers/fluentd-gcp:1.6
+    env:
+    - name: FLUENTD_ARGS
+      value: -qq
+    volumeMounts:
+    - name: containers
+      mountPath: /var/lib/docker/containers
+  volumes:
+  - name: containers
+    hostPath:
+      path: /var/lib/docker/containers
+```
+
+This pod specification maps the directory on the host containing the Docker log files, /var/lib/docker/containers, to a directory inside the container which has the same path. The pod runs one image, gcr.io/google\_containers/fluentd-gcp:1.6, which is configured to collect the Docker log files from the logs directory and ingest them into Google Cloud Logging. One instance of this pod runs on each node of the cluster. Kubernetes will notice if this pod fails and automatically restart it.
+
+
+We can click on the Logs item under the Monitoring section of the Google Developer Console and select the logs for the counter container, which will be called kubernetes.counter\_default\_count. This identifies the name of the pod (counter), the namespace (default) and the name of the container (count) for which the log collection occurred. Using this name we can select just the logs for our counter container from the drop down menu:
+
+
+
+_(image-counter-new-logs.png)_
+
+When we view the logs in the Developer Console we observe the logs for both invocations of the container.
+
+_(image-screenshot-2015-06-02)_
+
+
+Note the first container counted to 108 and then it was terminated. When the next container image restarted the counting process resumed from 0. Similarly if we deleted the pod and restarted it we would capture the logs for all instances of the containers in the pod whenever the pod was running.
+
+
+
+Logs ingested into Google Cloud Logging may be exported to various other destinations including [Google Cloud Storage](https://cloud.google.com/storage/) buckets and [BigQuery](https://cloud.google.com/bigquery/). Use the Exports tab in the Cloud Logging console to specify where logs should be streamed to (or follow this link to the [settings tab](https://console.cloud.google.com/project/_/logs/settings)).
+
+
+
+We could query the ingested logs from BigQuery using a SQL query that reports the counter log lines, newest first:
+
+
+
+`SELECT metadata.timestamp, structPayload.log FROM [mylogs.kubernetes_counter_default_count_20150611] ORDER BY metadata.timestamp DESC`
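+
+Assuming the bq command-line tool is installed and authenticated, the same (legacy SQL) query can be run from a shell:
+
+```
+bq query "SELECT metadata.timestamp, structPayload.log FROM [mylogs.kubernetes_counter_default_count_20150611] ORDER BY metadata.timestamp DESC"
+```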
+
+
+
+Here is some sample output:
+
+
+**_(image-bigquery-log-new.png)_**
+
+
+We could also fetch the logs from Google Cloud Storage buckets to our desktop or laptop and then search them locally. The following command fetches logs for the counter pod running in a cluster which is itself in a GCE project called myproject. Only logs for the date 2015-06-11 are fetched.
+
+
+```
+$ gsutil -m cp -r gs://myproject/kubernetes.counter_default_count/2015/06/11 .
+```
+Now we can run queries over the ingested logs. The example below uses the [jq](http://stedolan.github.io/jq/) program to extract just the log lines.
+
+
+```
+$ cat 21:00:00_21:59:59_S0.json | jq '.structPayload.log'
+
+"0: Thu Jun 11 21:39:38 UTC 2015\n"
+
+"1: Thu Jun 11 21:39:39 UTC 2015\n"
+
+"2: Thu Jun 11 21:39:40 UTC 2015\n"
+
+"3: Thu Jun 11 21:39:41 UTC 2015\n"
+
+"4: Thu Jun 11 21:39:42 UTC 2015\n"
+
+"5: Thu Jun 11 21:39:43 UTC 2015\n"
+
+"6: Thu Jun 11 21:39:44 UTC 2015\n"
+
+"7: Thu Jun 11 21:39:45 UTC 2015\n"
+```
+
+This article has touched briefly on the underlying mechanisms that support gathering cluster level logs on a Kubernetes deployment. The approach here only works for gathering the standard output and standard error output of the processes running in the pod’s containers. To gather other logs that are stored in files one can use a sidecar container to gather the required files as described at the page [Collecting log files within containers with Fluentd and sending them to the Google Cloud Logging service](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/contrib/logging/fluentd-sidecar-gcp).
diff --git a/blog/_posts/2015-06-00-Slides-Cluster-Management-With.md b/blog/_posts/2015-06-00-Slides-Cluster-Management-With.md
new file mode 100644
index 00000000000..bb92c5db6e8
--- /dev/null
+++ b/blog/_posts/2015-06-00-Slides-Cluster-Management-With.md
@@ -0,0 +1,11 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Slides: Cluster Management with Kubernetes, talk given at the University of Edinburgh "
+date: Friday, June 26, 2015
+pagination:
+ enabled: true
+---
+On Friday 5 June 2015 I gave a talk called [Cluster Management with Kubernetes](https://docs.google.com/presentation/d/1H4ywDb4vAJeg8KEjpYfhNqFSig0Q8e_X5I36kM9S6q0/pub?start=false&loop=false&delayms=3000) to a general audience at the University of Edinburgh. The talk includes an example of a music store system with a Kibana front end UI and an Elasticsearch based back end, which helps make concepts like pods, replication controllers and services concrete.
+
+[Cluster Management with Kubernetes](https://docs.google.com/presentation/d/1H4ywDb4vAJeg8KEjpYfhNqFSig0Q8e_X5I36kM9S6q0/pub?start=false&loop=false&delayms=3000).
diff --git a/blog/_posts/2015-06-00-The-Distributed-System-Toolkit-Patterns.md b/blog/_posts/2015-06-00-The-Distributed-System-Toolkit-Patterns.md
new file mode 100644
index 00000000000..a2eea701af0
--- /dev/null
+++ b/blog/_posts/2015-06-00-The-Distributed-System-Toolkit-Patterns.md
@@ -0,0 +1,52 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " The Distributed System ToolKit: Patterns for Composite Containers "
+date: Monday, June 29, 2015
+pagination:
+ enabled: true
+---
+Having had the privilege of presenting some ideas from Kubernetes at DockerCon 2015, I thought I would make a blog post to share some of these ideas for those of you who couldn’t be there.
+
+Over the past two years containers have become an increasingly popular way to package and deploy code. Container images solve many real-world problems with existing packaging and deployment tools, but in addition to these significant benefits, containers offer us an opportunity to fundamentally re-think the way we build distributed applications. Just as service oriented architectures (SOA) encouraged the decomposition of applications into modular, focused services, containers should encourage the further decomposition of these services into closely cooperating modular containers. By virtue of establishing a boundary, containers enable users to build their services using modular, reusable components, and this in turn leads to services that are more reliable, more scalable and faster to build than applications built from monolithic containers.
+
+In many ways the switch from VMs to containers is like the switch from monolithic programs of the 1970s and early 80s to modular object-oriented programs of the late 1980s and onward. The abstraction layer provided by the container image has a great deal in common with the abstraction boundary of the class in object-oriented programming, and it allows the same opportunities to improve developer productivity and application quality. Just like the right way to code is the separation of concerns into modular objects, the right way to package applications in containers is the separation of concerns into modular containers. Fundamentally this means breaking up not just the overall application, but also the pieces within any one server into multiple modular containers that are easy to parameterize and re-use. In this way, just like the standard libraries that are ubiquitous in modern languages, most application developers can compose together modular containers that are written by others, and build their applications more quickly and with higher quality components.
+
+The benefits of thinking in terms of modular containers are enormous. In particular, modular containers provide the following:
+
+- Speed application development, since containers can be re-used between teams and even larger communities
+- Codify expert knowledge, since everyone collaborates on a single containerized implementation that reflects best-practices rather than a myriad of different home-grown containers with roughly the same functionality
+- Enable agile teams, since the container boundary is a natural boundary and contract for team responsibilities
+- Provide separation of concerns and focus on specific functionality that reduces spaghetti dependencies and un-testable components
+
+Building an application from modular containers means thinking about symbiotic groups of containers that cooperate to provide a service, not one container per service. In Kubernetes, the embodiment of this modular container service is a Pod. A Pod is a group of containers that share resources like file systems, kernel namespaces and an IP address. The Pod is the atomic unit of scheduling in a Kubernetes cluster, precisely because the symbiotic nature of the containers in the Pod requires that they be co-scheduled onto the same machine, and the only way to reliably achieve this is by making container groups atomic scheduling units.
+
+
+When you start thinking in terms of Pods, there are naturally some general patterns of modular application development that recur multiple times. I’m confident that as we move forward in the development of Kubernetes more of these patterns will be identified, but here are three that we see commonly:
+
+## Example #1: Sidecar containers
+
+Sidecar containers extend and enhance the "main" container; they take existing containers and make them better. As an example, consider a container that runs the Nginx web server. Add a different container that syncs the file system with a git repository, share the file system between the containers and you have built Git push-to-deploy. But you’ve done it in a modular manner where the git synchronizer can be built by a different team, and can be reused across many different web servers (Apache, Python, Tomcat, etc). Because of this modularity, you only have to write and test your git synchronizer once and reuse it across numerous apps. And if someone else writes it, you don’t even need to do that.
+
+![Example #1: Sidecar containers](https://3.bp.blogspot.com/-IVsNKDqS0jE/WRnPX21pxEI/AAAAAAAABJg/lAj3NIFwhPwvJYrmCdVbq1bqNq3E4AkhwCLcB/s1600/Example%2B%25231-%2BSidecar%2Bcontainers%2B.png)
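+
+As a hedged sketch of this pattern (image names and mount paths here are illustrative, not a published example), the web server and the git synchronizer share a volume inside one Pod:
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: web-with-git-sync
+spec:
+  containers:
+  - name: nginx
+    image: nginx
+    volumeMounts:
+    - name: www            # serve the synced content
+      mountPath: /usr/share/nginx/html
+  - name: git-sync         # hypothetical sidecar image
+    image: example/git-sync
+    volumeMounts:
+    - name: www            # write the cloned repo here
+      mountPath: /data
+  volumes:
+  - name: www
+    emptyDir: {}
+```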
+
+## Example #2: Ambassador containers
+
+Ambassador containers proxy a local connection to the world. As an example, consider a Redis cluster with read-replicas and a single write master. You can create a Pod that groups your main application with a Redis ambassador container. The ambassador is a proxy responsible for splitting reads and writes and sending them on to the appropriate servers. Because these two containers share a network namespace, they share an IP address and your application can open a connection on “localhost” and find the proxy without any service discovery. As far as your main application is concerned, it is simply connecting to a Redis server on localhost. This is powerful, not just because of separation of concerns and the fact that different teams can easily own the components, but also because in the development environment, you can simply skip the proxy and connect directly to a Redis server that is running on localhost.
+
+![Example #2: Ambassador containers](https://4.bp.blogspot.com/-yEmqGZ86mNQ/WRnPYG1m3jI/AAAAAAAABJo/94DlN54LA-oTsORjEBHfHS_UQTIbNPvcgCEw/s1600/Example%2B%25232-%2BAmbassador%2Bcontainers.png)
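+
+A hedged sketch of the ambassador pattern (container and image names are illustrative): the application talks to localhost while the proxy container handles the real Redis topology:
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: app-with-redis-ambassador
+spec:
+  containers:
+  - name: app
+    image: example/myapp          # connects to redis://localhost:6379
+  - name: redis-ambassador        # hypothetical proxy image
+    image: example/redis-ambassador
+    ports:
+    - containerPort: 6379
+```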
+
+## Example #3: Adapter containers
+
+Adapter containers standardize and normalize output. Consider the task of monitoring N different applications. Each application may be built with a different way of exporting monitoring data (e.g., JMX, StatsD, application-specific statistics), but every monitoring system expects a consistent and uniform data model for the monitoring data it collects. By using the adapter pattern of composite containers, you can transform the heterogeneous monitoring data from different systems into a single unified representation by creating Pods that group the application containers with adapters that know how to do the transformation. Again, because these Pods share namespaces and file systems, the coordination of these two containers is simple and straightforward.
+
+![Example #3: Adapter containers](https://4.bp.blogspot.com/-4rfSCMwvSwo/WRnPYLLQZqI/AAAAAAAABJk/c29uQgM2lSMHaUL013scJo_z4O8w38mJgCEw/s1600/Example%2B%25233-%2BAdapter%2Bcontainers%2B.png)
+
+
+In all of these cases, we've used the container boundary as an encapsulation/abstraction boundary that allows us to build modular, reusable components that we combine to build out applications. This reuse enables us to more effectively share containers between different developers, reuse our code across multiple applications, and generally build more reliable, robust distributed systems more quickly. I hope you’ve seen how Pods and composite container patterns can enable you to build robust distributed systems more quickly, and achieve container code re-use. To try these patterns out yourself in your own applications, I encourage you to check out open source Kubernetes or Google Container Engine.
+
+ - Brendan Burns, Software Engineer at Google
diff --git a/blog/_posts/2015-06-00-Weekly-Kubernetes-Community-Hangout.md b/blog/_posts/2015-06-00-Weekly-Kubernetes-Community-Hangout.md
new file mode 100644
index 00000000000..ce6912d4865
--- /dev/null
+++ b/blog/_posts/2015-06-00-Weekly-Kubernetes-Community-Hangout.md
@@ -0,0 +1,61 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Weekly Kubernetes Community Hangout Notes - May 22 2015 "
+date: Tuesday, June 02, 2015
+pagination:
+ enabled: true
+---
+Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
+
+
+Discussion / Topics
+
+* Code Freeze
+* Upgrades of cluster
+* E2E test issues
+
+Code Freeze process starts EOD 22-May, including
+
+* Code Slush -- draining PRs that are active. If there are issues for v1 to raise, please do so today.
+* Community PRs -- plan is to reopen in ~6 weeks.
+* Key areas for fixes in v1 -- docs, the experience.
+
+E2E issues and LGTM process
+
+* Seen end-to-end tests go red.
+* Plan is to limit merging to on-call. Quinton to communicate.
+* Can we expose Jenkins runs to community? (Paul)
+
+ * Question/concern to work out is securing Jenkins. Short term conclusion: Will look at pushing Jenkins logs into GCS bucket. Lavalamp will follow up with Jeff Grafton.
+
+  * Longer term solution may be a merge queue, where e2e runs for each merge (as opposed to multiple merges). This exists in OpenShift today.
+
+Cluster Upgrades for Kubernetes as final v1 feature
+
+* GCE will use Persistent Disk (PD) to mount new image.
+* OpenShift will follow a traditional update model, with "yum update".
+* A strawman approach is to have an analog of "kube-push" to update the master, in-place. Feedback in the meeting was:
+
+ * Upgrading Docker daemon on the master will kill the master's pods. Agreed. May consider an 'upgrade' phase or explicit step.
+
+ * How is this different than HA master upgrade? See HA case as a superset. The work to do an upgrade would be a prerequisite for HA master upgrade.
+* Mesos scheduler implements a rolling node upgrade.
+
+Attention requested for v1 in the Hangout
+
+* Discussed that it's an eventually consistent design.
+
+  * In the meeting, the outcome was: seeking a pattern for atomicity of update across multiple pieces. Paul to ping Tim when ready to review.
+* Regression in e2e [#8499][1] (Eric Paris)
+* Asking for review of direction, if not review. [#8334][2] (Mark)
+* Handling graceful termination (e.g. sigterm to postgres) is not implemented. [#2789][3] (Clayton)
+
+  * Need is to bump up the grace period or finish the plumbing. It's in the API and client tools; what's missing is that the kubelet doesn't use it and we don't set the timeout (>0) value.
+
+ * Brendan will look into this graceful term issue.
+* Load balancer almost ready by JustinSB.
+
+[1]: https://github.com/GoogleCloudPlatform/kubernetes/issues/8499
+[2]: https://github.com/GoogleCloudPlatform/kubernetes/pull/8334
+[3]: https://github.com/GoogleCloudPlatform/kubernetes/issues/2789
diff --git a/blog/_posts/2015-07-00-Announcing-First-Kubernetes-Enterprise.md b/blog/_posts/2015-07-00-Announcing-First-Kubernetes-Enterprise.md
new file mode 100644
index 00000000000..5c857107f7f
--- /dev/null
+++ b/blog/_posts/2015-07-00-Announcing-First-Kubernetes-Enterprise.md
@@ -0,0 +1,20 @@
+---
+layout: blog
+permalink: /blog/:year/:month/:title
+title: " Announcing the First Kubernetes Enterprise Training Course "
+date: Wednesday, July 08, 2015
+pagination:
+ enabled: true
+---
+At Google we rely on Linux application containers to run our core infrastructure. Everything from Search to Gmail runs in containers. In fact, we like containers so much that even our Google Compute Engine VMs run in containers! Because containers are critical to our business, we have been working with the community on many of the basic container technologies (from cgroups to Docker’s LibContainer) and even decided to build the next generation of Google’s container scheduling technology, Kubernetes, in the open.
+
+
+
+One year into the Kubernetes project, and on the eve of our planned V1 release at OSCON, we are pleased to announce the first-ever formal Kubernetes enterprise-focused training session organized by a key Kubernetes contributor, Mesosphere. The inaugural session will be taught by Zed Shaw and Michael Hausenblas from Mesosphere, and will take place on July 20 at OSCON in Portland. [Pre-registration](https://mesosphere.com/training/kubernetes/) is free for early registrants, but space is limited so act soon!
+
+
+
+This one-day course will cover the basics of building and deploying containerized applications using Kubernetes. It will walk attendees through the end-to-end process of creating a Kubernetes application architecture, building and configuring Docker images, and deploying them on a Kubernetes cluster. Users will also learn the fundamentals of deploying Kubernetes applications and services on our Google Container Engine and Mesosphere’s Datacenter Operating System.
+
+
+The upcoming Kubernetes bootcamp will be a great way to learn how to apply Kubernetes to solve long-standing deployment and application management problems. This is just the first of what we hope will be many such courses, from a broad set of contributors.
diff --git a/blog/_posts/2015-07-00-How-Did-Quake-Demo-From-Dockercon-Work.md b/blog/_posts/2015-07-00-How-Did-Quake-Demo-From-Dockercon-Work.md
new file mode 100644
index 00000000000..409fbbf5c2b
--- /dev/null
+++ b/blog/_posts/2015-07-00-How-Did-Quake-Demo-From-Dockercon-Work.md
@@ -0,0 +1,198 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " How did the Quake demo from DockerCon Work? "
+date: Thursday, July 02, 2015
+pagination:
+ enabled: true
+---
+Shortly after its release in 2013, Docker became a very popular open source container management tool for Linux. Docker has a rich set of commands to control the execution of a container, such as start, stop, restart, kill, pause, and unpause. However, what is still missing is the ability to checkpoint and restore (C/R) a container natively via Docker itself.
+
+
+
+We’ve been actively working with upstream and community developers to add support in Docker for native C/R and hope that checkpoint and restore commands will be introduced in Docker 1.8. As of this writing, it’s possible to C/R a container externally because this functionality was recently merged in libcontainer.
+
+
+
+External container C/R was demo’d at DockerCon 2015:
+
+
+
+
+
+
+Container C/R offers many benefits including the following:
+
+- Stop and restart the Docker daemon (say, for an upgrade) without having to kill the running containers and restart them from scratch, losing the precious work they had done when they were stopped
+- Reboot the system without having to restart the containers from scratch. Same benefits as use case 1 above
+- Speed up the start time of slow-start applications
+- “Forensic debugging" of processes running in a container by examining their checkpoint images (open files, memory segments, etc.)
+- Migrate containers by restoring them on a different machine
+
+#### CRIU
+
+Implementing C/R functionality from scratch is a major undertaking and a daunting task. Fortunately, there is a powerful open source tool written in C that has been used in production for checkpointing and restoring entire process trees in Linux. The tool is called CRIU, which stands for Checkpoint/Restore In Userspace ([criu.org](http://criu.org)). CRIU works by:
+
+- Freezing a running application.
+- Checkpointing the address space and state of the entire process tree to a collection of “image” files.
+- Restoring the process tree from checkpoint image files.
+- Resuming application from the point it was frozen.
+
+In April 2014, we decided to find out if CRIU could checkpoint and restore Docker containers to facilitate container migration.
+
+
+#### Phase 1 - External C/R
+
+The first phase of this effort involved invoking CRIU directly to dump a process tree running inside a container and determining why the checkpoint or restore operation failed. There were quite a few issues that caused CRIU to fail. The following three issues were among the more challenging ones.
+
+#### External Bind Mounts
+
+Docker sets up /etc/{hostname,hosts,resolv.conf} as targets with source files outside the container's mount namespace.
+
+The --ext-mount-map command line option was added to CRIU to specify the path of the external bind mounts. For example, assuming default Docker configuration, /etc/hostname in the container's mount namespace is bind mounted from the source at /var/lib/docker/containers/\<container-id\>/hostname. When checkpointing, we tell CRIU to record /etc/hostname's "map" as, say, etc\_hostname. When restoring, we tell CRIU that the file previously recorded as etc\_hostname should be mapped from the external bind mount at /var/lib/docker/containers/\<container-id\>/hostname.
+
+
+
+
+
+#### AUFS Pathnames
+
+Docker initially used AUFS as its preferred filesystem, and it is still in wide usage (the preferred filesystem is now OverlayFS). Due to a bug, the AUFS symbolic link paths of /proc/\<pid\>/map\_files point inside AUFS branches instead of their pathnames relative to the container's root. This problem has been fixed in the AUFS source code but hasn't made it to all the distros yet. CRIU would get confused seeing the same file in its physical location (in the branch) and its logical location (from the root of the mount namespace).
+
+The --root command line option, which was used only during restore, was generalized to understand the root of the mount namespace during checkpoint and automatically "fix" the exposed AUFS pathnames.
+
+
+#### Cgroups
+
+
+After checkpointing, the Docker daemon removes the container’s cgroups subdirectories (because the container has “exited”). This causes restore to fail.
+
+The --manage-cgroups command line option was added to CRIU to dump and restore the process's cgroups along with their properties.
+
+
+The CRIU command lines for a simple container are shown below:
+```
+$ docker run -d busybox:latest /bin/sh -c 'i=0; while true; do echo $i >> /foo; i=$(expr $i + 1); sleep 3; done'
+
+$ docker ps
+CONTAINER ID   IMAGE            COMMAND             CREATED         STATUS
+168aefb8881b   busybox:latest   "/bin/sh -c 'i=0;   6 seconds ago   Up 4 seconds
+
+$ sudo criu dump -o dump.log -v4 -t 17810 \
+    -D /tmp/img/<container-id> \
+    --root /var/lib/docker/aufs/mnt/<container-id> \
+    --ext-mount-map /etc/resolv.conf:/etc/resolv.conf \
+    --ext-mount-map /etc/hosts:/etc/hosts \
+    --ext-mount-map /etc/hostname:/etc/hostname \
+    --ext-mount-map /.dockerinit:/.dockerinit \
+    --manage-cgroups \
+    --evasive-devices
+
+$ docker ps -a
+CONTAINER ID   IMAGE            COMMAND             CREATED         STATUS
+168aefb8881b   busybox:latest   "/bin/sh -c 'i=0;   6 minutes ago   Exited (-1) 4 minutes ago
+
+$ sudo mount -t aufs -o br=\
+/var/lib/docker/aufs/diff/<container-id>:\
+/var/lib/docker/aufs/diff/<container-id>-init:\
+/var/lib/docker/aufs/diff/a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721:\
+/var/lib/docker/aufs/diff/120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16:\
+/var/lib/docker/aufs/diff/42eed7f1bf2ac3f1610c5e616d2ab1ee9c7290234240388d6297bc0f32c34229:\
+/var/lib/docker/aufs/diff/511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158:\
+none /var/lib/docker/aufs/mnt/<container-id>
+
+$ sudo criu restore -o restore.log -v4 -d \
+    -D /tmp/img/<container-id> \
+    --root /var/lib/docker/aufs/mnt/<container-id> \
+    --ext-mount-map /etc/resolv.conf:/var/lib/docker/containers/<container-id>/resolv.conf \
+    --ext-mount-map /etc/hosts:/var/lib/docker/containers/<container-id>/hosts \
+    --ext-mount-map /etc/hostname:/var/lib/docker/containers/<container-id>/hostname \
+    --ext-mount-map /.dockerinit:/var/lib/docker/init/dockerinit-1.0.0 \
+    --manage-cgroups \
+    --evasive-devices
+
+$ ps -ef | grep /bin/sh
+root  18580  1  0 12:38 ?  00:00:00 /bin/sh -c i=0; while true; do echo $i >> /foo; i=$(expr $i + 1); sleep 3; done
+
+$ docker ps -a
+CONTAINER ID   IMAGE            COMMAND             CREATED         STATUS
+168aefb8881b   busybox:latest   "/bin/sh -c 'i=0;   7 minutes ago   Exited (-1) 5 minutes ago
+```
+
+#### docker\_cr.sh
+
+Since the command line arguments to CRIU were long, a helper script called docker\_cr.sh was provided in the CRIU source tree to simplify the process. So, for the above container, one would simply C/R the container as follows (for details see [http://criu.org/Docker](http://criu.org/Docker)):
+
+```
+$ sudo docker_cr.sh -c 4397
+dump successful
+
+$ sudo docker_cr.sh -r 4397
+restore successful
+```
+At the end of Phase 1, it was possible to externally checkpoint and restore a Docker 1.0 container using either VFS, AUFS, or UnionFS storage drivers with CRIU v1.3.
+
+#### Phase 2 - Native C/R
+
+While external C/R served as a successful proof of concept for container C/R, it had the following limitations:
+
+
+1. The state of a checkpointed container would show as "Exited".
+2. Docker commands such as logs, kill, etc. would not work on a restored container.
+3. The restored process tree would be a child of init instead of the Docker daemon.
+
+Therefore, the second phase of the effort concentrated on adding native checkpoint and restore commands to Docker.
+
+
+#### libcontainer, nsinit
+
+Libcontainer is Docker’s native execution driver. It provides a set of APIs to create and manage containers. The first step of adding native support was the introduction of two methods, checkpoint() and restore(), to libcontainer and the corresponding checkpoint and restore subcommands to nsinit. Nsinit is a simple utility that is used to test and debug libcontainer.
+
+#### docker checkpoint, docker restore
+
+With C/R support in libcontainer, the next step was adding checkpoint and restore subcommands to Docker itself. A big challenge in this step was to rebuild the “plumbing” between the container and the daemon. When the daemon initially starts a container, it sets up individual pipes between itself (parent) and the standard input, output, and error file descriptors of the container (child). This is how docker logs can show the output of a container.
+
+When a container exits after being checkpointed, the pipes between it and the daemon are deleted. During container restore, it’s actually CRIU that is the parent. Therefore, setting up a pipe between the child (container) and an unrelated process (the Docker daemon) was a bit of a challenge.
+
+To address this issue, the --inherit-fd command line option was added to CRIU. Using this option, the Docker daemon tells CRIU to let the restored container “inherit” certain file descriptors passed from the daemon to CRIU.
+
+The first version of native C/R was demo'ed at the Linux Plumbers Conference (LPC) in October 2014 ([http://linuxplumbersconf.org/2014/ocw/proposals/1899](http://linuxplumbersconf.org/2014/ocw/proposals/1899)).
+
+
+
+
+The LPC demo was done with a simple container that did not require network connectivity. Support for restoring network connections was done in early 2015 and demonstrated in this 2-minute [video clip](https://www.youtube.com/watch?v=HFt9v6yqsXo).
+
+#### Current Status of Container C/R
+
+In May 2015, the criu branch of libcontainer was merged into master. Using the newly-introduced lightweight [runC](https://blog.docker.com/2015/06/runc/) container runtime, container migration was demo’ed at DockerCon15. In this
+[video](https://www.youtube.com/watch?v=mL9AFkJJAq0) (minute 23:00), a container running Quake was checkpointed and restored on a different machine, effectively implementing container migration.
+
+At the time of this writing, there are two repos on GitHub that have native C/R support in Docker:
+- [Docker 1.5](https://github.com/SaiedKazemi/docker/tree/cr) (old libcontainer, relatively stable)
+- [Docker 1.7](https://github.com/boucher/docker/tree/cr-combined) (newer, less stable)
+
+Work is underway to merge C/R functionality into Docker. You can use either of the above repositories to experiment with Docker C/R. If you are using OverlayFS or your container workload uses AIO, please note the following:
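+
+As a hedged illustration (the exact CLI shape varied across those experimental branches), checkpointing and restoring with the native commands looked roughly like this:
+
+```
+# Checkpoint a running container in place (experimental branches above)
+$ docker checkpoint <container-id>
+
+# Later, resume it from its checkpoint images
+$ docker restore <container-id>
+```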
+
+
+
+#### OverlayFS
+When OverlayFS support was officially merged into Linux kernel version 3.18, it became the preferred storage driver (instead of AUFS). However, OverlayFS in 3.18 has the following issues:
+- /proc/\<pid\>/fdinfo/\<fd\> contains mnt\_id which isn’t in /proc/\<pid\>/mountinfo
+- /proc/\<pid\>/fd/\<fd\> does not contain an absolute path to the opened file
+
+Both issues are fixed in this [patch](https://lkml.org/lkml/2015/3/20/372) but the patch has not been merged upstream yet.
+
+#### AIO
+If you are using a kernel older than 3.19 and your container uses AIO, you need the following kernel patches from 3.19:
+
+
+- [torvalds: bd9b51e7](https://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=bd9b51e7) by Al Viro
+- [torvalds: e4a0d3e72](https://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=e4a0d3e72) by Pavel Emelyanov
+
+
+
+- Saied Kazemi, Software Engineer at Google
diff --git a/blog/_posts/2015-07-00-Kubernetes-10-Launch-Party-At-Oscon.md b/blog/_posts/2015-07-00-Kubernetes-10-Launch-Party-At-Oscon.md
new file mode 100644
index 00000000000..0a8aabf6f4f
--- /dev/null
+++ b/blog/_posts/2015-07-00-Kubernetes-10-Launch-Party-At-Oscon.md
@@ -0,0 +1,19 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes 1.0 Launch Event at OSCON "
+date: Thursday, July 02, 2015
+pagination:
+ enabled: true
+---
+In case you haven't heard, the Kubernetes project team & community have some awesome stuff lined up for our release event at OSCON in a few weeks.
+
+If you haven't already registered to attend in person or watch the live stream, please do it now! Check out [kuberneteslaunch.com](http://kuberneteslaunch.com/) for all the details. You'll also find out there how to get a free expo pass for OSCON, which you'll need to attend in person.
+
+We'll have talks from Google executives Brian Stevens, VP of Cloud Product, and Eric Brewer, VP of Google Infrastructure. They will share their perspectives on where Kubernetes is and where it's going, and you won't want to miss it.
+
+Several of our community partners will be there including CoreOS, Redapt, Intel, Mesosphere, Mirantis, the OpenStack Foundation, CloudBees, Kismatic and Bitnami.
+
+And real life users of Kubernetes will be there too. We've announced that zulily Principal Engineer Steve Reed is speaking, and we will let you know about others over the next few days. Let's just say it's a pretty cool list.
+
+Check it out now - [kuberneteslaunch.com](http://kuberneteslaunch.com/)
diff --git a/blog/_posts/2015-07-00-Strong-Simple-Ssl-For-Kubernetes.md b/blog/_posts/2015-07-00-Strong-Simple-Ssl-For-Kubernetes.md
new file mode 100644
index 00000000000..52eb8b89028
--- /dev/null
+++ b/blog/_posts/2015-07-00-Strong-Simple-Ssl-For-Kubernetes.md
@@ -0,0 +1,276 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Strong, Simple SSL for Kubernetes Services "
+date: Wednesday, July 14, 2015
+pagination:
+ enabled: true
+---
+Hi, I’m Evan Brown ([@evandbrown](http://twitter.com/evandbrown)) and I work on the solutions architecture team for Google Cloud Platform. I recently wrote an [article](https://cloud.google.com/solutions/automated-build-images-with-jenkins-kubernetes) and [tutorial](https://github.com/GoogleCloudPlatform/kube-jenkins-imager) about using Jenkins on Kubernetes to automate the Docker and GCE image build process. Today I’m going to discuss how I used Kubernetes services and secrets to add SSL to the Jenkins web UI. After reading this, you’ll be able to add SSL termination (and HTTP-\>HTTPS redirects + basic auth) to your public HTTP Kubernetes services.
+
+### In the beginning
+
+In the spirit of minimum viability, the first version of Jenkins-on-Kubernetes I built was very basic but functional:
+
+- The Jenkins leader was just a single container in one pod, but it was managed by a replication controller, so if it failed it would automatically respawn.
+- The Jenkins leader exposed two ports - TCP 8080 for the web UI and TCP 50000 for build agents to register - and those ports were made available as a Kubernetes service with a public load balancer.
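+
+For reference, a minimal sketch of what such a service descriptor could look like is below. This is illustrative, not the actual file from the sample app; note that naming the 8080 port "ui" is what produces the JENKINS\_SERVICE\_PORT\_UI environment variable discussed later:
+
+```
+kind: "Service"
+apiVersion: "v1"
+metadata:
+  name: "jenkins"
+spec:
+  ports:
+    - name: "ui"
+      port: 8080
+    - name: "discovery"
+      port: 50000
+  selector:
+    name: "jenkins"
+  type: "LoadBalancer"
+```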
+
+
+
+Here’s a visual of that first version:
+
+![Diagram of the first Jenkins-on-Kubernetes deployment](http://1.bp.blogspot.com/-ccmpTmulrng/VaVxOs7gysI/AAAAAAAAAU8/bCEzgGGm-pE/s1600/0.png)
+
+
+
+
+
+
+This works, but I have a few problems with it. First, authentication isn’t configured in a default Jenkins installation. The leader is sitting on the public Internet, accessible to anyone, until you connect and configure authentication. And since there’s no encryption, configuring authentication is kind of a symbolic gesture. We need SSL, and we need it now!
+
+### Do what you know
+
+For a few milliseconds I considered trying to get SSL working directly on Jenkins. I’d never done it before, and I caught myself wondering if it would be as straightforward as working with SSL on [Nginx](http://nginx.org/), something I do have experience with. I’m all for learning new things, but this seemed like a great place not to reinvent the wheel: SSL on Nginx is straightforward and well documented (as are its reverse-proxy capabilities), and Kubernetes is all about building functionality by orchestrating and composing containers. Let’s use Nginx, and add a few bonus features that Nginx makes simple: HTTP-\>HTTPS redirection, and basic access authentication.
+
+### SSL termination proxy as an Nginx service
+
+I started by putting together a [Dockerfile](https://github.com/GoogleCloudPlatform/nginx-ssl-proxy/blob/master/Dockerfile) that inherited from the standard nginx image, copied a few Nginx config files, and added a custom entrypoint (start.sh). The entrypoint script checks an environment variable (ENABLE\_SSL) and activates the correct Nginx config accordingly (meaning that unencrypted HTTP reverse proxy is possible, but that defeats the purpose). The script also configures basic access authentication if it’s enabled (the ENABLE\_BASIC\_AUTH env var).
+
+
+
+Finally, start.sh evaluates the SERVICE\_HOST\_ENV\_NAME and SERVICE\_PORT\_ENV\_NAME env vars. These variables should be set to the names of the environment variables for the Kubernetes service you want to proxy to. In this example, the service for our Jenkins leader is cleverly named jenkins, which means pods in the cluster will see an environment variable named JENKINS\_SERVICE\_HOST and JENKINS\_SERVICE\_PORT\_UI (the port that 8080 is mapped to on the Jenkins leader). SERVICE\_HOST\_ENV\_NAME and SERVICE\_PORT\_ENV\_NAME simply reference the correct service to use for a particular scenario, allowing the image to be used generically across deployments.
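+
+A simplified sketch of that entrypoint logic follows. It is illustrative only; the config file paths and the {{TARGET}} placeholder are invented for this sketch and are not the actual contents of the repo:
+
+```
+#!/bin/sh
+# Resolve the proxy target from the *names* of the service env vars.
+eval "PROXY_HOST=\$${SERVICE_HOST_ENV_NAME}"
+eval "PROXY_PORT=\$${SERVICE_PORT_ENV_NAME}"
+
+# Pick a config: SSL termination with HTTP->HTTPS redirect, or plain HTTP.
+# (ENABLE_BASIC_AUTH handling is omitted for brevity.)
+if [ "${ENABLE_SSL}" = "true" ]; then
+  CONF=/etc/nginx/nginx-ssl.conf
+else
+  CONF=/etc/nginx/nginx-http.conf
+fi
+
+# Point the upstream at the resolved service address and start Nginx.
+sed -i "s|{{TARGET}}|${PROXY_HOST}:${PROXY_PORT}|" "${CONF}"
+exec nginx -c "${CONF}" -g "daemon off;"
+```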
+
+### Defining the Controller and Service
+
+Like every other pod in this example, we’ll deploy Nginx with a replication controller, allowing us to scale out or in, and to recover automatically from container failures. This excerpt from a [complete descriptor in the sample app](https://github.com/GoogleCloudPlatform/kube-jenkins-imager/blob/master/ssl_proxy.yaml#L20-L48) shows some relevant bits of the pod spec:
+
+
+
+```
+spec:
+  containers:
+    - name: "nginx-ssl-proxy"
+      image: "gcr.io/cloud-solutions-images/nginx-ssl-proxy:latest"
+      env:
+        - name: "SERVICE_HOST_ENV_NAME"
+          value: "JENKINS_SERVICE_HOST"
+        - name: "SERVICE_PORT_ENV_NAME"
+          value: "JENKINS_SERVICE_PORT_UI"
+        - name: "ENABLE_SSL"
+          value: "true"
+        - name: "ENABLE_BASIC_AUTH"
+          value: "true"
+      ports:
+        - name: "nginx-ssl-proxy-http"
+          containerPort: 80
+        - name: "nginx-ssl-proxy-https"
+          containerPort: 443
+```
+
+
+
+
+The pod will have a service exposing TCP 80 and 443 to a public load balancer. Here’s the service descriptor ([also available in the sample app](https://github.com/GoogleCloudPlatform/kube-jenkins-imager/blob/master/service_ssl_proxy.yaml)):
+
+
+
+```
+kind: "Service"
+apiVersion: "v1"
+metadata:
+  name: "nginx-ssl-proxy"
+  labels:
+    name: "nginx"
+    role: "ssl-proxy"
+spec:
+  ports:
+    - name: "https"
+      port: 443
+      targetPort: "nginx-ssl-proxy-https"
+      protocol: "TCP"
+    - name: "http"
+      port: 80
+      targetPort: "nginx-ssl-proxy-http"
+      protocol: "TCP"
+  selector:
+    name: "nginx"
+    role: "ssl-proxy"
+  type: "LoadBalancer"
+```
+
+
+
+
+And here’s an overview with the SSL termination proxy in place. Notice that Jenkins is no longer directly exposed to the public Internet:
+
+![Diagram of the deployment with the SSL termination proxy in place](http://3.bp.blogspot.com/-0B1BEQo_fWc/VaVxVUBkf3I/AAAAAAAAAVE/5yCCnA29C88/s1600/0%2B%25281%2529.png)
+
+
+
+
+
+Now, how did the Nginx pods get ahold of the super-secret SSL key/cert and htpasswd file (for basic access auth)?
+
+### Keep it secret, keep it safe
+
+Kubernetes has an [API and resource for Secrets](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/secrets.md). Secrets “are intended to hold sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a secret is safer and more flexible than putting it verbatim in a pod definition or in a docker image.”
+
+
+
+You can create secrets in your cluster in 3 simple steps:
+
+
+
+1. Base64-encode your secret data (e.g., SSL key pair or htpasswd file):
+
+```
+$ cat ssl.key | base64
+LS0tLS1CRUdJTiBDRVJUS...
+```
+
+
+
+
+2. Create a YAML or JSON document describing your secret, and add the base64-encoded values:
+
+```
+apiVersion: "v1"
+kind: "Secret"
+metadata:
+  name: "ssl-proxy-secret"
+  namespace: "default"
+data:
+  proxycert: "LS0tLS1CRUd..."
+  proxykey: "LS0tLS1CR..."
+  htpasswd: "ZXZhb..."
+```
+
+
+
+
+3. Create the secrets resource:
+
+```
+$ kubectl create -f secrets.json
+```
+
+
+To access the secrets from a container, specify them as a volume mount in your pod spec. Here’s the relevant excerpt from the [Nginx proxy template](https://github.com/GoogleCloudPlatform/kube-jenkins-imager/blob/master/ssl_proxy.yaml#L41-L48) we saw earlier:
+
+
+
+```
+spec:
+  containers:
+    - name: "nginx-ssl-proxy"
+      image: "gcr.io/cloud-solutions-images/nginx-ssl-proxy:latest"
+      env: [...]
+      ports: [...]
+      volumeMounts:
+        - name: "secrets"
+          mountPath: "/etc/secrets"
+          readOnly: true
+  volumes:
+    - name: "secrets"
+      secret:
+        secretName: "ssl-proxy-secret"
+```
+
+
+
+
+A volume of type secret that points to the ssl-proxy-secret secret resource is defined, and then mounted into /etc/secrets in the container. The secrets spec in the earlier example defined data.proxycert, data.proxykey, and data.htpasswd, so we would see those files appear (base64-decoded) in /etc/secrets/proxycert, /etc/secrets/proxykey, and /etc/secrets/htpasswd for the Nginx process to access.
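+
+For a concrete picture of how Nginx consumes those files, here is an illustrative server block. It is a sketch under assumed paths, not the actual config from the nginx-ssl-proxy repo; {{TARGET}} stands in for the proxied service address resolved by the entrypoint:
+
+```
+server {
+  listen 443 ssl;
+
+  # Key/cert and htpasswd come straight from the mounted secret volume.
+  ssl_certificate      /etc/secrets/proxycert;
+  ssl_certificate_key  /etc/secrets/proxykey;
+
+  auth_basic           "Restricted";
+  auth_basic_user_file /etc/secrets/htpasswd;
+
+  location / {
+    proxy_pass http://{{TARGET}};
+  }
+}
+```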
+
+
+
+### All together now
+
+I have “containers and Kubernetes are fun and cool!” moments all the time, like probably every day. I’m beginning to have “containers and Kubernetes are extremely useful and powerful and are adding value to what I do by helping me do important things with ease” moments more frequently. This SSL termination proxy with Nginx example is definitely one of the latter. I didn’t have to waste time learning a new way to use SSL. I was able to solve my problem using well-known tools, in a reusable way, and quickly (from idea to working took about 2 hours).
+
+
+Check out the complete [Automated Image Builds with Jenkins, Packer, and Kubernetes](https://github.com/GoogleCloudPlatform/kube-jenkins-imager) repo to see how the SSL termination proxy is used in a real cluster, or dig into the details of the proxy image in the [nginx-ssl-proxy repo](https://github.com/GoogleCloudPlatform/nginx-ssl-proxy) (complete with a Dockerfile and Packer template so you can build the image yourself).
diff --git a/blog/_posts/2015-07-00-The-Growing-Kubernetes-Ecosystem.md b/blog/_posts/2015-07-00-The-Growing-Kubernetes-Ecosystem.md
new file mode 100644
index 00000000000..bbff71e4d6c
--- /dev/null
+++ b/blog/_posts/2015-07-00-The-Growing-Kubernetes-Ecosystem.md
@@ -0,0 +1,234 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " The Growing Kubernetes Ecosystem "
+date: Saturday, July 24, 2015
+pagination:
+ enabled: true
+---
+Over the past year, we’ve seen fantastic momentum in the Kubernetes project, culminating with the release of Kubernetes v1 earlier this week. We’ve also witnessed the ecosystem around Kubernetes blossom, and wanted to draw attention to some of the cooler offerings we’ve seen.
+
+
+![][1]
+
+[CloudBees][2] and the Jenkins community have created a Kubernetes plugin, allowing Jenkins slaves to be built as Docker images and run in Docker hosts managed by Kubernetes, either on the Google Cloud Platform or on a more local Kubernetes instance. These elastic slaves are then brought online as Jenkins schedules jobs for them and destroyed after their builds are complete, ensuring masters have steady access to clean workspaces and minimizing builds’ resource footprint.
+
+![][3]
+
+[CoreOS][4] has launched Tectonic, an opinionated enterprise distribution of Kubernetes, CoreOS and Docker. Tectonic includes a management console for workflows and dashboards, an integrated registry to build and share containers, and additional tools to automate deployment and customize rolling updates. At KuberCon, CoreOS launched Tectonic Preview, giving users easy access to Kubernetes 1.0, 24x7 enterprise-ready support, Kubernetes guides and Kubernetes training to help enterprises begin experiencing the power of Kubernetes, CoreOS and Docker.
+
+![][5]
+
+[Hitachi Data Systems][6] has announced that Kubernetes now joins the list of solutions validated to run on their enterprise Unified Computing Platform. With this announcement Hitachi has validated Kubernetes and VMware running side-by-side on the UCP platform, providing an enterprise solution for container-based applications and traditional virtualized workloads.
+
+![][7]
+
+[Kismatic][8] is providing enterprise support for pure-play open source Kubernetes. They have announced open source and commercially supported Kubernetes plug-ins specifically built for production-grade enterprise environments. Any Kubernetes deployment can now benefit from modular role-based access controls (RBAC), Kerberos for bedrock authentication, LDAP/AD integration, rich auditing and platform-agnostic Linux distro packages.
+
+![][9]
+
+[Meteor Development Group][10], creators of Meteor, a JavaScript App Platform, are using Kubernetes to build [Galaxy][10] to run Meteor apps in production. Galaxy will scale from free test apps to production-suitable high-availability hosting.
+
+![][11]
+
+Mesosphere has incorporated Kubernetes into its Data Center Operating System (DCOS) platform as a first-class citizen. Using DCOS, enterprises can deploy Kubernetes across thousands of nodes, both bare-metal and virtualized machines that can run on-premises and in the cloud. Mesosphere also launched a beta of their [Kubernetes Training Bootcamp][12] and will be offering more in the future.
+
+![][13]
+
+[Mirantis][14] is enabling hybrid cloud applications across OpenStack and other clouds supporting Kubernetes. An OpenStack Murano app package supports full application lifecycle actions such as deploy, create cluster, create pod, add containers to pods, scale up and scale down.
+
+![][15]
+
+[OpenContrail][16] is creating a kubernetes-contrail plugin designed to stitch the cluster management capabilities of Kubernetes with the network service automation capabilities of OpenContrail. Given the event-driven abstractions of pods and services inherent in Kubernetes, it is a simple extension to address network service enforcement by leveraging OpenContrail’s Virtual Network policy approach and programmatic APIs.
+
+![][17]
+
+[Pachyderm][18] is a containerized data analytics engine which provides the broad functionality of Hadoop with the ease of use of Docker. Users simply provide containers with their data analysis logic and Pachyderm will distribute that computation over the data. They have just released full deployment on Kubernetes for on-premises deployments, and on Google Container Engine, eliminating all the operational overhead of running a cluster yourself.
+
+![][19]
+
+[Platalytics, Inc.][20] announced the release of a one-touch deploy-anywhere feature for its Spark Application Platform. Based on Kubernetes, Docker, and CoreOS, it allows simple and automated deployment of Apache Hadoop, Spark, and the Platalytics platform, with a single click, to all major public clouds, including Google, Amazon, Azure, and Digital Ocean, and to private on-premises clouds. It also enables hybrid cloud scenarios, where resources on public and private clouds can be mixed.
+
+![][21]
+
+[Rackspace][22] has created Corekube as a simple, quick way to deploy Kubernetes on OpenStack. By using a decoupled infrastructure that is coordinated by etcd, fleet and flannel, it enables users to try Kubernetes and CoreOS without all the fuss of setting things up by hand.
+
+![][23]
+
+[Red Hat][24] is a long-time proponent of Kubernetes, and a significant contributor to the project. In their own words, “From Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux Atomic Host to OpenShift Enterprise 3 and the forthcoming Red Hat Atomic Enterprise Platform, we are well-suited to bring container innovations into the enterprise, leveraging Kubernetes as the common backbone for orchestration.”
+
+![][29]
+
+[Redapt][30] has launched a variety of turnkey, on-premises Kubernetes solutions co-engineered with other partners in the Kubernetes partner ecosystem. These include appliances built to leverage the CoreOS/Tectonic, Mirantis OpenStack, and Mesosphere platforms for management and provisioning. Redapt also offers private, public, and multi-cloud solutions that help customers accelerate their Kubernetes deployments successfully into production.
+
+
+We’ve also seen a community of services partners spring up to assist in adopting Kubernetes and containers:
+
+
+
+![][31]
+
+[Biarca][32] is using Kubernetes to ease application deployment and scale on demand across available hybrid and multi-cloud clusters through strategically managed policy. A video on their website illustrates how to use Kubernetes to deploy applications in a private cloud infrastructure based on OpenStack and use a public cloud like GCE to address bursting demand for applications.
+
+![][33]
+
+[Cloud Technology Partners][34] has developed a Container Services Offering featuring Kubernetes to assist enterprises with container best practices, adoption and implementation. This offering helps organizations understand how containers deliver a competitive edge.
+
+![][35]
+
+[DoIT International][36] is offering a Kubernetes Bootcamp consisting of a series of hands-on exercises interleaved with mini-lectures covering topics such as container basics, using Docker, Kubernetes and Google Container Engine.
+
+![][37]
+
+[OpenCredo][38] provides a practical, lab-style container and scheduler course in addition to consulting and solution delivery. The three-day course allows development teams to quickly ramp up and make effective use of containers in real-world scenarios, covering containers in general along with Docker and Kubernetes.
+
+![][39]
+
+[Pythian][40] focuses on helping clients design, implement, and manage systems that directly contribute to revenue and business success. Their small, [dedicated teams of highly trained and experienced data experts][41] have the deep Kubernetes and container experience necessary to help companies solve Big Data problems with containers.
+
+ \- Martin Buhr, Product Manager at Google
+
+[1]: https://lh4.googleusercontent.com/2dJvY1Cl9i6SQ8apKARcisvFZPDYY5LltIsmz3W-jmon7DFE4p7cz3gsBPuz9KM_LSiuwx1xIPYr9Ygm5DTQ2f-DUyWsg7zs7YL7O3JMCHQ8Ji4B3EGpx26fbF_glQPPPp4RQTE
+[2]: http://blog.cloudbees.com/2015/07/on-demand-jenkins-slaves-with.html
+[3]: https://lh4.googleusercontent.com/vC0B6UWRaxOq9ar-7naIX9HNs9ANfq8f5VTP-MpIOpRTxHeE7kMDAcmswsDF6SVsd_xtRa7Kr2z3wJCXbGj2Lp6fp7pfhaWd5bHuA9_cYHhvY1WmQEjXHdPZxYzwBqExAmtTdiA
+[4]: https://tectonic.com/
+[5]: https://lh6.googleusercontent.com/Y6MY5k_Eq6CddNzfRrRo14kLuJwe1KYtJq_7KcIGy1bRf65KwoX1uAuCBwEL0P_FGSomZPQZ-hs7CG8Vze7qDKsISZrLEyRZkm5OSHngjjXfCItCiMXI3FtnD9iyDvYurd5sRXQ
+[6]: https://www.hds.com/corporate/press-analyst-center/press-releases/2015/gl150721.html
+[7]: https://lh4.googleusercontent.com/iHZAfjvGPHYsIwUgevTTPN74fBU53Y1qdwq9hUsIixLWIbbv7P_02CQR6V5LPi4n4BCeg1LK3g5Iaizpkm5dXCmI7TdYKEaC7H2wLa9tzSkp8TyR93U1SilcGvpLDlzPLWhY664
+[8]: https://www.kismatic.com/
+[9]: https://lh5.googleusercontent.com/kTu3RRmc1LH1vgdHQeCibALfJJCxE9JR5ZRE30xAn_bphO_uk-2n3RRolw3Yrb1uheyXMQRsH8ps7v3mrvhjkJo0f2ye7unVd1PT0trv8cE5VP1Pnq5P4oUx6m7DWKANZyyBnsg
+[10]: http://info.meteor.com/blog/meteor-and-a-galaxy-of-containers-with-kubernetes
+[11]: https://lh5.googleusercontent.com/H1r-80pX8-ixDHCJDBKLWkNA1keMUvjv058e87-B80Wr8LSxP7SjSXc-5ru3MT4k18zYxl0L8aqJv3aylx8UYNGAXEmCCuHKwjZ4Z5tbG-LFCTiyRVdrlVUukHhi8QtsbuR1u3c
+[12]: https://mesosphere.com/training/kubernetes/
+[13]: https://lh6.googleusercontent.com/7BkcAAf9SoEDyzjgGsNg_YVi8cRb1mdPHsc4FtK7JQkl2iVR_zIy9wkDPT7bls-z7FhgTIekAj1Z7q6Y_4oaZ2OLygkHxPmxZ3MNnkI4f8C78cjyk2gvt40Yk-m3_VSt8sIXz2Q
+[14]: https://www.mirantis.com/blog/kubernetes-docker-mirantis-openstack-6-1/
+[15]: https://lh6.googleusercontent.com/Zi_nKEcB6uWZYXMOBStKPFLkHIXQn2FsnFP4ab2BFeBbUWv-d1oEBLQos-OpYpfwO3mao6xGusvX9O1JiyL4357XJBsmTXmcSnTnrBXCBOxJkB1uhOjntfAv8fN2YjZ6ITK53YU
+[16]: http://www.opencontrail.org/opencontrail-kubernetes-integration/
+[17]: https://lh5.googleusercontent.com/F9dS-UFz8L50xoj8jCjgUvOo-r3pNLs4cEGRczHu5mD8YdMgnJctyzBuWQ0LmZeBB3cDHc1LB_4kHZDmjuP6KGr_n3W8Q0fGbBHxinRZggdMC0NDDWl-xDwy68GO6qotJr2JcOA
+[18]: http://pachyderm.io/
+[19]: https://lh6.googleusercontent.com/dXhnvnlWtL9-oTd_irtLYTu8g78l9-LKj9PwjV5v4mpvGcPh4GQlHeQZpnIMJGwEyBxagut94Onagb0GsVJuVx10VVp-GHZ0vG_Z-jbxthLHhuzhQaBSFfA9pfoOI3cl6Rh7Hk4
+[20]: http://www.platalytics.com/
+[21]: https://lh3.googleusercontent.com/0EQQc3sjVbw1cEYVeT0S5rT1iPLEMHteiKlSMDNqw8lNVOf4vG5qE6pVfvmZlRcg-NoOABC-mMcMSdD8ayrmpok0T91N15QqqmH378ydxK1843dcuJdtEsCnr1Y_RQQo-hWrBfI
+[22]: https://github.com/metral/corekube
+[23]: https://lh4.googleusercontent.com/qxQciTVBkyYDWeSgoxtg7InxQuuXsGSLBDfdxJB9Czo71BzQN5bUugLZhQKkERHqWAnkqHIY2VWi2J7g-pGn4V4AzPE0alBksedou78r0KMZm4QqYTN8QYHIMo4RtVmdw90azYw
+[24]: http://www.redhat.com/en/about/blog/welcoming-kubernetes-officially-enterprise-open-source-world
+[29]: https://lh5.googleusercontent.com/8FfYhnwb__NUuoXEC-tNzuAuA6rFGz6IgQnVYh-fQ89i685-3t_2UjN291S-VZAAkyrPJ-MaAPMr36uV0PLWlv_GE1aE99shx_XzrEi4c8OKcEkiRs3z_tsB20w5ZiZ7UeZgzT8
+[30]: http://www.redapt.com/kubernetes/%20%E2%80%8E
+[31]: https://lh3.googleusercontent.com/dOHU9NjLGrG6UgGuNjvhuR5oDkrR5z1AZ0sM8BkLgaMuXY7pfDev8ukVbD1nrBeRj9LKryJcoGEvhZSo_dHIP8ahHIkAWqsT_QSOoiu7rfM9WX3lubCI4N1WKmE7yrRquaL7nAc
+[32]: http://biarca.io/building-distributed-multi-cloud-applications-using-kubernetes-and-containers/
+[33]: https://lh3.googleusercontent.com/Ac0FiR1FJ4tfp90zBVX7fr36BAVxUqRW7VIOFw12Rp6BzHRR0x_BwTfbaheXLYSYMuPZouf4huql04Uu9fVEn956b7BWIUcTzUgWuB5JYSFawwrP_AA6uzdOHZAQ2aROo1vhm1s
+[34]: http://www.cloudtp.com/container-adoption-services/
+[35]: https://lh6.googleusercontent.com/tBtFRPzI6OAPKvaak9X3QWcrzGuBsrk1szFGi-Bq3EQweBo6nZ0Qmwxk9EwLZ9ItP9-1Zip4rxtwtFa0ILylO1CySuOa1qLcO2ab0yJCN1SCe-r_BNPX8hD5Qigxb7sqqXgx09A
+[36]: http://doit-intl.com/kubernetes
+[37]: https://lh3.googleusercontent.com/qO2YK7IxIVPpIsdN0Ry7B5zc_cdzfZb6DlgAJWpy-VJajL84m3u2nyo3-6QRZ_wFCY0-r4ryltiT4j1D_y_BeguxGXWap2YlSfdqyYAIbi2__p0uLXymtYkAu5VFVfA___eMbUY
+[38]: https://www.opencredo.com/2015/04/20/kubernetes/
+[39]: https://lh5.googleusercontent.com/XgMDUbRt_UKn4v4D7roz4mpE4qqUYpLI2c9460vt65yXrLxhcrM3rmH9Xcg-C0RMylhRxTWIMFInHYLN1O9v9FZ1NoUVI6ynsmoAQUGMN1Nc27jhXzIRiRXwWzx_HOH5TtX3NaE
+[40]: http://www.pythian.com/google-kubernetes/
+[41]: http://www.pythian.com/blog/lessons-learned-kubernetes/
diff --git a/blog/_posts/2015-07-00-Weekly-Kubernetes-Community-Hangout.md b/blog/_posts/2015-07-00-Weekly-Kubernetes-Community-Hangout.md
new file mode 100644
index 00000000000..427ab3ef5b8
--- /dev/null
+++ b/blog/_posts/2015-07-00-Weekly-Kubernetes-Community-Hangout.md
@@ -0,0 +1,64 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Weekly Kubernetes Community Hangout Notes - July 10 2015 "
+date: Tuesday, July 13, 2015
+pagination:
+ enabled: true
+---
+
+Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
+
+Here are the notes from today's meeting:
+
+* Eric Paris: replacing salt with ansible (if we want)
+  * In contrib, there is a provisioning tool written in ansible
+  * The goal in the rewrite was to eliminate as much of the cloud provider stuff as possible
+  * The salt setup does a bunch of setup in scripts and then the environment is set up with salt
+    * This means that things like generating certs are done differently on GCE/AWS/Vagrant
+  * For ansible, everything must be done within ansible
+  * Background on ansible
+    * Does not have clients
+    * The provisioner sshes into the machine and runs scripts on the machine
+    * You define what you want your cluster to look like, run the script, and it sets up everything at once
+    * If you make one change in a config file, ansible re-runs everything (which isn’t always desirable)
+    * Uses a jinja2 template
+  * Create machines with minimal software, then use ansible to get that machine into a runnable state
+    * Sets up all of the add-ons
+    * Eliminates the provisioner shell scripts
+  * Full cluster setup currently takes about 6 minutes
+    * CentOS with some packages
+    * Redeploy to the cluster takes 25 seconds
+  * Questions for Eric
+    * Where does the provider-specific configuration go?
+      * The only network setup that the ansible config does is flannel; you can turn it off
+    * What about init vs. systemd?
+      * Should be able to support it in the code w/o any trouble (not yet implemented)
+  * Discussion
+    * Why not push the setup work into containers or kubernetes config?
+      * To bootstrap a cluster, drop a kubelet and a manifest
+    * Running a kubelet and configuring the network should be the only things required. We can cut a machine image that is preconfigured minus the data package (certs, etc)
+      * The ansible scripts install kubelet & docker if they aren’t already installed
+      * Each OS (RedHat, Debian, Ubuntu) could have a different image. We could view this as part of the build process instead of the install process.
+    * There needs to be a solution for bare metal as well.
+    * In favor of the overall goal -- reducing the special configuration in the salt configuration
+    * Everything except the kubelet should run inside a container (eventually the kubelet should as well)
+      * Running in a container doesn’t cut down on the complexity that we currently have
+      * But it does more clearly define the interface about what the code expects
+    * These tools (Chef, Puppet, Ansible) conflate binary distribution with configuration
+      * Containers more clearly separate these problems
+    * The mesos deployment is not completely automated yet, but the mesos deployment is completely different: kubelets get put on top of an existing mesos cluster
+      * The bash scripts allow the mesos devs to see what each cloud provider is doing and re-use the relevant bits
+      * There was a large reverse-engineering curve, but the bash is at least readable as opposed to the salt
+    * Openstack uses a different deployment as well
+    * We need a well-documented list of steps (e.g. create certs) that are necessary to stand up a cluster
+      * This would allow us to compare across cloud providers
+      * We should reduce the number of steps as much as possible
+      * Ansible has 241 steps to launch a cluster
+* 1.0 Code freeze
+  * How are we getting out of code freeze?
+  * This is a topic for next week, but the preview is that we will move slowly rather than totally opening the firehose
+    * We want to clear the backlog as fast as possible while maintaining stability both on HEAD and on the 1.0 branch
+    * The backlog is almost 300 PRs, but there are also various parallel feature branches that have been developed during the freeze
+  * Cutting a cherry-pick release today (1.0.1) that fixes a few issues
+  * Next week we will discuss the cadence for patch releases
diff --git a/blog/_posts/2015-07-00-Weekly-Kubernetes-Community-Hangout_23.md b/blog/_posts/2015-07-00-Weekly-Kubernetes-Community-Hangout_23.md
new file mode 100644
index 00000000000..98b9b7ec64b
--- /dev/null
+++ b/blog/_posts/2015-07-00-Weekly-Kubernetes-Community-Hangout_23.md
@@ -0,0 +1,136 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Weekly Kubernetes Community Hangout Notes - July 17 2015 "
+date: Friday, July 23, 2015
+pagination:
+ enabled: true
+---
+
+
+
+
+Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
+
+Here are the notes from today's meeting:
+
+* Eric Paris: replacing salt with ansible (if we want)
+  * In contrib, there is a provisioning tool written in ansible
+  * The goal in the rewrite was to eliminate as much of the cloud provider stuff as possible
+  * The salt setup does a bunch of setup in scripts and then the environment is set up with salt
+    * This means that things like generating certs are done differently on GCE/AWS/Vagrant
+  * For ansible, everything must be done within ansible
+  * Background on ansible
+    * Does not have clients
+    * The provisioner sshes into the machine and runs scripts on the machine
+    * You define what you want your cluster to look like, run the script, and it sets up everything at once
+    * If you make one change in a config file, ansible re-runs everything (which isn’t always desirable)
+    * Uses a jinja2 template
+  * Create machines with minimal software, then use ansible to get that machine into a runnable state
+    * Sets up all of the add-ons
+    * Eliminates the provisioner shell scripts
+  * Full cluster setup currently takes about 6 minutes
+    * CentOS with some packages
+    * Redeploy to the cluster takes 25 seconds
+  * Questions for Eric
+    * Where does the provider-specific configuration go?
+      * The only network setup that the ansible config does is flannel; you can turn it off
+    * What about init vs. systemd?
+      * Should be able to support it in the code w/o any trouble (not yet implemented)
+  * Discussion
+    * Why not push the setup work into containers or kubernetes config?
+      * To bootstrap a cluster, drop a kubelet and a manifest
+    * Running a kubelet and configuring the network should be the only things required. We can cut a machine image that is preconfigured minus the data package (certs, etc)
+      * The ansible scripts install kubelet & docker if they aren’t already installed
+      * Each OS (RedHat, Debian, Ubuntu) could have a different image. We could view this as part of the build process instead of the install process.
+    * There needs to be a solution for bare metal as well.
+    * In favor of the overall goal -- reducing the special configuration in the salt configuration
+    * Everything except the kubelet should run inside a container (eventually the kubelet should as well)
+      * Running in a container doesn’t cut down on the complexity that we currently have
+      * But it does more clearly define the interface about what the code expects
+    * These tools (Chef, Puppet, Ansible) conflate binary distribution with configuration
+      * Containers more clearly separate these problems
+    * The mesos deployment is not completely automated yet, but the mesos deployment is completely different: kubelets get put on top of an existing mesos cluster
+      * The bash scripts allow the mesos devs to see what each cloud provider is doing and re-use the relevant bits
+      * There was a large reverse-engineering curve, but the bash is at least readable as opposed to the salt
+    * Openstack uses a different deployment as well
+    * We need a well-documented list of steps (e.g. create certs) that are necessary to stand up a cluster
+      * This would allow us to compare across cloud providers
+      * We should reduce the number of steps as much as possible
+      * Ansible has 241 steps to launch a cluster
+* 1.0 Code freeze
+  * How are we getting out of code freeze?
+  * This is a topic for next week, but the preview is that we will move slowly rather than totally opening the firehose
+    * We want to clear the backlog as fast as possible while maintaining stability both on HEAD and on the 1.0 branch
+    * The backlog is almost 300 PRs, but there are also various parallel feature branches that have been developed during the freeze
+  * Cutting a cherry-pick release today (1.0.1) that fixes a few issues
+  * Next week we will discuss the cadence for patch releases
diff --git a/blog/_posts/2015-08-00-Using-Kubernetes-Namespaces-To-Manage.md b/blog/_posts/2015-08-00-Using-Kubernetes-Namespaces-To-Manage.md
new file mode 100644
index 00000000000..032672926b9
--- /dev/null
+++ b/blog/_posts/2015-08-00-Using-Kubernetes-Namespaces-To-Manage.md
@@ -0,0 +1,92 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Using Kubernetes Namespaces to Manage Environments "
+date: Saturday, August 28, 2015
+pagination:
+ enabled: true
+---
+One of the advantages of Kubernetes is that it lets you manage various environments more easily and effectively than traditional deployment strategies do. For most nontrivial applications, you have test, staging, and production environments. You can spin up a separate cluster of resources, such as VMs, with the same configuration in staging and production, but that can be costly, and managing the differences between the environments can be difficult.
+
+Kubernetes includes a cool feature called [namespaces](https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/namespaces.md), which enables you to manage different environments within the same cluster. For example, you can have different test and staging environments in the same cluster of machines, potentially saving resources. You can also run different types of server, batch, or other jobs in the same cluster without worrying about them affecting each other.
+
+
+
+### The Default Namespace
+
+Specifying the namespace is optional in Kubernetes because by default Kubernetes uses the "default" namespace. If you've just created a cluster, you can check that the default namespace exists using this command:
+```
+$ kubectl get namespaces
+NAME LABELS STATUS
+default Active
+kube-system Active
+```
+
+Here you can see that the default namespace exists and is active. The status of the namespace is used later when turning down and deleting the namespace.
+
+#### Creating a New Namespace
+
+You create a namespace in the same way you would any other resource. Create a my-namespace.yaml file and add these contents:
+
+```
+kind: Namespace
+apiVersion: v1
+metadata:
+ name: my-namespace
+ labels:
+ name: my-namespace
+```
+
+Then you can run this command to create it:
+```
+$ kubectl create -f my-namespace.yaml
+```
+#### Service Names
+
+With namespaces you can have your apps point to static service endpoints that don't change based on the environment. For instance, your MySQL database service could be named `mysql` in both production and staging, even though both environments run on the same infrastructure.
+
+This works because each of the resources in the cluster will by default only "see" the other resources in the same namespace. This means that you can avoid naming collisions by creating pods, services, and replication controllers with the same names provided they are in separate namespaces. Within a namespace, short DNS names of services resolve to the IP of the service within that namespace. So for example, you might have an Elasticsearch service that can be accessed via the DNS name elasticsearch as long as the containers accessing it are located in the same namespace.
+
+You can still access services in other namespaces by looking it up via the full DNS name which takes the form of SERVICE-NAME.NAMESPACE-NAME. So for example, elasticsearch.prod or elasticsearch.canary for the production and canary environments respectively.
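+
+As a minimal illustration (the variable names here are hypothetical), the same pod spec fragment can ship unchanged to every environment, because the short name resolves within the pod's own namespace:
+
+```
+# Identical in mytunes-prod and mytunes-staging: "mysql" resolves to
+# the mysql service in whichever namespace the pod runs in.
+env:
+  - name: MYSQL_HOST
+    value: "mysql"
+  - name: MYSQL_PORT
+    value: "3306"
+```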
+
+#### An Example
+
+Let’s look at an example application. Say you want to deploy your music store service, MyTunes, in Kubernetes. You can run the application’s production and staging environments, as well as some one-off apps, in the same cluster. You can get a better idea of what’s going on by running some commands:
+
+
+
+```
+~$ kubectl get namespaces
+NAME LABELS STATUS
+default Active
+mytunes-prod Active
+mytunes-staging Active
+my-other-app Active
+```
+
+Here you can see a few namespaces running. Next let’s list the services in staging:
+
+```
+~$ kubectl get services --namespace=mytunes-staging
+NAME LABELS SELECTOR IP(S) PORT(S)
+mytunes name=mytunes,version=1 name=mytunes 10.43.250.14 80/TCP
+ 104.185.824.125
+mysql name=mysql name=mysql 10.43.250.63 3306/TCP
+```
+Next check production:
+```
+~$ kubectl get services --namespace=mytunes-prod
+NAME LABELS SELECTOR IP(S) PORT(S)
+mytunes name=mytunes,version=1 name=mytunes 10.43.241.145 80/TCP
+ 104.199.132.213
+mysql name=mysql name=mysql 10.43.245.77 3306/TCP
+```
+Notice that the IP addresses are different depending on which namespace is used even though the names of the services themselves are the same. This capability makes configuring your app extremely easy—since you only have to point your app at the service name—and has the potential to allow you to configure your app exactly the same in your staging or test environments as you do in production.
+
+#### Caveats
+
+While you can run staging and production environments in the same cluster, and save resources and money by doing so, you will need to be careful to set up resource limits so that your staging environment doesn't starve production of CPU, memory, or disk resources. Setting resource limits properly, and testing that they are working, takes a lot of time and effort, so unless you can measurably save money by running production in the same cluster as staging or test, you may not really want to do that.
+
+Whether or not you run staging and production in the same cluster, namespaces are a great way to partition different apps within the same cluster. Namespaces will also serve as a level where you can apply resource limits, so look for more resource-management features at the namespace level in the future.
+
+\- Posted by Ian Lewis, Developer Advocate at Google
diff --git a/blog/_posts/2015-08-00-Weekly-Kubernetes-Community-Hangout.md b/blog/_posts/2015-08-00-Weekly-Kubernetes-Community-Hangout.md
new file mode 100644
index 00000000000..1204f6d4af1
--- /dev/null
+++ b/blog/_posts/2015-08-00-Weekly-Kubernetes-Community-Hangout.md
@@ -0,0 +1,90 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Weekly Kubernetes Community Hangout Notes - July 31 2015 "
+date: Wednesday, August 04, 2015
+pagination:
+ enabled: true
+---
+
+Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
+
+Here are the notes from today's meeting:
+
+
+
+* Private Registry Demo - Muhammed
+
+ * Run docker-registry as an RC/Pod/Service
+
+ * Run a proxy on every node
+
+ * Access as localhost:5000
+
+ * Discussion:
+
+ * Should we back it by GCS or S3 when possible?
+
+ * Run real registry backed by $object_store on each node
+
+ * DNS instead of localhost?
+
+ * disassemble image strings?
+
+ * more like DNS policy?
+* Running Large Clusters - Joe
+
+ * Samsung keen to see large scale O(1000)
+
+ * Starting on AWS
+
+ * RH also interested - test plan needed
+
+ * Plan for next week: discuss working-groups
+
+ * If you are interested in joining the conversation on cluster scalability, send mail to [joe@0xBEDA.com](mailto:joe@0xBEDA.com)
+* Resource API Proposal - Clayton
+
+ * New stuff wants more info on resources
+
+ * Proposal for resources API - ask apiserver for info on pods
+
+ * Send feedback to: #11951
+
+ * Discussion on snapshot vs time-series vs aggregates
+* Containerized kubelet - Clayton
+
+ * Open pull
+
+ * Docker mount propagation - RH carries patches
+
+ * Big issues around whole bootstrap of the system
+
+ * dual: boot-docker/system-docker
+
+ * Kube-in-docker is really nice, but maybe not critical
+
+ * Do the small stuff to make progress
+
+ * Keep pressure on docker
+* Web UI (preilly)
+
+ * Where does web UI stand?
+
+ * OK to split it back out
+
+ * Use it as a container image
+
+ * Build image as part of kube release process
+
+ * Vendor it back in? Maybe, maybe not.
+
+ * Will DNS be split out?
+
+ * Probably more tightly integrated, instead
+
+ * Other potential spin-outs:
+
+ * apiserver
+
+ * clients
diff --git a/blog/_posts/2015-09-00-Kubernetes-Performance-Measurements-And.md b/blog/_posts/2015-09-00-Kubernetes-Performance-Measurements-And.md
new file mode 100644
index 00000000000..aaaefaff4a0
--- /dev/null
+++ b/blog/_posts/2015-09-00-Kubernetes-Performance-Measurements-And.md
@@ -0,0 +1,163 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes Performance Measurements and Roadmap "
+date: Friday, September 10, 2015
+pagination:
+ enabled: true
+---
+No matter how flexible and reliable your container orchestration system is, ultimately, you have some work to be done, and you want it completed quickly. For big problems, a common answer is to just throw more machines at the problem. After all, more compute = faster, right?
+
+
+Interestingly, adding more nodes is a little like the [tyranny of the rocket equation][4]: in some systems, adding more machines can actually make your processing slower. However, unlike the rocket equation, we can do better. Kubernetes v1.0 supports clusters with up to 100 nodes, and we have a goal to 10x the number of nodes we support by the end of 2015. This blog post will cover where we are and how we intend to achieve the next level of performance.
+
+
+##### What do we measure?
+
+The first question we need to answer is: “what does it mean that Kubernetes can manage an N-node cluster?” Users expect that it will handle all operations “reasonably quickly,” but we need a precise definition of that. We decided to define performance and scalability goals based on the following two metrics:
+
+1. *“API-responsiveness”*: 99% of all our API calls return in less than 1 second
+
+2. *“Pod startup time”*: 99% of pods (with pre-pulled images) start within 5 seconds
+
+
+Note that for “pod startup time” we explicitly assume that all images necessary to run a pod are already pre-pulled on the machine where it will be running. In our experiments, there is a high degree of variability (network throughput, size of image, etc) between images, and these variations have little to do with Kubernetes’ overall performance.
+
+
+The decision to choose those metrics was made based on our experience spinning up 2 billion containers a week at Google. We explicitly want to measure the latency of user-facing flows since that’s what customers will actually care about.
+
+
+##### How do we measure?
+
+To monitor performance improvements and detect regressions we set up a continuous testing infrastructure. Every 2-3 hours we create a 100-node cluster from [HEAD][5] and run our scalability tests on it. We use a GCE n1-standard-4 (4 cores, 15GB of RAM) machine as a master and n1-standard-1 (1 core, 3.75GB of RAM) machines for nodes.
+
+
+In scalability tests, we explicitly focus only on the full-cluster case (full N-node cluster is a cluster with 30 * N pods running in it) which is the most demanding scenario from a performance point of view. To reproduce what a customer might actually do, we run through the following steps:
+
+* Populate pods and replication controllers to fill the cluster
+
+* Generate some load (create/delete additional pods and/or replication controllers, scale the existing ones, etc.) and record performance metrics
+
+* Stop all running pods and replication controllers
+
+* Scrape the metrics and check whether they match our expectations
+
+
+It is worth emphasizing that the main parts of the test are done on full clusters (30 pods per node, 100 nodes); starting a pod in an empty cluster, even one with 100 nodes, would be much faster.
+
+
+To measure pod startup latency we are using very simple pods with just a single container running the “gcr.io/google_containers/pause:go” image, which starts and then sleeps forever. The container is guaranteed to be already pre-pulled on nodes (we use it as the so-called pod-infra-container).
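+
+As a rough illustration (this is a sketch, not our actual test harness, and the pod manifest name is made up), timing the startup of one such pod could look like:
+
+```
+# Illustrative only -- not the real scalability test code.
+start=$(date +%s%N)
+kubectl create -f pause-pod.yaml
+until [ "$(kubectl get pod pause -o template --template='{{.status.phase}}')" = "Running" ]; do
+  sleep 0.1
+done
+echo "pod startup took $(( ($(date +%s%N) - start) / 1000000 )) ms"
+```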
+
+
+##### Performance data
+
+The following table contains percentiles (50th, 90th and 99th) of pod startup time in 100-node clusters which are 10%, 25%, 50% and 100% full.
+
+
+| | 10%-full |25%-full | 50%-full | 100%-full |
+| ------------ | ------------ | ------------ | ------------ | ------------ |
+| 50th percentile | 0.90s | 1.08s | 1.33s | 1.94s |
+| 90th percentile | 1.29s | 1.49s | 1.72s | 2.50s |
+| 99th percentile | 1.59s | 1.86s | 2.56s | 4.32s |
+{: .post-table}
+
+As for api-responsiveness, the following graphs present 50th, 90th and 99th percentiles of latencies of API calls grouped by kind of operation and resource type. However, note that this also includes internal system API calls, not just those issued by users (in this case issued by the test itself).
+
+
+
+![get.png][6]![put.png][7]
+
+
+
+![delete.png][8]![post.png][9]
+
+![list.png][10]
+
+
+Some resources only appear on certain graphs, based on what was running during that operation (e.g., no namespace was PUT at that time).
+
+
+As you can see in the results, we are ahead of target for our 100-node cluster: even in a fully-packed cluster, the 99th percentile of pod startup time is 4.32s, about 14% under our 5-second goal. It’s interesting to point out that LISTing pods is significantly slower than any other operation. This makes sense: in a full cluster there are 3,000 pods, and each pod is roughly a few kilobytes of data, meaning megabytes of data need to be processed for each LIST.
+
+
+##### Work done and some future plans
+
+The initial performance work to make 100-node clusters stable enough to run any tests on them involved a lot of small fixes and tuning, including increasing the limit for file descriptors in the apiserver and reusing tcp connections between different requests to etcd.
+
+
+However, building a stable performance test was just step one toward increasing the number of nodes our cluster supports by tenfold. As a result of this work, we have already undertaken a significant effort to remove future bottlenecks, including:
+
+* Rewriting controllers to be watch-based: Previously they were relisting objects of a given type every few seconds, which generated a huge load on the apiserver.
+
+* Using code generators to produce conversions and deep-copy functions: Although the default implementation using Go reflection is very convenient, it proved to be extremely slow, as much as 10x slower than the generated code.
+
+* Adding a cache to apiserver to avoid deserialization of the same data read from etcd multiple times
+
+* Reducing the frequency of status updates: Given the slow-changing nature of statuses, it makes sense to update pod status only on change and node status only every 10 seconds.
+
+* Implementing watch at the apiserver instead of redirecting the requests to etcd: We would prefer to avoid watching for the same data from etcd multiple times, since, in many cases, it was filtered out in apiserver anyway.
+
+
+Looking further out to our 1000-node cluster goal, proposed improvements include:
+
+
+* Moving events out of etcd: They are more like system logs, and are neither part of the system state nor crucial for Kubernetes to work correctly.
+
+* Using better json parsers: The default parser implemented in Go is very slow as it is based on reflection.
+
+* Rewriting the scheduler to make it more efficient and concurrent
+
+* Improving efficiency of communication between apiserver and Kubelets: In particular, we plan to reduce the size of data being sent on every update of node status.
+
+
+This is by no means an exhaustive list. We will be adding new elements (or removing existing ones) based on the observed bottlenecks while running the existing scalability tests and newly-created ones. If there are particular use cases or scenarios that you’d like to see us address, please join in!
+
+- We have weekly meetings for our Kubernetes Scale Special Interest Group on Thursdays 11am PST where we discuss ongoing issues and plans for performance tracking and improvements.
+- If you have specific performance or scalability questions before then, please join our scalability special interest group on Slack: https://kubernetes.slack.com/messages/sig-scale
+- General questions? Feel free to join our Kubernetes community on Slack: https://kubernetes.slack.com/messages/kubernetes-users/
+- Submit a pull request or file an issue! You can do this in our GitHub repository. Everyone is also enthusiastically encouraged to contribute with their own experiments (and their result) or PR contributions improving Kubernetes.
+\- Wojciech Tyczynski, Google Software Engineer
+
+[4]: http://www.nasa.gov/mission_pages/station/expeditions/expedition30/tryanny.html
+[5]: https://github.com/kubernetes/kubernetes
+[6]: https://lh4.googleusercontent.com/NrKLoz2iB-TNdOxISL7OcqquCKL-MijDBCokf-u4ASAqgmo6zT7ZU24mXDvIwUUlRsFSsL3KF17dEAfUT41TSgNPvId5HN5ELQTXJSSBF0dp9EOccx4Y4WZ9fC9v9B_kCA=s1600
+[7]: https://lh4.googleusercontent.com/53AtIdoGQ477Ju0FD4S76xbZs490JnmibhSZh67aq1-MU4Jw4B-7FBgzvFoJXHcAMeSU9r3bzJHpBFAfcSf7FIS3JGZ4TiAiHucyjH3ErrarKrwYNFopvxYSBo0qxP-U0w=s1600
+[8]: https://lh4.googleusercontent.com/-wsLEXPfgtXNlu-pDfM4c0Qvr8lU7-G2w_nSgVeqg04D7RnhgSzg6Z5-mVmIYOzTWF7XaJ0zsDZBBlyZLqj4R1fkwWq-uaKJJI8xLAQ1gYWbh5qKXr5-rzkjm6CT3kBU=s1600
+[9]: https://lh6.googleusercontent.com/It8dH6iM2ZPypZ99KSUo_kJY4DnR2QD8yGJj26TiZ3U4owyf-WXoxrDfBAc1hcSn3i3LuxE3KGlUzQOaPgH6XVjSAU9Z2zMfZCKFAxEGtuCQiKlJPX4vH2JgQf3h1BXMRJQ=s1600
+[10]: https://lh6.googleusercontent.com/6Gy-UKBZUoEwJ9iFytq-k_wrdvh6FsTJexSpn6nNnBwOvxv-Sp6PV7vmArCL22MUkz0tWH7MxhaIc-JE8YpEc0X4nDUMn-cKWF3ANHtgd2aJ5t3osoaezDe_xqjpi748Cbw=s1600
diff --git a/blog/_posts/2015-10-00-Some-Things-You-Didnt-Know-About-Kubectl_28.md b/blog/_posts/2015-10-00-Some-Things-You-Didnt-Know-About-Kubectl_28.md
new file mode 100644
index 00000000000..68eb0780e4c
--- /dev/null
+++ b/blog/_posts/2015-10-00-Some-Things-You-Didnt-Know-About-Kubectl_28.md
@@ -0,0 +1,153 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Some things you didn’t know about kubectl "
+date: Thursday, October 28, 2015
+pagination:
+ enabled: true
+---
+[kubectl](https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/kubectl-overview.md) is the command line tool for interacting with Kubernetes clusters. Many people use it every day to deploy their container workloads into production clusters. But there’s more to kubectl than just `kubectl create -f` or `kubectl rolling-update`. kubectl is a veritable multi-tool of container orchestration and management. Below we describe some of the features of kubectl that you may not have seen.
+
+**Important Note** : Most of these features are part of the upcoming 1.1 release of Kubernetes. They are not present in the current stable 1.0.x release series.
+
+
+##### Run interactive commands
+
+`kubectl run` has been in kubectl since the 1.0 release, but recently we added the ability to run interactive containers in your cluster. That means that an interactive shell in your Kubernetes cluster is as close as:
+
+```
+$> kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
+Waiting for pod default/busybox-tv9rm to be running, status is Pending, pod ready: false
+Waiting for pod default/busybox-tv9rm to be running, status is Running, pod ready: false
+/ # ls
+bin dev etc home proc root sys tmp usr var
+/ # exit
+```
+The above `kubectl` command is equivalent to `docker run -i -t busybox sh`. Sadly, we mistakenly used `-t` for template in kubectl 1.0, so we need to retain backwards compatibility with existing CLI users. But the existing use of `-t` is deprecated and we’ll eventually shorten `--tty` to `-t`.
+
+In this example, `-i` indicates that you want an allocated `stdin` for your container (implying an interactive session), `--restart=Never` indicates that the container shouldn’t be restarted after you exit the terminal, and `--tty` requests that you allocate a TTY for that session.
+
+
+##### View your Pod’s logs
+
+Sometimes you just want to watch what’s going on in your server. For this, `kubectl logs` is the subcommand to use. Adding the `-f` flag lets you live-stream new logs to your terminal, just like `tail -f`.
+
+```
+$> kubectl logs -f redis-izl09
+```
+
+##### Attach to existing containers
+
+In addition to interactive execution of commands, you can now also attach to any running process. Like `kubectl logs`, you’ll get stderr and stdout data, but with attach, you’ll also be able to send stdin from your terminal to the program. Awesome for interactive debugging, or even just sending ctrl-c to a misbehaving application.
+
+```
+$> kubectl attach redis -i
+
+1:C 12 Oct 23:05:11.848 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
+
+ _._
+ _.-``__''-._
+ _.-`` `. `_. ''-._ Redis 3.0.3 (00000000/0) 64 bit
+ .-`` .-```. ```\/ _.,_ ''-._
+ ( ' , .-` | `, ) Running in standalone mode
+ |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
+ | `-._ `._ / _.-' | PID: 1
+ `-._ `-._ `-./ _.-' _.-'
+ |`-._`-._ `-.__.-' _.-'_.-'|
+ | `-._`-._ _.-'_.-' | http://redis.io
+`-._ `-._`-.__.-'_.-' _.-'
+ |`-._`-._ `-.__.-' _.-'_.-'|
+ | `-._`-._ _.-'_.-' |
+ `-._ `-._`-.__.-'_.-' _.-'
+ `-._ `-.__.-' _.-'
+ `-._ _.-'
+ `-.__.-'
+
+1:M 12 Oct 23:05:11.849 # Server started, Redis version 3.0.3
+```
+
+##### Forward ports from Pods to your local machine
+
+Oftentimes you want to be able to communicate temporarily with applications in your cluster without exposing them to the public internet for security reasons. To achieve this, the `port-forward` command allows you to securely forward a port on your local machine through the Kubernetes API server to a Pod running in your cluster. For example:
+
+```
+$> kubectl port-forward redis-izl09 6379
+```
+
+This opens port 6379 on your local machine and forwards traffic on that port to the Pod or Service in your cluster. For example, you can use the `telnet` command to poke at a Redis service in your cluster:
+
+```
+$> telnet localhost 6379
+INCR foo
+:1
+INCR foo
+:2
+```
+
+##### Execute commands inside an existing container
+In addition to being able to attach to existing processes inside a container, the “exec” command allows you to spawn new processes inside existing containers. This can be useful for debugging, or examining your pods to see what’s going on inside without interrupting a running service. `kubectl exec` is different from `kubectl run`, because it runs a command inside of an _existing_ container, rather than spawning a new container for execution.
+
+```
+$> kubectl exec redis-izl09 -- ls /
+bin
+boot
+data
+dev
+entrypoint.sh
+etc
+home
+```
+
+
+##### Add or remove Labels
+
+Sometimes you want to dynamically add or remove labels from a Pod, Service or Replication controller. Maybe you want to add an existing Pod to a Service, or you want to remove a Pod from a Service. No matter what you want, you can easily and dynamically add or remove labels using the `kubectl label` subcommand:
+
+```
+$> kubectl label pods redis-izl09 mylabel=awesome
+pod "redis-izl09" labeled
+```
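+
+Removing a label again is just as easy: append a minus to the label key (a quick sketch):
+
+```
+$> kubectl label pods redis-izl09 mylabel-
+```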
+
+
+##### Add annotations to your objects
+
+Just like labels, you can add or remove annotations from API objects using the `kubectl annotate` subcommand. Unlike labels, annotations are there to help describe your object, but aren’t used to identify pods via label queries ([more details on annotations](https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/annotations.md#annotations)). For example, you might add an annotation pointing to an icon for a GUI to use when displaying your pods.
+
+```
+$> kubectl annotate pods redis-izl09 icon-url=http://goo.gl/XXBTWq
+pod "redis-izl09" annotated
+```
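+
+Mirroring labels, an annotation can be removed by appending a minus to its key (a quick sketch):
+
+```
+$> kubectl annotate pods redis-izl09 icon-url-
+```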
+
+
+##### Output custom format
+
+Sometimes, you want to customize the fields displayed when kubectl summarizes an object from your cluster. To do this, you can use the `custom-columns-file` format. `custom-columns-file` takes in a template file for rendering the output. Again, JSONPath expressions are used in the template to specify fields in the API object. For example, the following template first shows the number of restarts, and then the name of the object:
+
+```
+$> cat cols.tmpl
+RESTARTS NAME
+.status.containerStatuses[0].restartCount .metadata.name
+```
+
+If you pass this template to the `kubectl get pods` command you get a list of pods with the specified fields displayed.
+
+```
+$> kubectl get pods redis-izl09 -o=custom-columns-file --template=cols.tmpl
+RESTARTS   NAME
+0          redis-izl09
+1          redis-abl42
+```
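+
+If you’d rather not maintain a template file, the same JSONPath expressions can be used directly with the `jsonpath` output format. A minimal sketch:
+
+```
+$> kubectl get pods -o jsonpath="{.items[*].metadata.name}"
+```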
+
+##### Easily manage multiple Kubernetes clusters
+
+If you’re running multiple Kubernetes clusters, you know it can be tricky to manage all of the credentials for the different clusters. Using the `kubectl config` subcommands, switching between different clusters is as easy as:
+
+ $> kubectl config use-context <context-name>
+
+Not sure what clusters are available? You can view currently configured clusters with:
+
+ $> kubectl config view
+
+Phew, that outputs a lot of text. To restrict it down to only the things we’re interested in, we can use a JSONPath template:
+
+ $> kubectl config view -o jsonpath="{.contexts[*].name}"
+
+Ahh, that’s better.
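+
+And if you only need the name of the currently active context, your kubectl build may also support:
+
+```
+$> kubectl config current-context
+```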
+
+
+##### Conclusion
+
+So there you have it, nine new and exciting things you can do with your Kubernetes cluster and the kubectl command line. If you’re just getting started with Kubernetes, check out [Google Container Engine](https://cloud.google.com/container-engine/) or other ways to [get started with Kubernetes](http://kubernetes.io/gettingstarted/).
+
+- Brendan Burns, Google Software Engineer
diff --git a/blog/_posts/2015-11-00-Creating-A-Raspberry-Pi-Cluster-Running-Kubernetes-The-Shopping-List-Part-1.md b/blog/_posts/2015-11-00-Creating-A-Raspberry-Pi-Cluster-Running-Kubernetes-The-Shopping-List-Part-1.md
new file mode 100644
index 00000000000..f28a95012bc
--- /dev/null
+++ b/blog/_posts/2015-11-00-Creating-A-Raspberry-Pi-Cluster-Running-Kubernetes-The-Shopping-List-Part-1.md
@@ -0,0 +1,58 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Creating a Raspberry Pi cluster running Kubernetes, the shopping list (Part 1) "
+date: Thursday, November 25, 2015
+pagination:
+ enabled: true
+---
+At Devoxx Belgium and Devoxx Morocco, Ray Tsang and I showed a Raspberry Pi cluster we built at Quintor running HypriotOS, Docker and Kubernetes. For those who did not see the talks, you can check out [an abbreviated version of the demo](https://www.youtube.com/watch?v=AAS5Mq9EktI) or the full talk by Ray on [developing and deploying Java-based microservices](https://www.youtube.com/watch?v=kT1vmK0r184) in Kubernetes. While we received many compliments on the talk, the most common question was how to build such a Pi cluster yourself! We’ll be covering just that, in two parts. This first post will cover the shopping list for the cluster, and the second will show you how to get it up and running . . .
+
+### Wait! Why the heck build a Raspberry Pi cluster running Kubernetes?
+
+We had two big reasons to build the Pi cluster at Quintor. First of all, we wanted to experiment with container technology at scale on real hardware. You can try out container technology using virtual machines, but Kubernetes runs great on bare metal too. To explore what that’d be like, we built a Raspberry Pi cluster just like we would build a cluster of machines in a production datacenter. This allowed us to understand and simulate how Kubernetes would work when we move it to our data centers.
+
+Secondly, we did not want to blow the budget on this exploration. And what is cheaper than a Raspberry Pi? If you want to build a cluster comprising many nodes, each node should have a good cost-to-performance ratio. Our Pi cluster has 20 CPU cores, which is more than many servers have, yet it cost us less than $400. Additionally, the total power consumption is low and the form factor is small, which is great for this kind of demo system.
+
+So, without further ado, let’s get to the hardware.
+
+### The Shopping List:
+
+| Qty | Item | Approx. price |
+| ------------ | ------------ | ------------ |
+| 5 | Raspberry Pi 2 model B | [~$200](https://www.raspberrypi.org/products/raspberry-pi-2-model-b/) |
+| 5 | 16 GB micro SD-card class 10 | ~ $45 |
+| 1 | D-Link Switch GO-SW-8E 8-Port | [~$15](http://www.dlink.com/uk/en/home-solutions/connect/go/go-sw-8e) |
+| 1 | Anker 60W 6-Port PowerPort USB Charger (white) | [~$35](http://www.ianker.com/product/A2123122) |
+| 3 | ModMyPi Multi-Pi Stackable Raspberry Pi Case | [~$60](http://www.modmypi.com/raspberry-pi/cases/multi-pi-stacker/multi-pi-stackable-raspberry-pi-case) |
+| 1 | ModMyPi Multi-Pi Stackable Raspberry Pi Case - Bolt Pack | [~$7](http://www.modmypi.com/raspberry-pi/cases/multi-pi-stacker/multi-pi-stackable-raspberry-pi-case-bolt-pack) |
+| 5 | Micro USB cable (white) 1ft long | ~ $10 |
+| 5 | UTP cat5 cable (white) 1ft long | ~ $10 |
+{: .post-table}
+
+
+For a total of approximately $380 you will have everything you need to build a Raspberry Pi cluster like ours! [1](#1)
+
+
+### Some of our considerations
+
+We used the Raspberry Pi 2 model B boards in our cluster rather than the Pi 1 boards because of the CPU power (quad-core @ 900 MHz versus dual-core @ 700 MHz) and available memory (1 GB versus 512 MB). These specs allowed us to run multiple containers on each Pi to properly experiment with Kubernetes.
+
+We opted for a 16 GB SD card in each Pi to be on the safe side regarding filesystem storage. In hindsight, 8 GB would have been enough.
+
+Note that the GeauxRobot Stackable Case looks like an alternative to the ModMyPi Stackable Case, but it’s smaller, which can make it hard to fit the Anker USB charger and the D-Link network switch. So we stuck with the ModMyPi case.
+
+
+### Putting it together
+
+Building the Raspberry Pi cluster is pretty straightforward. Most of the work is putting the stackable casing together and mounting the Pi boards on the plexiglass panes. We mounted the network switch and USB charger using double-sided foam tape, which feels strong enough for most situations. Finally, we connected the USB and UTP cables. Next, we installed HypriotOS on every Pi. HypriotOS is a Raspbian-based Linux OS for the Raspberry Pi, extended with Docker support. The Hypriot team has an excellent tutorial on [Getting started with Docker on your Raspberry Pi](http://blog.hypriot.com/getting-started-with-docker-on-your-arm-device/). Follow this tutorial to get Linux and Docker running on all Pis.
+
+With that, you’re all set! Next up is running Kubernetes on the Raspberry Pi cluster. We’ll be covering that in the [next post](http://blog.kubernetes.io/2015/12/creating-raspberry-pi-cluster-running.html), so stay tuned!
+
+
+Arjen Wassink, Java Architect and Team Lead, Quintor
+
+
+
+**[1]** You can save ~$90 by building a stack of four Pis instead of five. This also means you can use a 5-Port Anker USB charger instead of the 6-Port one.
diff --git a/blog/_posts/2015-11-00-Kubernetes-1-1-Performance-Upgrades-Improved-Tooling-And-A-Growing-Community.md b/blog/_posts/2015-11-00-Kubernetes-1-1-Performance-Upgrades-Improved-Tooling-And-A-Growing-Community.md
new file mode 100644
index 00000000000..8cad28c97ad
--- /dev/null
+++ b/blog/_posts/2015-11-00-Kubernetes-1-1-Performance-Upgrades-Improved-Tooling-And-A-Growing-Community.md
@@ -0,0 +1,55 @@
+---
+layout: blog
+permalink: /blog/:year/:month/:title
+title: " Kubernetes 1.1 Performance upgrades, improved tooling and a growing community "
+date: Tuesday, November 09, 2015
+pagination:
+ enabled: true
+---
+Since the Kubernetes 1.0 release in July, we’ve seen tremendous adoption by companies building distributed systems to manage their container clusters. We’ve also been humbled by the rapid growth of the community that helps make Kubernetes better every day. We have seen commercial offerings such as Tectonic by CoreOS and RedHat Atomic Host emerge to deliver deployment and support of Kubernetes. And a growing ecosystem has added Kubernetes support, including tool vendors such as Sysdig and Project Calico.
+
+With the help of hundreds of contributors, we’re proud to announce the availability of Kubernetes 1.1, which offers major performance upgrades, improved tooling, and new features that make applications even easier to build and deploy.
+
+Some of the work we’d like to highlight includes:
+
+- **Substantial performance improvements**: We have architected Kubernetes from day one to handle Google-scale workloads, and our customers have put it through their paces. In Kubernetes 1.1, we have made further investments to ensure that you can run in extremely high-scale environments; later this week, we will be sharing examples of running thousand-node clusters, and running over a million QPS against a single cluster.
+
+- **Significant improvement in network throughput**: Running Google-scale workloads also requires Google-scale networking. In Kubernetes 1.1, we have included an option to use native iptables, offering an 80% reduction in tail latency, an almost complete elimination of CPU overhead, and improvements in reliability and system architecture, ensuring Kubernetes can handle high-scale throughput well into the future.
+
+- **Horizontal pod autoscaling (Beta)**: Many workloads can go through spiky periods of utilization, resulting in uneven experiences for your users. Kubernetes now has support for horizontal pod autoscaling, meaning your pods can scale up and down based on CPU usage. Read more about [Horizontal pod autoscaling](http://kubernetes.io/v1.1/docs/user-guide/horizontal-pod-autoscaler.html).
+
+- **HTTP load balancer (Beta)**: Kubernetes now has the built-in ability to route HTTP traffic based on packet introspection. This means you can have ‘http://foo.com/bar’ go to one service, and ‘http://foo.com/meep’ go to a completely independent service. Read more about the [Ingress object](http://kubernetes.io/v1.1/docs/user-guide/ingress.html).
+
+- **Job objects (Beta)**: We’ve also had frequent requests for integrated batch jobs, such as processing a batch of images to create thumbnails, or a particularly large data file that has been broken down into many chunks. [Job objects](https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/jobs.md#writing-a-job-spec) introduce a new API object that runs a workload, restarts it if it fails, and keeps trying until it’s successfully completed. Read more about the [Job object](http://kubernetes.io/v1.1/docs/user-guide/jobs.html).
+
+- **New features to shorten the test cycle for developers**: We continue to work on making developing applications for Kubernetes quick and easy. Two new features that speed up developers’ workflows are the ability to run containers interactively, and improved schema validation to let you know if there are any issues with your configuration files before you deploy them (see the sketch just after this list).
+
+- **Rolling update improvements**: Core to the DevOps movement is being able to release new updates without any effect on a running service. Rolling updates now ensure that updated pods are healthy before continuing the update.
+
+- And many more. For a complete list of updates, see the [1.1 release notes](https://github.com/kubernetes/kubernetes/releases) on GitHub.
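+
+For instance, the interactive-container feature can be exercised straight from the command line. A minimal sketch, mirroring the interactive busybox example from the kubectl post on this blog:
+
+```
+$> kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
+```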
+
+
+
+Today, we’re also proud to mark the inaugural Kubernetes conference, [KubeCon](https://kubecon.io/), where some 400 community members along with dozens of vendors are in attendance supporting the Kubernetes project.
+
+We’d love to highlight just a few of the many partners making Kubernetes better:
+
+> “We are betting our major product, Tectonic – which enables any company to deploy, manage and secure its containers anywhere – on Kubernetes because we believe it is the future of the data center. The release of Kubernetes 1.1 is another major milestone that will create more widespread adoption of distributed systems and containers, and puts us on a path that will inevitably lead to a whole new generation of products and services.” – Alex Polvi, CEO, CoreOS.
+
+> “Univa’s customers are looking for scalable, enterprise-caliber solutions to simplify managing container and non-container workloads in the enterprise. We selected Kubernetes as a foundational element of our new Navops suite which will help IT and DevOps rapidly integrate containerized workloads into their production systems and extend these workloads into cloud services.” – Gary Tyreman, CEO, Univa.
+
+> “The tremendous customer demand we’re seeing to run containers at scale with Kubernetes is a critical element driving growth in our professional services business at Redapt. As a trusted advisor, it’s great to have a tool like Kubernetes in our tool belt to help our customers achieve their objectives.” – Paul Welch, SR VP Cloud Solutions, Redapt
+
+
+As we mentioned above, we would love your help:
+
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Connect with the community on [Slack](http://slack.kubernetes.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for the latest updates
+- Post questions (or answer questions) on Stack Overflow
+- Get started running, deploying, and using Kubernetes with our [guides](http://kubernetes.io/gettingstarted/)
+
+But, most of all, just let us know how you are transforming your business using Kubernetes, and how we can help you do it even faster. Thank you for your support!
+
+ - David Aronchick, Senior Product Manager for Kubernetes and Google Container Engine
diff --git a/blog/_posts/2015-11-00-Kubernetes-As-Foundation-For-Cloud-Native-Paas.md b/blog/_posts/2015-11-00-Kubernetes-As-Foundation-For-Cloud-Native-Paas.md
new file mode 100644
index 00000000000..0f6d65ea680
--- /dev/null
+++ b/blog/_posts/2015-11-00-Kubernetes-As-Foundation-For-Cloud-Native-Paas.md
@@ -0,0 +1,83 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes as Foundation for Cloud Native PaaS "
+date: Wednesday, November 03, 2015
+pagination:
+ enabled: true
+---
+With Kubernetes continuing to gain momentum as a critical tool for building and scaling container based applications, we’ve been thrilled to see a growing number of platform as a service (PaaS) offerings adopt it as a foundation. PaaS developers have been drawn to Kubernetes by its rapid rate of maturation, the soundness of its core architectural concepts, and the strength of its contributor community. The [Kubernetes ecosystem](http://blog.kubernetes.io/2015/07/the-growing-kubernetes-ecosystem.html) continues to grow, and these PaaS projects are great additions to it.
+
+[](http://1.bp.blogspot.com/-xX93tnoIlGo/Vjj2fSc_CDI/AAAAAAAAAi0/lvTkT9jyFog/s1600/k8%2Bipaas%2B1.png)
+
+> “[Deis](http://deis.io/) is the leading Docker PaaS with over a million downloads, actively used by companies like Mozilla, The RealReal, ShopKeep and Coinbase. Deis provides software teams with a turn-key platform for running containers in production, featuring the ability to build and store Docker images, production-grade load balancing, a streamlined developer interface and an ops-ready suite of logging and monitoring infrastructure backed by world-class 24x7x365 support. After a community-led evaluation of alternative orchestrators, it was clear that Kubernetes represents a decade of experience running containers at scale inside Google. The Deis project is proud to be rebasing onto Kubernetes and is thrilled to join its vibrant community." - Gabriel Monroy, CTO of [Engine Yard](https://www.engineyard.com/), Inc.
+
+
+
+
+
+[](http://1.bp.blogspot.com/-1XZFGRHGb34/Vjj2wUtA6pI/AAAAAAAAAi8/SD-qRhVIiIs/s1600/k8%2Bipaas%2B2.png)
+
+[OpenShift](http://www.openshift.org/) by Red Hat helps organizations accelerate application delivery by enabling development and IT operations teams to be more agile, responsive and efficient. OpenShift Enterprise 3 is the first fully supported, enterprise-ready, web-scale container application platform that natively integrates the Docker container runtime and packaging format, Kubernetes container orchestration and management engine, on a foundation of Red Hat Enterprise Linux 7, all fully supported by Red Hat from the operating system to application runtimes.
+
+
+> “Kubernetes provides OpenShift users with a powerful model for application orchestration, leveraging concepts like pods and services, to deploy (micro)services that inherently span multiple containers and application topologies that will require wiring together multiple services. Pods can be optionally mapped to storage, which means you can run both stateful and stateless services in OpenShift. Kubernetes also provides a powerful declarative management model to manage the lifecycle of application containers. Customers can then use Kubernetes’ integrated scheduler to deploy and manage containers across multiple hosts. As a leading contributor to both the Docker and Kubernetes open source projects, Red Hat is not just adopting these technologies but actively building them upstream in the community.” - Joe Fernandes, Director of Product Management for Red Hat OpenShift.
+
+
+
+
+[](http://2.bp.blogspot.com/-t3L1CANyhUs/Vjj28Zpf9WI/AAAAAAAAAjE/Ef-PLLmHGvU/s1600/k8%2Bipaas%2B3.png)
+
+Huawei, a leading global ICT technology solution provider, will offer container as a service (CaaS) built on Kubernetes in the public cloud for customers with Docker based applications. Huawei CaaS services will manage multiple clusters across data centers, and deploy, monitor and scale containers with high availability and high resource utilization for their customers. For example, one of Huawei’s current software products for their telecom customers utilizes tens of thousands of modules and hundreds of instances in virtual machines. By moving to a container based PaaS platform powered by Kubernetes, Huawei is migrating this product into a micro-service based, cloud native architecture. By decoupling the modules, they’re creating a high performance, scalable solution that runs hundreds, even thousands of containers in the system. Decoupling existing heavy modules could have been a painful exercise. However, using several key concepts introduced by Kubernetes, such as pods, services, labels, and proxies, Huawei has been able to re-architect their software with great ease.
+
+Huawei has made Kubernetes the core runtime engine for container based applications/services, and they’ve been building other PaaS components or capabilities around Kubernetes, such as user access management, composite API, Portal and multiple cluster management. Additionally, as part of the migration to the new platform, they’re enhancing their PaaS solution in the areas of advanced scheduling algorithm, multi tenant support and enhanced container network communication to support customer needs.
+
+
+> “Huawei chose Kubernetes as the foundation for our offering because we like the abstract concepts of services, pod and label for modeling and distributed applications. We developed an application model based on these concepts to model existing complex applications which works well for moving legacy applications into the cloud. In addition, Huawei intends for our PaaS platform to support many scenarios, and Kubernetes’ flexible architecture with its plug-in capability is key to our platform architecture.”- Ying Xiong, Chief Architect of PaaS at Huawei.
+
+
+
+
+
+[](http://2.bp.blogspot.com/-Ys0Zn4IQzn0/Vjj3JIE0BVI/AAAAAAAAAjM/ktwltzVa1GE/s1600/k8%2Bipaas%2B4.png)
+
+[Gondor](https://gondor.io/) is a PaaS with a focus on application hosting throughout the lifecycle, from development to testing to staging to production. It supports Python, Go, and Node.js applications as well as technologies such as Postgres, Redis and Elasticsearch. The Gondor team recently re-architected Gondor to incorporate Kubernetes, and discussed this in a [blog post](https://gondor.io/blog/2015/07/21/rebuilding-gondor-kubernetes/).
+
+
+> “There are two main reasons for our move to Kubernetes: One, by taking care of the lower layers in a truly scalable fashion, Kubernetes lets us focus on providing a great product at the application layer. Two, the portability of Kubernetes allows us to expand our PaaS offering to on-premises, private cloud and a multitude of alternative infrastructure providers.” - Brian Rosner, Chief Architect at Eldarion (the driving force behind Gondor)
+
+- Martin Buhr, Google Business Product Manager
diff --git a/blog/_posts/2015-11-00-Monitoring-Kubernetes-With-Sysdig.md b/blog/_posts/2015-11-00-Monitoring-Kubernetes-With-Sysdig.md
new file mode 100644
index 00000000000..8e407e4f5f8
--- /dev/null
+++ b/blog/_posts/2015-11-00-Monitoring-Kubernetes-With-Sysdig.md
@@ -0,0 +1,236 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Monitoring Kubernetes with Sysdig "
+date: Friday, November 19, 2015
+pagination:
+ enabled: true
+---
+_Today we’re sharing a guest post by Chris Crane from Sysdig about their monitoring integration into Kubernetes._
+
+Kubernetes offers a full environment to write scalable and service-based applications. It takes care of things like container grouping, discovery, load balancing and healing so you don’t have to worry about them. The design is elegant, scalable and the APIs are a pleasure to use.
+
+And like any new infrastructure platform, if you want to run Kubernetes in production, you’re going to want to be able to monitor and troubleshoot it. We’re big fans of Kubernetes here at Sysdig, and, well: we’re here to help.
+
+Sysdig offers native visibility into Kubernetes across the full Sysdig product line. That includes [sysdig](http://www.sysdig.org/), our open source, CLI system exploration tool, and [Sysdig Cloud](https://sysdig.com/), the first and only monitoring platform designed from the ground up to support containers and microservices.
+
+At a high level, Sysdig products are aware of the entire Kubernetes cluster hierarchy, including **namespaces, services, replication controllers** and **labels**. So all of the rich system and application data gathered is now available in the context of your Kubernetes infrastructure. What does this mean for you? In a nutshell, we believe Sysdig can be your go-to tool for making Kubernetes environments significantly easier to monitor and troubleshoot!
+
+In this post I will quickly preview the Kubernetes visibility in both open source sysdig and Sysdig Cloud, and show off a couple interesting use cases. Let’s start with the open source solution.
+
+
+### Exploring a Kubernetes Cluster with csysdig
+
+The easiest way to take advantage of sysdig’s Kubernetes support is by launching csysdig, the sysdig ncurses UI:
+
+`> csysdig -k http://127.0.0.1:8080`
+
+_Note: specify the address of your Kubernetes API server with the `-k` flag, and csysdig will poll all the relevant information, leveraging both the standard and the watch API._
+
+Now that csysdig is running, hit F2 to bring up the views panel, and you'll notice the presence of a bunch of new views. The **k8s Namespaces** view can be used to see the list of namespaces and observe the amount of CPU, memory, network and disk resources each of them is using on this machine:
+
+
+[](http://2.bp.blogspot.com/-9kXfpo76r0k/Vkz8AkpctEI/AAAAAAAAAss/yvf9oc759Wg/s1600/sisdig%2B6.png)
+
+Similarly, you can select **k8s Services** to see the same information broken up by service:
+
+
+[](http://2.bp.blogspot.com/-Ya1W3Z_ETcs/Vkz8AN3XtfI/AAAAAAAAAs8/HNv_TvHpfHU/s1600/sisdig%2B2.png)
+
+or **k8s Controllers** to see the replication controllers:
+
+
+[](http://3.bp.blogspot.com/-gGkgXRC5P6g/Vkz8A1RVyAI/AAAAAAAAAtQ/SFlHQeNrDjQ/s1600/sysdig%2B1.png)
+
+or **k8s Pods** to see the list of pods running on this machine and the resources they use:
+
+
+[](http://3.bp.blogspot.com/-PrDfWzi9F3c/Vkz8H6rPlII/AAAAAAAAAtc/f46tE6EKvoo/s1600/sisdig%2B7.png)
+
+
+
+### Drill Down-Based Navigation
+A cool feature in csysdig is the ability to drill down: just select an element, hit enter and – boom – now you're looking inside it. Drill down is also aware of the Kubernetes hierarchy, which means I can start from a service, get the list of its pods, see which containers run inside one of the pods, and go inside one of the containers to explore files, network connections, processes or even threads. Check out the video below.
+
+
+[](http://1.bp.blogspot.com/-lQ-P2gLywlY/Vkz9MOoTgGI/AAAAAAAAAtk/UB6pW7sUbQA/s1600/image09.gif)
+
+
+### Actions!
+One more thing about csysdig. As [recently announced](https://sysdig.com/csysdigs-hotkeys-turning-csysdig-into-a-control-panel-for-processes-connections-and-containers/), csysdig also offers “control panel” functionality, making it possible to use hotkeys to execute command lines based on the element currently selected. So we made sure to enrich the Kubernetes views with a bunch of useful hotkeys. For example, you can delete a namespace or a service by pressing "x," or you can describe them by pressing "d."
+
+My favorite hotkeys, however, are "f," to follow the logs that a pod is generating, and "b," which leverages `kubectl exec` to give you a shell inside a pod. Being brought into a bash prompt for the pod you’re observing is really useful and, frankly, a bit magic. :-)
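+
+For reference, the manual equivalent outside csysdig would be something along these lines (the pod name is hypothetical, and this assumes the container image ships bash):
+
+```
+$ kubectl exec -i -t my-pod -- /bin/bash
+```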
+
+So that’s a quick preview of Kubernetes in sysdig. Note though, that all of this functionality is only for a single machine. What happens if you want to monitor a distributed Kubernetes cluster? Enter Sysdig Cloud.
+
+
+### Monitoring Kubernetes with Sysdig Cloud
+Let’s start with a quick review of Kubernetes’ architecture. From the physical/infrastructure point of view, a Kubernetes cluster is made up of a set of **minion** machines overseen by a **master** machine. The master’s tasks include orchestrating containers across minions, keeping track of state and exposing cluster control through a REST API and a UI.
+
+On the other hand, from the logical/application point of view, Kubernetes clusters are arranged in the hierarchical fashion shown in this picture:
+
+[](http://1.bp.blogspot.com/-p_x0bLRdFJo/Vkz8IPR5q4I/AAAAAAAAAtg/D9UU2MfPmcI/s1600/sisdig%2B4.png)
+
+
+
+
+* All containers run inside **pods**. A pod can host a single container, or multiple cooperating containers; in the latter case, the containers in the pod are guaranteed to be co-located on the same machine and can share resources.
+* Pods typically sit behind **services**, which take care of balancing the traffic, and also expose the set of pods as a single discoverable IP address/port.
+* Services are scaled horizontally by **replication controllers** (“RCs”), which create/destroy pods for each service as needed.
+* **Namespaces** are virtual clusters that can include one or more services.
+
+So just to be clear, multiple services and even multiple namespaces can be scattered across the same physical infrastructure.
+
+
+
+After talking to hundreds of Kubernetes users, it seems that the typical cluster administrator is often interested in looking at things from the physical point of view, while service/application developers tend to be more interested in seeing things from the logical point of view.
+
+
+
+With both these use cases in mind, Sysdig Cloud’s support for Kubernetes works like this:
+
+* By automatically connecting to a Kubernetes cluster’s API server and querying the API (both the regular and the watch API), Sysdig Cloud is able to infer both the physical and the logical structure of your microservice application.
+* In addition, we transparently extract important metadata such as labels.
+* This information is combined with our patent-pending ContainerVision technology, which makes it possible to inspect applications running inside containers without requiring any instrumentation of the container or application.
+
+Based on this, Sysdig Cloud can provide rich visibility and context from both an **infrastructure-centric** and an **application-centric** point of view. Best of both worlds! Let’s check out what this actually looks like.
+
+
+
+One of the core features of Sysdig Cloud is groups, which allow you to define the hierarchy of metadata for your applications and infrastructure. By applying the proper groups, you can explore your containers based on their physical hierarchy (for example, physical cluster \> minion machine \> pod \> container) or based on their logical microservice hierarchy (for example, namespace \> replication controller \> pod \> container – as you can see in this example).
+
+
+
+If you’re interested in the utilization of your underlying physical resource – e.g., identifying noisy neighbors – then the physical hierarchy is great. But if you’re looking to explore the performance of your applications and microservices, then the logical hierarchy is often the best place to start.
+
+[](http://4.bp.blogspot.com/-80u3oSEi_Fw/Vkz8AZgE6eI/AAAAAAAAAtE/3iRDMJKBNmc/s1600/sisdig%2B5.png)
+
+For example: here you can see the overall performance of our WordPress service:
+
+[](http://4.bp.blogspot.com/-QAsedrM2UxI/Vkz8Aas-26I/AAAAAAAAAtM/9B7Z33vUQrg/s1600/sisdig%2B3.png)
+
+Keep in mind that the pods implementing this service are scattered across multiple machines, but we can still see request counts, response times and URL statistics aggregated together for this service. And don’t forget: this doesn’t require any configuration or instrumentation of WordPress, Apache, or the underlying containers!
+
+
+
+And from this view, I can now easily create alerts for these service-level metrics, and I can dig down into any individual container for deep inspection - down to the process level – whenever I want, including back in time!
+
+
+
+### Visualizing Your Kubernetes Services
+
+We’ve also included Kubernetes awareness in Sysdig Cloud’s famous topology view, at both the physical and logical level.
+
+[](http://2.bp.blogspot.com/-2is-UJatmPk/Vk0AtdfvYvI/AAAAAAAAAt0/9SEsl2LCpYI/s1600/image02.gif)
+
+[](http://2.bp.blogspot.com/-hGQtaIV9XTA/Vk0RnwtlcGI/AAAAAAAAAuM/7ndiyAWpSvU/s1600/image08.gif)
+
+The two pictures above show the exact same infrastructure and services. The first one depicts the physical hierarchy, with a master node and three minion nodes; the second one groups containers into namespaces, services and pods, while abstracting the physical location of the containers.
+
+
+
+Hopefully it’s self-evident how much more natural and intuitive the second (services-oriented) view is. The structure of the application and the various dependencies are immediately clear. The interactions between the various microservices become obvious, despite the fact that these microservices are intermingled across our machine cluster!
+
+
+
+### Conclusion
+
+I’m pretty confident that what we’re delivering here represents a huge leap in visibility into Kubernetes environments and it won’t disappoint you. I also hope it can be a useful tool enabling you to use Kubernetes in production with a little more peace of mind. Thanks, and happy digging!
+
+
+
+ Chris Crane, VP Product, Sysdig
+
+
+
+_You can find open source sysdig on [GitHub](https://github.com/draios/sysdig) and at [sysdig.org](http://sysdig.org/), and you can sign up for a free trial of Sysdig Cloud at [sysdig.com](http://sysdig.com/)._
+
+
+
+_To see a live demo and meet some of the folks behind the project join us this Thursday for a [Kubernetes and Sysdig Meetup in San Francisco](http://www.meetup.com/Bay-Area-Kubernetes-Meetup/events/226574438/)._
diff --git a/blog/_posts/2015-11-00-One-Million-Requests-Per-Second-Dependable-And-Dynamic-Distributed-Systems-At-Scale.md b/blog/_posts/2015-11-00-One-Million-Requests-Per-Second-Dependable-And-Dynamic-Distributed-Systems-At-Scale.md
new file mode 100644
index 00000000000..d699960282b
--- /dev/null
+++ b/blog/_posts/2015-11-00-One-Million-Requests-Per-Second-Dependable-And-Dynamic-Distributed-Systems-At-Scale.md
@@ -0,0 +1,38 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " One million requests per second: Dependable and dynamic distributed systems at scale "
+date: Thursday, November 11, 2015
+pagination:
+ enabled: true
+---
+
+Recently, I’ve gotten into the habit of telling people that building a reliable service isn’t that hard. If you give me two Compute Engine virtual machines, a cloud load balancer, supervisord and nginx, I can create a web service for you that will serve a static web page, effectively forever.
+
+The real challenge is building agile AND reliable services. In the new world of software development it's trivial to spin up enormous numbers of machines and push software to them. Developing a successful product must _also_ include the ability to respond to changes in a predictable way, to handle upgrades elegantly and to minimize downtime for users. Missing any one of these elements results in an _unsuccessful_ product that's flaky and unreliable. I remember a time, not that long ago, when it was common for websites to be unavailable for an hour around midnight each day as a safety window for software upgrades. My bank still does this. It’s really not cool.
+
+Fortunately, for developers, our infrastructure is evolving along with the requirements that we’re placing on it. Kubernetes has been designed from the ground up to make it easy to design, develop and deploy dependable, dynamic services that meet the demanding requirements of the cloud native world.
+
+To demonstrate exactly what we mean by this, I've developed a simple demo of a Container Engine cluster serving 1 million HTTP requests per second. In all honesty, serving 1 million requests per second isn’t really that exciting. In fact, it’s really so very [2013](http://googlecloudplatform.blogspot.com/2013/11/compute-engine-load-balancing-hits-1-million-requests-per-second.html).
+
+[](http://4.bp.blogspot.com/-eACCKAzuQFQ/VkO1rwW1DRI/AAAAAAAAAko/zKu-19QCCBU/s1600/image01.gif)
+
+
+What _is_ exciting is that while successfully handling 1 million HTTP requests per second with uninterrupted availability, we have Kubernetes perform a zero-downtime rolling upgrade of the service to a new version of the software _while we're **still** serving 1 million requests per second_.
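+
+An upgrade like this is typically driven by a single `kubectl rolling-update` invocation. A minimal sketch, with the replication controller and image names hypothetical:
+
+```
+$> kubectl rolling-update my-service-rc --image=my-image:v2
+```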
+
+
+[](http://2.bp.blogspot.com/-_96_QwNRHLo/VkO1oDAyLLI/AAAAAAAAAkk/B_y5Uh5ngPU/s1600/image00.gif)
+
+
+This is only possible due to a large number of performance tweaks and enhancements that have gone into the [Kubernetes 1.1 release](http://blog.kubernetes.io/2015/11/Kubernetes-1-1-Performance-upgrades-improved-tooling-and-a-growing-community.html). I’m incredibly proud of all of the features that our community has built into this release. Indeed in addition to making it possible to serve 1 million requests per second, we’ve also added an auto-scaler, so that you won’t even have to wake up in the middle of the night to scale your service in response to load or memory pressures.
+
+If you want to try this out on your own cluster (or use the load test framework to test your own service) the code for the [demo is available on github](https://github.com/kubernetes/contrib/pull/226). And the [full video](https://www.youtube.com/watch?v=7TOWLerX0Ps) is available.
+
+I hope I’ve shown you how Kubernetes can enable developers of distributed systems to achieve both reliability and agility at scale, and as always, if you’re interested in learning more, head over to [kubernetes.io](http://kubernetes.io/) or [github](https://github.com/kubernetes/kubernetes) and connect with the community on our [Slack](http://slack.kubernetes.io/) channel.
+
+
+ "https://www.youtube.com/embed/7TOWLerX0Ps"
+
+
+
+- Brendan Burns, Senior Staff Software Engineer, Google, Inc.
diff --git a/blog/_posts/2015-12-00-Creating-Raspberry-Pi-Cluster-Running.md b/blog/_posts/2015-12-00-Creating-Raspberry-Pi-Cluster-Running.md
new file mode 100644
index 00000000000..1d8ef2b3054
--- /dev/null
+++ b/blog/_posts/2015-12-00-Creating-Raspberry-Pi-Cluster-Running.md
@@ -0,0 +1,201 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Creating a Raspberry Pi cluster running Kubernetes, the installation (Part 2) "
+date: Wednesday, December 22, 2015
+pagination:
+ enabled: true
+---
+At Devoxx Belgium and Devoxx Morocco, [Ray Tsang](https://twitter.com/saturnism) and I ([Arjen Wassink](https://twitter.com/ArjenWassink)) showed a Raspberry Pi cluster we built at Quintor running HypriotOS, Docker and Kubernetes. While we received many compliments on the talk, the most common question was how to build such a Pi cluster yourself! We’ll be covering just that, in two parts. The [first part covered the shopping list for the cluster](http://blog.kubernetes.io/2015/11/creating-a-Raspberry-Pi-cluster-running-Kubernetes-the-shopping-list-Part-1.html), and this second one will show you how to get Kubernetes up and running . . .
+
+
+Now that you’ve got your Raspberry Pi cluster all set up, it is time to run some software on it. As mentioned in the previous post, I based this tutorial on the Hypriot Linux distribution for the ARM processor. The main reason is its bundled Docker support. I used [this version of Hypriot](http://downloads.hypriot.com/hypriot-rpi-20151004-132414.img.zip) for this tutorial, so if you run into trouble with other versions of Hypriot, please consider using the version I used.
+
+The first step is to make sure every Pi has Hypriot running; if not, please check their [getting started guide](http://blog.hypriot.com/getting-started-with-docker-on-your-arm-device/). Also hook up the cluster switch to a network so that Internet is available and every Pi gets an IP address assigned via DHCP. Because we will be running multiple Pis, it is practical to give each Pi a unique hostname. I renamed my Pis to rpi-master, rpi-node-1, rpi-node-2, etc. for convenience. Note that on Hypriot the hostname is set by editing the /boot/occidentalis.txt file, not /etc/hostname. You could also set the hostname using the Hypriot flash tool.
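+
+As a minimal sketch, the hostname entry in /boot/occidentalis.txt looks roughly like this (the value is whatever name you choose):
+
+```
+# /boot/occidentalis.txt
+hostname=rpi-master
+```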
+
+
+The most important thing about running software on a Pi is the availability of an ARM distribution. Thanks to [Brendan Burns](https://twitter.com/brendandburns), there are Kubernetes components for ARM available in the [Google Cloud Registry](https://cloud.google.com/container-registry/docs/). That’s great. The second hurdle is how to install Kubernetes. There are two ways: directly on the system, or in a Docker container. Although the container support has an experimental status, I chose to go for it because it makes installing Kubernetes easier. Kubernetes requires several processes (etcd, flannel, kubelet, etc.) to run on a node, which should be started in a specific order. To ease that, systemd services are made available to start the necessary processes in the right way. The systemd services also make sure that Kubernetes is spun up when a node is (re)booted. To make the installation really easy, I created a simple install script for the master node and the worker nodes. Everything is available on [GitHub](https://github.com/awassink/k8s-on-rpi). So let’s get started now!
+
+### Installing the Kubernetes master node
+
+First we will install Kubernetes on the master node and add the worker nodes to the cluster later. It basically comes down to getting the git repository content and executing the installation script.
+
+```
+$ curl -L -o k8s-on-rpi.zip https://github.com/awassink/k8s-on-rpi/archive/master.zip
+
+$ apt-get update
+
+$ apt-get install unzip
+
+$ unzip k8s-on-rpi.zip
+
+$ k8s-on-rpi-master/install-k8s-master.sh
+```
+
+The install script will install five services:
+
+* docker-bootstrap.service - is a separate Docker daemon that runs etcd and flannel, because flannel needs to be running before the standard Docker daemon (docker.service) for network configuration.
+* k8s-etcd.service - is the etcd service for storing flannel and kubelet data.
+* k8s-flannel.service - is the flannel process providing an overlay network over all nodes in the cluster.
+* docker.service - is the standard Docker daemon, but with flannel as a network bridge. It will run all Docker containers.
+* k8s-master.service - is the kubernetes master service providing the cluster functionality.
+
+The basic details of this installation procedure are also documented in the [Getting Started Guide](https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/getting-started-guides/docker-multinode/master.md) of Kubernetes. Please check it to get more insight into how a multi-node Kubernetes cluster is set up.
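+
+Since everything is managed through systemd, you can check on any of the services listed above the usual way, for example (a quick sketch):
+
+```
+$ systemctl status k8s-master.service
+```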
+
+
+Let’s check if everything is working correctly. Two Docker daemon processes must be running.
+
+```
+$ ps -ef|grep docker
+root 302 1 0 04:37 ? 00:00:14 /usr/bin/docker daemon -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --storage-driver=overlay --storage-opt dm.basesize=10G --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap
+
+root 722 1 11 04:38 ? 00:16:11 /usr/bin/docker -d -bip=10.0.97.1/24 -mtu=1472 -H fd:// --storage-driver=overlay -D
+```
+{: .scale-yaml}
+
+
+The etcd and flannel containers must be up.
+```
+$ docker -H unix:///var/run/docker-bootstrap.sock ps
+
+CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+
+4855cc1450ff andrewpsuedonym/flanneld "flanneld --etcd-endp" 2 hours ago Up 2 hours k8s-flannel
+
+ef410b986cb3 andrewpsuedonym/etcd:2.1.1 "/bin/etcd --addr=127" 2 hours ago Up 2 hours k8s-etcd
+```
+{: .scale-yaml}
+
+The hyperkube kubelet, apiserver, scheduler, controller and proxy must be up.
+
+```
+$ docker ps
+
+CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+
+a17784253dd2 gcr.io/google_containers/hyperkube-arm:v1.1.2 "/hyperkube controlle" 2 hours ago Up 2 hours k8s_controller-manager.7042038a_k8s-master-127.0.0.1_default_43160049df5e3b1c5ec7bcf23d4b97d0_2174a7c3
+
+a0fb6a169094 gcr.io/google_containers/hyperkube-arm:v1.1.2 "/hyperkube scheduler" 2 hours ago Up 2 hours k8s_scheduler.d905fc61_k8s-master-127.0.0.1_default_43160049df5e3b1c5ec7bcf23d4b97d0_511945f8
+
+d93a94a66d33 gcr.io/google_containers/hyperkube-arm:v1.1.2 "/hyperkube apiserver" 2 hours ago Up 2 hours k8s_apiserver.f4ad1bfa_k8s-master-127.0.0.1_default_43160049df5e3b1c5ec7bcf23d4b97d0_b5b4936d
+
+db034473b334 gcr.io/google_containers/hyperkube-arm:v1.1.2 "/hyperkube kubelet -" 2 hours ago Up 2 hours k8s-master
+
+f017f405ff4b gcr.io/google_containers/hyperkube-arm:v1.1.2 "/hyperkube proxy --m" 2 hours ago Up 2 hours k8s-master-proxy
+```
+{: .scale-yaml}
+
+### Deploying the first pod and service on the cluster
+
+When that’s looking good, we’re able to access the master node of the Kubernetes cluster with kubectl. kubectl for ARM can be downloaded from googleapis storage. `kubectl get nodes` shows which cluster nodes are registered, along with their status. The master node is named 127.0.0.1.
+```
+$ curl -fsSL -o /usr/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/arm/kubectl
+
+$ kubectl get nodes
+
+NAME LABELS STATUS AGE
+
+127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready 1h
+```
+{: .scale-yaml}
+
+An easy way to test the cluster is by running a busybox Docker image for ARM. `kubectl run` can be used to run the image as a container in a pod, and `kubectl get pods` shows the pods that are registered, along with their status.
+
+```
+$ kubectl run busybox --image=hypriot/rpi-busybox-httpd
+
+$ kubectl get pods -o wide
+
+NAME READY STATUS RESTARTS AGE NODE
+
+busybox-fry54 1/1 Running 1 1h 127.0.0.1
+
+k8s-master-127.0.0.1 3/3 Running 6 1h 127.0.0.1
+```
+{: .scale-yaml}
+
+Now the pod is running, but the application is not generally accessible. That can be achieved by creating a service. The cluster IP address is the IP address at which the service is available within the cluster. Use the IP address of your master node as the external IP, and the service becomes available outside of the cluster as well (e.g. at http://192.168.192.161 in my case).
+```
+$ kubectl expose rc busybox --port=90 --target-port=80 --external-ip=<ip-address-of-master-node>
+
+$ kubectl get svc
+
+NAME         CLUSTER_IP   EXTERNAL_IP       PORT(S)   SELECTOR      AGE
+
+busybox      10.0.0.87    192.168.192.161   90/TCP    run=busybox   1h
+
+kubernetes   10.0.0.1     <none>            443/TCP   <none>        2h
+
+$ curl http://10.0.0.87:90/
+```
+{: .scale-yaml}
+
+The curl call returns the HTML page served by the busybox container, titled “Pi armed with Docker by Hypriot”.
+
+### Installing the Kubernetes worker nodes
+
+The next step is installing Kubernetes on each worker node and adding it to the cluster. This also basically comes down to getting the git repository content and executing the installation script. Though in this installation, the k8s.conf file needs to be copied beforehand and edited to contain the IP address of the master node.
+
+```
+$ curl -L -o k8s-on-rpi.zip https://github.com/awassink/k8s-on-rpi/archive/master.zip
+
+$ apt-get update
+
+$ apt-get install unzip
+
+$ unzip k8s-on-rpi.zip
+
+$ mkdir /etc/kubernetes
+
+$ cp k8s-on-rpi-master/rootfs/etc/kubernetes/k8s.conf /etc/kubernetes/k8s.conf
+```
+
+Change the IP address in /etc/kubernetes/k8s.conf to match that of the master node, then run the install script:
+
+```
+$ k8s-on-rpi-master/install-k8s-worker.sh
+```
+{: .scale-yaml}
+
+
+The install script will install four services. These are quite similar to the ones on the master node, with the difference that no etcd service is running and the kubelet service is configured as a worker node.
+
+Once all the services on the worker node are up and running we can check that the node is added to the cluster on the master node.
+```
+$ kubectl get nodes
+
+NAME LABELS STATUS AGE
+
+127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready 2h
+
+192.168.192.160 kubernetes.io/hostname=192.168.192.160 Ready 1h
+
+$ kubectl scale --replicas=2 rc/busybox
+
+$ kubectl get pods -o wide
+
+NAME READY STATUS RESTARTS AGE NODE
+
+busybox-fry54 1/1 Running 1 1h 127.0.0.1
+
+busybox-j2slu 1/1 Running 0 1h 192.168.192.160
+
+k8s-master-127.0.0.1 3/3 Running 6 2h 127.0.0.1
+```
+{: .scale-yaml}
+
+### Enjoy your Kubernetes cluster!
+
+Congratulations! You now have your Kubernetes Raspberry Pi cluster running and can start playing with Kubernetes and start learning. Check out the [Kubernetes User Guide](https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/README.md) to find out all you can do. And don’t forget to pull some plugs occasionally, like Ray and I do :-)
+
+
+Arjen Wassink, Java Architect and Team Lead, Quintor
diff --git a/blog/_posts/2015-12-00-How-Weave-Built-A-Multi-Deployment-Solution-For-Scope-Using-Kubernetes.md b/blog/_posts/2015-12-00-How-Weave-Built-A-Multi-Deployment-Solution-For-Scope-Using-Kubernetes.md
new file mode 100644
index 00000000000..6ac4cff330d
--- /dev/null
+++ b/blog/_posts/2015-12-00-How-Weave-Built-A-Multi-Deployment-Solution-For-Scope-Using-Kubernetes.md
@@ -0,0 +1,127 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " How Weave built a multi-deployment solution for Scope using Kubernetes "
+date: Sunday, December 12, 2015
+pagination:
+ enabled: true
+---
+_Today we hear from Peter Bourgon, Software Engineer at Weaveworks, a company that provides software for developers to network, monitor and control microservices-based apps in Docker containers. Peter tells us what was involved in selecting and deploying Kubernetes._
+
+Earlier this year at Weaveworks we launched [Weave Scope](http://weave.works/product/scope/index.html), an open source solution for visualization and monitoring of containerised apps and services. Recently we released a hosted Scope service into an [Early Access Program](http://blog.weave.works/2015/10/08/weave-the-fastest-path-to-docker-on-amazon-ec2-container-service/). Today, we want to walk you through how we initially prototyped that service, and how we ultimately chose and deployed Kubernetes as our platform.
+
+
+##### A cloud-native architecture
+
+Scope already had a clean internal line of demarcation between data collection and user interaction, so it was straightforward to split the application on that line, distribute probes to customers, and host frontends in the cloud. We built out a small set of microservices in the [12-factor model](http://12factor.net/), which includes:
+
+
+* A users service, to manage and authenticate user accounts
+* A provisioning service, to manage the lifecycle of customer Scope instances
+* A UI service, hosting all of the fancy HTML and JavaScript content
+* A frontend service, to route requests according to their properties
+* A monitoring service, to introspect the rest of the system
+
+
+
+All services are built as Docker images, [FROM scratch](https://medium.com/@kelseyhightower/optimizing-docker-images-for-static-binaries-b5696e26eb07#.qqjkud6i0) where possible. We knew that we wanted to offer at least 3 deployment environments, which should be as near to identical as possible.
+
+
+* An "Airplane Mode" local environment, on each developer's laptop
+* A development or staging environment, on the same infrastructure that hosts production, with different user credentials
+* The production environment itself
+
+
+
+These were our application invariants. Next, we had to choose our platform and deployment model.
+
+
+##### Our first prototype
+There are a seemingly infinite set of choices, with an infinite set of possible combinations. After surveying the landscape in mid-2015, we decided to make a prototype with
+
+
+* [Amazon EC2](https://aws.amazon.com/ec2/) as our cloud platform, including RDS for persistence
+* [Docker Swarm](https://docs.docker.com/swarm/) as our "scheduler"
+* [Consul](https://consul.io/) for service discovery when bootstrapping Swarm
+* [Weave Net](http://weave.works/product/net/) for our network and service discovery for the application itself
+* [Terraform](https://terraform.io/) as our provisioner
+
+
+
+This setup was fast to define and fast to deploy, so it was a great way to validate the feasibility of our ideas. But we quickly hit problems.
+
+
+
+* Terraform's support for [Docker as a provisioner](https://terraform.io/docs/providers/docker) is barebones, and we uncovered [some bugs](https://github.com/hashicorp/terraform/issues/3526) when trying to use it to drive Swarm.
+* Largely as a consequence of the above, managing a zero-downtime deploy of Docker containers with Terraform was very difficult.
+* Swarm's _raison d'être_ is to abstract the particulars of multi-node container scheduling behind the familiar Docker CLI/API commands. But we concluded that the API is insufficiently expressive for the kind of operations that are necessary at scale in production.
+* Swarm provides no fault tolerance in the case of e.g. node failure.
+
+
+
+We also made a number of mistakes when designing our workflow.
+
+
+* We tagged each container with its target environment at build time, which simplified our Terraform definitions, but effectively forced us to manage our versions via image repositories. That responsibility belongs in the scheduler, not the artifact store.
+* As a consequence, every deploy required artifacts to be pushed to all hosts. This made deploys slow, and rollbacks unbearable.
+* Terraform is designed to provision infrastructure, not cloud applications. The process is slower and more deliberate than we’d like. Shipping a new version of something to prod took about 30 minutes, all-in.
+
+
+
+When it became clear that the service had potential, we re-evaluated the deployment model with an eye towards the long-term.
+
+
+##### Rebasing on Kubernetes
+It had only been a couple of months, but a lot had changed in the landscape.
+
+
+* HashiCorp released [Nomad](https://nomadproject.io/)
+* [Kubernetes](https://kubernetes.io/) hit 1.0
+* Swarm was soon to hit 1.0
+
+
+
+While many of our problems could be fixed without making fundamental architectural changes, we wanted to capitalize on the advances in the industry, by joining an existing ecosystem, and leveraging the experience and hard work of its contributors.
+
+After some internal deliberation, we did a small-scale audition of Nomad and Kubernetes. We liked Nomad a lot, but felt it was just too early to trust it with our production service. Also, we found the Kubernetes developers to be the most responsive to issues on GitHub. So, we decided to go with Kubernetes.
+
+
+##### Local Kubernetes
+
+First, we would replicate our Airplane Mode local environment with Kubernetes. Because we have developers on both Mac and Linux laptops, it’s important that the local environment is containerised. So, we wanted the Kubernetes components themselves (kubelet, API server, etc.) to run in containers.
+
+We encountered two main problems. First, and most broadly, creating Kubernetes clusters from scratch is difficult, as it requires deep knowledge of how Kubernetes works, and quite some time to get the pieces to fall into place together. `local-cluster-up.sh` seems like a Kubernetes developer’s tool and didn’t leverage containers, and the third-party solutions we found, like [Kubernetes Solo](https://github.com/rimusz/coreos-osx-kubernetes-solo), require a dedicated VM or are platform-specific.
+
+Second, containerised Kubernetes is still missing several important pieces. Following the [official Kubernetes Docker guide](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md) yields a barebones cluster without certificates or service discovery. We also encountered a couple of usability issues ([#16586](https://github.com/kubernetes/kubernetes/issues/16586), [#17157](https://github.com/kubernetes/kubernetes/issues/17157)), which we resolved by [submitting a patch](https://github.com/kubernetes/kubernetes/pull/17159) and building our own [hyperkube image](https://hub.docker.com/r/2opremio/hyperkube/) from master.
+
+In the end, we got things working by creating our own provisioning script. It needs to do things like [generate the PKI keys and certificates](https://github.com/kubernetes/kubernetes/blob/master/docs/admin/authentication.md#creating-certificates) and [provision the DNS add-on](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns), which took a few attempts to get right. We’ve also learned of a [commit to add certificate generation to the Docker build](https://github.com/kubernetes/kubernetes/commit/ce90b83689f08cb5ebb6b632dab7f95a48060425), so things will likely get easier in the near term.
+
+
+##### Kubernetes on AWS
+
+Next, we would deploy Kubernetes to AWS, and wire it up with the other AWS components. We wanted to stand up the service in production quickly, and we only needed to support Amazon, so we decided to do so without Weave Net and to use a pre-existing provisioning solution. But we’ll definitely revisit this decision in the near future, leveraging Weave Net via Kubernetes plugins.
+
+Ideally we would have used Terraform resources, and we found a couple: [kraken](https://github.com/Samsung-AG/kraken) (using Ansible), [kubestack](https://github.com/kelseyhightower/kubestack) (coupled to GCE), [kubernetes-coreos-terraform](https://github.com/bakins/kubernetes-coreos-terraform) (outdated Kubernetes) and [coreos-kubernetes](https://github.com/coreos/coreos-kubernetes). But they all build on CoreOS, which was an extra moving part we wanted to avoid in the beginning. (On our next iteration, we’ll probably audition CoreOS.) If you use Ansible, there are [playbooks available](https://github.com/kubernetes/contrib/tree/master/ansible) in the main repo. There are also community-driven [Chef cookbooks](https://github.com/evilmartians/chef-kubernetes) and [Puppet modules](https://forge.puppetlabs.com/cristifalcas/kubernetes). I’d expect the community to grow quickly here.
+
+The only other viable option seemed to be kube-up, which is a collection of scripts that provision Kubernetes onto a variety of cloud providers. By default, kube-up onto AWS puts the master and minion nodes into their own VPC, or Virtual Private Cloud. But our RDS instances were provisioned in the region-default VPC, which meant that communication from a Kubernetes minion to the DB would be possible only via [VPC peering](http://ben.straub.cc/2015/08/19/kubernetes-aws-vpc-peering/) or by opening the RDS VPC's firewall rules manually.
+
+To get traffic to traverse a VPC peer link, your destination IP needs to be in the target VPC's private address range. But [it turns out](https://forums.aws.amazon.com/thread.jspa?messageID=681125) that resolving the RDS instance's hostname from anywhere outside the same VPC will yield the public IP. And performing the resolution is important, because RDS reserves the right to change the IP for maintenance. This wasn't ever a concern in the previous infrastructure, because our Terraform scripts simply placed everything in the same VPC. So I thought I'd try the same with Kubernetes; the kube-up script ostensibly supports installing to an existing VPC by specifying a VPC\_ID environment variable, so I tried installing Kubernetes to the RDS VPC. kube-up appeared to succeed, but [service integration via ELBs broke](https://github.com/kubernetes/kubernetes/issues/17647) and [teardown via kube-down stopped working](https://github.com/kubernetes/kubernetes/issues/17219). After some time, we judged it best to let kube-up keep its defaults, and poked a hole in the RDS VPC.
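+
+For reference, pointing kube-up at an existing VPC looks roughly like this (a sketch: KUBERNETES\_PROVIDER is kube-up's standard provider variable, and the VPC id here is a made-up placeholder):
+
+```
+# Target the AWS provider and an existing VPC (the id is illustrative)
+$ export KUBERNETES_PROVIDER=aws
+$ export VPC_ID=vpc-12345678
+$ ./cluster/kube-up.sh
+```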
+
+This was one hiccup among several that we encountered. Each one could be fixed in isolation, but the inherent fragility of using a shell script to provision remote state seemed to be the actual underlying cause. We fully expect the Terraform, Ansible, Chef, Puppet, etc. packages to continue to mature, and hope to switch soon.
+
+Provisioning aside, there are great things about the Kubernetes/AWS integration. For example, Kubernetes [services](http://kubernetes.io/v1.1/docs/user-guide/services.html) of the correct type automatically generate ELBs, and Kubernetes does a great job of lifecycle management there. Further, the Kubernetes domain model—services, [pods](http://kubernetes.io/v1.1/docs/user-guide/pods.html), [replication controllers](http://kubernetes.io/v1.1/docs/user-guide/replication-controller.html), the [labels and selector model](http://kubernetes.io/v1.1/docs/user-guide/labels.html), and so on—is coherent, and seems to give the user the right amount of expressivity, though the definition files do [tend to stutter needlessly](https://github.com/kubernetes/kubernetes/blob/643cb7a1c7499df4e569f4f0fbd3b18c0c4e63ce/examples/guestbook/redis-master-controller.yaml). The kubectl tool is good, albeit [daunting at first glance](http://i.imgur.com/nEyTWej.png). The [rolling-update](http://kubernetes.io/v1.1/docs/user-guide/update-demo/README.html) command in particular is brilliant: exactly the semantics and behavior I'd expect from a system like this. Indeed, once Kubernetes was up and running, _it just worked_, and exactly as I expected it to. That’s a huge thing.
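+
+As a sketch of that integration (assuming a replication controller named frontend already exists; the name is illustrative), a single kubectl command creates a LoadBalancer-type service, and Kubernetes provisions and manages the ELB behind it:
+
+```
+# 'frontend' is a hypothetical replication controller; the LoadBalancer
+# service type makes Kubernetes create and manage an ELB on AWS
+$ kubectl expose rc frontend --port=80 --type=LoadBalancer
+```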
+
+
+##### Conclusions
+
+After a couple weeks of fighting with the machines, we were able to resolve all of our integration issues, and have rolled out a reasonably robust Kubernetes-based system to production.
+
+
+* **Provisioning Kubernetes is difficult**, owing to a complex architecture and young provisioning story. This shows all signs of improving.
+* Kubernetes’ non-optional **security model takes time to get right**.
+* The Kubernetes **domain language is a great match** to the problem domain.
+* We have **a lot more confidence** in operating our application (it's a lot faster, too).
+* And we're **very happy to be part of a growing Kubernetes userbase**, contributing issues and patches as we can and benefitting from the virtuous cycle of open-source development that powers the most exciting software being written today.
+ - Peter Bourgon, Software Engineer at Weaveworks
+
+_Weave Scope is an open source solution for visualization and monitoring of containerised apps and services. For a hosted Scope service, request an invite to the Early Access program at scope.weave.works._
diff --git a/blog/_posts/2015-12-00-Managing-Kubernetes-Pods-Services-And-Replication-Controllers-With-Puppet.md b/blog/_posts/2015-12-00-Managing-Kubernetes-Pods-Services-And-Replication-Controllers-With-Puppet.md
new file mode 100644
index 00000000000..d69029b3b54
--- /dev/null
+++ b/blog/_posts/2015-12-00-Managing-Kubernetes-Pods-Services-And-Replication-Controllers-With-Puppet.md
@@ -0,0 +1,76 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Managing Kubernetes Pods, Services and Replication Controllers with Puppet "
+date: Thursday, December 17, 2015
+pagination:
+ enabled: true
+---
+_Today’s guest post is written by Gareth Rushgrove, Senior Software Engineer at Puppet Labs, a leader in IT automation. Gareth tells us about a new Puppet module that helps manage resources in Kubernetes._
+
+People familiar with [Puppet](https://github.com/puppetlabs/puppet) might have used it for managing files, packages and users on host computers. But Puppet is first and foremost a configuration management tool, and config management is a much broader discipline than just managing host-level resources. A good definition of configuration management is that it aims to solve four related problems: identification, control, status accounting, and verification and audit. These problems exist in the operation of any complex system, and with the new [Puppet Kubernetes module](https://forge.puppetlabs.com/garethr/kubernetes) we’re starting to look at how we can solve those problems for Kubernetes.
+
+
+### The Puppet Kubernetes Module
+
+The Puppet Kubernetes module currently assumes you already have a Kubernetes cluster [up and running](http://kubernetes.io/gettingstarted/). Its focus is on managing the resources in Kubernetes, like Pods, Replication Controllers and Services, not (yet) on managing the underlying kubelet or etcd services. Here’s a quick snippet of code describing a Pod in Puppet’s DSL.
+
+
+```
+kubernetes_pod { 'sample-pod':
+  ensure   => present,
+  metadata => {
+    namespace => 'default',
+  },
+  spec     => {
+    containers => [{
+      name  => 'container-name',
+      image => 'nginx',
+    }]
+  },
+}
+```
+
+
+If you’re familiar with the YAML file format, you’ll probably recognise the structure immediately. The interface is intentionally identical to aid conversion between different formats — in fact, the code powering this is autogenerated from the Kubernetes API Swagger definitions. Running the above code, assuming we save it as pod.pp, is as simple as:
+
+
+```
+puppet apply pod.pp
+```
+
+Authentication uses the standard kubectl configuration file. You can find complete [installation instructions in the module's README](https://github.com/garethr/garethr-kubernetes/blob/master/README.md).
+
+Kubernetes has several resources, from Pods and Services to Replication Controllers and Service Accounts. You can see an example of the module managing these resources in the [Kubernetes guestbook sample in Puppet](https://puppetlabs.com/blog/kubernetes-guestbook-example-puppet) post. This demonstrates converting the canonical hello-world example to use Puppet code.
+
+One of the main advantages of using Puppet for this, however, is that you can create your own higher-level and more business-specific interfaces to Kubernetes-managed applications. For instance, for the guestbook, you could create something like the following:
+
+
+```
+guestbook { 'myguestbook':
+ redis_slave_replicas => 2,
+ frontend_replicas => 3,
+ redis_master_image => 'redis',
+ redis_slave_image => 'gcr.io/google_samples/gb-redisslave:v1',
+ frontend_image => 'gcr.io/google_samples/gb-frontend:v3',
+}
+```
+
+You can read more about using Puppet’s defined types, and see lots more code examples, in the Puppet blog post, [Building Your Own Abstractions for Kubernetes in Puppet](https://puppetlabs.com/blog/building-your-own-abstractions-kubernetes-puppet).
+
+
+### Conclusions
+
+The advantages of using Puppet rather than just the standard YAML files and kubectl are:
+
+
+- The ability to create your own abstractions to cut down on repetition and craft higher-level user interfaces, like the guestbook example above.
+- Use of Puppet’s development tools for validating code and for writing unit tests.
+- Integration with other tools such as Puppet Server, for ensuring that your model in code matches the state of your cluster, and with PuppetDB for storing reports and tracking changes.
+- The ability to run the same code repeatedly against the Kubernetes API, to detect any changes or remediate configuration drift.
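+
+On that last point, a dry run with Puppet's standard `--noop` flag reports any drift between your manifests and the live cluster without changing anything (this is generic `puppet apply` usage, not something specific to the module):
+
+```
+# Dry run: report differences between pod.pp and the cluster
+puppet apply --noop pod.pp
+
+# Apply for real, remediating any drift that was reported
+puppet apply pod.pp
+```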
+
+It’s also worth noting that most large organisations will have very heterogeneous environments, running a wide range of software and operating systems. Having a single toolchain that unifies those discrete systems can make adopting new technology like Kubernetes much easier.
+
+It’s safe to say that Kubernetes provides an excellent set of primitives on which to build cloud-native systems. And with Puppet, you can address some of the operational and configuration management issues that come with running any complex system in production. [Let us know](mailto:gareth@puppetlabs.com) what you think if you try the module out, and what else you’d like to see supported in the future.
+
+ - Gareth Rushgrove, Senior Software Engineer, Puppet Labs
diff --git a/blog/_posts/2016-01-00-Kubernetes-Community-Meeting-Notes.md b/blog/_posts/2016-01-00-Kubernetes-Community-Meeting-Notes.md
new file mode 100644
index 00000000000..7aa4a9a4582
--- /dev/null
+++ b/blog/_posts/2016-01-00-Kubernetes-Community-Meeting-Notes.md
@@ -0,0 +1,103 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes Community Meeting Notes - 20160114 "
+date: Thursday, January 28, 2016
+pagination:
+ enabled: true
+---
+##### January 14 - RackN demo, testing woes, and KubeCon EU CFP.
+
+Note taker: Joe Beda
+
+* Demonstration: Automated Deploy on Metal, AWS and others w/ Digital Rebar, Rob Hirschfeld and Greg Althaus from RackN
+
+ * Greg Althaus. CTO. Digital Rebar is the product. Bare metal provisioning tool.
+
+ * Detect hardware, bring it up, configure RAID, OS and get workload deployed.
+
+ * Been working on Kubernetes workload.
+
+ * Seeing trend to start in cloud and then move back to bare metal.
+
+ * New provider model to use provisioning system on both cloud and bare metal.
+
+ * UI, REST API, CLI
+
+ * Demo: Packet -- bare metal as a service
+
+ * 4 nodes running grouped into a "deployment"
+
+ * Functional roles/operations selected per node.
+
+ * Decomposed the Kubernetes bring-up into units that can be ordered and synchronized. Dependency tree -- things like wait for etcd to be up before starting the k8s master.
+
+ * Using the Ansible playbook under the covers.
+
+ * Demo brings up 5 more nodes -- packet will build those nodes
+
+ * Pulled out basic parameters from the ansible playbook. Things like the network config, dns set up, etc.
+
+ * Hierarchy of roles pulls in other components -- making a node a master brings in a bunch of other roles that are necessary for that.
+
+ * Has all of this combined into a command line tool with a simple config file.
+
+ * Forward: extending across multiple clouds for test deployments. Also looking to create split/replicated across bare metal and cloud.
+
+ * Q: secrets?
+A: using ansible playbooks. Builds own certs and then distributes them. Wants to abstract them out and push that stuff upstream.
+
+ * Q: Do you support bringing up from real bare metal with PXE boot?
+A: yes -- will discover bare metal systems and install OS, install ssh keys, build networking, etc.
+* [from SIG-scalability] Q: What is the status of moving to golang 1.5?
+A: At HEAD we are 1.5 but will support 1.4 also. Some issues with flakiness but looks like things are stable now.
+
+ * Also looking to use the 1.5 vendor experiment. Move away from godep. But can't do that until 1.5 is the baseline.
+
+ * Sarah: one of the things we are working on is rewards for doing stuff like this. Cloud credits, tshirts, poker chips, ponies.
+* [from SIG-scalability] Q: What is the status of cleaning up the jenkins based submit queue? What can the community do to help out?
+A: It has been rocky the last few days. There should be issues associated with each of these. There is a [flake label][1] on those issues.
+
+ * Still working on test federation. More test resources now. Happening slowly but hopefully faster as new people come up to speed. Will be great to have lots of folks doing e2e tests on their environments.
+
+ * Erick Fejta is the new test lead
+
+ * Brendan is happy to help share details on Jenkins set up but that shouldn't be necessary.
+
+ * Federation may use Jenkins API but doesn't require Jenkins itself.
+
+ * Joe bitches about the fact that running the e2e tests the way Jenkins does is tricky. Brendan says it should be easy to run. Joe will take another look.
+
+ * Conformance tests? etune did this but he isn't here. - revisit 20160121
+* KubeCon EU: March 10-11 in London. Venue to be announced this week.
+
+ * Please send talks! CFP deadline looks to be Feb 5.
+
+ * Lots of excitement. Looks to be 700-800 people. Bigger than SF version (560 ppl).
+
+ * Buy tickets early -- early bird prices will end soon and price will go up 100 GBP.
+
+ * Accommodations provided for speakers?
+
+ * Q from Bob @ Samsung: Can we get more warning/planning for stuff like this:
+
+ * A: Sarah -- I don't hear about this stuff much in advance but will try to pull together a list. Working to make the events page on kubernetes.io easier to use.
+
+ * A: JJ -- we'll make sure we give more info earlier for the next US conf.
+* Scale tests [Rob Hirschfeld from RackN] -- if you want to help coordinate on scale tests we'd love to help.
+
+ * Bob invited Rob to join the SIG-scale group.
+
+ * There is also a big bare metal cluster through the CNCF (from Intel) that will be useful too. No hard dates yet on that.
+* Notes/video going to be posted on k8s blog. (Video for 20160114 wasn't recorded. Fail.)
+
+To get involved in the Kubernetes community consider joining our [Slack channel][2], taking a look at the [Kubernetes project][3] on GitHub, or joining the [Kubernetes-dev Google group][4]. If you're really excited, you can do all of the above and join us for the next community conversation - January 27th, 2016. Please add yourself or a topic you want to know about to the [agenda][5] and get a calendar invitation by joining [this group][6].
+
+
+
+[1]: https://github.com/kubernetes/kubernetes/labels/kind%2Fflake
+[2]: http://slack.k8s.io/
+[3]: https://github.com/kubernetes/
+[4]: https://groups.google.com/forum/#!forum/kubernetes-dev
+[5]: https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY/edit#
+[6]: https://groups.google.com/forum/#!forum/kubernetes-community-video-chat
diff --git a/blog/_posts/2016-01-00-Kubernetes-Community-Meeting-Notes_28.md b/blog/_posts/2016-01-00-Kubernetes-Community-Meeting-Notes_28.md
new file mode 100644
index 00000000000..07d3ffb9157
--- /dev/null
+++ b/blog/_posts/2016-01-00-Kubernetes-Community-Meeting-Notes_28.md
@@ -0,0 +1,67 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: Kubernetes Community Meeting Notes - 20160121
+date: Thursday, January 28, 2016
+pagination:
+ enabled: true
+---
+#### January 21 - Configuration, Federation and Testing, oh my.
+
+
+Note taker: Rob Hirschfeld
+ - Use Case (10 min): [SFDC Paul Brown](https://docs.google.com/a/google.com/presentation/d/1MEI97efplr3f-GDX1GcWGfkEuGKKV-4niu27kHOeMLk/edit?usp=sharing_eid&ts=56a114f8)
+ - SIG Report - SIG-config and the story of [#18215](https://github.com/kubernetes/kubernetes/pull/18215){: .inline-link}
+
+   - Application config in K8s, not deployment of K8s
+   - Topic has been reuse of configuration, specifically parameterization (aka templates). Needs:
+     - needs include scoping (cluster namespace)
+     - slight customization (naming changes, but not major config)
+     - multiple positions on how to do this, including:
+       - allowing external or simple extensions
+       - PetSet creates instances w/ stable namespace
+       - Workflow proposal
+       - Distributed Cron
+   - Challenge is that configs need to create multiple objects in sequence
+   - Trying to figure out how to balance the many config options out there (compose, terraform, ansible, etc.)
+   - Goal is to “meet people where they are” to keep it simple
+   - Q: is there an opinion for the keystore sizing?
+     - large size / data blob would not be appropriate
+     - you can pull data (config) from another store for larger objects
+ - SIG Report - SIG-federation - progress on Ubernetes-Lite & Ubernetes design
+   - Goal is to be able to have a cluster manager, so you can federate clusters. They will automatically distribute the pods.
+   - Plan is to use the same API for the master cluster
+   - Quinton's Ubernetes Talk: https://youtu.be/L2ZK24JojB4
+   - Design for Ubernetes: https://github.com/kubernetes/kubernetes/pull/19313
+
+
+ - Conformance testing Q+A [Isaac Hollander McCreery]
+
+ - status on conformance testing for release process
+
+ - expect to be forward compatible but not backwards
+ - is there interest in a sig-testing meeting?
+ - testing needs to be a higher priority for the project
+ - lots of focus on trying to make this a higher priority
+
+To get involved in the Kubernetes community consider joining our [Slack channel](http://slack.k8s.io/), taking a look at the [Kubernetes project](https://github.com/kubernetes/) on GitHub, or joining the [Kubernetes-dev Google group](https://groups.google.com/forum/#!forum/kubernetes-dev). If you’re really excited, you can do all of the above and join us for the next community conversation -- January 27th, 2016. Please add yourself or a topic you want to know about to the [agenda](https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY/edit) and get a calendar invitation by joining [this group](https://groups.google.com/forum/#!forum/kubernetes-community-video-chat).
+
+
+
+Still want more Kubernetes? Check out the [recording](https://www.youtube.com/watch?v=izQLFx_6kwY&feature=youtu.be&list=PL69nYSiGNLP1pkHsbPjzAewvMgGUpkCnJ) of this meeting and the growing archive of [Kubernetes Community Meetings](https://www.youtube.com/playlist?list=PL69nYSiGNLP1pkHsbPjzAewvMgGUpkCnJ).
diff --git a/blog/_posts/2016-01-00-Simple-Leader-Election-With-Kubernetes.md b/blog/_posts/2016-01-00-Simple-Leader-Election-With-Kubernetes.md
new file mode 100644
index 00000000000..d40ee4ee8d1
--- /dev/null
+++ b/blog/_posts/2016-01-00-Simple-Leader-Election-With-Kubernetes.md
@@ -0,0 +1,142 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Simple leader election with Kubernetes and Docker "
+date: Monday, January 11, 2016
+pagination:
+ enabled: true
+---
+
+#### Overview
+
+
+Kubernetes simplifies the deployment and operational management of services running on clusters. However, it also simplifies the development of these services. In this post we'll see how you can use Kubernetes to easily perform leader election in your distributed application. Distributed applications usually replicate the tasks of a service for reliability and scalability, but often it is necessary to designate one of the replicas as the leader who is responsible for coordination among all of the replicas.
+
+Typically in leader election, a set of candidates for becoming leader is identified. These candidates all race to declare themselves the leader. One of the candidates wins and becomes the leader. Once the election is won, the leader continually "heartbeats" to renew their position as the leader, and the other candidates periodically make new attempts to become the leader. This ensures that a new leader is identified quickly, if the current leader fails for some reason.
+
+Implementing leader election usually requires either deploying software such as ZooKeeper, etcd or Consul and using it for consensus, or alternately, implementing a consensus algorithm on your own. We will see below that Kubernetes makes the process of using leader election in your application significantly easier.
+
+#### Implementing leader election in Kubernetes
+
+
+The first requirement in leader election is the specification of the set of candidates for becoming the leader. Kubernetes already uses _Endpoints_ to represent a replicated set of pods that comprise a service, so we will re-use this same object. (aside: You might have thought that we would use _ReplicationControllers_, but they are tied to a specific binary, and generally you want to have a single leader even if you are in the process of performing a rolling update)
+
+To perform leader election, we use two properties of all Kubernetes API objects:
+
+* ResourceVersions - Every API object has a unique ResourceVersion, and you can use these versions to perform compare-and-swap on Kubernetes objects
+* Annotations - Every API object can be annotated with arbitrary key/value pairs to be used by clients.
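+
+As a rough illustration of the compare-and-swap half of this (a hypothetical kubectl sketch, not the election code itself): kubectl replace sends back the resourceVersion you read, and the API server rejects the write if the object has changed in the meantime.
+
+```
+# Save the current Endpoints object, including its resourceVersion
+$ kubectl get endpoints example -o yaml > leader.yaml
+
+# Edit the leader annotation in leader.yaml, then attempt the write.
+# Because the file still carries the old resourceVersion, the API
+# server rejects the replace if anyone else updated the object first.
+$ kubectl replace -f leader.yaml
+```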
+
+Given these primitives, the code to use master election is relatively straightforward, and you can find it [here][1]. Let's run it ourselves.
+
+```
+$ kubectl run leader-elector --image=gcr.io/google_containers/leader-elector:0.4 --replicas=3 -- --election=example
+```
+
+This creates a leader election set with 3 replicas:
+
+```
+$ kubectl get pods
+NAME READY STATUS RESTARTS AGE
+leader-elector-inmr1 1/1 Running 0 13s
+leader-elector-qkq00 1/1 Running 0 13s
+leader-elector-sgwcq 1/1 Running 0 13s
+```
+
+To see which pod was chosen as the leader, you can access the logs of one of the pods, substituting one of your own pod's names (e.g. leader-elector-inmr1 from the above) in place of ${pod_name}:
+
+```
+$ kubectl logs -f ${pod_name}
+leader is (leader-pod-name)
+```
+Alternately, you can inspect the endpoints object directly:
+
+
+_'example' is the name of the candidate set from the above kubectl run … command_
+```
+$ kubectl get endpoints example -o yaml
+```
+Now to validate that leader election actually works, in a different terminal, run:
+
+```
+$ kubectl delete pods (leader-pod-name)
+```
+This will delete the existing leader. Because the set of pods is being managed by a replication controller, a new pod replaces the one that was deleted, ensuring that the size of the replicated set is still three. Via leader election one of these three pods is selected as the new leader, and you should see the leader failover to a different pod. Because pods in Kubernetes have a _grace period_ before termination, this may take 30-40 seconds.
+
+The leader-election container provides a simple webserver that can serve on any address (e.g. http://localhost:4040). You can test this out by deleting the existing leader election group and creating a new one where you additionally pass in a --http=(host):(port) specification to the leader-elector image. This causes each member of the set to serve information about the leader via a simple web server.
+
+```
+# delete the old leader elector group
+$ kubectl delete rc leader-elector
+
+# create the new group, note the --http=localhost:4040 flag
+$ kubectl run leader-elector --image=gcr.io/google_containers/leader-elector:0.4 --replicas=3 -- --election=example --http=0.0.0.0:4040
+
+# create a proxy to your Kubernetes api server
+$ kubectl proxy
+```
+
+You can then access:
+
+
+http://localhost:8001/api/v1/proxy/namespaces/default/pods/(leader-pod-name):4040/
+
+
+And you will see:
+
+```
+{"name":"(name-of-leader-here)"}
+```
+#### Leader election with sidecars
+
+
+Ok, that's great, you can do leader election and find out the leader over HTTP, but how can you use it from your own application? This is where the notion of sidecars comes in. In Kubernetes, Pods are made up of one or more containers. Oftentimes, this means that you add sidecar containers to your main application to make up a Pod. (For a much more detailed treatment of this subject, see my earlier blog post.)
+
+The leader-election container can serve as a sidecar that you can use from your own application. Any container in the Pod that's interested in who the current master is can simply access http://localhost:4040 and they'll get back a simple JSON object that contains the name of the current master. Since all containers in a Pod share the same network namespace, there's no service discovery required!
+
+For example, here is a simple Node.js application that connects to the leader election sidecar and prints out whether or not it is currently the master. The leader election sidecar sets its identifier to `hostname` by default.
+
+```
+var http = require('http');
+
+// This will hold info about the current master
+var master = {};
+
+// The web handler for our nodejs application
+var handleRequest = function(request, response) {
+  response.writeHead(200);
+  response.end("Master is " + master.name);
+};
+
+// A callback that is used for our outgoing client requests to the sidecar
+var cb = function(response) {
+  var data = '';
+  response.on('data', function(piece) { data = data + piece; });
+  response.on('end', function() { master = JSON.parse(data); });
+};
+
+// Make an async request to the sidecar at http://localhost:4040
+var updateMaster = function() {
+  var req = http.get({host: 'localhost', path: '/', port: 4040}, cb);
+  req.on('error', function(e) { console.log('problem with request: ' + e.message); });
+  req.end();
+};
+
+// Set up regular updates
+updateMaster();
+setInterval(updateMaster, 5000);
+
+// Set up the web server
+var www = http.createServer(handleRequest);
+www.listen(8080);
+```
+Of course, you can use this sidecar from any language you choose that supports HTTP and JSON.
+
+#### Conclusion
+
+
+Hopefully I've shown you how easy it is to build leader election for your distributed application using Kubernetes. In future installments we'll show you how Kubernetes is making building distributed systems even easier. In the meantime, head over to [Google Container Engine][2] or [kubernetes.io][3] to get started with Kubernetes.
+
+ [1]: https://github.com/kubernetes/contrib/pull/353
+ [2]: https://cloud.google.com/container-engine/
+ [3]: http://kubernetes.io/
diff --git a/blog/_posts/2016-01-00-Why-Kubernetes-Doesnt-Use-Libnetwork.md b/blog/_posts/2016-01-00-Why-Kubernetes-Doesnt-Use-Libnetwork.md
new file mode 100644
index 00000000000..1bcaae88cd8
--- /dev/null
+++ b/blog/_posts/2016-01-00-Why-Kubernetes-Doesnt-Use-Libnetwork.md
@@ -0,0 +1,41 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Why Kubernetes doesn’t use libnetwork "
+date: Thursday, January 14, 2016
+pagination:
+ enabled: true
+---
+Kubernetes has had a very basic form of network plugins since before version 1.0 was released — around the same time as Docker's [libnetwork](https://github.com/docker/libnetwork) and Container Network Model ([CNM](https://github.com/docker/libnetwork/blob/master/docs/design.md)) were introduced. Unlike libnetwork, the Kubernetes plugin system still retains its "alpha" designation. Now that Docker's network plugin support is released and supported, an obvious question we get is why Kubernetes has not adopted it yet. After all, vendors will almost certainly be writing plugins for Docker — we would all be better off using the same drivers, right?
+
+Before going further, it's important to remember that Kubernetes is a system that supports multiple container runtimes, of which Docker is just one. Configuring networking is a facet of each runtime, so when people ask "will Kubernetes support CNM?" what they really mean is "will Kubernetes support CNM drivers with the Docker runtime?" It would be great if we could achieve common network support across runtimes, but that’s not an explicit goal.
+
+Indeed, Kubernetes has not adopted CNM/libnetwork for the Docker runtime. In fact, we’ve been investigating the alternative Container Network Interface ([CNI](https://github.com/appc/cni/blob/master/SPEC.md)) model put forth by CoreOS and part of the App Container ([appc](https://github.com/appc)) specification. Why? There are a number of reasons, both technical and non-technical.
+
+First and foremost, there are some fundamental assumptions in the design of Docker's network drivers that cause problems for us.
+
+Docker has a concept of "local" and "global" drivers. Local drivers (such as "bridge") are machine-centric and don’t do any cross-node coordination. Global drivers (such as "overlay") rely on [libkv](https://github.com/docker/libkv) (a key-value store abstraction) to coordinate across machines. This key-value store is another plugin interface, and is very low-level (keys and values, no semantic meaning). To run something like Docker's overlay driver in a Kubernetes cluster, we would either need cluster admins to run a whole different instance of [consul](https://github.com/hashicorp/consul), [etcd](https://github.com/coreos/etcd) or [zookeeper](https://zookeeper.apache.org/) (see [multi-host networking](https://docs.docker.com/engine/userguide/networking/get-started-overlay/)), or else we would have to provide our own libkv implementation that was backed by Kubernetes.
+
+The latter sounds attractive, and we tried to implement it, but the libkv interface is very low-level, and the schema is defined internally to Docker. We would have to either directly expose our underlying key-value store or else offer key-value semantics (on top of our structured API which is itself implemented on a key-value system). Neither of those are very attractive for performance, scalability and security reasons. The net result is that the whole system would be significantly more complicated, when the goal of using Docker networking is to simplify things.
+
+For users that are willing and able to run the requisite infrastructure to satisfy Docker global drivers and to configure Docker themselves, Docker networking should "just work." Kubernetes will not get in the way of such a setup, and no matter what direction the project goes, that option should be available. For default installations, though, the practical conclusion is that this is an undue burden on users and we therefore cannot use Docker's global drivers (including "overlay"), which eliminates a lot of the value of using Docker's plugins at all.
+
+Docker's networking model makes a lot of assumptions that aren’t valid for Kubernetes. In docker versions 1.8 and 1.9, it includes a fundamentally flawed implementation of "discovery" that results in corrupted `/etc/hosts` files in containers ([docker #17190](https://github.com/docker/docker/issues/17190)) — and this cannot be easily turned off. In version 1.10 Docker is planning to [bundle a new DNS server](https://github.com/docker/docker/issues/17195), and it’s unclear whether this will be able to be turned off. Container-level naming is not the right abstraction for Kubernetes — we already have our own concepts of service naming, discovery, and binding, and we already have our own DNS schema and server (based on the well-established [SkyDNS](https://github.com/skynetservices/skydns)). The bundled solutions are not sufficient for our needs but cannot be disabled.
+
+Orthogonal to the local/global split, Docker has both in-process and out-of-process ("remote") plugins. We investigated whether we could bypass libnetwork (and thereby skip the issues above) and drive Docker remote plugins directly. Unfortunately, this would mean that we could not use any of the Docker in-process plugins, "bridge" and "overlay" in particular, which again eliminates much of the utility of libnetwork.
+
+On the other hand, CNI is more philosophically aligned with Kubernetes. It's far simpler than CNM, doesn't require daemons, and is at least plausibly cross-platform (CoreOS’s [rkt](https://coreos.com/rkt/docs/) container runtime supports it). Being cross-platform means that there is a chance to enable network configurations which will work the same across runtimes (e.g. Docker, Rocket, Hyper). It follows the UNIX philosophy of doing one thing well.
+
+Additionally, it's trivial to wrap a CNI plugin and produce a more customized CNI plugin — it can be done with a simple shell script. CNM is much more complex in this regard. This makes CNI an attractive option for rapid development and iteration. Early prototypes have proven that it's possible to eject almost 100% of the currently hard-coded network logic in kubelet into a plugin.
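+
+To make that concrete, here is a hypothetical wrapper (the plugin path and log file are made-up; the `CNI_COMMAND` and `CNI_CONTAINERID` environment variables come from the CNI spec linked above). A CNI plugin is just an executable that reads its network config on stdin, so a wrapper can observe or adjust the invocation and then delegate to the real plugin:
+
+```
+#!/bin/sh
+# Hypothetical CNI wrapper: log each invocation, then hand the
+# unmodified config on stdin to the real "bridge" plugin.
+echo "$(date) cmd=$CNI_COMMAND container=$CNI_CONTAINERID" >> /var/log/cni-wrapper.log
+exec /opt/cni/bin/bridge
+```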
+
+We investigated [writing a "bridge" CNM driver](https://groups.google.com/forum/#!topic/kubernetes-sig-network/5MWRPxsURUw) for Docker that ran CNI drivers. This turned out to be very complicated. First, the CNM and CNI models are very different, so none of the "methods" lined up. We still have the global vs. local and key-value issues discussed above. Assuming this driver would declare itself local, we have to get info about logical networks from Kubernetes.
+
+Unfortunately, Docker drivers are hard to map to other control planes like Kubernetes. Specifically, drivers are not told the name of the network to which a container is being attached — just an ID that Docker allocates internally. This makes it hard for a driver to map back to any concept of network that exists in another system.
+
+This and other issues have been brought up to Docker developers by network vendors, and are usually closed as "working as intended" ([libnetwork #139](https://github.com/docker/libnetwork/issues/139), [libnetwork #486](https://github.com/docker/libnetwork/issues/486), [libnetwork #514](https://github.com/docker/libnetwork/pull/514), [libnetwork #865](https://github.com/docker/libnetwork/issues/865), [docker #18864](https://github.com/docker/docker/issues/18864)), even though they make non-Docker third-party systems more difficult to integrate with. Throughout this investigation Docker has made it clear that they’re not very open to ideas that deviate from their current course or that delegate control. This is very worrisome to us, since Kubernetes complements Docker and adds so much functionality, but exists outside of Docker itself.
+
+For all of these reasons we have chosen to invest in CNI as the Kubernetes plugin model. There will be some unfortunate side-effects of this. Most of them are relatively minor (for example, `docker inspect` will not show an IP address), but some are significant. In particular, containers started by `docker run` might not be able to communicate with containers started by Kubernetes, and network integrators will have to provide CNI drivers if they want to fully integrate with Kubernetes. On the other hand, Kubernetes will get simpler and more flexible, and a lot of the ugliness of early bootstrapping (such as configuring Docker to use our bridge) will go away.
+
+As we proceed down this path, we’ll certainly keep our eyes and ears open for better ways to integrate and simplify. If you have thoughts on how we can do that, we really would like to hear them — find us on [slack](http://slack.k8s.io/) or on our [network SIG mailing-list](https://groups.google.com/forum/#!forum/kubernetes-sig-network).
+
+Tim Hockin, Software Engineer, Google
diff --git a/blog/_posts/2016-02-00-Kubecon-Eu-2016-Kubernetes-Community-In.md b/blog/_posts/2016-02-00-Kubecon-Eu-2016-Kubernetes-Community-In.md
new file mode 100644
index 00000000000..4161318f9c3
--- /dev/null
+++ b/blog/_posts/2016-02-00-Kubecon-Eu-2016-Kubernetes-Community-In.md
@@ -0,0 +1,39 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " KubeCon EU 2016: Kubernetes Community in London "
+date: Thursday, February 24, 2016
+pagination:
+ enabled: true
+---
+
+KubeCon EU 2016 is the inaugural [European Kubernetes](http://kubernetes.io/) community conference, following on from the American launch in November 2015. KubeCon is fully dedicated to education and community engagement for [Kubernetes](http://kubernetes.io/) enthusiasts, production users and the surrounding ecosystem.
+
+Come join us in London and hang out with hundreds of others from the Kubernetes community, and experience a wide variety of deep technical expert talks and use cases.
+
+Don’t miss these great speaker sessions at the conference:
+
+* “Kubernetes Hardware Hacks: Exploring the Kubernetes API Through Knobs, Faders, and Sliders” by Ian Lewis and Brian Dorsey, Developer Advocates, Google - [http://sched.co/6Bl3](http://sched.co/6Bl3)
+* “rktnetes: what's new with container runtimes and Kubernetes” by Jonathan Boulle, Developer and Team Lead, CoreOS - [http://sched.co/6BY7](http://sched.co/6BY7)
+* “Kubernetes Documentation: Contributing, fixing issues, collecting bounties” by John Mulhausen, Lead Technical Writer, Google - [http://sched.co/6BUP](http://sched.co/6BUP)
+* “[What is OpenStack's role in a Kubernetes world?](https://kubeconeurope2016.sched.org/event/6BYC/what-is-openstacks-role-in-a-kubernetes-world?iframe=yes&w=i:0;&sidebar=yes&bg=no#?iframe=yes&w=i:100;&sidebar=yes&bg=no)” by Thierry Carrez, Director of Engineering, OpenStack Foundation - [http://sched.co/6BYC](http://sched.co/6BYC)
+* “A Practical Guide to Container Scheduling” by Mandy Waite, Developer Advocate, Google - [http://sched.co/6BZa](http://sched.co/6BZa)
+* “[Kubernetes in Production in The New York Times newsroom](https://kubeconeurope2016.sched.org/event/67f2/kubernetes-in-production-in-the-new-york-times-newsroom?iframe=yes&w=i:0;&sidebar=yes&bg=no#?iframe=yes&w=i:100;&sidebar=yes&bg=no)” by Eric Lewis, Web Developer, New York Times - [http://sched.co/67f2](http://sched.co/67f2)
+* “[Creating an Advanced Load Balancing Solution for Kubernetes with NGINX](https://kubeconeurope2016.sched.org/event/6Bc9/creating-an-advanced-load-balancing-solution-for-kubernetes-with-nginx?iframe=yes&w=i:0;&sidebar=yes&bg=no#?iframe=yes&w=i:100;&sidebar=yes&bg=no)” by Andrew Hutchings, Technical Product Manager, NGINX - [http://sched.co/6Bc9](http://sched.co/6Bc9)
+* And many more at [http://kubeconeurope2016.sched.org/](http://kubeconeurope2016.sched.org/)
+
+
+Get your KubeCon EU [tickets here](https://ti.to/kubecon/kubecon-eu-2016).
+
+Venue Location: CodeNode - 10 South Pl, London, United Kingdom
+Accommodations: [hotels](https://skillsmatter.com/contact-us#hotels)
+Website: [kubecon.io](https://www.kubecon.io/)
+Twitter: [@KubeConio](https://twitter.com/kubeconio) #KubeCon
+
+Google is a proud Diamond sponsor of KubeCon EU 2016. Come to London next month, March 10th & 11th, and visit booth #13 to learn all about Kubernetes, Google Container Engine (GKE) and Google Cloud Platform!
+
+_KubeCon is organized by KubeAcademy, LLC, a community-driven group of developers focused on the education of developers and the promotion of Kubernetes._
+
+-- Sarah Novotny, Kubernetes Community Manager, Google
diff --git a/blog/_posts/2016-02-00-Kubernetes-Community-Meeting-Notes.md b/blog/_posts/2016-02-00-Kubernetes-Community-Meeting-Notes.md
new file mode 100644
index 00000000000..cfd83f62b3d
--- /dev/null
+++ b/blog/_posts/2016-02-00-Kubernetes-Community-Meeting-Notes.md
@@ -0,0 +1,59 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes Community Meeting Notes - 20160204 "
+date: Wednesday, February 09, 2016
+pagination:
+ enabled: true
+---
+#### February 4th - rkt demo (congratulations on the 1.0, CoreOS!), eBay puts k8s on OpenStack and considers OpenStack on k8s, SIGs, and the flaky test surge makes progress.
+
+The Kubernetes contributing community meets most Thursdays at 10:00PT to discuss the project's status via a videoconference. Here are the notes from the latest meeting.
+
+* Note taker: Rob Hirschfeld
+* Demo (20 min): CoreOS rkt + Kubernetes [Shaya Potter]
+ * Expect to see integrations w/ rkt & k8s in the coming months ("rkt-netes"). Not integrated into the v1.2 release.
+ * Shaya gave a demo (8 minutes into meeting for video reference)
+ * CLI of rkt shown spinning up containers
+ * [note: audio is garbled at points]
+ * Discussion about integration w/ k8s & rkt
+ * rkt community sync next week: https://groups.google.com/forum/#!topic/rkt-dev/FlwZVIEJGbY
+
+ * Dawn Chen:
+ * The remaining issues of integrating rkt with Kubernetes: 1) cAdvisor 2) DNS 3) bugs related to logging
+ * But need more work on e2e test suites
+* Use Case (10 min): eBay k8s on OpenStack and OpenStack on k8s [Ashwin Raveendran]
+ * eBay is currently running Kubernetes on OpenStack
+ * Goal for eBay is to manage the OpenStack control plane w/ k8s. Goal would be to achieve upgrades
+ * OpenStack Kolla creates containers for the control plane. Uses Ansible+Docker for management of the containers.
+ * Working on k8s control plane management - Saltstack is proving to be a management challenge at the scale they want to operate. Looking for automated management of the k8s control plane.
+* SIG Report
+* Testing update [Jeff, Joe, and Erick]
+ * Working to make the workflow for contributing to K8s easier to understand
+ * [pull/19714][1] has flow chart of the bot flow to help users understand
+ * Need a consistent way to run tests w/ hacking config scripts (you have to fake a Jenkins process right now)
+ * Want to create necessary infrastructure to make test setup less flaky
+ * want to decouple test start (single or full) from Jenkins
+ * goal is to get to point where you have 1 script to run that can be pointed to any cluster
+ * demo included Google internal views - working to try to get that available externally.
+ * want to be able to collect test run results
+ * Bob Wise calls for testing infrastructure to be a blocker on v1.3
+ * Long discussion about testing practices…
+ * consensus that we want to have tests work over multiple platforms.
+ * would be helpful to have a comprehensive state dump for test reports
+ * "phone-home" to collect stack traces - should be available
+* 1.2 Release Watch
+* CoC [Sarah]
+* GSoC [Sarah]
+
+To get involved in the Kubernetes community consider joining our [Slack channel][2], taking a look at the [Kubernetes project][3] on GitHub, or joining the [Kubernetes-dev Google group][4]. If you're really excited, you can do all of the above and join us for the next community conversation — February 11th, 2016. Please add yourself or a topic you want to know about to the [agenda][5] and get a calendar invitation by joining [this group][6].
+
+ "https://youtu.be/IScpP8Cj0hw?list=PL69nYSiGNLP1pkHsbPjzAewvMgGUpkCnJ"
+
+
+[1]: https://github.com/kubernetes/kubernetes/pull/19714
+[2]: http://slack.k8s.io/
+[3]: https://github.com/kubernetes/
+[4]: https://groups.google.com/forum/#!forum/kubernetes-dev
+[5]: https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY/edit#
+[6]: https://groups.google.com/forum/#!forum/kubernetes-community-video-chat
diff --git a/blog/_posts/2016-02-00-Kubernetes-community-meeting-notes-20160128.md b/blog/_posts/2016-02-00-Kubernetes-community-meeting-notes-20160128.md
new file mode 100644
index 00000000000..3e68853a0b3
--- /dev/null
+++ b/blog/_posts/2016-02-00-Kubernetes-community-meeting-notes-20160128.md
@@ -0,0 +1,74 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes Community Meeting Notes - 20160128 "
+date: Wednesday, February 02, 2016
+pagination:
+ enabled: true
+---
+##### January 28 - 1.2 release update, Deis demo, flaky test surge and SIGs
+
+The Kubernetes contributing community meets once a week to discuss the project's status via a videoconference. Here are the notes from the latest meeting.
+
+* Note taker: Erin Boyd
+* Discuss process around code freeze/code slush (TJ Goltermann)
+ * Code wind down was happening during holiday (for 1.1)
+ * Releasing ~ every 3 months
+ * Build stability is still missing
+ * Issue on Transparency (Bob Wise)
+ * Email from Sarah for call to contribute (Monday, January 25)
+ * Concern over publishing dates / understanding release schedule /etc…
+ * Release targeted for early March
+ * Where does one find information on the release schedule with the committed features?
+ * For 1.2 - Send email / Slack to TJ
+ * For 1.3 - Working on better process to communicate to the community
+ * Twitter
+ * Wiki
+ * GitHub Milestones
+ * How to better communicate issues discovered in the SIG
+ * AI: People need to email the kubernetes-dev@ mailing list with summary of findings
+ * AI: Each SIG needs a note taker
+* Release planning vs Release testing
+ * Testing SIG lead Ike McCreery
+ * Also part of the testing infrastructure team at Google
+ * Community being able to integrate into the testing framework
+ * Federated testing
+ * Release Manager = David McMahon
+ * Request to introduce him to the community meeting
+* Demo: Jason Hansen, Deis
+ * Implemented simple REST API to interact with the platform
+ * Deis managed application (deployed via)
+ * Source -> image
+ * Rolling upgrades -> Rollbacks
+ * AI: Jason will provide the slides & notes
+ * Slides: [https://speakerdeck.com/slack/kubernetes-community-meeting-demo-january-28th-2016](https://speakerdeck.com/slack/kubernetes-community-meeting-demo-january-28th-2016)
+ * Alpha information: [https://groups.google.com/forum/#!topic/deis-users/Qhia4DD2pv4](https://groups.google.com/forum/#!topic/deis-users/Qhia4DD2pv4)
+ * Adding an administrative component (dashboard)
+ * Helm wraps kubectl
+* Testing
+ * Called for community interaction
+ * Need to understand friction points from community
+ * Better documentation
+ * Better communication on how things “should work”
+ * Internally, Google is having daily calls to resolve test flakes
+ * Started up SIG testing meetings (Tuesday at 10:30 am PT)
+ * Everyone wants it, but no one wants to pony up the time to make it happen
+ * Google is dedicating headcount to it (3-4 people, possibly more)
+ * [https://groups.google.com/forum/?hl=en#!forum/kubernetes-sig-testing](https://groups.google.com/forum/?hl=en#%21forum/kubernetes-sig-testing)
+* Best practices for labeling
+ * Are there tools built on top of these to leverage
+ * AI: Generate artifact for labels and what they do (Create doc)
+ * Help Wanted Label - good for new community members
+ * Classify labels for team and area
+ * User experience, test infrastructure, etc.
+* SIG Config (not about deployment)
+ * Any interest in Ansible-type tools, etc.
+* SIG Scale meeting (Bob Wise & Tim StClair)
+ * Tests related to performance SLA get relaxed in order to get the tests to pass
+ * exposed process issues
+ * AI: outline of a proposal for a notice policy if things are being changed that are critical to the system (Bob Wise/Samsung)
+ * Create a Best Practices set of constants in a well-documented place
+
+To get involved in the Kubernetes community consider joining our [Slack channel](http://slack.k8s.io/), taking a look at the [Kubernetes project](https://github.com/kubernetes/) on GitHub, or joining the [Kubernetes-dev Google group](https://groups.google.com/forum/#!forum/kubernetes-dev). If you’re really excited, you can do all of the above and join us for the next community conversation — February 4th, 2016. Please add yourself or a topic you want to know about to the [agenda](https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY/edit) and get a calendar invitation by joining [this group](https://groups.google.com/forum/#!forum/kubernetes-community-video-chat).
+
+The full recording is available on YouTube in the growing archive of [Kubernetes Community Meetings](https://www.youtube.com/playlist?list=PL69nYSiGNLP1pkHsbPjzAewvMgGUpkCnJ).
diff --git a/blog/_posts/2016-02-00-Kubernetes-community-meeting-notes-20160211.md b/blog/_posts/2016-02-00-Kubernetes-community-meeting-notes-20160211.md
new file mode 100644
index 00000000000..418b3b94a5c
--- /dev/null
+++ b/blog/_posts/2016-02-00-Kubernetes-community-meeting-notes-20160211.md
@@ -0,0 +1,62 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes Community Meeting Notes - 20160211 "
+date: Wednesday, February 16, 2016
+pagination:
+ enabled: true
+---
+
+##### February 11th - Pangaea demo, AWS SIG formed, release automation and documentation team introductions, 1.2 update and planning 1.3.
+
+
+The Kubernetes contributing community meets most Thursdays at 10:00PT to discuss the project's status via videoconference. Here are the notes from the latest meeting.
+
+Note taker: Rob Hirschfeld
+* Demo: [Pangaea][1] [Shahidh K Muhammed, Tanmai Gopal, and Akshaya Acharya]
+
+ * Microservices packages
+ * Focused on Application developers
+ * Demo at recording +4 minutes
+ * Single-node Kubernetes cluster — runs locally using a Vagrant CoreOS image
+ * Single user/system cluster allows use of DNS integration (unlike Compose)
+ * Can run locally or in cloud
+* SIG Report:
+ * Release Automation and an introduction to David McMahon
+ * Docs and k8s website redesign proposal and an introduction to John Mulhausen
+ * This will allow the system to build docs correctly from GitHub w/ minimal effort
+ * Will be check-in triggered
+ * Getting website style updates
+ * Want to keep authoring really light
+ * There will be some automated checks
+ * Next week: preview of the new website during the community meeting
+* [@goltermann] 1.2 Release Watch (time +34 minutes)
+ * code slush date: 2/9/2016
+ * no major features or refactors accepted
+ * discussion about release criteria: we will hold release date for bugs
+* Testing flake surge is over (one time event and then maintain test stability)
+* 1.3 Planning (time +40 minutes)
+ * working to clean up the GitHub milestones — they should be a source of truth. You can use GitHub for bug reporting
+ * push off discussion while the 1.2 crunch is underway
+ * Framework
+ * dates
+ * prioritization
+ * feedback
+ * Design Review meetings
+ * General discussion about the PRD process — still in the beginning stages
+ * Working on a contributor conference
+ * Rob suggested tracking relationships between PRD/Mgmr authors
+ * PLEASE DO REVIEWS — talked about the way people are authorized to +2 reviews.
+
+
+To get involved in the Kubernetes community consider joining our [Slack channel][2], taking a look at the [Kubernetes][3] project on GitHub, or joining the [Kubernetes-dev Google group][4]. If you're really excited, you can do all of the above and join us for the next community conversation — February 18th, 2016. Please add yourself or a topic you want to know about to the [agenda][5] and get a calendar invitation by joining [this group][6].
+
+The full recording is available on YouTube in the growing archive of [Kubernetes Community Meetings][7].
+
+[1]: http://hasura.io/blog/pangaea-point-and-shoot-kubernetes/
+[2]: http://slack.k8s.io/
+[3]: https://github.com/kubernetes/
+[4]: https://groups.google.com/forum/#!forum/kubernetes-dev
+[5]: https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY/edit
+[6]: https://groups.google.com/forum/#!forum/kubernetes-community-video-chat
+[7]: https://www.youtube.com/playlist?list=PL69nYSiGNLP1pkHsbPjzAewvMgGUpkCnJ
diff --git a/blog/_posts/2016-02-00-Sharethis-Kubernetes-In-Production.md b/blog/_posts/2016-02-00-Sharethis-Kubernetes-In-Production.md
new file mode 100644
index 00000000000..6e4f9b59564
--- /dev/null
+++ b/blog/_posts/2016-02-00-Sharethis-Kubernetes-In-Production.md
@@ -0,0 +1,95 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " ShareThis: Kubernetes In Production "
+date: Friday, February 11, 2016
+pagination:
+ enabled: true
+---
+Today’s guest blog post is by Juan Valencia, Technical Lead at ShareThis, a service that helps website publishers drive engagement and consumer sharing behavior across social networks.
+
+ShareThis has grown tremendously since its first days as a tiny widget that allowed you to share to your favorite social services. It now serves over 4.5 million domains per month, helping publishers create a more authentic digital experience.
+
+Fast growth came with a price. We took on technical debt to scale fast and to grow our products, particularly when it came to infrastructure. As our company expanded, the infrastructure costs mounted as well - both in terms of inefficient utilization and in terms of people costs. About a year ago, it became clear something needed to change.
+
+### TL;DR
+
+Kubernetes has been a key component for us to reduce technical debt in our infrastructure by:
+
+* Fostering the Adoption of Docker
+* Simplifying Container Management
+* Onboarding Developers On Infrastructure
+* Unlocking Continuous Integration and Delivery
+
+We accomplished this by radically adopting Kubernetes and switching our DevOps team to a Cloud Platform team that worked in terms of containers and microservices. This included creating some tools to get around our own legacy debt.
+
+### The Problem
+
+Alas, the cloud was new and we were young. We started with a traditional data-center mindset. We managed all of our own services: MySQL, Cassandra, Aerospike, Memcache, you name it. We set up VMs just as you would traditional servers, installed our applications on them, and managed them in Nagios or Ganglia.
+
+Unfortunately, this way of thinking was antithetical to a cloud-centric approach. Instead of thinking in terms of services, we were thinking in terms of servers. Instead of using modern cloud approaches such as autoscaling, microservices, or even managed VMs, we were thinking in terms of scripted setups, server deployments, and avoiding vendor lock-in.
+
+These ways of thinking were not bad per se; they were simply inefficient. They weren’t taking advantage of the changes to the cloud that were happening very quickly. It also meant that when changes needed to take place, we were treating those changes as big, slow changes to a datacenter, rather than small, fast changes to the cloud.
+
+### The Solution
+
+#### Kubernetes As A Tool To Foster Docker Adoption
+
+As Docker became more of a force in our industry, engineers at ShareThis also started experimenting with it to good effect. It soon became obvious that we needed to have a working container for every app in our company just so we could simplify testing in our development environment.
+
+Some apps moved quickly into Docker because they were simple and had few dependencies; those we were able to manage using Fig (Fig was the original name of Docker Compose). Still, many of our data pipelines and interdependent apps were too gnarly to be directly dockerized. We still wanted to do it, but Docker was not enough.
+
+In late 2015, we were frustrated enough with our legacy infrastructure that we finally bit the bullet. We evaluated Docker’s tools, ECS, Kubernetes, and Mesosphere. It was quickly obvious that Kubernetes was in a more stable and user-friendly state than its competitors for our infrastructure. As a company, we could solidify our infrastructure on Docker by simply setting the goal of having all of our infrastructure on Kubernetes.
+
+Engineers were skeptical at first. However, once they saw applications scale effortlessly into hundreds of instances per application, they were hooked. Now, not only were there pain points driving us forward into Docker and, by extension, Kubernetes, but there was genuine excitement for the technology pulling us in. This has allowed us to make an incredibly difficult migration fairly quickly. We now run Kubernetes in multiple regions on about 65 large VMs, increasing to over 100 in the next couple of months. Our Kubernetes cluster currently processes 800 million requests per day, with the plan to process over 2 billion requests per day in the coming months.
+
+#### Kubernetes As A Tool To Manage Containers
+
+Our earliest use of Docker was promising for development, but not so much so for production. The biggest friction point was the inability to manage Docker components at scale. Knowing which containers were running where, what version of a deployment was running, what state an app was in, how to manage subnets and VPCs, etc., plagued any chance of it going to production. The tooling required would have been substantial.
+
+
+
+When you look at Kubernetes, there are several key features that were immediately attractive:
+
+* It is easy to install on AWS (where all our apps were running)
+* There is a direct path from a Dockerfile to a replication controller through a YAML/JSON file
+* Pods are able to scale in number easily
+* We can easily scale the number of VMs running on AWS in a Kubernetes cluster
+* Rolling deployments and rollback are built into the tooling
+* Each pod gets monitored through health checks
+* Service endpoints are managed by the tool
+* There is an active and vibrant community
+
+
+
+Unfortunately, one of the biggest pain points was that the tooling didn’t solve our existing legacy infrastructure; it just provided an infrastructure to move onto. There were still a variety of network quirks that kept us from directly moving our applications onto a new VPC. In addition, reworking so many applications required developers to jump onto problems that had classically been solved by sysadmins and operations teams.
+
+#### Kubernetes As A Tool For Onboarding Developers On Infrastructure
+
+When we decided to make the switch from what was essentially a Chef-run setup to Kubernetes, I do not think we understood all of the pain points that we would hit. We ran our servers in a variety of different ways in a variety of different network configurations that were considerably different than the clean setup that you find on a fresh Kubernetes VPC.
+
+In production we ran in both AWS VPCs and AWS classic across multiple regions. This means that we managed several subnets with different access controls across different applications. Our most recent applications were also very secure, having no public endpoints. This meant that we had a combination of VPC peering, network address translation (NAT), and proxies running in varied configurations.
+
+In the Kubernetes world, there’s only the VPC. All the pods can theoretically talk to each other, and service endpoints are explicitly defined. It’s easy for the developer to gloss over some of the details, and it removes the need for operations (mostly).
+
+We made the decision to convert all of our infrastructure / DevOps developers into application developers (really!). We had already started hiring them on the basis of their development skills rather than their operational skills anyway, so perhaps that is not as wild as it sounds.
+
+We then made the decision to onboard our entire engineering organization onto Operations. Developers are flexible, they enjoy challenges, and they enjoy learning. It was remarkable. After 1 month, our organization went from having a few DevOps folks, to having every engineer capable of modifying our architecture.
+
+The training ground for onboarding on networking, productionization, problem solving, root cause analysis, etc, was getting Kubernetes into prod at scale. After the first month, I was biting my nails and worrying about our choices. After 2 months, it looked like it might some day be viable. After 3 months, we were deploying 10 times per week. After 4 months, 40 apps per week. Only 30% of our apps have been migrated, yet the gains are not only remarkable, they are astounding. Kubernetes allowed us to go from an infrastructure-is-slowing-us-down-ugh! organization, to an infrastructure-is-speeding-us-up-yay! organization.
+
+#### Kubernetes As A Means To Unlock Continuous Integration And Delivery
+
+How did we get to 40+ deployments per week? Put simply, continuous integration and deployment (CI/CD) came as a byproduct of our migration. Our first application in Kubernetes was Jenkins, and every app that went in was also added to Jenkins. As we moved forward, we made Jenkins more automatic, until pods were being added to and removed from Kubernetes faster than we could keep track.
+
+Interestingly, our problems with scaling are now about wanting to push out too many changes at once and people having to wait until their turn. Our goal is to get to 100 deployments per week through the new infrastructure. This is achievable if we can continue to execute on our migration and on our commitment to a CI/CD process on Kubernetes and Jenkins.
+
+### Next Steps
+
+We need to finish our migration. At this point the problems are mostly solved; the biggest difficulties are in the tedium of the task at hand. Moving things out of our legacy infrastructure means changing the network configurations to allow access to and from the Kubernetes VPC and across the regions. This is still a very real pain, and one we continue to address.
+
+Some services do not play well in Kubernetes -- think stateful distributed databases. Luckily, we can usually migrate those to a 3rd party who will manage them for us. At the end of this migration, we will only be running pods on Kubernetes. Our infrastructure will become much simpler.
+
+All these changes do not come for free; committing our entire infrastructure to Kubernetes means that we need to have Kubernetes experts. Our team has been unblocked in terms of infrastructure, and they are busy adding business value through application development (as they should be). However, we do not (yet) have engineers committed to staying up to date with changes to Kubernetes and cloud computing.
+
+As such, we have transferred one engineer to a new “cloud platform team” and will hire a couple of others (have I mentioned [we’re hiring](http://www.sharethis.com/hiring.html)!). They will be responsible for developing tools that we can use to interface well with Kubernetes and manage all of our cloud resources. In addition, they will be working in the Kubernetes source code, part of Kubernetes SIGs, and ideally, pushing code into the open source project.
+
+### Summary
+
+All in all, while the move to Kubernetes initially seemed daunting, it was far less complicated and disruptive than we thought. And the reward at the other end was a company that could respond as fast as our customers wanted.
+
+_Editor's note: at a recent Kubernetes meetup, the team at ShareThis gave a talk about their production use of Kubernetes. Video is embedded below._
diff --git a/blog/_posts/2016-02-00-State-Of-Container-World-January-2016.md b/blog/_posts/2016-02-00-State-Of-Container-World-January-2016.md
new file mode 100644
index 00000000000..af9b7d9c11f
--- /dev/null
+++ b/blog/_posts/2016-02-00-State-Of-Container-World-January-2016.md
@@ -0,0 +1,62 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " State of the Container World, January 2016 "
+date: Tuesday, February 01, 2016
+pagination:
+ enabled: true
+---
+At the start of the new year, we sent out a survey to gauge the state of the container world. We’re ready to send the [February edition](https://docs.google.com/forms/d/13yxxBqb5igUhwrrnDExLzZPjREiCnSs-AH-y4SSZ-5c/viewform), but before we do, let’s take a look at the January data from the 119 responses (thank you for participating!).
+
+A note about these numbers: First, you may notice that the numbers don’t add up to 100%; the choices were not exclusive in most cases, and so the percentages given are the percentage of all respondents who selected a particular choice. Second, while we attempted to reach a broad cross-section of the cloud community, the survey was initially sent out via Twitter to followers of [@brendandburns](https://twitter.com/brendandburns), [@kelseyhightower](https://twitter.com/kelseyhightower), [@sarahnovotny](https://twitter.com/sarahnovotny), [@juliaferraioli](https://twitter.com/juliaferraioli), and [@thagomizer\_rb](https://twitter.com/thagomizer_rb), so the audience is likely not a perfect cross-section. We’re working to broaden our sample size (have I mentioned our February survey? [Come take it now](https://docs.google.com/forms/d/13yxxBqb5igUhwrrnDExLzZPjREiCnSs-AH-y4SSZ-5c/viewform)).
+
+#### Now, without further ado, the data:
+First off, lots of you are using containers! 71% are currently using containers, while 24% of you are considering using them soon. Obviously this indicates a somewhat biased sample set. Numbers for container usage in the broader community vary, but are definitely lower than 71%. Consequently, take all of the rest of these numbers with a grain of salt.
+
+So what are folks using containers for? More than 80% of respondents are using containers for development, while only 50% are using containers for production. But production use is coming: 78% of container users said they were planning on moving to production sometime soon.
+
+Where do you deploy containers? Your laptop was the clear winner here, with 53% of folks deploying to laptops. Next up was 44% of people running on their own VMs (Vagrant? OpenStack? we’ll try to dive into this in the February survey), followed by 33% of folks running on physical infrastructure, and 31% on public cloud VMs.
+
+And how are you deploying containers? 54% of you are using Kubernetes, which is awesome to see, though likely somewhat biased by the sample set (see the notes above). Possibly more surprising, 45% of you are using shell scripts. Is it because of the extensive (and awesome) Bash scripting going on in the Kubernetes repository? Go on, you can tell me the truth… Rounding out the numbers, 25% are using CAPS (Chef/Ansible/Puppet/Salt) systems, and roughly 13% are using Docker Swarm, Mesos or other systems.
+
+Finally, we asked people for free-text answers about the challenges of working with containers. Some of the most interesting answers are grouped and reproduced here:
+
+
+###### Development Complexity
+
+- “Silo'd development environments / workflows can be fragmented, ease of access to tools like logs is available when debugging containers but not intuitive at times, massive amounts of knowledge is required to grasp the whole infrastructure stack and best practices from say deploying / updating kubernetes, to underlying networking etc.”
+- “Migrating developer workflow. People uninitiated with containers, volumes, etc just want to work.”
+
+
+###### Security
+
+- “Network Security”
+- “Secrets”
+
+
+###### Immaturity
+
+- “Lack of a comprehensive non-proprietary standard (i.e. non-Docker) like e.g runC / OCI”
+- “Still early stage with few tools and many missing features.”
+- “Poor CI support, a lot of tooling still in very early days.”
+- "We've never done it that way before."
+
+
+###### Complexity
+
+- “Networking support, providing ip per pod on bare metal for kubernetes”
+- “Clustering is still too hard”
+- “Setting up Mesos and Kubernetes too damn complicated!!”
+
+###### Data
+
+- “Lack of flexibility of volumes (which is the same problem with VMs, physical hardware, etc)”
+- “Persistency”
+- “Storage”
+- “Persistent Data”
+
+_Download the full survey results [here](https://docs.google.com/spreadsheets/d/18wZe7wEDvRuT78CEifs13maXoSGem_hJvbOSmsuJtkA/pub?gid=530616014&single=true&output=csv) (CSV file)._
+
+_Update: 2/1/2016 - Fixed the CSV link._
+
+-- Brendan Burns, Software Engineer, Google
diff --git a/blog/_posts/2016-02-00-kubernetes-community-meeting-notes_23.md b/blog/_posts/2016-02-00-kubernetes-community-meeting-notes_23.md
new file mode 100644
index 00000000000..5f21848d658
--- /dev/null
+++ b/blog/_posts/2016-02-00-kubernetes-community-meeting-notes_23.md
@@ -0,0 +1,50 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes Community Meeting Notes - 20160218 "
+date: Wednesday, February 23, 2016
+pagination:
+ enabled: true
+---
+##### February 18th - kmachine demo, clusterops SIG formed, new k8s.io website preview, 1.2 update and planning 1.3
+The Kubernetes contributing community meets most Thursdays at 10:00PT to discuss the project's status via videoconference. Here are the notes from the latest meeting.
+
+* Note taker: Rob Hirschfeld
+* Demo (10 min): [kmachine][1] [Sebastien Goasguen]
+ * started :01 intro video
+ * looking to create mirror of Docker tools for Kubernetes (similar to machine, compose, etc)
+ * kmachine (forked from Docker Machine, so has the same endpoints)
+* Use Case (10 min): started at :15
+* SIG Report starter
+ * Cluster Ops launch meeting Friday ([doc][2]). [Rob Hirschfeld]
+* Time Zone Discussion [:22]
+ * This timezone does not work for Asia.
+ * Considering rotation - once per month
+ * Likely 5 or 6 PT
+ * Rob suggested moving the regular meeting up a little
+* k8s.io website preview [John Mulhausen] [:27]
+ * using GitHub for docs. You can fork and do a pull request against the site
+ * will be its own Kubernetes organization BUT not in the code repo
+ * Google will offer a "doc bounty" where you can get GCP credits for working on docs
+ * Uses Jekyll to generate the site (e.g. the ToC)
+ * Principle will be 100% GitHub Pages; no script trickery or plugins, just fork/clone, edit, and push
+ * Hope to launch at KubeCon EU
+ * Home Page Only Preview: http://kub.unitedcreations.xyz
+* 1.2 Release Watch [T.J. Goltermann] [:38]
+* 1.3 Planning update [T.J. Goltermann]
+* GSoC participation -- deadline 2/19 [Sarah Novotny]
+* March 10th meeting? [Sarah Novotny]
+
+To get involved in the Kubernetes community consider joining our [Slack channel][3], taking a look at the [Kubernetes project][4] on GitHub, or joining the [Kubernetes-dev Google group][5]. If you're really excited, you can do all of the above and join us for the next community conversation — February 25th, 2016. Please add yourself or a topic you want to know about to the [agenda][6] and get a calendar invitation by joining [this group][7].
+
+ "https://youtu.be/L5BgX2VJhlY?list=PL69nYSiGNLP1pkHsbPjzAewvMgGUpkCnJ"
+
+_-- Kubernetes Community_
+
+[1]: https://github.com/skippbox/kmachine
+[2]: https://docs.google.com/document/d/1IhN5v6MjcAUrvLd9dAWtKcGWBWSaRU8DNyPiof3gYMY/edit#
+[3]: http://slack.k8s.io/
+[4]: https://github.com/kubernetes/
+[5]: https://groups.google.com/forum/#!forum/kubernetes-dev
+[6]: https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY/edit#
+[7]: https://groups.google.com/forum/#!forum/kubernetes-community-video-chat
diff --git a/blog/_posts/2016-03-00-1000-Nodes-And-Beyond-Updates-To-Kubernetes-Performance-And-Scalability-In-12.md b/blog/_posts/2016-03-00-1000-Nodes-And-Beyond-Updates-To-Kubernetes-Performance-And-Scalability-In-12.md
new file mode 100644
index 00000000000..bdb03e15669
--- /dev/null
+++ b/blog/_posts/2016-03-00-1000-Nodes-And-Beyond-Updates-To-Kubernetes-Performance-And-Scalability-In-12.md
@@ -0,0 +1,174 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " 1000 nodes and beyond: updates to Kubernetes performance and scalability in 1.2 "
+date: Tuesday, March 28, 2016
+pagination:
+ enabled: true
+---
+_Editor's note: this is the first in a [series of in-depth posts](http://blog.kubernetes.io/2016/03/five-days-of-kubernetes-12.html) on what's new in Kubernetes 1.2_
+
+We're proud to announce that with the [release of 1.2](http://blog.kubernetes.io/2016/03/Kubernetes-1.2-even-more-performance-upgrades-plus-easier-application-deployment-and-management-.html), Kubernetes now supports 1000-node clusters, with a reduction of 80% in 99th percentile tail latency for most API operations. This means in just six months, we've increased our overall scale by 10 times while maintaining a great user experience — the 99th percentile pod startup times are less than 3 seconds, and 99th percentile latency of most API operations is tens of milliseconds (the exception being LIST operations, which take hundreds of milliseconds in very large clusters).
+
+Words are fine, but nothing speaks louder than a demo. Check this out!
+
+
+
+In the above video, you saw the cluster scale up to 10 M queries per second (QPS) over 1,000 nodes, including a rolling update, with zero downtime and no impact to tail latency. That’s big enough to be one of the top 100 sites on the Internet!
+
+In this blog post, we’ll cover the work we did to achieve this result, and discuss some of our future plans for scaling even higher.
+
+
+### Methodology
+We benchmark Kubernetes scalability against the following Service Level Objectives (SLOs):
+
+1. **API responsiveness** [1]: 99% of all API calls return in less than 1s.
+2. **Pod startup time**: 99% of pods and their containers (with pre-pulled images) start within 5s.
+
+We say Kubernetes scales to a certain number of nodes only if both of these SLOs are met. We continuously collect and report the measurements described above as part of the project test framework. This battery of tests breaks down into two parts: API responsiveness and pod startup time.
+
+
+### API responsiveness for user-level abstractions [2]
+Kubernetes offers high-level abstractions for users to represent their applications. For example, the ReplicationController is an abstraction representing a collection of [pods](http://kubernetes.io/docs/user-guide/pods/). Listing all ReplicationControllers or listing all pods from a given ReplicationController is a very common use case. On the other hand, there is little reason someone would want to list all pods in the system — for example, 30,000 pods (1000 nodes with 30 pods per node) represent ~150MB of data (~5kB/pod × 30k pods). So this test uses ReplicationControllers.
+
+For this test (assuming N to be number of nodes in the cluster), we:
+
+1. Create roughly 3xN ReplicationControllers of different sizes (5, 30 and 250 replicas), which altogether have 30xN replicas. We spread their creation over time (i.e. we don’t start all of them at once) and wait until all of them are running.
+
+2. Perform a few operations on every ReplicationController (scale it, list all its instances, etc.), spreading those over time, and measuring the latency of each operation. This is similar to what a real user might do in the course of normal cluster operation.
+
+3. Stop and delete all ReplicationControllers in the system.
+
+For the results of this test, see the “Metrics from Kubernetes 1.2” section below.
+
+For the v1.3 release, we plan to extend this test by also creating Services, Deployments, DaemonSets, and other API objects.
+
+
+### Pod startup end-to-end latency [3]
+Users are also very interested in how long it takes Kubernetes to schedule and start a pod. This is true not only upon initial creation, but also when a ReplicationController needs to create a replacement pod to take over from one whose node failed.
+
+We (assuming N to be the number of nodes in the cluster):
+
+1. Create a single ReplicationController with 30xN replicas and wait until all of them are running. We are also running high-density tests, with 100xN replicas, but with fewer nodes in the cluster.
+
+2. Launch a series of single-pod ReplicationControllers - one every 200ms. For each, we measure “total end-to-end startup time” (defined below).
+
+3. Stop and delete all pods and replication controllers in the system.
+
+We define “total end-to-end startup time” as the time from the moment the client sends the API server a request to create a ReplicationController, to the moment when “running & ready” pod status is returned to the client via watch. That means that “pod startup time” includes the ReplicationController being created and in turn creating a pod, the scheduler scheduling that pod, Kubernetes setting up intra-pod networking, starting containers, waiting until the pod is successfully responding to health checks, and then finally waiting until the pod has reported its status back to the API server and the API server has reported it via watch to the client.
+
+While we could have decreased the “pod startup time” substantially by excluding for example waiting for report via watch, or creating pods directly rather than through ReplicationControllers, we believe that a broad definition that maps to the most realistic use cases is the best for real users to understand the performance they can expect from the system.
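+
+To illustrate the measurement itself (the actual harness lives in test/e2e/density.go), a sketch along these lines captures the watch-based definition. It is written against today's client-go, and the namespace and label selector are assumptions for this sketch:
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+	"time"
+
+	corev1 "k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/client-go/kubernetes"
+	"k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
+	if err != nil {
+		panic(err)
+	}
+	client := kubernetes.NewForConfigOrDie(cfg)
+
+	// Start the clock when the create request is sent; the single-pod
+	// ReplicationController would be created right here.
+	start := time.Now()
+
+	// Stop the clock when a pod from that controller is reported
+	// "running & ready" via watch.
+	w, err := client.CoreV1().Pods("default").Watch(context.Background(),
+		metav1.ListOptions{LabelSelector: "app=startup-probe"})
+	if err != nil {
+		panic(err)
+	}
+	defer w.Stop()
+
+	for ev := range w.ResultChan() {
+		pod, ok := ev.Object.(*corev1.Pod)
+		if !ok {
+			continue
+		}
+		for _, c := range pod.Status.Conditions {
+			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
+				fmt.Println("end-to-end startup time:", time.Since(start))
+				return
+			}
+		}
+	}
+}
+```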
+
+
+### Metrics from Kubernetes 1.2
+
+So what was the result? We run our tests on Google Compute Engine, setting the size of the master VM based on the size of the Kubernetes cluster. In particular, for 1000-node clusters we use an n1-standard-32 VM for the master (32 cores, 120GB RAM) [4].
+
+
+#### API responsiveness
+The following two charts present a comparison of 99th percentile API call latencies for the Kubernetes 1.2 release and the 1.0 release on 100-node clusters. (Smaller bars are better)
+
+
+
+
+We present results for LIST operations separately, since these latencies are significantly higher. Note that we slightly modified our tests in the meantime, so running the current tests against v1.0 would result in higher latencies than originally measured.
+
+
+
+We also ran these tests against 1000-node clusters. Note: We did not support clusters larger than 100 nodes on GKE, so we do not have metrics to compare these results to. However, customers have reported running on 1,000+ node clusters since Kubernetes 1.0.
+
+
+
+
+Since LIST operations are significantly larger, we again present them separately. All latencies, in both cluster sizes, are well within our 1-second SLO.
+
+
+
+
+### Pod startup end-to-end latency
+The results for “pod startup latency” (as defined in the “Pod startup end-to-end latency” section) are presented in the following graph. For reference, we also present results from v1.0 for 100-node clusters in the first part of the graph.
+
+
+
+
+As you can see, we substantially reduced tail latency in 100-node clusters, and now deliver low pod startup latency up to the largest cluster sizes we have measured. It is noteworthy that the metrics for 1000-node clusters, for both API latency and pod startup latency, are generally better than those reported for 100-node clusters just six months ago!
+
+
+### How did we make these improvements?
+
+To make these significant gains in scale and performance over the past six months, we made a number of improvements across the whole system. Some of the most important ones are listed below.
+
+
+- _**Created a “read cache” at the API server level**_ ([https://github.com/kubernetes/kubernetes/issues/15945](https://github.com/kubernetes/kubernetes/issues/15945))
+
+Since most Kubernetes control logic operates on an ordered, consistent snapshot kept up-to-date by etcd watches (via the API server), a slight delay in the arrival of that data has no impact on the correct operation of the cluster. These independent controller loops, distributed by design for extensibility of the system, are happy to trade a bit of latency for an increase in overall throughput.
+
+In Kubernetes 1.2 we exploited this fact to improve performance and scalability by adding an API server read cache. With this change, the API server’s clients can read data from an in-memory cache in the API server instead of reading it from etcd. The cache is updated directly from etcd via watch in the background. Those clients that can tolerate latency in retrieving data (usually the lag of cache is on the order of tens of milliseconds) can be served entirely from cache, reducing the load on etcd and increasing the throughput of the server. This is a continuation of an optimization begun in v1.1, where we added support for serving watch directly from the API server instead of etcd: [https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/apiserver-watch.md](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/apiserver-watch.md).
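+
+The pattern itself is simple; here is a toy sketch in Go of a watch-fed read cache, an illustration of the idea rather than the actual API server code:
+
+```go
+package main
+
+import (
+	"fmt"
+	"sync"
+	"time"
+)
+
+// Event mirrors the shape of a watch notification (simplified).
+type Event struct {
+	Key    string
+	Object []byte // serialized API object
+	Delete bool
+}
+
+// ReadCache serves GET/LIST traffic from memory; a background goroutine
+// applies the watch stream, so reads lag etcd only by the watch latency.
+type ReadCache struct {
+	mu    sync.RWMutex
+	store map[string][]byte
+}
+
+func NewReadCache(events <-chan Event) *ReadCache {
+	c := &ReadCache{store: make(map[string][]byte)}
+	go func() {
+		for ev := range events {
+			c.mu.Lock()
+			if ev.Delete {
+				delete(c.store, ev.Key)
+			} else {
+				c.store[ev.Key] = ev.Object
+			}
+			c.mu.Unlock()
+		}
+	}()
+	return c
+}
+
+// Get returns the cached object; callers accept bounded staleness.
+func (c *ReadCache) Get(key string) ([]byte, bool) {
+	c.mu.RLock()
+	defer c.mu.RUnlock()
+	obj, ok := c.store[key]
+	return obj, ok
+}
+
+func main() {
+	events := make(chan Event, 1)
+	cache := NewReadCache(events)
+	events <- Event{Key: "/pods/default/web-1", Object: []byte(`{"name":"web-1"}`)}
+	time.Sleep(10 * time.Millisecond) // let the background goroutine apply it
+	if obj, ok := cache.Get("/pods/default/web-1"); ok {
+		fmt.Println(string(obj))
+	}
+}
+```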
+
+Thanks to contributions from Wojciech Tyczynski at Google and Clayton Coleman and Timothy St. Clair at Red Hat, we were able to join careful system design with the unique advantages of etcd to improve the scalability and performance of Kubernetes.
+- **Introduce a “Pod Lifecycle Event Generator” (PLEG) in the Kubelet** ([https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/pod-lifecycle-event-generator.md](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/pod-lifecycle-event-generator.md))
+
+Kubernetes 1.2 also improved density from a pods-per-node perspective — for v1.2 we test and advertise up to 100 pods on a single node (vs 30 pods in the 1.1 release). This improvement was possible because of diligent work by the Kubernetes community through an implementation of the Pod Lifecycle Event Generator (PLEG).
+
+The Kubelet (the Kubernetes node agent) has a worker thread per pod which is responsible for managing the pod’s lifecycle. In earlier releases each worker would periodically poll the underlying container runtime (Docker) to detect state changes, and perform any necessary actions to ensure the node’s state matched the desired state (e.g. by starting and stopping containers). As pod density increased, concurrent polling from each worker would overwhelm the Docker runtime, leading to serious reliability and performance issues (including additional CPU utilization which was one of the limiting factors for scaling up).
+
+To address this problem we introduced a new Kubelet subcomponent — the PLEG — to centralize state change detection and generate lifecycle events for the workers. With concurrent polling eliminated, we were able to lower the steady-state CPU usage of Kubelet and the container runtime by 4x. This also allowed us to adopt a shorter polling period, so as to detect and react to changes more quickly.
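+
+In miniature, the change looks like this (a conceptual sketch, not the kubelet's actual code; the fake runtime here stands in for Docker):
+
+```go
+package main
+
+import (
+	"fmt"
+	"time"
+)
+
+// PodStatus is a simplified view of a pod's state in the container runtime.
+type PodStatus struct{ State string }
+
+// LifecycleEvent is what the PLEG hands to the per-pod workers.
+type LifecycleEvent struct{ PodID, State string }
+
+// pleg relists *all* pods in one call per period and diffs against the
+// previous snapshot, replacing N concurrent per-pod pollers with a single
+// loop that fans events out on a channel.
+func pleg(relist func() map[string]PodStatus, period time.Duration, events chan<- LifecycleEvent) {
+	old := map[string]PodStatus{}
+	for range time.Tick(period) {
+		cur := relist()
+		for id, st := range cur {
+			if prev, seen := old[id]; !seen || prev.State != st.State {
+				events <- LifecycleEvent{PodID: id, State: st.State}
+			}
+		}
+		old = cur
+	}
+}
+
+func main() {
+	// A fake runtime: one pod that flips from "created" to "running".
+	states := []map[string]PodStatus{
+		{"pod-a": {State: "created"}},
+		{"pod-a": {State: "running"}},
+	}
+	i := 0
+	relist := func() map[string]PodStatus {
+		s := states[i]
+		if i < len(states)-1 {
+			i++
+		}
+		return s
+	}
+
+	events := make(chan LifecycleEvent, 10)
+	go pleg(relist, 50*time.Millisecond, events)
+	for j := 0; j < 2; j++ {
+		fmt.Println(<-events) // workers react to these instead of polling
+	}
+}
+```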
+
+
+- **Improved scheduler throughput.** Kubernetes community members from CoreOS (Hongchao Deng and Xiang Li) helped to dive deep into the Kubernetes scheduler and dramatically improve throughput without sacrificing accuracy or flexibility. They improved scheduling throughput by nearly 1400%, dramatically cutting the total time to schedule 30,000 pods! You can read a great blog post on how they approached the problem here: [https://coreos.com/blog/improving-kubernetes-scheduler-performance.html](https://coreos.com/blog/improving-kubernetes-scheduler-performance.html)
+
+- **A more efficient JSON parser.** Go’s standard library includes a flexible and easy-to-use JSON parser that can encode and decode any Go struct using the reflection API. But that flexibility comes with a cost — reflection allocates lots of small objects that have to be tracked and garbage collected by the runtime. Our profiling bore that out, showing that a large chunk of both client and server time was spent in serialization. Given that our types don’t change frequently, we suspected that a significant amount of reflection could be bypassed through code generation.
+
+After surveying the Go JSON landscape and conducting some initial tests, we found the [ugorji codec](https://github.com/ugorji/go) library offered the most significant speedups - a 200% improvement in encoding and decoding JSON when using generated serializers, with a significant reduction in object allocations. After contributing fixes to the upstream library to deal with some of our complex structures, we switched Kubernetes and the go-etcd client library over. Along with some other important optimizations in the layers above and below JSON, we were able to slash the cost in CPU time of almost all API operations, especially reads.
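+
+At call sites the switch is mostly mechanical; here is a minimal sketch of the ugorji/go codec API (the real wins in Kubernetes came from codecgen-generated encode/decode methods on the API types, which this toy type does not have):
+
+```go
+package main
+
+import (
+	"fmt"
+
+	"github.com/ugorji/go/codec"
+)
+
+// Pod is a stand-in for a real API type.
+type Pod struct {
+	Name   string            `json:"name"`
+	Labels map[string]string `json:"labels"`
+}
+
+func main() {
+	jh := &codec.JsonHandle{}
+	in := Pod{Name: "web-1", Labels: map[string]string{"app": "web"}}
+
+	// Encode to JSON via the codec handle instead of encoding/json.
+	var buf []byte
+	if err := codec.NewEncoderBytes(&buf, jh).Encode(in); err != nil {
+		panic(err)
+	}
+
+	// Decode back the same way.
+	var out Pod
+	if err := codec.NewDecoderBytes(buf, jh).Decode(&out); err != nil {
+		panic(err)
+	}
+	fmt.Println(out.Name, out.Labels)
+}
+```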
+
+- Other notable changes led to significant wins, including:
+
+ - Reducing the number of broken TCP connections, which were causing unnecessary new TLS sessions: [https://github.com/kubernetes/kubernetes/issues/15664](https://github.com/kubernetes/kubernetes/issues/15664)
+
+
+ - Improving the performance of ReplicationController in large clusters: [https://github.com/kubernetes/kubernetes/issues/21672](https://github.com/kubernetes/kubernetes/issues/21672)
+
+In both cases, the problem was debugged and/or fixed by Kubernetes community members, including Andy Goldstein and Jordan Liggitt from Red Hat, and Liang Mingqiang from NetEase.
+
+### Kubernetes 1.3 and Beyond
+
+Of course, our job is not finished. We will continue to invest in improving Kubernetes performance, as we would like it to scale to many thousands of nodes, just like Google’s [Borg](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43438.pdf). Thanks to our investment in testing infrastructure and our focus on how teams use containers in production, we have already identified the next steps on our path to improving scale.
+
+
+
+On deck for Kubernetes 1.3:
+
+1. Our main bottleneck is still the API server, which spends the majority of its time just marshaling and unmarshaling JSON objects. We plan to [add support for protocol buffers](https://github.com/kubernetes/kubernetes/pull/22600) to the API as an optional path for inter-component communication and for storing objects in etcd. Users will still be able to use JSON to communicate with the API server, but since the majority of Kubernetes communication is intra-cluster (API server to node, scheduler to API server, etc.) we expect a significant reduction in CPU and memory usage on the master.
+
+2. Kubernetes uses labels to identify sets of objects; for example, identifying which pods belong to a given ReplicationController requires iterating over all pods in a namespace and choosing those that match the controller’s label selector. The addition of an efficient indexer for labels that can take advantage of the existing API object cache will make it possible to quickly find the objects that match a label selector, making this common operation much faster (see the sketch after this list).
+
+3. Scheduling decisions are based on a number of different factors, including spreading pods based on requested resources, spreading pods with the same selectors (e.g. from the same Service, ReplicationController, Job, etc.), presence of needed container images on the node, etc. Those calculations, in particular selector spreading, have many opportunities for improvement — see [https://github.com/kubernetes/kubernetes/issues/22262](https://github.com/kubernetes/kubernetes/issues/22262) for just one suggested change.
+
+4. We are also excited about the upcoming etcd v3.0 release, which was designed with the Kubernetes use case in mind — it will both improve performance and introduce new features. Contributors from CoreOS have already begun laying the groundwork for moving Kubernetes to etcd v3.0 (see [https://github.com/kubernetes/kubernetes/pull/22604](https://github.com/kubernetes/kubernetes/pull/22604)).
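+
+For the label indexer in item 2, the idea is an inverted map from label pairs to object names, so that a selector lookup becomes a map hit instead of a scan over every object. A toy sketch follows (equality selectors only; this is not the real implementation):
+
+```go
+package main
+
+import "fmt"
+
+// labelIndex maps "key=value" to the set of object names carrying that label.
+type labelIndex map[string]map[string]struct{}
+
+func (ix labelIndex) Add(name string, labels map[string]string) {
+	for k, v := range labels {
+		key := k + "=" + v
+		if ix[key] == nil {
+			ix[key] = map[string]struct{}{}
+		}
+		ix[key][name] = struct{}{}
+	}
+}
+
+// Match intersects the per-pair sets; an empty selector matches nothing here.
+func (ix labelIndex) Match(selector map[string]string) []string {
+	var candidates map[string]struct{}
+	for k, v := range selector {
+		set := ix[k+"="+v]
+		if candidates == nil {
+			candidates = set
+			continue
+		}
+		next := map[string]struct{}{}
+		for name := range candidates {
+			if _, ok := set[name]; ok {
+				next[name] = struct{}{}
+			}
+		}
+		candidates = next
+	}
+	out := make([]string, 0, len(candidates))
+	for name := range candidates {
+		out = append(out, name)
+	}
+	return out
+}
+
+func main() {
+	ix := labelIndex{}
+	ix.Add("web-1", map[string]string{"app": "web", "tier": "frontend"})
+	ix.Add("db-1", map[string]string{"app": "db"})
+	fmt.Println(ix.Match(map[string]string{"app": "web"})) // [web-1]
+}
+```
+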
+While this list does not capture all the efforts around performance, we are optimistic we will achieve as big a performance gain as we saw going from Kubernetes 1.0 to 1.2.
+
+### Conclusion
+
+In the last six months we’ve significantly improved Kubernetes scalability, allowing v1.2 to run 1000-node clusters with the same excellent responsiveness (as measured by our SLOs) as we were previously achieving only on much smaller clusters. But that isn’t enough — we want to push Kubernetes even further and faster. Kubernetes v1.3 will improve the system’s scalability and responsiveness further, while continuing to add features that make it easier to build and run the most demanding container-based applications.
+
+
+
+Please join our community and help us build the future of Kubernetes! There are many ways to participate. If you’re particularly interested in scalability, check out:
+
+- Our [scalability slack channel](https://kubernetes.slack.com/messages/sig-scale/)
+- The scalability “Special Interest Group”, which meets every Thursday at 9 AM Pacific Time at [SIG-Scale hangout](https://plus.google.com/hangouts/_/google.com/k8scale-hangout)
+
+And of course, for more information about the project in general, go to [www.kubernetes.io](http://www.kubernetes.io/).
+
+-- _Wojciech Tyczynski, Software Engineer, Google_
+
+
+* * *
+**[1]** We exclude operations on “events” since these are more like system logs and are not required for the system to operate properly.
+**[2]** This is test/e2e/load.go from the Kubernetes GitHub repository.
+**[3]** This is the test/e2e/density.go test from the Kubernetes GitHub repository.
+**[4]** We are looking into optimizing this in the next release, but for now using a smaller master can result in significant (order of magnitude) performance degradation. We encourage anyone running benchmarks against Kubernetes or attempting to replicate these findings to use a similarly sized master, or performance will suffer.
diff --git a/blog/_posts/2016-03-00-Appformix-Helping-Enterprises.md b/blog/_posts/2016-03-00-Appformix-Helping-Enterprises.md
new file mode 100644
index 00000000000..acaa6dec25a
--- /dev/null
+++ b/blog/_posts/2016-03-00-Appformix-Helping-Enterprises.md
@@ -0,0 +1,53 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " AppFormix: Helping Enterprises Operationalize Kubernetes "
+date: Wednesday, March 29, 2016
+pagination:
+ enabled: true
+---
+_Today’s guest post is written by Sumeet Singh, founder and CEO of [AppFormix](http://www.appformix.com/), a cloud infrastructure performance optimization service helping enterprise operators streamline their cloud operations on any OpenStack or Kubernetes cloud._
+
+If you run clouds for a living, you’re well aware that the tools we've used since the client/server era for monitoring, analytics and optimization just don’t cut it when applied to the agile, dynamic and rapidly changing world of modern cloud infrastructure.
+
+And, if you’re an operator of enterprise clouds, you know that implementing containers and container cluster management is all about giving your application developers a more agile, responsive and efficient cloud infrastructure. Applications are being rewritten and new ones developed – not for legacy environments where relatively static workloads are the norm, but for dynamic, scalable cloud environments. The dynamic nature of cloud native applications coupled with the shift to continuous deployment means that the demands placed by the applications on the infrastructure are constantly changing.
+
+This shift necessitates infrastructure transparency and real-time monitoring and analytics. Without these key pieces, neither applications nor their underlying plumbing can deliver the low-latency user experience end users have come to expect.
+
+**AppFormix Architectural Review**
+From an operational standpoint, it is necessary to understand how applications are consuming infrastructure resources in order to maximize ROI and guarantee SLAs. AppFormix software empowers operators and developers to monitor, visualize, and control how physical resources are utilized by cloud workloads.
+
+At the center of the software, the AppFormix Data Platform provides a distributed analysis engine that performs configurable, real-time evaluation of in-depth, high-resolution metrics. On each host, the resource-efficient AppFormix Agent collects and evaluates multi-layer metrics from the hardware, virtualization layer, and up to the application. Intelligent agents offer sub-second response times that make it possible to detect and solve problems before they start to impact applications and users. The raw data is associated with the elements that comprise a cloud-native environment: applications, virtual machines, containers, hosts. The AppFormix Agent then publishes metrics and events to a Data Manager that stores and forwards the data to Analytics modules. Events are based on predefined or dynamic conditions set by users or infrastructure operators to make sure that SLAs and policies are being met.
+
+
+|  |
+| Figure 1: Roll-up summary view of the Kubernetes cluster. Operators and users can define their SLA policies, and AppFormix provides a real-time view of the health of all elements in the Kubernetes cluster. |
+
+
+
+|  |
+| Figure 2: Real-time visualization of telemetry from a Kubernetes node provides a quick overview of resource utilization on the host, as well as resources consumed by the pods and containers. User-defined labels make it easy to capture namespaces and other metadata. |
+
+Additional subsystems are the Policy Controller and Analytics. The Policy Controller manages policies for resource monitoring, analysis, and control. It also provides role-based access control. The Analytics modules analyze metrics and events produced by Data Platform, enabling correlation across multiple elements to provide higher-level information to operators and developers. The Analytics modules may also configure policies in Policy Controller in response to conditions in the infrastructure.
+
+AppFormix organizes elements of cloud infrastructure around hosts and instances (either containers or virtual machines), and logical groups of such elements. AppFormix integrates with cloud platforms using Adapter modules that discover the physical and virtual elements in the environment and configure those elements into the Policy Controller.
+
+**Integrating AppFormix with Kubernetes**
+Enterprises often run many environments located on- or off-prem, as well as different compute technologies (VMs, containers, bare metal). The analytics platform we’ve developed at AppFormix gives Kubernetes users a single pane of glass from which to monitor and manage container clusters in private and hybrid environments.
+
+The AppFormix Kubernetes Adapter leverages the REST-based APIs of Kubernetes to discover nodes, pods, containers, services, and replication controllers. With the relational information about each element, the Kubernetes Adapter is able to represent all of these elements in our system. A pod is a group of containers; a service and a replication controller are both different types of pod groups. In addition, using the watch endpoint, the Kubernetes Adapter stays aware of changes to the environment.
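+
+For a concrete sense of what such an adapter consumes, here is a minimal sketch of listing and then watching pods through the Kubernetes REST API; the API server address is a placeholder, and an unauthenticated local endpoint is assumed:
+
+```
+# List all pods once (replace <apiserver> with your master's address)
+$ curl -s http://<apiserver>:8080/api/v1/pods
+
+# Keep the connection open and stream changes as they happen
+$ curl -s http://<apiserver>:8080/api/v1/pods?watch=true
+```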
+
+**DevOps in the Enterprise with AppFormix**
+With AppFormix, developers and operators can work collaboratively to optimize applications and infrastructure. Users can access a self-service IT experience that delivers visibility into CPU, memory, storage, and network consumption by each layer of the stack: physical hardware, platform, and application software.
+
+
+- **Real-time multi-layer performance metrics** - In real-time, developers can view multi-layer metrics that show container resource consumption in context of the physical node on which it executes. With this context, developers can determine if application performance is limited by the physical infrastructure, due to contention or resource exhaustion, or by application design.
+- **Proactive resource control** - AppFormix Health Analytics provides policy-based actions in response to conditions in the cluster. For example, when resource consumption exceeds a threshold on a worker node, Health Analytics can remove the node from the scheduling pool by invoking Kubernetes REST APIs. This dynamic control is driven by real-time monitoring at each node (see the sketch after this list).
+- **Capacity planning** - Kubernetes will schedule workloads, but operators need to understand how the resources are being utilized. What resources have the most demand? How is demand trending over time? Operators can generate reports that provide necessary data for capacity planning.
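+
+As a rough illustration of the control action described in the “Proactive resource control” bullet above, marking a node unschedulable through the Kubernetes API is one way to remove it from the scheduling pool; the node name below is a placeholder:
+
+```
+# Cordon a node: running pods keep running, but no new pods are scheduled onto it
+$ kubectl patch node <node-name> -p '{"spec":{"unschedulable":true}}'
+
+# Return the node to the scheduling pool
+$ kubectl patch node <node-name> -p '{"spec":{"unschedulable":false}}'
+```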
+
+
+
+
+As you can see, we’re working hard to give Kubernetes users a useful, performant toolset for both OpenStack and Kubernetes environments that allows operators to deliver self-service IT to their application developers. We’re excited to be a partner contributing to the Kubernetes ecosystem and community.
+
+_-- Sumeet Singh, Founder and CEO, AppFormix_
diff --git a/blog/_posts/2016-03-00-Building-Highly-Available-Applications-Using-Kubernetes-New-Multi-Zone-Clusters-A.K.A-Ubernetes-Lite.md b/blog/_posts/2016-03-00-Building-Highly-Available-Applications-Using-Kubernetes-New-Multi-Zone-Clusters-A.K.A-Ubernetes-Lite.md
new file mode 100644
index 00000000000..b6760b17406
--- /dev/null
+++ b/blog/_posts/2016-03-00-Building-Highly-Available-Applications-Using-Kubernetes-New-Multi-Zone-Clusters-A.K.A-Ubernetes-Lite.md
@@ -0,0 +1,241 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Building highly available applications using Kubernetes new multi-zone clusters (a.k.a. 'Ubernetes Lite') "
+date: Wednesday, March 29, 2016
+pagination:
+ enabled: true
+---
+_Editor's note: this is the third post in a [series of in-depth posts](http://blog.kubernetes.io/2016/03/five-days-of-kubernetes-12.html) on what's new in Kubernetes 1.2_
+
+
+
+### Introduction
+One of the most frequently requested features for Kubernetes is the ability to run applications across multiple zones. And with good reason — developers need to deploy applications across multiple failure domains to preserve availability in the event of a single-zone outage.
+
+[Kubernetes 1.2](http://blog.kubernetes.io/2016/03/Kubernetes-1.2-even-more-performance-upgrades-plus-easier-application-deployment-and-management-.html), released two weeks ago, adds support for running a single cluster across multiple failure zones (GCP calls them simply "zones," Amazon calls them "availability zones," here we'll refer to them as "zones"). This is the first step in a broader effort to allow federating multiple Kubernetes clusters together (sometimes referred to by the affectionate nickname "[Ubernetes](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/federation.md)"). This initial version (referred to as "Ubernetes Lite") offers improved application availability by spreading applications across multiple zones within a single cloud provider.
+
+Multi-zone clusters are deliberately simple, and by design, very easy to use — no Kubernetes API changes were required, and no application changes either. You simply deploy your existing Kubernetes application into a new-style multi-zone cluster, and your application automatically becomes resilient to zone failures.
+
+
+### Now into some details . . .
+Ubernetes Lite works by leveraging the Kubernetes platform’s extensibility through labels. Today, when nodes are started, labels are added to every node in the system. With Ubernetes Lite, the system has been extended to also add information about the zone it's being run in. With that, the scheduler can make intelligent decisions about placing application instances.
+
+Specifically, the scheduler already spreads pods to minimize the impact of any single node failure. With Ubernetes Lite, via `SelectorSpreadPriority`, the scheduler will make a best-effort placement to spread across zones as well. We should note that if the zones in your cluster are heterogeneous (e.g., different numbers of nodes or different types of nodes), you may not be able to achieve even spreading of your pods across zones. If desired, you can use homogeneous zones (same number and types of nodes) to reduce the probability of unequal spreading.
+
+This improved labeling also applies to storage. When persistent volumes are created, the `PersistentVolumeLabel` admission controller automatically adds zone labels to them. The scheduler (via the `VolumeZonePredicate` predicate) will then ensure that pods that claim a given volume are only placed into the same zone as that volume, as volumes cannot be attached across zones.
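+
+As a quick, hypothetical illustration (the volume name and abbreviated columns are illustrative), the automatically added labels are visible on the volume itself:
+
+```
+$ kubectl get pv --show-labels
+NAME       CAPACITY   ...   LABELS
+pv-gce-1   10Gi       ...   failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a
+```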
+
+
+### Walkthrough
+We're now going to walk through setting up and using a multi-zone cluster on both [Google Compute Engine](https://cloud.google.com/compute/) (GCE) and Amazon EC2 using the default kube-up script that ships with Kubernetes. Though we highlight GCE and EC2, this functionality is available in any Kubernetes 1.2 deployment where you can make changes during cluster setup. This functionality will also be available in [Google Container Engine](https://cloud.google.com/container-engine/) (GKE) shortly.
+
+
+### Bringing up your cluster
+Creating a multi-zone deployment for Kubernetes is the same as for a single-zone cluster, but you’ll need to pass an environment variable (`MULTIZONE`) to tell the cluster to manage multiple zones. We’ll start by creating a multi-zone-aware cluster on GCE and/or EC2.
+
+GCE:
+
+```
+curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=gce \
+KUBE_GCE_ZONE=us-central1-a NUM_NODES=3 bash
+```
+EC2:
+
+```
+curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=aws \
+KUBE_AWS_ZONE=us-west-2a NUM_NODES=3 bash
+```
+At the end of this command, you will have brought up a cluster that is ready to manage nodes running in multiple zones. You’ll also have brought up `NUM_NODES` nodes and the cluster's control plane (i.e., the Kubernetes master), all in the zone specified by `KUBE_{GCE,AWS}_ZONE`. In a future iteration of Ubernetes Lite, we’ll support an HA control plane, where the master components are replicated across zones. Until then, the master will become unavailable if the zone where it is running fails. However, containers running in all zones will continue to run and be restarted by the Kubelet if they fail, so the application itself will tolerate such a zone failure.
+
+
+### Nodes are labeled
+To see the additional metadata added to the node, simply view all the labels for your cluster (the example here is on GCE):
+
+```
+$ kubectl get nodes --show-labels
+
+NAME STATUS AGE LABELS
+kubernetes-master Ready,SchedulingDisabled 6m
+beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.
+io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kub
+ernetes.io/hostname=kubernetes-master
+kubernetes-minion-87j9 Ready 6m
+beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.
+io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kub
+ernetes.io/hostname=kubernetes-minion-87j9
+kubernetes-minion-9vlv Ready 6m
+beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.
+io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kub
+ernetes.io/hostname=kubernetes-minion-9vlv
+kubernetes-minion-a12q Ready 6m
+beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.
+io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kub
+ernetes.io/hostname=kubernetes-minion-a12q
+```
+The scheduler will use the labels attached to each of the nodes (`failure-domain.beta.kubernetes.io/region` for the region, and `failure-domain.beta.kubernetes.io/zone` for the zone) in its scheduling decisions.
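+
+The same labels can also be used directly. For example, if you ever need to constrain a pod to a particular zone, a nodeSelector naming the zone label will do it; this is an illustrative sketch, not part of the walkthrough:
+
+```
+$ echo "
+apiVersion: v1
+kind: Pod
+metadata:
+  name: zone-pinned-pod
+spec:
+  nodeSelector:
+    failure-domain.beta.kubernetes.io/zone: us-central1-a
+  containers:
+  - name: web
+    image: nginx
+" | kubectl create -f -
+```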
+
+
+### Add more nodes in a second zone
+Let's add another set of nodes to the existing cluster, but running in a different zone (us-central1-b for GCE, us-west-2b for EC2). We run kube-up again; by specifying `KUBE_USE_EXISTING_MASTER=true`, kube-up will not create a new master, but will instead reuse the one that was previously created.
+
+GCE:
+
+```
+KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=gce \
+KUBE_GCE_ZONE=us-central1-b NUM_NODES=3 kubernetes/cluster/kube-up.sh
+```
+On EC2, we also need to specify the network CIDR for the additional subnet, along with the master internal IP address:
+
+```
+KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=aws \
+KUBE_AWS_ZONE=us-west-2b NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.1.0/24 \
+MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh
+```
+View the nodes again; 3 more nodes will have been launched and labelled (the example here is on GCE):
+
+```
+$ kubectl get nodes --show-labels
+
+NAME STATUS AGE LABELS
+kubernetes-master Ready,SchedulingDisabled 16m
+beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.
+io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kub
+ernetes.io/hostname=kubernetes-master
+kubernetes-minion-281d Ready 2m
+beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.
+io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kub
+ernetes.io/hostname=kubernetes-minion-281d
+kubernetes-minion-87j9 Ready 16m
+beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.
+io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kub
+ernetes.io/hostname=kubernetes-minion-87j9
+kubernetes-minion-9vlv Ready 16m
+beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.
+io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kub
+ernetes.io/hostname=kubernetes-minion-9vlv
+kubernetes-minion-a12q Ready 17m
+beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.
+io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kub
+ernetes.io/hostname=kubernetes-minion-a12q
+kubernetes-minion-pp2f Ready 2m
+beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.
+io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kub
+ernetes.io/hostname=kubernetes-minion-pp2f
+kubernetes-minion-wf8i Ready 2m
+beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.
+io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kub
+ernetes.io/hostname=kubernetes-minion-wf8i
+```
+Let’s add one more zone:
+
+GCE:
+
+```
+KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=gce \
+KUBE_GCE_ZONE=us-central1-f NUM_NODES=3 kubernetes/cluster/kube-up.sh
+```
+EC2:
+
+```
+KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=aws \
+KUBE_AWS_ZONE=us-west-2c NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.2.0/24 \
+MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh
+```
+Verify that you now have nodes in 3 zones:
+
+```
+kubectl get nodes --show-labels
+```
+Highly available apps, here we come.
+
+
+### Deploying a multi-zone application
+Create the guestbook-go example, which includes a ReplicationController of size 3, running a simple web app. Download all the files from [here](https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook-go), and execute the following command (the command assumes you downloaded them to a directory named “guestbook-go”):
+
+```
+kubectl create -f guestbook-go/
+```
+You’re done! Your application is now spread across all 3 zones. Prove it to yourself with the following commands:
+
+```
+$ kubectl describe pod -l app=guestbook | grep Node
+Node: kubernetes-minion-9vlv/10.240.0.5
+Node: kubernetes-minion-281d/10.240.0.8
+Node: kubernetes-minion-olsh/10.240.0.11
+
+$ kubectl get node kubernetes-minion-9vlv kubernetes-minion-281d \
+kubernetes-minion-olsh --show-labels
+NAME STATUS AGE LABELS
+kubernetes-minion-9vlv Ready 34m
+beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.
+io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kub
+ernetes.io/hostname=kubernetes-minion-9vlv
+kubernetes-minion-281d Ready 20m
+beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.
+io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kub
+ernetes.io/hostname=kubernetes-minion-281d
+kubernetes-minion-olsh Ready 3m
+beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.
+io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-f,kub
+ernetes.io/hostname=kubernetes-minion-olsh
+```
+Further, load-balancers automatically span all zones in a cluster; the guestbook-go example includes an example load-balanced service:
+
+```
+$ kubectl describe service guestbook | grep LoadBalancer.Ingress
+LoadBalancer Ingress: 130.211.126.21
+
+$ ip=130.211.126.21
+
+$ curl -s http://${ip}:3000/env | grep HOSTNAME
+ "HOSTNAME": "guestbook-44sep",
+
+$ (for i in `seq 20`; do curl -s http://${ip}:3000/env | grep HOSTNAME; done) | sort | uniq
+ "HOSTNAME": "guestbook-44sep",
+ "HOSTNAME": "guestbook-hum5n",
+ "HOSTNAME": "guestbook-ppm40",
+```
+The load balancer correctly targets all the pods, even though they’re in multiple zones.
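+
+If you'd like to double-check the spread yourself, one way (sketched here using the labels shown earlier) is to map each guestbook pod's node to its zone label and count:
+
+```
+$ for n in $(kubectl get po -l app=guestbook \
+    --template '{{range .items}}{{.spec.nodeName}} {{end}}'); do
+    kubectl get node $n --template \
+      '{{index .metadata.labels "failure-domain.beta.kubernetes.io/zone"}}{{"\n"}}'
+  done | sort | uniq -c
+```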
+
+### Shutting down the cluster
+When you're done, clean up:
+
+GCE:
+
+```
+KUBERNETES_PROVIDER=gce KUBE_USE_EXISTING_MASTER=true \
+KUBE_GCE_ZONE=us-central1-f kubernetes/cluster/kube-down.sh
+KUBERNETES_PROVIDER=gce KUBE_USE_EXISTING_MASTER=true \
+KUBE_GCE_ZONE=us-central1-b kubernetes/cluster/kube-down.sh
+KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-a \
+kubernetes/cluster/kube-down.sh
+```
+EC2:
+
+```
+KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2c \
+kubernetes/cluster/kube-down.sh
+KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2b \
+kubernetes/cluster/kube-down.sh
+KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a \
+kubernetes/cluster/kube-down.sh
+```
+
+
+### Conclusion
+A core philosophy for Kubernetes is to abstract away the complexity of running highly available, distributed applications. As you can see here, other than a small amount of work at cluster spin-up time, all the complexity of launching application instances across multiple failure domains requires no additional work by application developers, as it should be. And we’re just getting started!
+
+Please join our community and help us build the future of Kubernetes! There are many ways to participate. If you’re particularly interested in federation, you’ll be interested in:
+
+
+- Our federation [slack channel](https://kubernetes.slack.com/messages/sig-federation/)
+- The federation “Special Interest Group,” which meets every Thursday at 9:30 a.m. Pacific Time at [SIG-Federation hangout](https://plus.google.com/hangouts/_/google.com/ubernetes)
+
+
+And of course, for more information about the project in general, go to [www.kubernetes.io](http://www.kubernetes.io/).
+
+ -- _Quinton Hoole, Staff Software Engineer, Google, and Justin Santa Barbara_
diff --git a/blog/_posts/2016-03-00-Elasticbox-Introduces-Elastickube-To.md b/blog/_posts/2016-03-00-Elasticbox-Introduces-Elastickube-To.md
new file mode 100644
index 00000000000..28b49315cfa
--- /dev/null
+++ b/blog/_posts/2016-03-00-Elasticbox-Introduces-Elastickube-To.md
@@ -0,0 +1,63 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " ElasticBox introduces ElasticKube to help manage Kubernetes within the enterprise "
+date: Saturday, March 11, 2016
+pagination:
+ enabled: true
+---
+Today’s guest post is brought to you by Brannan Matherson, from ElasticBox, who’ll discuss a new open source project to help standardize container deployment and management in enterprise environments. This post highlights the advantages of authentication and user management for containerized applications.
+
+I’m delighted to share some exciting work that we’re doing at ElasticBox to contribute to the open source community regarding the rapidly changing advancements in container technologies. Our team is kicking off a new initiative called [ElasticKube](http://elastickube.com/) to help solve the problem of challenging container management scenarios within the enterprise. This project is a native container management experience that is specific to Kubernetes and leverages automation to provision clusters for containerized applications based on the latest release of Kubernetes 1.2.
+
+I’ve talked to many enterprise companies, both large and small, and the plethora of cloud offerings is often confusing and makes the evaluation process very difficult. So why Kubernetes? Of the large public cloud players - Amazon Web Services, Microsoft Azure, and Google Cloud Platform - Kubernetes is poised to take an innovative leadership role in framing the container management space. The Kubernetes platform does not restrict or dictate any given technical approach for containers, but encourages the community to collectively solve problems as this container market still takes form. With a proven track record of supporting open source efforts, the Kubernetes platform allows my team and me to actively contribute to this fundamental shift in the IT and developer world.
+
+We’ve chosen Kubernetes, not just for the core infrastructure services, but also for the agility of Kubernetes to leverage the cluster management layer across any cloud environment - GCP, AWS, Azure, vSphere, and Rackspace. Kubernetes also provides a huge benefit for users to run clusters for containers locally on many popular technologies such as Docker, Vagrant (and VirtualBox), CoreOS, Mesos and more. This amount of choice enables our team and many others in the community to consider solutions that will be viable for a wide range of enterprise scenarios. In the case of ElasticKube, we’re pleased with Kubernetes 1.2, which includes the full release of the Deployment API. This provides the ability for us to perform seamless rolling updates of containerized applications that are running in production. In addition, we’ve been able to support new resource types like ConfigMaps and Horizontal Pod Autoscalers.
+
+Fundamentally, ElasticKube delivers a web console that complements Kubernetes for users managing their clusters. The initial experience incorporates team collaboration, lifecycle management and reporting, so organizations can efficiently manage resources in a predictable manner. Users will see an ElasticKube portal that takes advantage of the infrastructure abstraction, enabling them to run containers that have already been built. Since ElasticKube assumes the cluster has been deployed, its core value is providing visibility into who did what, and defining permissions for access to clusters with multiple containers running on them. Secondly, by partitioning clusters into namespaces, authorization management is more effective. Finally, by empowering users to build a set of reusable templates in a modern portal, ElasticKube provides a vehicle for delivering a self-service template catalog that can be stored in GitHub (for instance, using Helm templates) and deployed easily.
+
+ElasticKube enables organizations to accelerate adoption by developers, application operations and traditional IT operations teams, with the shared goals of increasing developer productivity, driving efficiency in container management, and promoting the use of microservices as a modern application delivery methodology. When leveraging ElasticKube in your environment, users need to ensure the following technologies are configured appropriately to guarantee everything runs correctly:
+
+- Configure Google Container Engine (GKE) for cluster installation and management
+- Use Kubernetes to provision the infrastructure and clusters for containers
+- Use your existing tools of choice to actually build your containers
+- Use ElasticKube to run, deploy and manage your containers and services
+
+[](http://cl.ly/0i3M2L3Q030z/Image%202016-03-11%20at%209.49.12%20AM.png)
+
+
+
+**Getting Started with Kubernetes and ElasticKube**
+
+
+
+
+(This is a 3-minute walkthrough video covering the following topics:)
+
+1. Deploy ElasticKube to a Kubernetes cluster
+2. Configuration
+3. Admin: Set up and invite a user
+4. Deploy an instance
+
+
+
+**Hear What Others are Saying**
+
+“Kubernetes has provided us the level of sophistication required for enterprises to manage containers across complex networking environments and the appropriate amount of visibility into the application lifecycle. Additionally, the community commitment and engagement has been exceptional, and we look forward to being a major contributor to this next wave of modern cloud computing and application management.”
+
+_~Alberto Arias Maestro, Co-founder and Chief Technology Officer, ElasticBox_
+
+
+
+_-- Brannan Matherson, Head of Product Marketing, ElasticBox_
diff --git a/blog/_posts/2016-03-00-Five-Days-Of-Kubernetes-12.md b/blog/_posts/2016-03-00-Five-Days-Of-Kubernetes-12.md
new file mode 100644
index 00000000000..8c4d8128def
--- /dev/null
+++ b/blog/_posts/2016-03-00-Five-Days-Of-Kubernetes-12.md
@@ -0,0 +1,55 @@
+---
+layout: blog
+title: " Five Days of Kubernetes 1.2 "
+permalink: /blog/:year/:month/:title
+date: Tuesday, March 28, 2016
+pagination:
+ enabled: true
+---
+The Kubernetes project has had some huge milestones over the past few weeks. We released [Kubernetes 1.2](http://blog.kubernetes.io/2016/03/Kubernetes-1.2-even-more-performance-upgrades-plus-easier-application-deployment-and-management-.html), had our [first conference in Europe](https://kubecon.io/), and were accepted into the [Cloud Native Computing Foundation](https://cncf.io/). While we catch our breath, we would like to take a moment to highlight some of the great work contributed by the community since our last milestone, just four months ago.
+
+
+
+Our mission is to make building distributed systems easy and accessible for all. While Kubernetes 1.2 has LOTS of new features, there are a few that really highlight the strides we’re making towards that goal. Over the course of the next week, we’ll be publishing a series of in-depth posts covering what’s new, so come back daily this week to read about the new features that continue to make Kubernetes the easiest way to run containers at scale. Thanks, and stay tuned!
+
+
+
+| Date | Posts |
+| ---- | ----- |
+| 3/28 | [1000 nodes and Beyond: Updates to Kubernetes performance and scalability in 1.2](http://blog.kubernetes.io/2016/03/1000-nodes-and-beyond-updates-to-Kubernetes-performance-and-scalability-in-12.html) <br/> Guest post by Sysdig: [How container metadata changes your point of view](http://blog.kubernetes.io/2016/03/how-container-metadata-changes-your-point-of-view.html) |
+| 3/29 | [Building highly available applications using Kubernetes new multi-zone clusters (a.k.a. "Ubernetes Lite")](http://blog.kubernetes.io/2016/03/building-highly-available-applications-using-Kubernetes-new-multi-zone-clusters-a.k.a-Ubernetes-Lite.html) <br/> Guest post by AppFormix: [Helping Enterprises Operationalize Kubernetes](http://blog.kubernetes.io/2016/03/appformix-helping-enterprises.html) |
+| 3/30 | [Using Spark and Zeppelin to process big data on Kubernetes 1.2](http://blog.kubernetes.io/2016/03/using-Spark-and-Zeppelin-to-process-Big-Data-on-Kubernetes.html) |
+| 3/31 | [Kubernetes 1.2 and simplifying advanced networking with Ingress](http://blog.kubernetes.io/2016/03/Kubernetes-1.2-and-simplifying-advanced-networking-with-Ingress.html) |
+| 4/1 | [Using Deployment Objects with Kubernetes 1.2](http://blog.kubernetes.io/2016/04/using-deployment-objects-with.html) |
+| BONUS | ConfigMap API: [Configuration management with Containers](http://blog.kubernetes.io/2016/04/configuration-management-with-containers.html) |
+
+
+
+You can follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio).
+
+
+_--David Aronchick, Senior Product Manager for Kubernetes, Google_
diff --git a/blog/_posts/2016-03-00-How-Container-Metadata-Changes-Your-Point-Of-View.md b/blog/_posts/2016-03-00-How-Container-Metadata-Changes-Your-Point-Of-View.md
new file mode 100644
index 00000000000..f858870a4b0
--- /dev/null
+++ b/blog/_posts/2016-03-00-How-Container-Metadata-Changes-Your-Point-Of-View.md
@@ -0,0 +1,93 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " How container metadata changes your point of view "
+date: Tuesday, March 28, 2016
+pagination:
+ enabled: true
+---
+_Today’s guest post is brought to you by Apurva Davé, VP of Marketing at Sysdig, who’ll discuss using Kubernetes metadata & Sysdig to understand what’s going on in your Kubernetes cluster._
+
+Sure, metadata is a fancy word. It actually means “data that describes other data.” While that definition isn’t all that helpful, it turns out metadata itself is especially helpful in container environments. When you have any complex system, the availability of metadata helps you sort and process the variety of data coming out of that system, so that you can get to the heart of an issue with less headache.
+
+In a Kubernetes environment, metadata can be a crucial tool for organizing and understanding the way containers are orchestrated across your many services, machines, availability zones or (in the future) multiple clouds. This metadata can also be consumed by other services running on top of your Kubernetes system and can help you manage your applications.
+
+We’ll take a look at some examples of this below, but first...
+
+### A quick intro to Kubernetes metadata
+Kubernetes metadata is abundant in the form of [_labels_](http://kubernetes.io/docs/user-guide/labels/) and [_annotations_](http://kubernetes.io/docs/user-guide/annotations/). Labels are designed to be identifying metadata for your infrastructure, whereas annotations are designed to be non-identifying. Both are simply generic key:value pairs that look like this:
+
+```
+"labels": {
+ "key1" : "value1",
+ "key2" : "value2"
+}
+```
+Labels are not designed to be unique; you can expect any number of objects in your environment to carry the same label, and you can expect that an object could have many labels.
+
+What are some examples of labels you might use? Here are just a few. WARNING: Once you start, you might find more than a few ways to use this functionality!
+
+
+- Environment: Dev, Prod, Test, UAT
+- Customer: Cust A, Cust B, Cust C
+- Tier: Frontend, Backend
+- App: Cache, Web, Database, Auth
+
+In addition to custom labels you might define, Kubernetes also automatically applies labels to your system with useful metadata. Default labels supply key identifying information about your entire Kubernetes hierarchy: Pods, Services, Replication Controllers, and Namespaces.
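+
+To make this concrete, here is a small, hypothetical example of filtering by such labels with kubectl (the keys and values are ones you might define yourself):
+
+```
+# All pods serving the frontend tier in production
+$ kubectl get pods -l tier=frontend,environment=prod
+
+# Everything labeled for a given customer, across several resource types
+$ kubectl get pods,services,rc -l customer=cust-a
+```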
+
+
+### Putting your metadata to work
+Once you spend a little time with Kubernetes, you’ll see that labels have one particularly powerful application that makes them essential:
+
+**Kubernetes labels allow you to easily move between a “physical” view of your hosts and containers, and a “logical” view of your applications and micro-services.**
+
+At its core, a platform like Kubernetes is designed to orchestrate the optimal use of underlying physical resources. This is a powerful way to consume private or public cloud resources very efficiently, and sometimes you need to visualize those physical resources. In reality, however, most of the time you care about the performance of the service first and foremost.
+
+But in a Kubernetes world, achieving that high utilization means a service’s containers may be scattered all over the place! So how do you actually measure the service’s performance? That’s where the metadata comes in. With Kubernetes metadata, you can create a deep understanding of your service’s performance, regardless of where the underlying containers are physically located.
+
+
+### Paint me a picture
+Let’s look at a quick example to make this more concrete: monitoring your application. Let’s work with a small, 3-node deployment running on GKE. For visualizing the environment we’ll use Sysdig Cloud. Here’s a list of the nodes — note the “gke” prepended to the name of each host. We see some basic performance details like CPU, memory and network.
+
+
+[](https://1.bp.blogspot.com/-NSkvJcEj0L0/VvmM1eWSlLI/AAAAAAAAA5w/YupjdMPz8aEmXjSt8xyZJVOoa4osyLYBg/s1600/sysdig1.png)
+
+Each of these hosts has a number of containers running on it. Drilling down on the hosts, we see the containers associated with each:
+
+
+[](https://2.bp.blogspot.com/-7hrB4V8zAkg/VvmJRpLcQQI/AAAAAAAAAYA/Fz7pul56ZQ8Xus6u4zHBFAwe8HJesyeRw/s1600/Kubernetes%2BMetadata%2BBlog%2B2.png)
+
+
+
+Simply scanning this list of containers on a single host, I don’t see much organization to the responsibilities of these objects. For example, some of these containers run Kubernetes services (like kube-ui) and we presume others have to do with the application running (like javaapp.x).
+
+Now let’s use some of the metadata provided by Kubernetes to take an application-centric view of the system. Let’s start by creating a hierarchy of components based on labels, in this order:
+
+`Kubernetes namespace -> replication controller -> pod -> container`
+
+This aggregates containers at corresponding levels based on the above labels. In the app UI below, this aggregation and hierarchy are shown in the grey “grouping” bar above the data about our hosts. As you can see, we have a “prod” namespace with a group of services (replication controllers) below it. Each of those replication controllers can then consist of multiple pods, which are in turn made up of containers.
+
+
+[](https://4.bp.blogspot.com/-7JuCC5kuF6U/VvmJzM4UYmI/AAAAAAAAAYE/iIhR19aVCpAaVFRKujflMo047PmzP0DpA/s1600/Kubernetes%2BMetadata%2BBlog%2B3.png)
+
+In addition to organizing containers via labels, this view also aggregates metrics across relevant containers, giving a singular view into the performance of a namespace or replication controller.
+
+**In other words, with this aggregated view based on metadata, you can now start by monitoring and troubleshooting services, and drill into hosts and containers only if needed.**
+
+Let’s do one more thing with this environment — let’s use the metadata to create a visual representation of services and the topology of their communications. Here you see our containers organized by services, but also a map-like view that shows you how these services relate to each other.
+
+
+[](https://1.bp.blogspot.com/-URGCJheccOE/Vvmeh7VnzgI/AAAAAAAAA6I/WIz3pmcrk9A5sgadIU5J8lVObg32HFlQQ/s1600/sysdig4.png)
+
+The boxes represent services that are aggregates of containers (the number in the upper right of each box tells you how many containers), and the lines represent communications between services and their latencies.
+
+This kind of view provides yet another logical, instead of physical, view of how these application components are working together. From here I can understand service performance, relationships and underlying resource consumption (CPU in this example).
+
+
+### Metadata: love it, use it
+This is a pretty quick tour of metadata, but I hope it inspires you to spend a little time thinking about the relevance to your own system and how you could leverage it. Here we built a pretty simple example — apps and services — but imagine collecting metadata across your apps, environments, software components and cloud providers. You could quickly assess performance differences across any slice of this infrastructure effectively, all while Kubernetes is efficiently scheduling resource usage.
+
+Get started with metadata for visualizing these resources today, and in a followup post we’ll talk about the power of adaptive alerting based on metadata.
+
+_-- Apurva Davé is a closet Kubernetes fanatic, loves data, and oh yeah is also the VP of Marketing at Sysdig._
diff --git a/blog/_posts/2016-03-00-Kubernetes-1.2-And-Simplifying-Advanced-Networking-With-Ingress.md b/blog/_posts/2016-03-00-Kubernetes-1.2-And-Simplifying-Advanced-Networking-With-Ingress.md
new file mode 100644
index 00000000000..cc482644e23
--- /dev/null
+++ b/blog/_posts/2016-03-00-Kubernetes-1.2-And-Simplifying-Advanced-Networking-With-Ingress.md
@@ -0,0 +1,117 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes 1.2 and simplifying advanced networking with Ingress "
+date: Friday, March 31, 2016
+pagination:
+ enabled: true
+---
+_Editor's note: This is the sixth post in a [series of in-depth posts](http://blog.kubernetes.io/2016/03/five-days-of-kubernetes-12.html) on what's new in Kubernetes 1.2._
+_Ingress is currently in beta and under active development._
+
+In Kubernetes, Services and Pods have IPs only routable by the cluster network, by default. All traffic that ends up at an edge router is either dropped or forwarded elsewhere. In Kubernetes 1.2, we’ve made improvements to the Ingress object, to simplify allowing inbound connections to reach the cluster services. It can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, offer name based virtual hosting and lots more.
+
+
+### Ingress controllers
+Today, with containers or VMs, configuring a web server or load balancer is harder than it should be. Most web server configuration files are very similar. There are some applications that have weird little quirks that tend to throw a wrench in things, but for the most part, you can apply the same logic to them and achieve a desired result. In Kubernetes 1.2, the Ingress resource embodies this idea, and an Ingress controller is meant to handle all the quirks associated with a specific "class" of Ingress (be it a single instance of a load balancer, or a more complicated setup of frontends that provide GSLB, CDN, DDoS protection etc). An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the ApiServer's /ingresses endpoint for updates to the [Ingress resource](http://kubernetes.io/docs/user-guide/ingress/). Its job is to satisfy requests for ingress.
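+
+To make the controller's job concrete, here is a rough sketch of the list-and-watch it performs; the API server address is a placeholder, and an unauthenticated endpoint is assumed:
+
+```
+# List current Ingress resources, then stream changes as they happen
+$ curl -s http://<apiserver>:8080/apis/extensions/v1beta1/ingresses
+$ curl -s http://<apiserver>:8080/apis/extensions/v1beta1/ingresses?watch=true
+```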
+
+Your Kubernetes cluster must have exactly one Ingress controller that supports TLS for the following example to work. If you’re on a cloud-provider, first check the “kube-system” namespace for an Ingress controller RC. If there isn’t one, you can deploy the [nginx controller](https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx), or [write your own](https://github.com/kubernetes/contrib/tree/master/ingress/controllers#writing-an-ingress-controller) in \< 100 lines of code.
+
+Please take a minute to look over the known limitations of existing controllers (gce, nginx).
+
+
+### TLS termination and HTTP load-balancing
+Since the Ingress spans Services, it’s particularly suited for load balancing and centralized security configuration. If you’re familiar with the Go programming language, Ingress is like [net/http’s “Server”](https://golang.org/pkg/net/http/#Server) for your entire cluster. The following example shows you how to configure TLS termination. Load balancing is not optional when dealing with ingress traffic, so simply creating the object will configure a load balancer.
+
+First create a test Service. We’ll run a simple echo server for this example so you know exactly what’s going on. The source is [here](https://github.com/kubernetes/contrib/tree/master/ingress/echoheaders).
+```
+$ kubectl run echoheaders \
+--image=gcr.io/google_containers/echoserver:1.3 --port=8080
+$ kubectl expose deployment echoheaders --target-port=8080 \
+--type=NodePort
+```
+If you’re on a cloud-provider, make sure you can reach the Service from outside the cluster through its node port.
+
+```
+$ NODE_IP=$(kubectl get node `kubectl get po -l run=echoheaders \
+--template '{{range .items}}{{.spec.nodeName}}{{end}}'` --template \
+'{{range $i, $n := .status.addresses}}{{if eq $n.type "ExternalIP"}}{{$n.address}}{{end}}{{end}}')
+$ NODE_PORT=$(kubectl get svc echoheaders --template \
+'{{range $i, $e := .spec.ports}}{{$e.nodePort}}{{end}}')
+$ curl $NODE_IP:$NODE_PORT
+```
+This is a sanity check that things are working as expected. If the last step hangs, you might need a [firewall rule](https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/BETA_LIMITATIONS.md#creating-the-firewall-rule-for-glbc-health-checks).
+
+Now let's create our TLS secret:
+```
+$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
+-keyout /tmp/tls.key -out /tmp/tls.crt -subj "/CN=echoheaders/O=echoheaders"
+
+$ echo "
+apiVersion: v1
+kind: Secret
+metadata:
+  name: tls
+data:
+  tls.crt: `base64 -w 0 /tmp/tls.crt`
+  tls.key: `base64 -w 0 /tmp/tls.key`
+" | kubectl create -f -
+```
+And the Ingress:
+
+```
+$ echo "
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+  name: test
+spec:
+  tls:
+  - secretName: tls
+  backend:
+    serviceName: echoheaders
+    servicePort: 8080
+" | kubectl create -f -
+```
+You should get a load balanced IP soon:
+```
+$ kubectl get ing
+NAME RULE BACKEND ADDRESS AGE
+test - echoheaders:8080 130.X.X.X 4m
+```
+And if you wait till the Ingress controller marks your backends as healthy, you should see requests to that IP on :80 getting redirected to :443 and terminated using the given TLS certificates.
+```
+$ curl 130.X.X.X
+<html>
+<head><title>301 Moved Permanently</title></head>
+<body><h1>301 Moved Permanently</h1></body>
+</html>
+```
+
+```
+$ curl https://130.X.X.X -k
+CLIENT VALUES:
+client_address=10.48.0.1
+command=GET
+real path=/
+
+$ curl 130.X.X.X -Lk
+CLIENT VALUES:
+client_address=10.48.0.1
+command=GET
+real path=/
+```
+### Future work
+You can read more about the [Ingress API](http://kubernetes.io/docs/user-guide/ingress/) or controllers by following the links. The Ingress is still in beta, and we would love your input to grow it. You can contribute by writing controllers or evolving the API. All things related to the meaning of the word “[ingress](https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=ingress%20meaning)” are in scope: DNS, different TLS modes, SNI, load balancing at layer 4, content caching, more algorithms, better health checks; the list goes on.
+
+There are many ways to participate. If you’re particularly interested in Kubernetes and networking, you’ll be interested in:
+
+- Our [Networking slack channel ](https://kubernetes.slack.com/messages/sig-network/)
+- Our [Kubernetes Networking Special Interest Group](https://groups.google.com/forum/#!forum/kubernetes-sig-network) email list
+- The Networking “Special Interest Group,” which meets biweekly at 3pm (15h00) Pacific Time at [SIG-Networking hangout](https://zoom.us/j/5806599998)
+
+And of course, for more information about the project in general, go to [www.kubernetes.io](http://kubernetes.io/).
+
+-- _Prashanth Balasubramanian, Software Engineer_
diff --git a/blog/_posts/2016-03-00-Kubernetes-1.2-Even-More-Performance-Upgrades-Plus-Easier-Application-Deployment-And-Management-.md b/blog/_posts/2016-03-00-Kubernetes-1.2-Even-More-Performance-Upgrades-Plus-Easier-Application-Deployment-And-Management-.md
new file mode 100644
index 00000000000..e747cf4a8b5
--- /dev/null
+++ b/blog/_posts/2016-03-00-Kubernetes-1.2-Even-More-Performance-Upgrades-Plus-Easier-Application-Deployment-And-Management-.md
@@ -0,0 +1,82 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes 1.2: Even more performance upgrades, plus easier application deployment and management "
+date: Friday, March 17, 2016
+pagination:
+ enabled: true
+---
+Today we released Kubernetes 1.2. This release represents significant improvements for large organizations building distributed systems. Now with over 680 unique contributors, this is our largest release yet.
+
+From the beginning, our mission has been to make building distributed systems easy and accessible for all. With the Kubernetes 1.2 release we’ve made strides towards our goal by increasing scale, decreasing latency and overall simplifying the way applications are deployed and managed. Now, developers at organizations of all sizes can build production scale apps more easily than ever before.
+
+### What’s new:
+
+- **Significant scale improvements**. Increased cluster scale by 400% to 1,000 nodes and 30,000 containers per cluster.
+- **Simplified application deployment and management**.
+
+  - Dynamic Configuration (via the ConfigMap API) enables applications to pull their configuration when they run, rather than packaging it in at build time (see the short sketch after this list).
+ - Turnkey Deployments (via the Beta Deployment API) let you declare your application and Kubernetes will do the rest. It handles versioning, multiple simultaneous rollouts, aggregating status across all pods, maintaining application availability and rollback.
+- **Automated cluster management** :
+
+ - Improved reliability through cross-zone failover and multi-zone scheduling
+ - Simplified One-Pod-Per-Node Applications (via the Beta DaemonSet API) allows you to schedule a service (such as a logging agent) that runs one, and only one, pod per node.
+ - TLS and L7 support (via the Beta Ingress API) provides a straightforward way to integrate into custom networking environments by supporting TLS for secure communication and L7 for http-based traffic routing.
+ - Graceful Node Shutdown (aka Node Drain) takes care of transitioning pods off a node and allowing it to be shut down cleanly.
+  - Custom Metrics for Autoscaling allows you to specify your own set of signals to drive the autoscaling of pods.
+- **New GUI** allows you to get started quickly and enables the same functionality found in the CLI for a more approachable and discoverable interface.
+
+[](https://1.bp.blogspot.com/-_xwIlw1gJo4/VusiOuHRzCI/AAAAAAAAA3s/NDN91tgdypQE7iBjzTCWlO7vzfDNt_guw/s1600/k8-1.2-release.png)
+
+- **And many more**. For a complete list of updates, see the [release notes on github](https://github.com/kubernetes/kubernetes/releases/tag/v1.2.0).
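+
+As a small, illustrative sketch of the Dynamic Configuration feature mentioned above (the ConfigMap name and keys are hypothetical):
+
+```
+# Create a ConfigMap from literal values...
+$ kubectl create configmap app-config --from-literal=log-level=debug
+
+# ...and inspect what was stored
+$ kubectl get configmap app-config -o yaml
+```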
+
+#### Community
+
+All these improvements would not be possible without our enthusiastic and global community. The momentum is astounding. We’re seeing over 400 pull requests per week, a 50% increase since the previous 1.1 release. There are meetups and conferences discussing Kubernetes nearly every day, on top of the 85 Kubernetes-related [meetup groups](http://www.meetup.com/topics/kubernetes/) around the world. We’ve also seen significant participation in the community in the form of Special Interest Groups, with 18 active SIGs that cover topics from AWS and OpenStack to big data and scalability; to get involved, [join or start a new SIG](https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)). Lastly, we’re proud that Kubernetes is the first project to be accepted to the Cloud Native Computing Foundation (CNCF); read more about the announcement [here](https://cncf.io/news/announcement/2016/03/cloud-native-computing-foundation-accepts-kubernetes-first-hosted-projec-0).
+
+
+
+#### Documentation
+
+With Kubernetes 1.2 comes a relaunch of our website at [kubernetes.io](http://kubernetes.io/). We’ve slimmed down the docs contribution process so that all you have to do is fork/clone and send a PR. And the site works the same whether you’re staging it on your laptop, on github.io, or viewing it in production. It’s a pure GitHub Pages project; no scripts, no plugins.
+
+
+
+From now on, our docs are at a new repo: [https://github.com/kubernetes/kubernetes.github.io](https://github.com/kubernetes/kubernetes.github.io)
+
+
+
+To entice you even further to contribute, we’re also announcing our new bounty program. For every “bounty bug” you address with a merged pull request, we offer the listed amount in credit for Google Cloud Platform services. Just look for [bugs labeled “Bounty” in the new repo](https://github.com/kubernetes/kubernetes.github.io/issues?q=is%3Aissue+is%3Aopen+label%3ABounty) for more details.
+
+
+
+#### Roadmap
+
+All of our work is done in the open; to learn the latest about the project, [join the weekly community meeting](https://groups.google.com/forum/#!forum/kubernetes-community-video-chat) or [watch a recorded hangout](https://www.youtube.com/playlist?list=PL69nYSiGNLP1pkHsbPjzAewvMgGUpkCnJ). In keeping with our major release schedule of every three to four months, here are just a few items that are in development for the [next release and beyond](https://github.com/kubernetes/kubernetes/wiki/Release-1.3):
+
+- Improved stateful application support (aka Pet Set)
+- Cluster Federation (aka Ubernetes)
+- Even more (more!) performance improvements
+- In-cluster IAM
+- Cluster autoscaling
+- Scheduled job
+- Public dashboard that allows for nightly test runs across multiple cloud providers
+- Lots, lots more!
+
+Kubernetes 1.2 is available for download at [get.k8s.io](http://get.k8s.io/) and via the open source repository hosted on [GitHub](https://github.com/kubernetes/kubernetes). To get started with Kubernetes, try our new [Hello World app](http://kubernetes.io/docs/hellonode/).
+
+
+
+#### Connect
+
+We’d love to hear from you and see you participate in this growing community:
+
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stackoverflow](https://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.kubernetes.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+
+Thank you for your support!
+
+
+
+ - _David Aronchick, Senior Product Manager for Kubernetes, Google_
diff --git a/blog/_posts/2016-03-00-Kubernetes-Community-Meeting-Notes.md b/blog/_posts/2016-03-00-Kubernetes-Community-Meeting-Notes.md
new file mode 100644
index 00000000000..d1b47557b5f
--- /dev/null
+++ b/blog/_posts/2016-03-00-Kubernetes-Community-Meeting-Notes.md
@@ -0,0 +1,51 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes Community Meeting Notes - 20160225 "
+date: Wednesday, March 01, 2016
+pagination:
+ enabled: true
+---
+##### February 25th - Redspread demo, 1.2 update and planning 1.3, newbie introductions, SIG-networking and a shout out to CoreOS blog post.
+
+The Kubernetes contributing community meets most Thursdays at 10:00PT to discuss the project's status via videoconference. Here are the notes from the latest meeting.
+
+Note taker: [Ilan Rabinovich]
+ * Quick call out for sharing presentations/slides [JBeda]
+ * Demo (10 min): [Redspread][1] [Mackenzie Burnett, Dan Gillespie]
+ * 1.2 Release Watch [T.J. Goltermann]
+ * currently about 80 issues in the queue that need to be addressed before branching.
+ * currently looks like March 7th may slip to later in the week, but up in the air until flaky tests are resolved.
+ * non-1.2 changes may be delayed in review/merging until 1.2 stabilization work completes.
+ * 1.3 release planning
+ * Newbie Introductions
+ * SIG Reports -
+ * Networking [Tim Hockin]
+ * Scale [Bob Wise]
+ * meeting last Friday went very well. Discussed charter AND a working deployment
+ * moved meeting to Thursdays @ 1 (so in 3 hours!)
+ * Rob is posting a Cluster Ops announce on TheNewStack to recruit more members
+ * GSoC participation -- no application submitted. [Sarah Novotny]
+ * Brian Grant has offered to review PRs that need attention for 1.2
+ * Dynamic Provisioning
+ * Currently overlaps a bit with the ubernetes work
+ * PR in progress.
+ * Should work in 1.2, but being targeted more in 1.3
+ * Next meeting is March 3rd.
+ * Demo from Weave on Kubernetes Anywhere
+ * Another Kubernetes 1.2 update
+ * Update from CNCF
+ * 1.3 commitments from google
+ * No meeting on March 10th.
+
+To get involved in the Kubernetes community consider joining our [Slack channel][2], taking a look at the [Kubernetes project][3] on GitHub, or join the [Kubernetes-dev Google group][4]. If you're really excited, you can do all of the above and join us for the next community conversation — March 3rd, 2016. Please add yourself or a topic you want to know about to the [agenda][5] and get a calendar invitation by joining [this group][6].
+
+The full recording is available on YouTube in the growing archive of [Kubernetes Community Meetings][7].
+
+-- _Kubernetes Community_
+
+[1]: https://redspread.com/
+[2]: http://slack.k8s.io/
+[3]: https://github.com/kubernetes/
+[4]: https://groups.google.com/forum/#!forum/kubernetes-dev
+[5]: https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY/edit#
+[6]: https://groups.google.com/forum/#!forum/kubernetes-community-video-chat
+[7]: https://www.youtube.com/playlist?list=PL69nYSiGNLP1pkHsbPjzAewvMgGUpkCnJ
diff --git a/blog/_posts/2016-03-00-Kubernetes-In-Enterprise-With-Fujitsus.md b/blog/_posts/2016-03-00-Kubernetes-In-Enterprise-With-Fujitsus.md
new file mode 100644
index 00000000000..74d5af3f976
--- /dev/null
+++ b/blog/_posts/2016-03-00-Kubernetes-In-Enterprise-With-Fujitsus.md
@@ -0,0 +1,87 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes in the Enterprise with Fujitsu’s Cloud Load Control "
+date: Saturday, March 11, 2016
+pagination:
+ enabled: true
+---
+Today’s guest post is by Florian Walker, Product Manager at Fujitsu, who works on Cloud Load Control, an offering focused on the usage of Kubernetes in an enterprise context. Florian tells us what potential Fujitsu sees in Kubernetes, and how they make it accessible to enterprises.
+
+Earlier this year, Fujitsu released its Kubernetes-based offering Fujitsu ServerView [Cloud Load Control](http://www.fujitsu.com/software/clc/) (CLC) to the public. Some might be surprised, since Fujitsu’s reputation is not necessarily related to software development, but rather to hardware manufacturing and IT services. As a long-time member of the Linux Foundation and a founding member of the Open Container Initiative and the Cloud Native Computing Foundation, Fujitsu not only builds software, but is also committed to open source software, and contributes to several projects, including Kubernetes. But we not only believe in Kubernetes as an open source project; we also chose it as the core of our offering, because it provides the best balance of feature set, resource requirements and complexity to run distributed applications at scale.
+
+Today, we want to take you on a short tour explaining the background of our offering, why we think Kubernetes is the right fit for our customers, and what value Cloud Load Control provides on top of it.
+**A long long time ago…**
+
+In mid 2014 we looked at the challenges enterprises are facing in the context of digitization, where traditional enterprises experience that more and more competitors from the IT sector are pushing into the core of their markets. A big part of Fujitsu’s customers are such traditional businesses, so we considered how we could help them and came up with three basic principles:
+
+- Decouple applications from infrastructure - Focus on where the value for the customer is: the application.
+- Decompose applications - Build applications from smaller, loosely coupled parts. Enable reconfiguration of those parts depending on the needs of the business. Also encourage innovation by low-cost experiments.
+- Automate everything - Fight the increasing complexity of the first two points by introducing a high degree of automation.
+
+We found that Linux containers themselves cover the first point and touch the second. But at the time, there was little support for creating distributed applications and running them in a managed, automated way. We found Kubernetes to be the missing piece.
+**Not a free lunch**
+
+The general approach of Kubernetes to managing containerized workloads is convincing, but as we looked at it through the eyes of customers, we realized that it’s not a free lunch. Many customers are medium-sized companies whose core business is often bound by strict data protection regulations. The top three requirements we identified are:
+
+- On-premise deployments (with the option for hybrid scenarios)
+- Efficient operations as part of a (much) bigger IT infrastructure
+- Enterprise-grade support, potentially on global scale
+
+We created Cloud Load Control with these requirements in mind. It is basically a distribution of Kubernetes targeted for on-premise use, primarily focusing on operational aspects of container infrastructure. We are committed to work with the community, and contribute all relevant changes and extensions upstream to the Kubernetes project.
+**On-premise deployments**
+
+As Kubernetes core developer Tim Hockin often puts it in his [talks](https://speakerdeck.com/thockin), Kubernetes is "a story with two parts", where setting up a Kubernetes cluster is not the easy part: it is often challenging due to variations in infrastructure. This is in particular true when it comes to production-ready deployments of Kubernetes. In the public cloud space, a customer could choose a service like Google Container Engine (GKE) to do this job. Since customers have fewer options on-premise, they often have to handle the deployment themselves.
+
+Cloud Load Control addresses these issues. It enables customers to reliably and readily provision production-grade Kubernetes clusters on their own infrastructure, with the following benefits:
+
+- Proven setup process, which lowers the risk of problems while setting up the cluster
+- Reduction of provisioning time to minutes
+- Repeatable process, relevant especially for large, multi-tenant environments
+
+Cloud Load Control delivers these benefits for a range of platforms, starting with selected OpenStack distributions in the first versions and successively adding more platforms depending on customer demand. We are especially excited about the option to remove the virtualization layer and support Kubernetes on bare-metal Fujitsu servers in the long run. Removing that layer of complexity would decrease the total cost of running the system, and eliminating the hypervisor would increase performance.
+
+Right now we are in the process of contributing a generic provider to set up Kubernetes on OpenStack. As a next step in driving multi-platform support, Docker-based deployment of Kubernetes seems crucial. We plan to contribute to this feature to ensure it reaches Beta in Kubernetes 1.3.
+
+**Efficient operations**
+
+Reducing operating costs is a goal of any organization providing IT infrastructure. This can be achieved by increasing the efficiency of operations and by helping operators get their job done. For large-scale container infrastructures, we found it important to differentiate between two types of operations:
+
+- Platform-oriented operations relate to the overall infrastructure, often including various systems, one of which might be Kubernetes.
+- Application-oriented operations focus on a single application, or a small set of applications, deployed on Kubernetes.
+
+Kubernetes is already great for the application-oriented part. Cloud Load Control was created to help platform-oriented operators efficiently manage Kubernetes as part of their overall infrastructure and to make it easy to execute the Kubernetes tasks relevant to them.
+
+The first version of Cloud Load Control provides a user interface integrated into the OpenStack Horizon dashboard, which enables platform operators to create and manage their Kubernetes clusters.
+
+ 
+
+Clusters are treated as first-class citizens of OpenStack. Their creation is as simple as the creation of a virtual machine. Operators do not need to learn a new system or method of provisioning, and the self-service approach enables large organizations to rapidly provide the Kubernetes infrastructure to their tenants.
+
+An intuitive UI is crucial for simplifying operations. This is why we have contributed heavily to the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard) project and ship it with Cloud Load Control. Especially for operators who don’t know the Kubernetes CLI by heart because they have to take care of other systems too, a great UI is perfectly suited to getting typical operational tasks done, such as checking the health of the system or deploying a new application.
+
+Monitoring is essential. With the dashboard, it is possible to get insights at the cluster level. To ensure that OpenStack operators gain a deep understanding of their platform, we will soon add an integration with [Monasca](https://wiki.openstack.org/wiki/Monasca), OpenStack’s monitoring-as-a-service project, so that Kubernetes metrics can be analyzed together with OpenStack metrics from a single point of access.
+
+**Quality and enterprise-grade support**
+
+As a Japanese company, we give quality and customer focus the highest priority in every product and service we ship. This is where the actual value of Cloud Load Control comes from: it provides a specific version of the open source software which has been intensively tested and hardened to ensure stable operations on a particular set of platforms.
+
+Acknowledging that container technology and Kubernetes are new territory for a lot of enterprises, expert assistance is key when setting up and running a production-grade container infrastructure. Cloud Load Control comes with a support service leveraging Fujitsu’s proven support structure. This enables support for customers operating Kubernetes in different regions of the world, such as Europe and Japan, as part of the same offering.
+
+**Conclusion**
+
+Although 2014 seems light years away, we believe the decision for Kubernetes was the right one. It is built from the ground up to enable the creation of container-based, distributed applications, and it supports this use case best.
+
+With Cloud Load Control, we’re excited to enable enterprises to run Kubernetes in production environments and to help their operators to efficiently use it, so DevOps teams can build awesome applications on top of it.
+
+
+
+_-- Florian Walker, Product Manager, FUJITSU_
diff --git a/blog/_posts/2016-03-00-Scaling-Neural-Network-Image-Classification-Using-Kubernetes-With-Tensorflow-Serving.md b/blog/_posts/2016-03-00-Scaling-Neural-Network-Image-Classification-Using-Kubernetes-With-Tensorflow-Serving.md
new file mode 100644
index 00000000000..567b2fadf69
--- /dev/null
+++ b/blog/_posts/2016-03-00-Scaling-Neural-Network-Image-Classification-Using-Kubernetes-With-Tensorflow-Serving.md
@@ -0,0 +1,36 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Scaling neural network image classification using Kubernetes with TensorFlow Serving "
+date: Thursday, March 23, 2016
+pagination:
+ enabled: true
+---
+In 2011, Google developed an internal deep learning infrastructure called [DistBelief](http://research.google.com/pubs/pub40565.html), which allowed Googlers to build ever larger [neural networks](https://en.wikipedia.org/wiki/Artificial_neural_network) and scale training to thousands of cores. Late last year, Google [introduced TensorFlow](http://googleresearch.blogspot.com/2015/11/tensorflow-googles-latest-machine_9.html), its second-generation machine learning system. TensorFlow is general, flexible, portable, easy-to-use and, most importantly, developed with the open source community.
+
+![](https://4.bp.blogspot.com/-PDRpnk823Ps/VvHJH3vIyKI/AAAAAAAAA4g/adIWZPfa2W4ObtIaWNbhpl8UyIwk9R7xg/s1600/tensorflowserving-4.png)
+
+The process of introducing machine learning into your product involves creating and training a model on your dataset, and then pushing the model to production to serve requests. In this blog post, we’ll show you how you can use [Kubernetes](http://kubernetes.io/) with [TensorFlow Serving](http://googleresearch.blogspot.com/2016/02/running-your-models-in-production-with.html), a high performance, open source serving system for machine learning models, to meet the scaling demands of your application.
+
+Let’s use image classification as an [example](https://tensorflow.github.io/serving/serving_inception). Suppose your application needs to be able to correctly identify an image across a set of categories. For example, given the cute puppy image below, your system should classify it as a retriever.
+
+| ![](https://3.bp.blogspot.com/-rUuOetJfoLc/VvHJHgDYusI/AAAAAAAAA4c/qO9xhVk4iH8EhrSqt3eZbqNGVQXH5fmCg/s1600/tensorflowserving-2.png) |
+| Image via [Wikipedia](https://commons.wikimedia.org/wiki/File:Golde33443.jpg) |
+
+You can implement image classification with TensorFlow using the [Inception-v3 model](http://googleresearch.blogspot.com/2016/03/train-your-own-image-classifier-with.html) trained on the data from the [ImageNet dataset](http://www.image-net.org/). This dataset contains images and their labels, which allows the TensorFlow learner to train a model that can be used by your application in production.
+
+![](https://4.bp.blogspot.com/-oaJYNPqiqIc/VvHJH2Z19cI/AAAAAAAAA4k/xq8m0kqRIOUewTZLDvzjPh6YLHG4MxdSQ/s1600/tensorflowserving-1.png)
+Once the model is trained and [exported](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/session_bundle/exporter.py), [TensorFlow Serving](https://tensorflow.github.io/serving/) uses the model to perform inference — predictions based on new data presented by its clients. In our example, clients submit image classification requests over [gRPC](http://www.grpc.io/), a high performance, open source RPC framework from Google.
+
+![](https://4.bp.blogspot.com/-g2S3V47h7BY/VvHJIkBlTiI/AAAAAAAAA4o/wISpFzB6kvIZxJHlnmM7-XYzZYl1YFfDA/s1600/tensorflowserving-5.png)
+
+Inference can be very resource intensive. Our server executes the following TensorFlow graph to process every classification request it receives. The Inception-v3 model has over 27 million parameters and runs 5.7 billion floating point operations per inference.
+
+| ![](https://2.bp.blogspot.com/-Gcb6gxzqDkE/VvHJHE7yD3I/AAAAAAAAA4Y/4EZD83OV_8goqodV2pcaQKYeinokf9UuA/s1600/tensorflowserving-3.png) |
+| Schematic diagram of Inception-v3 |
+
+Fortunately, this is where Kubernetes can help us. Kubernetes distributes inference request processing across a cluster using its [External Load Balancer](http://kubernetes.io/docs/user-guide/load-balancer/). Each [pod](http://kubernetes.io/docs/user-guide/pods/) in the cluster contains a [TensorFlow Serving Docker image](https://tensorflow.github.io/serving/docker) with the TensorFlow Serving-based gRPC server and a trained Inception-v3 model. The model is represented as a [set of files](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/session_bundle/README.md) describing the shape of the TensorFlow graph, model weights, assets, and so on. Since everything is neatly packaged together, we can dynamically scale the number of replicated pods using the [Kubernetes Replication Controller](http://kubernetes.io/docs/user-guide/replication-controller/operations/) to keep up with the service demands.
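+
+To make this concrete, here is a minimal sketch of what such a replicated serving deployment could look like. The image name, label, and port below are illustrative assumptions, not the exact manifest from the tutorial:
+
+```
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: inception-controller
+spec:
+  replicas: 3                # resize to match request load
+  selector:
+    worker: inception-pod
+  template:
+    metadata:
+      labels:
+        worker: inception-pod
+    spec:
+      containers:
+      - name: inception-container
+        image: gcr.io/example/inception-serving  # hypothetical image name
+        ports:
+        - containerPort: 9000   # gRPC serving port (assumed)
+```
+
+Scaling the pool up or down is then a single command, e.g. `kubectl scale rc inception-controller --replicas=5`.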
+
+To help you try this out yourself, we’ve written a [step-by-step tutorial](https://tensorflow.github.io/serving/serving_inception), which shows you how to create the TensorFlow Serving Docker container to serve the Inception-v3 image classification model, configure a Kubernetes cluster and run classification requests against it. We hope this will make it easier for you to integrate machine learning into your own applications and scale it with Kubernetes! To learn more about TensorFlow Serving, check out [tensorflow.github.io/serving](http://tensorflow.github.io/serving).
+
+- _Fangwei Li, Software Engineer, Google_
diff --git a/blog/_posts/2016-03-00-State-Of-Container-World-February-2016.md b/blog/_posts/2016-03-00-State-Of-Container-World-February-2016.md
new file mode 100644
index 00000000000..0ee64d632ce
--- /dev/null
+++ b/blog/_posts/2016-03-00-State-Of-Container-World-February-2016.md
@@ -0,0 +1,115 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " State of the Container World, February 2016 "
+date: Wednesday, March 01, 2016
+pagination:
+ enabled: true
+---
+Hello, and welcome to the second installment of the Kubernetes state of the container world survey. At the beginning of February we sent out a survey about people’s usage of containers, and wrote about the [results from the January survey](http://blog.kubernetes.io/2016/02/state-of-container-world-january-2016.html). Here we are again. As before, while we try to reach a large and representative set of respondents, this survey was publicized across the social media accounts of myself and others on the Kubernetes team, so I expect some pro-container and pro-Kubernetes bias in the data. We continue to try to reach as large an audience as possible, and in that vein, please go and take the [March survey](https://docs.google.com/a/google.com/forms/d/1hlOEyjuN4roIbcAAUbDhs7xjNMoM8r-hqtixf6zUsp4/viewform) and share it with your friends and followers everywhere! Without further ado, the numbers...
+
+## Containers continue to gain ground
+
+In January, 71% of respondents were currently using containers; in February, 89% were. The percentage of users not even considering containers also shrank, from 4% in January to a surprising 0% in February. We’ll see if that holds in March. Likewise, the usage of containers continued to march across the dev/canary/prod lifecycle. In all parts of the lifecycle, container usage increased:
+
+
+- Development: 80% -\> 88%
+- Test: 67% -\> 72%
+- Pre-production: 41% -\> 55%
+- Production: 50% -\> 62%
+
+What is striking is that pre-production growth continued even as workloads clearly transitioned into true production. Likewise, the share of people considering containers for production rose from 78% in January to 82% in February. Again, we’ll see if the trend continues into March.
+
+## Container and cluster sizes
+
+We asked some new questions in the survey too, around container and cluster sizes, and there were some interesting numbers:
+
+How many containers are you running?
+
+ 
+
+How many machines are you running containers on?
+
+ 
+
+So while container usage continues to grow, the size and scope continues to be quite modest, with more than 50% of users running fewer than 50 containers on fewer than 10 machines.
+
+## Things stay the same
+
+Across the orchestration space, things seemed pretty consistent between January and February. Kubernetes is quite popular with folks (54% -\> 57%), though again, please see the note at the top about the likely bias in our survey population. Shell scripts likewise are quite popular and holding steady. You all certainly love your Bash (don’t worry, we do too ;)
+Likewise, people continue to use cloud services, both in raw IaaS form (10% on GCE, 30% on EC2, 2% on Azure) and as cloud container services (16% on GKE, 11% on ECS, 1% on ACS). Though the most popular deployment target by far remains your laptop/desktop, at ~53%.
+
+## Raw data
+
+As always, the complete raw data is available in a spreadsheet [here](https://docs.google.com/spreadsheets/d/126nnv9Q9avxDvC82irJGUDK3UODokILZOQe5X_WB9VQ/edit?usp=sharing).
+
+## Conclusions
+
+Containers continue to gain in popularity and usage. The world of orchestration is somewhat stabilizing, and cloud services continue to be a common place to run containers, though your laptop is even more popular.
+
+And if you are just getting started with containers (or looking to move beyond your laptop) please visit us at [kubernetes.io](http://kubernetes.io/) and [Google Container Engine](https://cloud.google.com/container-engine/). ‘Till next month, please get your friends, relatives and co-workers to take our [March survey](https://docs.google.com/a/google.com/forms/d/1hlOEyjuN4roIbcAAUbDhs7xjNMoM8r-hqtixf6zUsp4/viewform)!
+
+
+
+Thanks!
+
+_-- Brendan Burns, Software Engineer, Google_
diff --git a/blog/_posts/2016-03-00-Using-Spark-And-Zeppelin-To-Process-Big-Data-On-Kubernetes.md b/blog/_posts/2016-03-00-Using-Spark-And-Zeppelin-To-Process-Big-Data-On-Kubernetes.md
new file mode 100644
index 00000000000..3d930ac2e0d
--- /dev/null
+++ b/blog/_posts/2016-03-00-Using-Spark-And-Zeppelin-To-Process-Big-Data-On-Kubernetes.md
@@ -0,0 +1,138 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Using Spark and Zeppelin to process big data on Kubernetes 1.2 "
+date: Thursday, March 30, 2016
+pagination:
+ enabled: true
+---
+_Editor's note: this is the fifth post in a [series of in-depth posts](http://blog.kubernetes.io/2016/03/five-days-of-kubernetes-12.html) on what's new in Kubernetes 1.2_
+
+With big data usage growing exponentially, many Kubernetes customers have expressed interest in running [Apache Spark](http://spark.apache.org/) on their Kubernetes clusters to take advantage of the portability and flexibility of containers. Fortunately, with Kubernetes 1.2, you can now have a platform that runs Spark and Zeppelin, and your other applications side-by-side.
+
+
+### Why Zeppelin?
+[Apache Zeppelin](https://zeppelin.incubator.apache.org/) is a web-based notebook that enables interactive data analytics. As one of its backends, Zeppelin connects to Spark. Zeppelin allows the user to interact with the Spark cluster in a simple way, without having to deal with a command-line interpreter or a Scala compiler.
+
+
+### Why Kubernetes?
+There are many ways to run Spark outside of Kubernetes:
+
+
+- You can run it using dedicated resources, in Standalone mode
+- You can run it on a [YARN](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html) cluster, co-resident with Hadoop and HDFS
+- You can run it on a [Mesos](http://mesos.apache.org/) cluster alongside other Mesos applications
+
+
+
+So why would you run Spark on Kubernetes?
+
+
+- A single, unified interface to your cluster: Kubernetes can manage a broad range of workloads; no need to deal with YARN/HDFS for data processing and a separate container orchestrator for your other applications.
+- Increased server utilization: share nodes between Spark and cloud-native applications. For example, you may have a streaming application running to feed a streaming Spark pipeline, or an nginx pod to serve web traffic — no need to statically partition nodes.
+- Isolation between workloads: Kubernetes' [Quality of Service](https://github.com/kubernetes/kubernetes/blob/release-1.2/docs/proposals/resource-qos.md) mechanism allows you to safely co-schedule batch workloads like Spark on the same nodes as latency-sensitive servers.
+
+
+
+### Launch Spark
+For this demo, we’ll be using [Google Container Engine](https://cloud.google.com/container-engine/) (GKE), but this should work anywhere you have installed a Kubernetes cluster. First, create a Container Engine cluster with storage-full scopes. These Google Cloud Platform scopes will allow the cluster to write to a private Google Cloud Storage Bucket (we’ll get to why you need that later):
+
+```
+$ gcloud container clusters create spark --scopes storage-full \
+  --machine-type n1-standard-4
+```
+_Note_: We’re using n1-standard-4 (which is larger than the default node size) to demonstrate some features of Horizontal Pod Autoscaling. However, Spark works just fine on the default node size of n1-standard-1.
+
+After the cluster’s created, you’re ready to launch Spark on Kubernetes using the config files in the Kubernetes GitHub repo:
+
+```
+$ git clone https://github.com/kubernetes/kubernetes.git
+$ kubectl create -f kubernetes/examples/spark
+```
+`‘kubernetes/examples/spark’` is a directory, so this command tells kubectl to create all of the Kubernetes objects defined in all of the YAML files in that directory. You don’t have to clone the entire repository, but it makes the steps of this demo just a little easier.
+
+The pods (especially Apache Zeppelin) are somewhat large, so it may take some time for Docker to pull the images. Once everything is running, you should see something similar to the following:
+
+```
+$ kubectl get pods
+NAME READY STATUS RESTARTS AGE
+spark-master-controller-v4v4y 1/1 Running 0 21h
+spark-worker-controller-7phix 1/1 Running 0 21h
+spark-worker-controller-hq9l9 1/1 Running 0 21h
+spark-worker-controller-vwei5 1/1 Running 0 21h
+zeppelin-controller-t1njl 1/1 Running 0 21h
+```
+You can see that Kubernetes is running one instance of Zeppelin, one Spark master and three Spark workers.
+
+
+### Set up the Secure Proxy to Zeppelin
+Next you’ll set up a secure proxy from your local machine to Zeppelin, so you can access the Zeppelin instance running in the cluster from your machine. (Note: You’ll need to change this command to the actual Zeppelin pod that was created on your cluster.)
+
+```
+$ kubectl port-forward zeppelin-controller-t1njl 8080:8080
+```
+This establishes a secure link to the Kubernetes cluster and pod (`zeppelin-controller-t1njl`) and then forwards the port in question (8080) to local port 8080, which will allow you to use Zeppelin safely.
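+
+If you’d rather not copy the pod name from the listing above by hand, you can look it up with a label selector first. A minimal sketch, assuming the example manifests label the Zeppelin pod with `component=zeppelin`:
+
+```
+$ kubectl get pods -l component=zeppelin
+NAME                        READY     STATUS    RESTARTS   AGE
+zeppelin-controller-t1njl   1/1       Running   0          21h
+```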
+
+
+### Now that I have Zeppelin up and running, what do I do with it?
+For our example, we’re going to show you how to build a simple movie recommendation model. This is based on the code [on the Spark website](http://spark.apache.org/docs/1.5.2/mllib-collaborative-filtering.html), modified slightly to make it interesting for Kubernetes.
+
+Now that the secure proxy is up, visit [http://localhost:8080/](http://localhost:8080/). You should see an intro page like this:
+
+
+![](https://1.bp.blogspot.com/-rk6iWAauxGM/VvwPoE25QFI/AAAAAAAAA6c/NOBZzIWfTYEqJin-tWY1zrePPOqr3TZWQ/s1600/Spark2.png)
+
+
+Click “Import note,” give it an arbitrary name (e.g. “Movies”), and click “Add from URL.” For a URL, enter:
+
+```
+https://gist.githubusercontent.com/zmerlynn/875fed0f587d12b08ec9/raw/6eac83e99caf712482a4937800b17bbd2e7b33c4/movies.json
+```
+Then click “Import Note.” This will give you a ready-made Zeppelin note for this demo. You should now have a “Movies” notebook (or whatever you named it). If you click that note, you should see a screen similar to this:
+
+
+![](https://2.bp.blogspot.com/-qyjtrUpXisg/VvwPvSPnWNI/AAAAAAAAA6g/Son7C2yWolk28KLSy63acGPnuFGjJEoeg/s1600/Spark1.png)
+
+You can now click the Play button, near the top-right of the PySpark code block, and you’ll create a new, in-memory movie recommendation model! In the Spark application model, Zeppelin acts as a [Spark Driver Program](https://spark.apache.org/docs/1.5.2/cluster-overview.html), interacting with the Spark cluster master to get its work done. In this case, the driver program that’s running in the Zeppelin pod fetches the data and sends it to the Spark master, which farms it out to the workers, which crunch out a movie recommendation model using the code from the driver. With a larger data set in Google Cloud Storage (GCS), it would be easy to pull the data from GCS as well. In the next section, we’ll talk about how to save your data to GCS.
+
+
+### Working with Google Cloud Storage (Optional)
+For this demo, we’ll be using Google Cloud Storage, which will let us store our model data beyond the life of a single pod. The Spark image for Kubernetes ships with the [Google Cloud Storage](https://cloud.google.com/storage/) connector built in. As long as you can access your data from a virtual machine in the Google Container Engine project where your Kubernetes nodes are running, you can access your data with the GCS connector on the Spark image.
+
+If you want, you can change the variables at the top of the note so that the example will actually save and restore a model for the movie recommendation engine — just point those variables at a GCS bucket that you have access to. If you want to create a GCS bucket, you can do something like this on the command line:
+
+```
+$ gsutil mb gs://my-spark-models
+```
+You’ll need to change this URI to something that is unique for you. This will create a bucket that you can use in the example above.
+
+**Note** : Computing the model and saving it is much slower than computing the model and throwing it away. This is expected. However, if you plan to reuse a model, it’s faster to compute the model and save it and then restore it each time you want to use it, rather than throw away and recompute the model each time.
+
+### Using Horizontal Pod Autoscaling with Spark (Optional)
+Spark is somewhat elastic to workers coming and going, which means we have an opportunity: we can use [Kubernetes Horizontal Pod Autoscaling](http://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/) to scale out the Spark worker pool automatically, setting a target CPU threshold for the workers and a minimum/maximum pool size. This obviates the need to configure the number of worker replicas manually.
+
+Create the Autoscaler like this (note: if you didn’t change the machine type for the cluster, you probably want to limit the --max to something smaller):
+
+```
+$ kubectl autoscale --min=1 --cpu-percent=80 --max=10 \
+ rc/spark-worker-controller
+```
+To see the full effect of autoscaling, wait for the replication controller to settle back to one replica. Use `‘kubectl get rc’` and wait for the “replicas” column on spark-worker-controller to fall back to 1.
+
+The workload we ran before completed too quickly to be terribly interesting for HPA. To change the workload to run long enough to see autoscaling become active, change the “rank = 100” line in the code to “rank = 200.” After you hit play, the Spark worker pool should rapidly scale up toward the configured maximum. It will take up to 5 minutes after the job completes before the worker pool falls back down to one replica.
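+
+While the job runs, you can watch both the autoscaler and the worker pool react. A quick sketch using standard kubectl commands:
+
+```
+# Show current vs. target CPU and the pod-count bounds for the autoscaler
+$ kubectl get hpa
+
+# Watch the worker replica count change as the autoscaler resizes the pool
+$ kubectl get rc spark-worker-controller --watch
+```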
+
+
+### Conclusion
+In this article, we showed you how to run Spark and Zeppelin on Kubernetes, as well as how to use Google Cloud Storage to store your Spark model and how to use Horizontal Pod Autoscaling to dynamically size your Spark worker pool.
+
+This is the first in a series of articles we’ll be publishing on how to run big data frameworks on Kubernetes — so stay tuned!
+
+Please join our community and help us build the future of Kubernetes! There are many ways to participate. If you’re particularly interested in Kubernetes and big data, you’ll be interested in:
+
+- Our [Big Data slack channel](https://kubernetes.slack.com/messages/sig-big-data/)
+- Our [Kubernetes Big Data Special Interest Group email list](https://groups.google.com/forum/#!forum/kubernetes-sig-big-data)
+- The Big Data “Special Interest Group,” which meets every Monday at 1pm (13h00) Pacific Time at [SIG-Big-Data hangout ](https://plus.google.com/hangouts/_/google.com/big-data)
+And of course for more information about the project in general, go to [www.kubernetes.io](http://www.kubernetes.io/).
+
+ -- _Zach Loafman, Software Engineer, Google_
diff --git a/blog/_posts/2016-04-00-Adding-Support-For-Kubernetes-In-Rancher.md b/blog/_posts/2016-04-00-Adding-Support-For-Kubernetes-In-Rancher.md
new file mode 100644
index 00000000000..8ef4dd4edbc
--- /dev/null
+++ b/blog/_posts/2016-04-00-Adding-Support-For-Kubernetes-In-Rancher.md
@@ -0,0 +1,40 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Adding Support for Kubernetes in Rancher "
+date: Saturday, April 08, 2016
+pagination:
+ enabled: true
+---
+_Today’s guest post is written by Darren Shepherd, Chief Architect at Rancher Labs, an open-source software platform for managing containers._
+
+Over the last year, we’ve seen a tremendous increase in the number of companies looking to leverage containers in their software development and IT organizations. To achieve this, organizations have been looking at how to build a centralized container management capability that will make it simple for users to get access to containers, while centralizing visibility and control with the IT organization. In 2014 we started the open-source Rancher project to address this by building a management platform for containers.
+
+Recently we shipped Rancher v1.0. With this latest release, [Rancher](http://www.rancher.com/), an open-source software platform for managing containers, now supports Kubernetes as a container orchestration framework when creating environments. Now, launching a Kubernetes environment with Rancher is fully automated, delivering a functioning cluster in just 5-10 minutes.
+
+We created Rancher to provide organizations with a complete management platform for containers. As part of that, we’ve always supported deploying Docker environments natively using the Docker API and Docker Compose. Since its inception, we’ve been impressed with the operational maturity of Kubernetes, and with this release, we’re making it possible to deploy a variety of container orchestration and scheduling frameworks within the same management platform.
+
+Adding Kubernetes gives users access to one of the fastest growing platforms for deploying and managing containers in production. We’ll provide first-class Kubernetes support in Rancher going forward and continue to support native Docker deployments.
+
+**Bringing Kubernetes to Rancher**
+
+
+ {: .big-img}
+
+Our platform was already extensible for a variety of different packaging formats, so we were optimistic about embracing Kubernetes. We were right: working with the Kubernetes project has been a fantastic experience for us as developers. The design of the project made this incredibly easy, and we were able to utilize plugins and extensions to build a distribution of Kubernetes that leveraged our infrastructure and application services. For instance, we were able to plug Rancher’s software-defined networking, storage management, load balancing, DNS and infrastructure management functions directly into Kubernetes, without even changing the code base.
+
+
+
+Even better, we have been able to add a number of services around the core Kubernetes functionality. For instance, we implemented our popular [application catalog on top of Kubernetes](https://github.com/rancher/community-catalog/tree/master/kubernetes-templates). Historically we’ve used Docker Compose to define application templates, but with this release, we now support Kubernetes services, replication controllers and pods to deploy applications. With the catalog, users connect to a git repo and automate deployment and upgrade of an application deployed as Kubernetes services. Users then configure and deploy a complex multi-node enterprise application with one click of a button. Upgrades are fully automated as well, and pushed out centrally to users.
+
+
+
+**Giving Back**
+
+
+
+Like Kubernetes, Rancher is an open-source software project, free to use by anyone, and given to the community without any restrictions. You can find all of the source code, upcoming releases and issues for Rancher on [GitHub](http://www.github.com/rancher/rancher). We’re thrilled to be joining the Kubernetes community, and look forward to working with all of the other contributors. View a demo of the new Kubernetes support in Rancher [here](http://rancher.com/kubernetes/).
+
+
+
+_-- Darren Shepherd, Chief Architect, Rancher Labs_
diff --git a/blog/_posts/2016-04-00-Building-Awesome-User-Interfaces-For-Kubernetes.md b/blog/_posts/2016-04-00-Building-Awesome-User-Interfaces-For-Kubernetes.md
new file mode 100644
index 00000000000..558bb214734
--- /dev/null
+++ b/blog/_posts/2016-04-00-Building-Awesome-User-Interfaces-For-Kubernetes.md
@@ -0,0 +1,34 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " SIG-UI: the place for building awesome user interfaces for Kubernetes "
+date: Thursday, April 20, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: This week we’re featuring [Kubernetes Special Interest Groups](https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)); Today’s post is by the SIG-UI team describing their mission and showing the cool projects they work on._
+
+Kubernetes has been handling production workloads for a long time now (see [case studies](http://kubernetes.io/#talkToUs)). It runs on public, private and hybrid clouds as well as bare metal. It can handle all types of workloads (web serving, batch and mixed) and enable [zero-downtime rolling updates](https://www.youtube.com/watch?v=9C6YeyyUUmI). It abstracts service discovery, load balancing and storage so that applications running on Kubernetes aren’t restricted to a specific cloud provider or environment.
+
+The abundance of features that Kubernetes offers is fantastic, but implementing a user-friendly, easy-to-use user interface is quite challenging. How shall all the features be presented to users? How can we gradually expose the Kubernetes concepts to newcomers, while empowering experts? There are lots of other challenges like these that we’d like to solve. This is why we created a special interest group for Kubernetes user interfaces.
+
+**Meet SIG-UI: the place for building awesome user interfaces for Kubernetes**
+The SIG UI mission is simple: we want to radically improve the user experience of all Kubernetes graphical user interfaces. Our goal is to craft UIs that can be used by devs, ops and resource managers across their various environments, while remaining intuitive enough for newcomers to Kubernetes to understand and use.
+
+SIG UI members have been independently working on a variety of UIs for Kubernetes. So far, the projects we’ve seen have been either custom internal tools coupled to their company workflows, or specialized API frontends. We have realized that there is a need for a universal UI that can be used standalone or be a standard base for custom vendors. That’s how we started the [Dashboard UI](http://github.com/kubernetes/dashboard) project. Version 1.0 has been recently released and is included with Kubernetes as a cluster addon. The Dashboard project was recently featured in a [talk at KubeCon EU](https://www.youtube.com/watch?v=sARH5zQhovE), and we have ambitious plans for the future!
+
+|  |
+| Dashboard UI v1.0 home screen showing applications running in a Kubernetes cluster. |
+
+
+Since the initial release of the Dashboard UI we have been thinking hard about what to do next and what users of UIs for Kubernetes think about our plans. We’ve had many internal discussions on this topic, but most importantly, reached out directly to our users. We created a questionnaire asking a few demographic questions as well as questions for prioritizing use cases. We received more than 200 responses from a wide spectrum of user types, which in turn helped to shape the Dashboard UI’s [current roadmap](https://github.com/kubernetes/dashboard/blob/master/docs/devel/roadmap.md). Our members from LiveWyer summarised the results in a [nice infographic](http://static.lwy.io/img/kubernetes_dashboard_infographic.png).
+
+**Connect with us**
+
+We believe that collaboration is the key to SIG UI success, so we invite everyone to connect with us. Whether you’re a Kubernetes user who wants to provide feedback, develop your own UIs, or simply want to collaborate on the Dashboard UI project, feel free to get in touch. There are many ways you can contact us:
+
+- Email us at the [sig-ui mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-ui)
+- Chat with us on the [Kubernetes Slack](http://slack.k8s.io/): #[sig-ui channel](https://kubernetes.slack.com/messages/sig-ui/)
+- Join our meetings: biweekly on Wednesdays 9AM PT (US friendly) and weekly 10AM CET (Europe friendly). See the [SIG-UI calendar](https://calendar.google.com/calendar/embed?src=google.com_52lm43hc2kur57dgkibltqc6kc%40group.calendar.google.com&ctz=Europe/Warsaw) for details.
+
+_-- Piotr Bryk, Software Engineer, Google_
diff --git a/blog/_posts/2016-04-00-Configuration-Management-With-Containers.md b/blog/_posts/2016-04-00-Configuration-Management-With-Containers.md
new file mode 100644
index 00000000000..27faa6feda8
--- /dev/null
+++ b/blog/_posts/2016-04-00-Configuration-Management-With-Containers.md
@@ -0,0 +1,203 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Configuration management with Containers "
+date: Tuesday, April 04, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: this is our seventh post in a [series of in-depth posts](http://blog.kubernetes.io/2016/03/five-days-of-kubernetes-12.html) on what's new in Kubernetes 1.2_
+
+A [good practice](http://12factor.net/config) when writing applications is to separate application code from configuration. We want to enable application authors to easily employ this pattern within Kubernetes. While the Secrets API allows separating information like credentials and keys from an application, no object existed in the past for ordinary, non-secret configuration. In [Kubernetes 1.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md/#v120), we've added a new API resource called ConfigMap to handle this type of configuration data.
+
+
+#### **The basics of ConfigMap**
+The ConfigMap API is simple conceptually. From a data perspective, the ConfigMap type is just a set of key-value pairs. Applications are configured in different ways, so we need to be flexible about how we let users store and consume configuration data. There are three ways to consume a ConfigMap in a pod:
+
+
+- Command line arguments
+- Environment variables
+- Files in a volume
+
+These different methods lend themselves to different ways of modeling the data being consumed. To be as flexible as possible, we made ConfigMap hold both fine- and coarse-grained data. Further, because applications read configuration settings from both environment variables and files containing configuration data, we built ConfigMap to support either method of access. Let’s take a look at an example ConfigMap that contains both types of configuration:
+
+
+```
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: example-configmap
+data:
+  # property-like keys
+  game-properties-file-name: game.properties
+  ui-properties-file-name: ui.properties
+  # file-like keys
+  game.properties: |
+    enemies=aliens
+    lives=3
+    enemies.cheat=true
+    enemies.cheat.level=noGoodRotten
+    secret.code.passphrase=UUDDLRLRBABAS
+    secret.code.allowed=true
+    secret.code.lives=30
+  ui.properties: |
+    color.good=purple
+    color.bad=yellow
+    allow.textmode=true
+    how.nice.to.look=fairlyNice
+```
+
+
+Users that have used Secrets will find it easy to begin using ConfigMap — they’re very similar. One major difference in these APIs is that Secret values are stored as byte arrays in order to support storing binaries like SSH keys. In JSON and YAML, byte arrays are serialized as base64 encoded strings. This means that it’s not easy to tell what the content of a Secret is from looking at the serialized form. Since ConfigMap is intended to hold only configuration information and not binaries, values are stored as strings, and thus are readable in the serialized form.
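+
+To see the difference, compare how the same value would appear in each serialized form; a quick sketch:
+
+```
+# A Secret value must be base64-encoded in its serialized form:
+$ echo -n "purple" | base64
+cHVycGxl
+
+# The equivalent ConfigMap entry stores the readable string as-is:
+#   color.good: purple
+```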
+
+
+
+We want creating ConfigMaps to be as flexible as storing data in them. To create a ConfigMap object, we’ve added a kubectl command called `kubectl create configmap` that offers three different ways to specify key-value pairs:
+
+
+- Specify literal keys and value
+- Specify an individual file
+- Specify a directory to create keys for each file
+
+
+
+These different options can be mixed, matched, and repeated within a single command:
+
+```
+$ kubectl create configmap my-config \
+  --from-literal=literal-key=literal-value \
+  --from-file=ui.properties \
+  --from-file=path/to/config/dir
+```
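+
+After creating the ConfigMap, you can inspect what was stored; a quick sketch using standard kubectl commands:
+
+```
+# Summarize the keys and data sizes of the new ConfigMap
+$ kubectl describe configmap my-config
+
+# Or dump the full object, values included
+$ kubectl get configmap my-config -o yaml
+```
+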
+Consuming ConfigMaps is simple and will also be familiar to users of Secrets. Here’s an example of a Deployment that uses the ConfigMap above to run an imaginary game server:
+
+```
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: configmap-example-deployment
+  labels:
+    name: configmap-example-deployment
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      name: configmap-example
+  template:
+    metadata:
+      labels:
+        name: configmap-example
+    spec:
+      containers:
+      - name: game-container
+        image: imaginarygame
+        command: ["game-server", "--config-dir=/etc/game/cfg"]
+        env:
+        # consume the property-like keys in environment variables
+        - name: GAME_PROPERTIES_NAME
+          valueFrom:
+            configMapKeyRef:
+              name: example-configmap
+              key: game-properties-file-name
+        - name: UI_PROPERTIES_NAME
+          valueFrom:
+            configMapKeyRef:
+              name: example-configmap
+              key: ui-properties-file-name
+        volumeMounts:
+        - name: config-volume
+          mountPath: /etc/game
+      volumes:
+      # consume the file-like keys of the configmap via volume plugin
+      - name: config-volume
+        configMap:
+          name: example-configmap
+          items:
+          - key: ui.properties
+            path: cfg/ui.properties
+          - key: game.properties
+            path: cfg/game.properties
+      restartPolicy: Never
+```
+In the above example, the Deployment uses keys of the ConfigMap via two of the different mechanisms available. The property-like keys of the ConfigMap are used as environment variables to the single container in the Deployment template, and the file-like keys populate a volume. For more details, please see the [ConfigMap docs](http://kubernetes.io/docs/user-guide/configmap/).
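+
+Once a pod from this Deployment is running, you can verify both consumption paths from inside the container. A minimal sketch, with `$POD` standing in for the actual pod name:
+
+```
+# The property-like keys arrive as environment variables
+$ kubectl exec $POD -- printenv GAME_PROPERTIES_NAME
+game.properties
+
+# The file-like keys are projected into the mounted volume
+$ kubectl exec $POD -- ls /etc/game/cfg
+game.properties
+ui.properties
+```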
+
+We hope that these basic primitives are easy to use and look forward to seeing what people build with ConfigMaps. Thanks to the community members that provided feedback about this feature. Special thanks also to Tamer Tas who made a great contribution to the proposal and implementation of ConfigMap.
+
+If you’re interested in Kubernetes and configuration, you’ll want to participate in:
+
+- Our Configuration [Slack channel](https://kubernetes.slack.com/messages/sig-configuration/)
+- Our [Kubernetes Configuration Special Interest Group](https://groups.google.com/forum/#!forum/kubernetes-sig-config) email list
+- The Configuration “Special Interest Group,” which meets weekly on Wednesdays at 10am (10h00) Pacific Time at [SIG-Config hangout](https://hangouts.google.com/hangouts/_/google.com/kube-sig-config)
+
+
+
+And of course for more information about the project in general, go to [www.kubernetes.io](http://www.kubernetes.io/) and follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio).
+
+-- _Paul Morie, Senior Software Engineer, Red Hat_
diff --git a/blog/_posts/2016-04-00-Container-Survey-Results-March-2016.md b/blog/_posts/2016-04-00-Container-Survey-Results-March-2016.md
new file mode 100644
index 00000000000..aa64b3fd270
--- /dev/null
+++ b/blog/_posts/2016-04-00-Container-Survey-Results-March-2016.md
@@ -0,0 +1,37 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Container survey results - March 2016 "
+date: Saturday, April 08, 2016
+pagination:
+ enabled: true
+---
+Last month we ran the third installment of our container survey, and today we look at the results. (Raw data is available [here](https://docs.google.com/spreadsheets/d/13356w6I2xxKnmjblFSsKGVANZGGlX2yFMzb8eOIe2Oo/edit?usp=sharing).)
+
+
+Looking at the headline number, “how many people are using containers,” we see a decrease in the number of people currently using containers, from 89% to 80%. Obviously, we can’t be certain of the cause of this decrease, but my belief is that the previous number was artificially high due to sampling biases, and that we did a better job of reaching a broader set of participants in the March survey, so the March numbers more accurately represent what is going on in the world.
+
+
+Along the lines of getting an unbiased sample, I’m excited to announce that going forward we will be partnering with [The New Stack](http://thenewstack.io/) and the [Cloud Native Computing Foundation](http://cncf.io/) to publicize and distribute this container survey. This partnership will enable us to reach a broader audience than we do today, and thus obtain a significantly less biased, more representative portrayal of current container usage. I’m really excited about this collaboration!
+
+
+But without further ado, more on the data.
+
+
+For the rest of the numbers, the March survey shows a steady continuation of what we saw in February. Most container usage is still in development and testing, though a solid majority (60%) are using containers in production as well. For the remaining folks using containers, there continues to be a plan to bring them to production: the “I am planning to” number for production use matches up nearly identically with the numbers for people currently testing.
+
+
+Physical and virtual machines continue to be the most popular places to deploy containers, though the March survey shows a fairly substantial drop (48% -\> 35%) in people deploying to physical machines.
+
+
+Likewise hosted container services show growth, with nearly every service showing some growth. [Google Container Engine](https://cloud.google.com/container-engine/) continues to be the most popular in the survey, followed by the [Amazon EC2 Container Service](https://aws.amazon.com/ecs/). It will be interesting to see how those numbers change as we move to the New Stack survey.
+
+
+Finally, [Kubernetes](http://kubernetes.io/) is still the favorite container manager, with [Bash scripts](http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO.html) still in second place. As with the container service provider numbers, I’ll be quite interested to see what this looks like with a broader sample set.
+
+
+Overall, the absolute use of containers appears to be ticking up. The number of people running more than 250 containers has grown from 12% to nearly 20%, and the number of people running containers on 50 or more machines has grown from 10% to 18%.
+
+As always, the raw data is available for you to analyze [here](https://docs.google.com/spreadsheets/d/13356w6I2xxKnmjblFSsKGVANZGGlX2yFMzb8eOIe2Oo/edit?usp=sharing).
+
+_-- Brendan Burns, Software Engineer, Google_
diff --git a/blog/_posts/2016-04-00-Introducing-Kubernetes-Openstack-Sig.md b/blog/_posts/2016-04-00-Introducing-Kubernetes-Openstack-Sig.md
new file mode 100644
index 00000000000..c76f2ba6bfd
--- /dev/null
+++ b/blog/_posts/2016-04-00-Introducing-Kubernetes-Openstack-Sig.md
@@ -0,0 +1,67 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Introducing the Kubernetes OpenStack Special Interest Group "
+date: Saturday, April 22, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: This week we’re featuring [Kubernetes Special Interest Groups](https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)); Today’s post is by the SIG-OpenStack team about their mission to facilitate ideas between the OpenStack and Kubernetes communities._
+
+
+
+The community around the Kubernetes project includes a number of Special Interest Groups (SIGs) for the purposes of facilitating focused discussions relating to important subtopics between interested contributors. Today we would like to highlight the [Kubernetes OpenStack SIG](https://github.com/kubernetes/kubernetes/wiki/SIG-Openstack) focused on the interaction between [Kubernetes](http://kubernetes.io/) and [OpenStack](http://www.openstack.org/), the Open Source cloud computing platform.
+
+There are two high level scenarios that are being discussed in the SIG:
+
+
+- Using Kubernetes to manage containerized workloads running on top of OpenStack
+- Using Kubernetes to manage containerized OpenStack services themselves
+
+In both cases the intent is to help facilitate the cross-pollination of ideas between the growing Kubernetes and OpenStack communities. The OpenStack community itself includes a number of projects broadly aimed at assisting with both of these use cases, including:
+
+
+- [Kolla](http://governance.openstack.org/reference/projects/kolla.html) - Provides OpenStack service containers and deployment tooling for operating OpenStack clouds.
+- [Kuryr](http://governance.openstack.org/reference/projects/kuryr.html) - Provides bridges between container networking/storage framework models and OpenStack infrastructure services.
+- [Magnum](http://governance.openstack.org/reference/projects/magnum.html) - Provides containers as a service for OpenStack.
+- [Murano](http://governance.openstack.org/reference/projects/murano.html) - Provides an Application Catalog service for OpenStack including support for Kubernetes itself, and for containerized applications, managed by Kubernetes.
+
+
+There are also a number of example templates available to assist with using the OpenStack Orchestration service ([Heat](http://governance.openstack.org/reference/projects/heat.html)) to deploy and configure either Kubernetes itself or offerings built around Kubernetes such as [OpenShift](https://github.com/redhat-openstack/openshift-on-openstack/). While each of these approaches has its own pros and cons, the common theme is the ability, or potential ability, to use Kubernetes and, where available, leverage deeper integration between it and the OpenStack services themselves.
+
+
+
+Current SIG participants represent a broad array of organizations including but not limited to: CoreOS, eBay, GoDaddy, Google, IBM, Intel, Mirantis, OpenStack Foundation, Rackspace, Red Hat, Romana, Solinea, VMware.
+
+
+
+The SIG is currently working on [collating information](https://docs.google.com/document/d/1wNl_xcITKwzUsFNRu5npUTJuh9pbJAdzzpG6Cd2Fcp0/edit?ts=57033dd6) about these approaches to help Kubernetes users navigate the OpenStack ecosystem along with feedback on which approaches to the requirements presented work best for operators.
+
+
+
+**Kubernetes at OpenStack Summit Austin**
+
+
+
+The [OpenStack Summit](https://www.openstack.org/summit/austin-2016/) is in Austin from April 25th to 29th and is packed with sessions related to containers and container management using Kubernetes. If you plan on joining us in Austin you can review the [schedule](https://www.openstack.org/summit/austin-2016/summit-schedule/) online where you will find a number of sessions, both in the form of presentations and hands on workshops, relating to [Kubernetes](https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Kubernetes) and [containerization](https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=containers) at large. Folks from the Kubernetes OpenStack SIG are particularly keen to get the thoughts of operators in the “[Ops: Containers on OpenStack](https://www.openstack.org/summit/austin-2016/summit-schedule/events/9500)” and “[Ops: OpenStack in Containers](https://www.openstack.org/summit/austin-2016/summit-schedule/events/9501)” working sessions.
+
+
+
+Kubernetes community experts will also be on hand in the Container Expert Lounge to answer your burning questions. You can find the lounge on the 4th floor of the Austin Convention Center.
+
+
+
+Follow [@kubernetesio](https://twitter.com/kubernetesio) and [#OpenStackSummit](https://twitter.com/search?q=%23openstacksummit) to keep up with the latest updates on Kubernetes at OpenStack Summit throughout the week.
+
+**Connect With Us**
+
+If you’re interested in Kubernetes and OpenStack, there are several ways to participate:
+
+
+- Email us at the [SIG-OpenStack mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-openstack)
+- Chat with us on the [Kubernetes Slack](http://slack.k8s.io/): [#sig-openstack channel](https://kubernetes.slack.com/messages/sig-openstack/) and #openstack-kubernetes on freenode
+- Join our meeting occurring every second Tuesday at 2 PM PDT; attend via the zoom videoconference found in our [meeting notes](https://docs.google.com/document/d/1iAQ3LSF_Ky6uZdFtEZPD_8i6HXeFxIeW4XtGcUJtPyU/edit#).
+
+
+
+_-- Steve Gordon, Principal Product Manager at Red Hat, and Ihor Dvoretskyi, OpenStack Operations Engineer at Mirantis_
diff --git a/blog/_posts/2016-04-00-Kubernetes-Network-Policy-APIs.md b/blog/_posts/2016-04-00-Kubernetes-Network-Policy-APIs.md
new file mode 100644
index 00000000000..e053231948f
--- /dev/null
+++ b/blog/_posts/2016-04-00-Kubernetes-Network-Policy-APIs.md
@@ -0,0 +1,171 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " SIG-Networking: Kubernetes Network Policy APIs Coming in 1.3 "
+date: Tuesday, April 18, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: This week we’re featuring [Kubernetes Special Interest Groups](https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)); Today’s post is by the Network-SIG team describing network policy APIs coming in 1.3 - policies for security, isolation and multi-tenancy._
+
+The [Kubernetes network SIG](https://kubernetes.slack.com/messages/sig-network/) has been meeting regularly since late last year to work on bringing network policy to Kubernetes and we’re starting to see the results of this effort.
+
+One problem many users have is that the open access network policy of Kubernetes is not suitable for applications that need more precise control over the traffic that accesses a pod or service. Today, this could be a multi-tier application where traffic is only allowed from a tier’s neighbor. But as new Cloud Native applications are built by composing microservices, the ability to control traffic as it flows among these services becomes even more critical.
+
+In most IaaS environments (both public and private) this kind of control is provided by allowing VMs to join a ‘security group’ where traffic to members of the group is defined by a network policy or Access Control List (ACL) and enforced by a network packet filter.
+
+The Network SIG started the effort by identifying [specific use case scenarios](https://docs.google.com/document/d/1blfqiH4L_fpn33ZrnQ11v7LcYP0lmpiJ_RaapAPBbNU/edit?pref=2&pli=1#) that require basic network isolation for enhanced security. Getting the API right for these simple and common use cases is important because they are also the basis for the more sophisticated network policies necessary for multi-tenancy within Kubernetes.
+
+From these scenarios several possible approaches were considered and a minimal [policy specification](https://docs.google.com/document/d/1qAm-_oSap-f1d6a-xRTj6xaH1sYQBfK36VyjB5XOZug/edit) was defined. The basic idea is that if isolation were enabled on a per namespace basis, then specific pods would be selected where specific traffic types would be allowed.
+
+The simplest way to quickly support this experimental API is in the form of a ThirdPartyResource extension to the API Server, which is possible today in Kubernetes 1.2.
+
+If you’re not familiar with how this works, the Kubernetes API can be extended by defining ThirdPartyResources that create a new API endpoint at a specified URL.
+
+#### third-party-res-def.yaml
+```
+kind: ThirdPartyResource
+apiVersion: extensions/v1beta1
+metadata:
+  name: network-policy.net.alpha.kubernetes.io
+description: "Network policy specification"
+versions:
+- name: v1alpha1
+```
+
+```
+$ kubectl create -f third-party-res-def.yaml
+```
+
+This will create an API endpoint (one for each namespace):
+
+```
+/net.alpha.kubernetes.io/v1alpha1/namespace/default/networkpolicys/
+```
+
+Third party network controllers can now listen on these endpoints and react as necessary when resources are created, modified or deleted. _Note: With the upcoming release of Kubernetes 1.3 - when the Network Policy API is released in beta form - there will be no need to create a ThirdPartyResource API endpoint as shown above._
+
+
+
+Network isolation is off by default, so all pods can communicate as they normally do. However, it’s important to know that once network isolation is enabled, all traffic to all pods, in all namespaces, is blocked, which means that enabling isolation is going to change the behavior of your pods.
+
+
+
+Network isolation is enabled by defining the _network-isolation_ annotation on namespaces as shown below:
+
+
+
+
+```
+net.alpha.kubernetes.io/network-isolation: [on | off]
+```
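+
+For example, isolation could be switched on for a namespace with a standard annotate command. A minimal sketch, consistent with the annotation key above:
+
+```
+$ kubectl annotate namespaces tenant-a \
+  "net.alpha.kubernetes.io/network-isolation=on"
+```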
+
+
+
+Once network isolation is enabled, explicit network policies **must be applied** to enable pod communication.
+
+A policy specification can be applied to a namespace to define the details of the policy as shown below:
+
+
+
+```
+POST /apis/net.alpha.kubernetes.io/v1alpha1/namespaces/tenant-a/networkpolicys/
+
+{
+  "kind": "NetworkPolicy",
+  "metadata": {
+    "name": "pol1"
+  },
+  "spec": {
+    "allowIncoming": {
+      "from": [
+        { "pods": { "segment": "frontend" } }
+      ],
+      "toPorts": [
+        { "port": 80, "protocol": "TCP" }
+      ]
+    },
+    "podSelector": { "segment": "backend" }
+  }
+}
+```
+
+
+
+In this example, the ‘**tenant-a**’ namespace would get policy ‘**pol1**’ applied as indicated. Specifically, pods with the **segment** label ‘**backend**’ would accept TCP traffic on port 80 from pods with the **segment** label ‘**frontend**’.
+
+
+
+Today, [Romana](http://romana.io/), [OpenShift](https://www.openshift.com/), [OpenContrail](http://www.opencontrail.org/) and [Calico](http://projectcalico.org/) support network policies applied to namespaces and pods. Cisco and VMware are working on implementations as well. Both Romana and Calico demonstrated these capabilities with Kubernetes 1.2 recently at KubeCon. You can watch their presentations here: [Romana](https://www.youtube.com/watch?v=f-dLKtK6qCs) ([slides](http://www.slideshare.net/RomanaProject/kubecon-london-2016-ronana-cloud-native-sdn)), [Calico](https://www.youtube.com/watch?v=p1zfh4N4SX0) ([slides](http://www.slideshare.net/kubecon/kubecon-eu-2016-secure-cloudnative-networking-with-project-calico)).
+
+
+
+**How does it work?**
+
+
+
+Each solution has its own specific implementation details. Today, they rely on some kind of on-host enforcement mechanism, but future implementations could also be built that apply policy on a hypervisor, or even directly in the network itself.
+
+
+
+External policy control software (specifics vary across implementations) will watch the new API endpoint for pods being created and/or new policies being applied. When an event occurs that requires policy configuration, the listener will recognize the change and a controller will respond by configuring the interface and applying the policy. The diagram below shows an API listener and policy controller responding to updates by applying a network policy locally via a host agent. The network interface on the pods is configured by a CNI plugin on the host (not shown).
+
+
+
+ {: .big-img}
+
+
+
+
+
+If you’ve been holding back on developing applications with Kubernetes because of network isolation and/or security concerns, these new network policies go a long way to providing the control you need. No need to wait until Kubernetes 1.3 since network policy is available now as an experimental API enabled as a ThirdPartyResource.
+
+
+
+If you’re interested in Kubernetes and networking, there are several ways to participate - join us at:
+
+- Our [Networking slack channel](https://kubernetes.slack.com/messages/sig-network/)
+- Our [Kubernetes Networking Special Interest Group](https://groups.google.com/forum/#!forum/kubernetes-sig-network) email list
+- The Networking Special Interest Group, which meets bi-weekly at 3pm (15h00) Pacific Time via the [SIG-Networking hangout](https://zoom.us/j/5806599998)
+
+
+_-- Chris Marino, Co-Founder, Pani Networks_
diff --git a/blog/_posts/2016-04-00-Kubernetes-On-Aws_15.md b/blog/_posts/2016-04-00-Kubernetes-On-Aws_15.md
new file mode 100644
index 00000000000..4b9b4d99b82
--- /dev/null
+++ b/blog/_posts/2016-04-00-Kubernetes-On-Aws_15.md
@@ -0,0 +1,124 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " How to deploy secure, auditable, and reproducible Kubernetes clusters on AWS "
+date: Saturday, April 15, 2016
+pagination:
+ enabled: true
+---
+
+_Today’s guest post is written by Colin Hom, infrastructure engineer at [CoreOS](https://coreos.com/), the company delivering Google’s Infrastructure for Everyone Else (#GIFEE) and running the world's containers securely on CoreOS Linux, Tectonic and Quay._
+
+_Join us at [CoreOS Fest Berlin](https://coreos.com/fest/), the Open Source Distributed Systems Conference, and learn more about CoreOS and Kubernetes._
+
+At CoreOS, we're all about deploying Kubernetes in production at scale. Today we are excited to share a tool that makes deploying Kubernetes on Amazon Web Services (AWS) a breeze. Kube-aws is a tool for deploying auditable and reproducible Kubernetes clusters to AWS, currently used by CoreOS to spin up production clusters.
+
+Today you might be putting the Kubernetes components together in a more manual way. With this helpful tool, Kubernetes is delivered in a streamlined package to save time, minimize interdependencies and quickly create production-ready deployments.
+
+A simple templating system is leveraged to generate cluster configuration as a set of declarative configuration templates that can be version controlled, audited and re-deployed. Since the entirety of the provisioning is handled by [AWS CloudFormation](https://aws.amazon.com/cloudformation/) and cloud-init, there’s no need for external configuration management tools on your end. Batteries included!
+
+To skip the talk and go straight to the project, check out [the latest release of kube-aws](https://github.com/coreos/coreos-kubernetes/releases), which supports Kubernetes 1.2.x. To get your cluster running, [check out the documentation](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html).
+
+**Why kube-aws? Security, auditability and reproducibility**
+
+Kube-aws is designed with three central goals in mind.
+
+
+**Secure** : TLS assets are encrypted via the [AWS Key Management Service (KMS)](https://aws.amazon.com/kms/) before being embedded in the CloudFormation JSON. By managing [IAM policy](http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) for the KMS key independently, an operator can decouple operational access to the CloudFormation stack from access to the TLS secrets.
+
+
+
+**Auditable** : kube-aws is built around the concept of cluster assets. These configuration and credential assets represent the complete description of the cluster. Since KMS is used to encrypt TLS assets, you can feel free to check your unencrypted stack JSON into version control as well!
+
+
+
+**Reproducible** : The _--export_ option packs your parameterized cluster definition into a single JSON file which defines a CloudFormation stack. This file can be version controlled and submitted directly to the CloudFormation API via existing deployment tooling, if desired.
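+
+For example (a sketch; the stack name and exported file name are placeholders, and it assumes the _--export_ flag is passed to the up command), the exported template can be submitted with the standard AWS CLI:
+
+```
+$ kube-aws up --export
+$ aws cloudformation create-stack \
+    --stack-name my-kube-cluster \
+    --template-body file://my-kube-cluster.stack.json
+```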
+
+
+**How to get started with kube-aws**
+
+
+
+On top of this foundation, kube-aws implements features that make Kubernetes deployments on AWS easier to manage and more flexible. Here are some examples.
+
+
+
+**Route53 Integration** : Kube-aws can manage your cluster DNS records as part of the provisioning process.
+
+
+
+cluster.yaml
+```
+externalDNSName: my-cluster.kubernetes.coreos.com
+createRecordSet: true
+hostedZone: kubernetes.coreos.com
+recordSetTTL: 300
+```
+
+
+
+**Existing VPC Support** : Deploy your cluster to an existing VPC.
+
+
+
+cluster.yaml
+
+
+```
+vpcId: vpc-xxxxx
+routeTableId: rtb-xxxxx
+```
+
+
+
+**Validation** : Kube-aws supports validation of cloud-init and CloudFormation definitions, along with any external resources that the cluster stack will integrate with. For example, here’s a cloud-config with a misspelled parameter:
+
+
+
+userdata/cloud-config-worker
+
+
+```
+#cloud-config
+coreos:
+  flannel:
+    interrface: $private_ipv4
+    etcd_endpoints: {{ .ETCDEndpoints }}
+```
+
+
+
+```
+$ kube-aws validate
+
+> Validating UserData...
+Error: cloud-config validation errors:
+UserDataWorker: line 4: warning: unrecognized key "interrface"
+```
+
+
+
+To get started, check out the [kube-aws documentation](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html).
+
+
+**Future Work**
+
+As always, the goal with kube-aws is to make deployments that are production ready. While we use kube-aws in production on AWS today, this project is pre-1.0 and there are a number of areas in which kube-aws needs to evolve.
+
+**Fault tolerance** : At CoreOS we believe Kubernetes on AWS is a potent platform for fault-tolerant and self-healing deployments. In the upcoming weeks, kube-aws will be rising to a new challenge: surviving the [Chaos Monkey](https://github.com/Netflix/SimianArmy/wiki/Chaos-Monkey) – control plane and all!
+
+**Zero-downtime updates** : Updating CoreOS nodes and Kubernetes components can be done without downtime and without interdependency, given the correct instance replacement strategy.
+
+A [GitHub issue](https://github.com/coreos/coreos-kubernetes/issues/340) tracks the work towards this goal. We look forward to seeing you get involved with the project by filing issues or contributing directly.
+
+
+_Learn more about Kubernetes and meet the community at [CoreOS Fest Berlin](https://coreos.com/fest/) - May 9-10, 2016_
+
+
+
+_– Colin Hom, infrastructure engineer, CoreOS_
diff --git a/blog/_posts/2016-04-00-Sig-Clusterops-Promote-Operability-And-Interoperability-Of-K8S-Clusters.md b/blog/_posts/2016-04-00-Sig-Clusterops-Promote-Operability-And-Interoperability-Of-K8S-Clusters.md
new file mode 100644
index 00000000000..8ec52d31f8c
--- /dev/null
+++ b/blog/_posts/2016-04-00-Sig-Clusterops-Promote-Operability-And-Interoperability-Of-K8S-Clusters.md
@@ -0,0 +1,35 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " SIG-ClusterOps: Promote operability and interoperability of Kubernetes clusters "
+date: Wednesday, April 19, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: This week we’re featuring [Kubernetes Special Interest Groups](https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)); Today’s post is by the SIG-ClusterOps team whose mission is to promote operability and interoperability of Kubernetes clusters -- to listen, help & escalate._
+
+We think Kubernetes is an awesome way to run applications at scale! Unfortunately, there's a bootstrapping problem: we need good ways to build secure & reliable scale environments around Kubernetes. While some parts of the platform administration leverage the platform (cool!), there are fundamental operational topics that need to be addressed and questions (like upgrade and conformance) that need to be answered.
+
+**Enter Cluster Ops SIG – the community members who work under the platform to keep it running.**
+
+Our objective for Cluster Ops is to be a person-to-person community first, and a source of opinions, documentation, tests and scripts second. That means we dedicate significant time and attention to simply comparing notes about what is working and discussing real operations. Those interactions give us data to form opinions. It also means we can use real-world experiences to inform the project.
+
+We aim to become the forum for operational review and feedback about the project. For Kubernetes to succeed, operators need to have a significant voice in the project, through weekly participation and by collecting survey data. We're not trying to create a single opinion about ops, but we do want to create a coordinated resource for collecting operational feedback for the project. As a single recognized group, operators are more accessible and have a bigger impact.
+
+**What about real world deliverables?**
+
+We've got plans for tangible results too. We’re already driving toward concrete deliverables like reference architectures, tool catalogs, community deployment notes and conformance testing. Cluster Ops wants to become the clearing house for operational resources. We're going to do it based on real world experience and battle tested deployments.
+
+**Connect with us.**
+
+Cluster Ops can be hard work – don't do it alone. We're here to listen, to help when we can and escalate when we can't. Join the conversation at:
+
+
+- Chat with us on the [Cluster Ops Slack channel](https://kubernetes.slack.com/messages/sig-cluster-ops/)
+- Email us at the [Cluster Ops SIG email list](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-ops)
+
+The Cluster Ops Special Interest Group meets weekly on Thursdays at 13:00 PT. You can join us via the [video hangout](https://plus.google.com/hangouts/_/google.com/sig-cluster-ops) and see the latest [meeting notes](https://docs.google.com/document/d/1IhN5v6MjcAUrvLd9dAWtKcGWBWSaRU8DNyPiof3gYMY/edit) for agendas and topics covered.
+
+
+
+_-- Rob Hirschfeld, CEO, RackN_
diff --git a/blog/_posts/2016-04-00-Using-Deployment-Objects-With.md b/blog/_posts/2016-04-00-Using-Deployment-Objects-With.md
new file mode 100644
index 00000000000..c1b11239a63
--- /dev/null
+++ b/blog/_posts/2016-04-00-Using-Deployment-Objects-With.md
@@ -0,0 +1,146 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Using Deployment objects with Kubernetes 1.2 "
+date: Saturday, April 01, 2016
+pagination:
+ enabled: true
+---
+_Editor's note: this is the seventh post in a [series of in-depth posts](http://blog.kubernetes.io/2016/03/five-days-of-kubernetes-12.html) on what's new in Kubernetes 1.2_
+
+Kubernetes has made deploying and managing applications very straightforward, with most actions a single API or command line away, including rolling out new applications, canary testing and upgrading. So why would we need Deployments?
+
+Deployment objects automate deploying and rolling updating applications. Compared with kubectl rolling-update, the Deployment API is much faster, is declarative, is implemented server-side and has more features (for example, you can roll back to any previous revision even after the rolling update is done).
+
+In today’s blog post, we’ll cover how to use Deployments to:
+
+1. Deploy/roll out an application
+2. Update the application declaratively and progressively, without a service outage
+3. Roll back to a previous revision, if something’s wrong when you’re deploying/updating the application
+
+![](https://4.bp.blogspot.com/-M9Xc21XYtLA/Vv7ImzURFxI/AAAAAAAACg0/jlHU3nJ-qYwC74DMiD-joaDPqQfebj3-g/s1600/image03.gif)
+
+Without further ado, let’s start playing around with Deployments!
+
+
+### Getting started
+If you want to try this example, basically you’ll need 3 things:
+
+1. **A running Kubernetes cluster** : If you don’t already have one, check the [Getting Started guides](http://kubernetes.io/docs/getting-started-guides/) for a list of solutions on a range of platforms, from your laptop, to VMs on a cloud provider, to a rack of bare metal servers.
+2. **Kubectl, the Kubernetes CLI** : If you see a URL response after running kubectl cluster-info, you’re ready to go. Otherwise, follow the [instructions](http://kubernetes.io/docs/user-guide/prereqs/) to install and configure kubectl; or the [instructions for hosted solutions](https://cloud.google.com/container-engine/docs/before-you-begin) if you have a Google Container Engine cluster.
+3. The [configuration files for this demo](https://github.com/kubernetes/kubernetes.github.io/tree/master/docs/user-guide/update-demo).
+
+If you choose not to run this example yourself, that’s okay. Just watch this [video](https://youtu.be/eigalYy0v4w) to see what’s going on in each step.
+
+
+### Diving in
+The configuration files contain a static website. First, we want to start serving its static content. From the root of the Kubernetes repository, run:
+```
+$ kubectl proxy --www=docs/user-guide/update-demo/local/ &
+Starting to serve on …
+```
+
+This runs a proxy on the default port 8001. You may now visit the demo website at [http://localhost:8001/static/](http://localhost:8001/static/) (it should be a blank page for now). Now we want to run an app and show it on the website.
+```
+$ kubectl run update-demo \
+  --image=gcr.io/google_containers/update-demo:nautilus --port=80 -l name=update-demo
+
+deployment "update-demo" created
+```
+This deploys 1 replica of an app with the image “update-demo:nautilus”, and you can see it visually at [http://localhost:8001/static/](http://localhost:8001/static/).<sup>1</sup>
+
+
+
+![](https://3.bp.blogspot.com/-EYXhcEK1upw/Vv7JL4rOAtI/AAAAAAAACg4/uy9oKePGjA82xPHhX6ak2_NiHPZ3FU8gw/s1600/deployment-API-5.png)
+
+The card showing on the website represents a Kubernetes pod, with the pod’s name (ID), status, image, and labels.
+
+
+### Getting bigger
+Now we want more copies of this app!
+```
+$ kubectl scale deployment/update-demo --replicas=4
+deployment "update-demo" scaled
+```
+
+
+
+![](https://1.bp.blogspot.com/-6YXQqogAGcY/Vv7JnU7g_FI/AAAAAAAAChE/00pqgQvUXkcgjPzi7NfDnSSRJeBUHFaGQ/s1600/deployment-API-2.png)
+
+### Updating your application
+How about updating the app?
+```
+$ kubectl edit deployment/update-demo
+```
+
+This opens up your default editor, and you can update the deployment on the fly. Find .spec.template.spec.containers[0].image and change nautilus to kitty. Save the file, and you’ll see:
+
+```
+deployment "update-demo" edited
+```
+You’re now updating the image of this app from “update-demo:nautilus” to “update-demo:kitty”. Deployments allow you to update the app progressively, without a service outage.
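+
+To follow the rollout from the command line, kubectl also offers a status subcommand (a sketch; it assumes your kubectl version ships kubectl rollout status):
+
+```
+$ kubectl rollout status deployment/update-demo
+```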
+
+
+![](https://2.bp.blogspot.com/-x4FmFXdzw30/Vv7KAAQ21wI/AAAAAAAAChM/QWv8Y03lIsU4JBqjE3XFQU2EtzZgogylA/s1600/deployment-API-3.png)
+
+After a while, you’ll find the update seems stuck. What happened?
+
+### Debugging your rollout
+If you look closer, you’ll find that the pods with the new “kitty” tagged image stay pending. The Deployment automatically stops the rollout if it’s failing. Let’s look at one of the new pods to see what happened:
+```
+$ kubectl describe pod/update-demo-1326485872-a4key
+```
+Looking at the events of this pod, you’ll notice that Kubernetes failed to pull the image because the “kitty” tag wasn’t found:
+
+```
+Failed to pull image "gcr.io/google_containers/update-demo:kitty": Tag kitty not found in repository gcr.io/google_containers/update-demo
+```
+
+### Rolling back
+OK, now we want to undo the changes and then take our time to figure out which image tag we should use.
+```
+$ kubectl rollout undo deployment/update-demo
+deployment "update-demo" rolled back
+```
+
+
+![](https://1.bp.blogspot.com/-6YXQqogAGcY/Vv7JnU7g_FI/AAAAAAAAChE/00pqgQvUXkcgjPzi7NfDnSSRJeBUHFaGQ/s1600/deployment-API-2.png)
+
+Everything’s back to normal, phew!
+
+To learn more about rollback, visit [rolling back a Deployment](http://kubernetes.io/docs/user-guide/deployments/#rolling-back-a-deployment).
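+
+Before updating again, you can also inspect the recorded revisions (a sketch; it assumes your kubectl version ships kubectl rollout history):
+
+```
+$ kubectl rollout history deployment/update-demo
+```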
+
+### Updating your application (for real)
+After a while, we finally figure out that the right image tag is “kitten”, instead of “kitty”. Now change the .spec.template.spec.containers[0].image tag from “nautilus” to “kitten”.
+```
+$ kubectl edit deployment/update-demo
+deployment "update-demo" edited
+```
+
+
+![](https://4.bp.blogspot.com/-u7qPUSQOMLE/Vv7JndUqKaI/AAAAAAAAChA/jHoysiDbnNQU2prPJn19ZFOtLiatzPsMg/s1600/deployment-API-1.png)
+
+Now you see there are 4 cute kittens on the demo website, which means we’ve updated the app successfully! If you want to know the magic behind this, look closer at the Deployment:
+```
+$ kubectl describe deployment/update-demo
+```
+
+
+![](https://1.bp.blogspot.com/-3U1OTNqdz1s/Vv7Kfw4uGYI/AAAAAAAAChU/CgF6Mv5J6b8_lANXkpEIFytRGo9x0Bn_A/s1600/deployment-API-6.png)
+
+From the events section, you’ll find that the Deployment is managing another resource called a [Replica Set](http://kubernetes.io/docs/user-guide/replicasets/); each Replica Set controls the number of replicas of a different pod template. The Deployment enables progressive rollout by scaling up and down the Replica Sets of new and old pod templates.
+
+### Conclusion
+Now, you’ve learned the basic use of Deployment objects:
+
+1. Deploying an app with a Deployment, using kubectl run
+2. Updating the app by updating the Deployment with kubectl edit
+3. Rolling back to a previously deployed app with kubectl rollout undo
+
+But there’s so much more in Deployment that this article didn’t cover! To discover more, continue reading [Deployment’s introduction](http://kubernetes.io/docs/user-guide/deployments/).
+
+**_Note:_** _In Kubernetes 1.2, Deployment (beta release) is now feature-complete and enabled by default. For those of you who have tried Deployment in Kubernetes 1.1, please **delete all Deployment 1.1 resources** (including the Replication Controllers and Pods they manage) before trying out Deployments in 1.2. This is necessary because we made some non-backward-compatible changes to the API._
+
+If you’re interested in Kubernetes and configuration, you’ll want to participate in:
+
+- Our Configuration [slack channel](https://kubernetes.slack.com/messages/sig-configuration/)
+- Our [Kubernetes Configuration Special Interest Group](https://groups.google.com/forum/#!forum/kubernetes-sig-config) email list
+- The Configuration “Special Interest Group,” which meets weekly on Wednesdays at 10am (10h00) Pacific Time at [SIG-Config hangout](https://hangouts.google.com/hangouts/_/google.com/kube-sig-config)
+
+And of course for more information about the project in general, go to [www.kubernetes.io](http://www.kubernetes.io/).
+
+-- _Janet Kuo, Software Engineer, Google_
+
+
+**1** “kubectl run” outputs the type and name of the resource(s) it creates. In 1.2, it now creates a deployment resource. You can use that in subsequent commands, such as "kubectl get deployment ", or "kubectl expose deployment ". If you want to write a script to do that automatically, in a forward-compatible manner, use "-o name" flag with "kubectl run", and it will generate short output "deployments/", which can also be used on subsequent command lines. The "--generator" flag can be used with "kubectl run" to generate other types of resources, for example, set it to "run/v1" to create a Replication Controller, which was the default in 1.1 and 1.0, and to "run-pod/v1" to create a Pod, such as for --restart=Never pods.
diff --git a/blog/_posts/2016-05-00-Coreosfest2016-Kubernetes-Community.md b/blog/_posts/2016-05-00-Coreosfest2016-Kubernetes-Community.md
new file mode 100644
index 00000000000..4e79b46d611
--- /dev/null
+++ b/blog/_posts/2016-05-00-Coreosfest2016-Kubernetes-Community.md
@@ -0,0 +1,45 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " CoreOS Fest 2016: CoreOS and Kubernetes Community meet in Berlin (& San Francisco) "
+date: Wednesday, May 03, 2016
+pagination:
+ enabled: true
+---
+[CoreOS Fest 2016](https://coreos.com/fest/) will bring together the container and open source distributed systems community, including many thought leaders in the Kubernetes space. It is the second annual CoreOS community conference, held for the first time in Berlin on May 9th and 10th. CoreOS believes Kubernetes is the container orchestration component to deliver GIFEE (Google’s Infrastructure for Everyone Else).
+
+At this year’s CoreOS Fest, there are tracks dedicated to Kubernetes where you’ll hear about various topics ranging from Kubernetes performance and scalability, continuous delivery and Kubernetes, rktnetes, stackanetes and more. In addition, there will be a variety of talks, from introductory workshops to deep-dives into all things containers and related software.
+
+Don’t miss these great speaker sessions at the conference in **Berlin** :
+
+
+- [Kubernetes Performance & Scalability Deep-Dive](https://coreosfest2016.sched.org/event/6ckp/kubernetes-performance-scalability-deep-dive?iframe=no&w=i:100;&sidebar=yes&bg=no) by Filip Grzadkowski, Senior Software Engineer at Google
+- [Launching a complex application in a Kubernetes cloud](http://coreosfest2016.sched.org/event/6T0b/launching-a-complex-application-in-a-kubernetes-cloud) by Thomas Fricke and Jannis Rake-Revelant, Operations & Infrastructure Lead, immmr GmbH (a service developed by Deutsche Telekom’s R&D department)
+- [I have Kubernetes, now what?](https://coreosfest2016.sched.org/event/6db3/i-have-kubernetes-now-what?iframe=no&w=i:100;&sidebar=yes&bg=no) by Gabriel Monroy, CTO of Engine Yard and creator of Deis
+- [When rkt meets Kubernetes: a troubleshooting tale](https://coreosfest2016.sched.org/event/6YGg/when-rkt-meets-kubernetes-a-troubleshooting-tale?iframe=no&w=i:100;&sidebar=yes&bg=no) by Luca Marturana, Software Engineer at Sysdig
+- [Use Kubernetes to deploy telecom applications](https://coreosfest2016.sched.org/event/6eSE/use-kubernetes-to-deploy-telecom-applications?iframe=no&w=i:100;&sidebar=yes&bg=no) by Victor Hu, Senior Engineer at Huawei Technologies
+- [Continuous Delivery, Kubernetes and You](https://coreosfest2016.sched.org/event/6qCs/continuous-delivery-kubernetes-and-you?iframe=no&w=i:100;&sidebar=yes&bg=no) by Micha Hernandez van Leuffen, CEO and founder of Wercker
+- [#GIFEE, More Containers, More Problems](https://coreosfest2016.sched.org/event/6YJl/gifee-more-containers-more-problems?iframe=no&w=i:100;&sidebar=yes&bg=no) by Ed Rooth, Head of Tectonic at CoreOS
+- [Kubernetes Access Control with dex](https://coreosfest2016.sched.org/event/6YH4/kubernetes-access-control-with-dex?iframe=no&w=i:100;&sidebar=yes&bg=no) by Eric Chiang, Software Engineer at CoreOS
+
+If you can’t make it to Berlin, Kubernetes is also a focal point at the **CoreOS Fest [San Francisco satellite event](https://www.eventbrite.com/e/coreos-fest-san-francisco-satellite-event-tickets-22705108591)**, a one day event dedicated to CoreOS and Kubernetes. In fact, Tim Hockin, senior staff engineer at Google and one of the creators of Kubernetes, will be kicking off the day with a keynote dedicated to Kubernetes updates.
+
+**San Francisco** sessions dedicated to Kubernetes include:
+
+
+- A keynote address by Tim Hockin, Senior Staff Engineer at Google
+- When rkt meets Kubernetes: a troubleshooting tale by Loris Degioanni, CEO of Sysdig
+- rktnetes: what's new with container runtimes and Kubernetes by Derek Gonyeo, Software Engineer at CoreOS
+- Magical Security Sprinkles: Secure, Resilient Microservices on CoreOS and Kubernetes by Oliver Gould, CTO of Buoyant
+
+**Kubernetes Workshop in SF** : [Getting Started with Kubernetes](https://www.eventbrite.com/e/getting-started-with-kubernetes-tickets-25180552711), hosted at Google San Francisco office (345 Spear St - 7th floor) by Google Developer Program Engineers Carter Morgan and Bill Prin on Tuesday May 10th from 9:00am to 1:00pm, lunch will be served afterwards. Limited seats, please [RSVP for free here](https://www.eventbrite.com/e/getting-started-with-kubernetes-tickets-25180552711).
+
+**Get your tickets** :
+
+- [CoreOS Fest - Berlin](https://ti.to/coreos/coreos-fest-2016/en), at the [Berlin Congress Center](https://www.google.com/maps/place/bcc+Berlin+Congress+Center+GmbH/@52.5206732,13.4165195,15z/data=!4m2!3m1!1s0x0:0xd2a15220241f2080) ([hotel option](http://www.parkinn-berlin.de/))
+- satellite event in [San Francisco](https://www.eventbrite.com/e/coreos-fest-san-francisco-satellite-event-tickets-22705108591), at the [111 Minna Gallery](https://www.google.com/maps/place/111+Minna+Gallery/@37.7873222,-122.3994124,15z/data=!4m2!3m1!1s0x0:0xb55875af8c0ca88b?sa=X&ved=0ahUKEwjZ8cPLtL7MAhVQ5GMKHa8bCM4Q_BIIdjAN)
+
+Learn more at: [coreos.com/fest/](https://coreos.com/fest/) and on Twitter [@CoreOSFest](https://twitter.com/coreosfest) #CoreOSFest
+
+
+_-- Sarah Novotny, Kubernetes Community Manager_
diff --git a/blog/_posts/2016-05-00-Hypernetes-Security-And-Multi-Tenancy-In-Kubernetes.md b/blog/_posts/2016-05-00-Hypernetes-Security-And-Multi-Tenancy-In-Kubernetes.md
new file mode 100644
index 00000000000..58224153bfc
--- /dev/null
+++ b/blog/_posts/2016-05-00-Hypernetes-Security-And-Multi-Tenancy-In-Kubernetes.md
@@ -0,0 +1,203 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Hypernetes: Bringing Security and Multi-tenancy to Kubernetes "
+date: Wednesday, May 24, 2016
+pagination:
+ enabled: true
+---
+_Today’s guest post is written by Harry Zhang and Pengfei Ni, engineers at HyperHQ, describing a new hypervisor based container called HyperContainer_
+
+While many developers and security professionals are comfortable with Linux containers as an effective boundary, many users need a stronger degree of isolation, particularly for those running in a multi-tenant environment. Sadly, today, those users are forced to run their containers inside virtual machines, even one VM per container.
+
+Unfortunately, this results in the loss of many of the benefits of a cloud-native deployment: slow startup time of VMs; a memory tax for every container; low utilization resulting in wasting resources.
+
+In this post, we will introduce HyperContainer, a hypervisor based container and see how it naturally fits into the Kubernetes design, and enables users to serve their customers directly with virtualized containers, instead of wrapping them inside of full blown VMs.
+
+**HyperContainer**
+
+[HyperContainer](http://hypercontainer.io/) is a hypervisor-based container, which allows you to launch Docker images with standard hypervisors (KVM, Xen, etc.). As an open-source project, HyperContainer consists of an [OCI](https://github.com/opencontainers/runtime-spec) compatible runtime implementation, named [runV](https://github.com/hyperhq/runv/), and a management daemon named [hyperd](https://github.com/hyperhq/hyperd). The idea behind HyperContainer is quite straightforward: to combine the best of virtualization and containers.
+
+We can consider containers as two parts (as Kubernetes does). The first part is the container runtime, where HyperContainer uses virtualization to achieve execution isolation and resource limitation instead of namespaces and cgroups. The second part is the application data, where HyperContainer leverages Docker images. So in HyperContainer, virtualization technology makes it possible to build a fully isolated sandbox with an independent guest kernel (so things like `top` and /proc all work), but from the developer’s view, it’s portable and behaves like a standard container.
+
+**HyperContainer as Pod**
+
+The interesting part of HyperContainer is not only that it is secure enough for multi-tenant environments (such as a public cloud), but also how well it fits into the Kubernetes philosophy.
+
+One of the most important concepts in Kubernetes is Pods. The design of Pods is a lesson learned ([Borg paper section 8.1](http://static.googleusercontent.com/media/research.google.com/zh-CN//pubs/archive/43438.pdf)) from real world workloads, where in many cases people want an atomic scheduling unit composed of multiple containers (please check this [example](https://github.com/kubernetes/kubernetes/tree/master/examples/javaweb-tomcat-sidecar) for further information). In the context of Linux containers, a Pod wraps and encapsulates several containers into a logical group. But in HyperContainer, the hypervisor serves as a natural boundary, and Pods are introduced as first-class objects:
+
+
+
+ {: .big-img}
+
+
+
+HyperContainer wraps a Pod of light-weight application containers and exposes the container interface at the Pod level. Inside the Pod, a minimalist Linux kernel called HyperKernel is booted. This HyperKernel is built with a tiny init service called HyperStart, which acts as the PID 1 process, creates the Pod, sets up the Mount namespace, and launches apps from the loaded images.
+
+
+
+This model works nicely with Kubernetes. The integration of HyperContainer with Kubernetes, as we indicated in the title, is what makes up the [Hypernetes](https://github.com/hyperhq/hypernetes) project.
+
+
+
+**Hypernetes**
+
+
+
+One of the best parts of Kubernetes is that it is designed to support multiple container runtimes, meaning users are not locked in to a single vendor. We are very pleased to announce that we have already begun working with the Kubernetes team to integrate HyperContainer into Kubernetes upstream. This integration involves:
+
+1. container runtime optimizing and refactoring
+2. new client-server mode runtime interface
+3. containerd integration to support runV
+
+The OCI standard and kubelet’s multiple runtime architecture make this integration much easier even though HyperContainer is not based on Linux container technology stack.
+
+
+
+On the other hand, in order to run HyperContainers in a multi-tenant environment, we also created a new network plugin and modified an existing volume plugin. Since Hypernetes runs each Pod as its own VM, it can make use of your existing IaaS layer technologies for multi-tenant networking and persistent volumes. The current Hypernetes implementation uses standard OpenStack components.
+
+
+
+Below we go into further details about how all those above are implemented.
+
+
+
+**Identity and Authentication**
+
+
+
+In Hypernetes we chose [Keystone](http://docs.openstack.org/developer/keystone/) to manage different tenants and perform identification and authentication for tenants during any administrative operation. Since Keystone comes from the OpenStack ecosystem, it works seamlessly with the network and storage plugins we used in Hypernetes.
+
+
+
+**Multi-tenant Network Model**
+
+
+
+For a multi-tenant container cluster, each tenant needs strong network isolation from every other tenant. In Hypernetes, each tenant has its own Network. Instead of configuring a new network using OpenStack, which is complex, with Hypernetes you just create a Network object like the one below.
+
+```
+apiVersion: v1
+kind: Network
+metadata:
+ name: net1
+spec:
+ tenantID: 065f210a2ca9442aad898ab129426350
+ subnets:
+ subnet1:
+ cidr: 192.168.0.0/24
+ gateway: 192.168.0.1
+```
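+
+The object is then created like any other Kubernetes resource (a sketch; the file name is a placeholder):
+
+```
+$ kubectl create -f net1.yaml
+```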
+
+
+Note that the tenantID is supplied by Keystone. This YAML will automatically create a new Neutron network with a default router and a subnet 192.168.0.0/24.
+
+
+
+A Network controller will be responsible for the life-cycle management of any Network instance created by the user. This Network can be assigned to one or more Namespaces, and any Pods belonging to the same Network can reach each other directly through IP address.
+
+
+```
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: ns1
+spec:
+ network: net1
+```
+
+
+If a Namespace does not have a Network spec, it will use the default Kubernetes network model instead, including the default kube-proxy. So if a user creates a Pod in a Namespace with an associated Network, Hypernetes will follow the [Kubernetes Network Plugin Model](http://kubernetes.io/docs/admin/network-plugins/) to set up a Neutron network for this Pod. Here is a high level example:
+
+
+
+ {: .big-img}
+
+
+
+
+
+Hypernetes uses a standalone gRPC handler named kubestack to translate the Kubernetes Pod request into the Neutron network API. Moreover, kubestack is also responsible for handling another important networking feature: a multi-tenant Service proxy.
+
+
+
+In a multi-tenant environment, the default iptables-based kube-proxy cannot reach the individual Pods, because they are isolated into different networks. Instead, Hypernetes uses a [built-in HAproxy in every HyperContainer](https://github.com/hyperhq/hyperd/blob/2072dd8e28a02a25ae6a819f81029b47a579e683/servicediscovery/servicediscovery.go) as the portal. This HAproxy will proxy all the Service instances in the namespace of that Pod. Kube-proxy will be responsible for updating these backend servers by following the standard OnServiceUpdate and OnEndpointsUpdate processes, so that users will not notice any difference. A downside of this method is that HAproxy has to listen on some specific ports, which may conflict with users’ containers. That’s why we are planning to use LVS to replace this proxy in the next release.
+
+
+
+With the help of the Neutron based network plugin, the Hypernetes Service is able to provide an OpenStack load balancer, just like the “external” load balancer does on GCE. When a user creates a Service with external IPs, an OpenStack load balancer will be created and endpoints will be automatically updated through the kubestack workflow above.
+
+
+
+**Persistent Storage**
+
+
+
+When considering storage, we are actually building a tenant-aware persistent volume in Kubernetes. The reason we decided not to use the existing Cinder volume plugin of Kubernetes is that its model does not work in the virtualization case. Specifically:
+
+1. The Cinder volume plugin requires OpenStack as the Kubernetes provider.
+2. The OpenStack provider finds which VM the target Pod is running on.
+3. The Cinder volume plugin mounts a Cinder volume to a path inside the host VM of Kubernetes.
+4. The kubelet bind mounts this path as a volume into the containers of the target Pod.
+
+
+
+But in Hypernetes, things become much simpler. Thanks to the physical boundary of Pods, HyperContainer can mount Cinder volumes directly as block devices into Pods, just like a normal VM. This mechanism eliminates the extra time needed to query Nova to find the VM of the target Pod in the existing Cinder volume workflow listed above.
+
+
+
+The current implementation of the Cinder plugin in Hypernetes is based on the Ceph RBD backend, and it works the same as all other Kubernetes volume plugins; one just needs to remember to create the Cinder volume (referenced by volumeID below) beforehand.
+
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+ name: nginx
+ labels:
+ app: nginx
+spec:
+ containers:
+ - name: nginx
+ image: nginx
+ ports:
+ - containerPort: 80
+ volumeMounts:
+ - name: nginx-persistent-storage
+ mountPath: /var/lib/nginx
+ volumes:
+ - name: nginx-persistent-storage
+ cinder:
+ volumeID: 651b2a7b-683e-47e1-bdd6-e3c62e8f91c0
+ fsType: ext4
+```
+
+
+
+
+So when the user provides a Pod YAML with a Cinder volume, Hypernetes will check whether the kubelet is using the Hyper container runtime. If so, the Cinder volume can be mounted directly to the Pod without any extra path mapping. The volume metadata is then passed to the kubelet RunPod process as part of the HyperContainer spec. Done!
+
+
+
+Thanks to the plugin model of Kubernetes networking and volumes, we can easily build our own solutions above for HyperContainer even though it is essentially different from the traditional Linux container. We also plan to propose these solutions to Kubernetes upstream by following the CNI model and volume plugin standard after the runtime integration is completed.
+
+We believe all of these [open source projects](https://github.com/hyperhq/) are important components of the container ecosystem, and their growth depends greatly on the open source spirit and technical vision of the Kubernetes team.
+
+
+
+**Conclusion**
+
+
+
+This post introduces some of the technical details about HyperContainer and the Hypernetes project. We hope that people will be interested in this new category of secure container and its integration with Kubernetes. If you are looking to try out Hypernetes and HyperContainer, we have just announced the public beta of our new secure container cloud service ([Hyper\_](https://hyper.sh/)), which is built on these technologies. But even if you are running on-premise, we believe that Hypernetes and HyperContainer will let you run Kubernetes in a more secure way.
+
+
+
+
+
+_~Harry Zhang and Pengfei Ni, engineers at HyperHQ_
diff --git a/blog/_posts/2016-06-00-Bringing-End-To-End-Testing-To-Azure.md b/blog/_posts/2016-06-00-Bringing-End-To-End-Testing-To-Azure.md
new file mode 100644
index 00000000000..0b93c3e235a
--- /dev/null
+++ b/blog/_posts/2016-06-00-Bringing-End-To-End-Testing-To-Azure.md
@@ -0,0 +1,107 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Bringing End-to-End Kubernetes Testing to Azure (Part 1) "
+date: Tuesday, June 06, 2016
+pagination:
+ enabled: true
+---
+
+_Today’s guest post is by Travis Newhouse, Chief Architect at AppFormix, writing about their experiences bringing Kubernetes to Azure._
+
+At [AppFormix](http://www.appformix.com/), continuous integration testing is part of our culture. We see many benefits to running end-to-end tests regularly, including minimizing regressions and ensuring our software works together as a whole. To ensure a high quality experience for our customers, we require the ability to run end-to-end testing not just for our application, but for the entire orchestration stack. Our customers are adopting Kubernetes as their container orchestration technology of choice, and they demand choice when it comes to where their containers execute, from private infrastructure to public providers, including Azure. After several weeks of work, we are pleased to announce we are contributing a nightly, continuous integration job that executes e2e tests on the Azure platform. After running the e2e tests each night for only a few weeks, we have already found and fixed two issues in Kubernetes. We hope our contribution of an e2e job will help the community maintain support for the Azure platform as Kubernetes evolves.
+
+
+
+In this blog post, we describe the journey we took to implement deployment scripts for the Azure platform. The deployment scripts are a prerequisite to the e2e test job we are contributing, as the scripts make it possible for our e2e test job to test the latest commits to the Kubernetes master branch. In a subsequent blog post, we will describe details of the e2e tests that will help maintain support for the Azure platform, and how to contribute federated e2e test results to the Kubernetes project.
+
+
+
+**BACKGROUND**
+
+While Kubernetes is designed to operate on any IaaS, and [solution guides](http://kubernetes.io/docs/getting-started-guides/#table-of-solutions) exist for many platforms including [Google Compute Engine](http://kubernetes.io/docs/getting-started-guides/gce/), [AWS](http://kubernetes.io/docs/getting-started-guides/aws/), [Azure](http://kubernetes.io/docs/getting-started-guides/coreos/azure/), and [Rackspace](http://kubernetes.io/docs/getting-started-guides/rackspace/), the Kubernetes project refers to these as “versioned distros,” as they are only tested against a particular binary release of Kubernetes. On the other hand, “development distros” are used daily by automated, e2e tests for the latest Kubernetes source code, and serve as gating checks to code submission.
+
+
+
+When we first surveyed existing support for Kubernetes on Azure, we found documentation for running Kubernetes on Azure using CoreOS and Weave. The documentation includes [scripts for deployment](http://kubernetes.io/docs/getting-started-guides/coreos/azure/), but the scripts do not conform to the cluster/kube-up.sh framework for automated cluster creation required by a “development distro.” Further, there did not exist a continuous integration job that utilized the scripts to validate Kubernetes using the end-to-end test scenarios (those found in test/e2e in the Kubernetes repository).
+
+
+
+With some additional investigation into the project history (side note: git log --all --grep='azure' --oneline was quite helpful), we discovered that there previously existed a set of scripts that integrated with the cluster/kube-up.sh framework. These scripts were discarded on October 16, 2015 ([commit 8e8437d](https://github.com/kubernetes/kubernetes/pull/15790)) because the scripts hadn’t worked since before Kubernetes version 1.0. With these commits as a starting point, we set out to bring the scripts up to date, and create a supported continuous integration job that will aid continued maintenance.
+
+
+
+**CLUSTER DEPLOYMENT SCRIPTS**
+
+To set up a Kubernetes cluster with Ubuntu VMs on Azure, we followed the groundwork laid by the previously abandoned commit, and tried to leverage the existing code as much as possible. The solution uses SaltStack for deployment and OpenVPN for networking between the master and the minions. SaltStack is also used for configuration management by several other solutions, such as AWS, GCE, Vagrant, and vSphere. Resurrecting the discarded commit was a starting point, but we soon realized several key elements needed attention:
+
+- Install Docker and Kubernetes on the nodes using SaltStack
+- Configure authentication for services
+- Configure networking
+
+The cluster setup scripts ensure Docker is installed, copy the Kubernetes Docker images to the master and minion nodes, and load the images. On the master node, SaltStack launches kubelet, which in turn launches the following Kubernetes services running in containers: kube-api-server, kube-scheduler, and kube-controller-manager. On each of the minion nodes, SaltStack launches kubelet, which starts kube-proxy.
+
+
+
+Kubernetes services must authenticate when communicating with each other. For example, minions register with the kube-api service on the master. On the master node, scripts generate a self-signed certificate and key that kube-api uses for TLS. Minions are configured to skip verification of the kube-api’s (self-signed) TLS certificate. We configure the services to use username and password credentials. The username and password are generated by the cluster setup scripts, and stored in the kubeconfig file on each node.
+
+
+
+Finally, we implemented the networking configuration. To keep the scripts parameterized and minimize assumptions about the target environment, the scripts create a new Linux bridge device (cbr0), and ensure that all containers use that interface to access the network. To configure networking, we use OpenVPN to establish tunnels between master and minion nodes. For each minion, we reserve a /24 subnet to use for its pods. Azure assigned each node its own IP address. We also added the necessary routing table entries for this bridge to use OpenVPN interfaces. This is required to ensure pods in different hosts can communicate with each other. The routes on the master and minion are the following:
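+
+A hedged sketch of the kind of routing entry involved, reusing minion-1’s addresses from the routing tables below:
+
+```
+# on minion-1: reach minion-2's pod subnet through the OpenVPN tunnel
+$ sudo ip route add 10.244.2.0/24 via 10.8.0.5 dev tun0
+```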
+
+
+
+
+
+###### master
+```
+Destination Gateway Genmask Flags Metric Ref Use Iface
+
+10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0
+
+10.8.0.2 0.0.0.0 255.255.255.255 UH 0 0 0 tun0
+
+10.244.1.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0
+
+10.244.2.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0
+
+172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 cbr0
+```
+
+###### minion-1
+```
+10.8.0.0 10.8.0.5 255.255.255.0 UG 0 0 0 tun0
+
+10.8.0.5 0.0.0.0 255.255.255.255 UH 0 0 0 tun0
+
+10.244.1.0 0.0.0.0 255.255.255.0 U 0 0 0 cbr0
+
+10.244.2.0 10.8.0.5 255.255.255.0 UG 0 0 0 tun0
+```
+
+###### minion-2
+```
+10.8.0.0 10.8.0.9 255.255.255.0 UG 0 0 0 tun0
+
+10.8.0.9 0.0.0.0 255.255.255.255 UH 0 0 0 tun0
+
+10.244.1.0 10.8.0.9 255.255.255.0 UG 0 0 0 tun0
+
+10.244.2.0 0.0.0.0 255.255.255.0 U 0 0 0 cbr0
+```
+
+![Figure 1 - OpenVPN network configuration](https://3.bp.blogspot.com/-U2KYWNzJpFI/V3QMYbKRX8I/AAAAAAAAAks/SqEvCDJHJ8QtbB9hJVM8WAkFuAUlrFl8ACLcB/s1600/Kubernetes%2BBlog%2BPost%2B-%2BKubernetes%2Bon%2BAzure%2B%2528Part%2B1%2529.png)
+
+_Figure 1 - OpenVPN network configuration_
+
+**FUTURE WORK**
+
+With the deployment scripts implemented, a subset of e2e test cases are passing on the Azure platform. Nightly results are published to the [Kubernetes test history dashboard](http://storage.googleapis.com/kubernetes-test-history/static/index.html). Weixu Zhuang made a [pull request](https://github.com/kubernetes/kubernetes/pull/21207) on Kubernetes GitHub, and we are actively working with the Kubernetes community to merge the Azure cluster deployment scripts necessary for a nightly e2e test job. The deployment scripts provide a minimal working environment for Kubernetes on Azure. There are several next steps to continue the work, and we hope the community will get involved to achieve them.
+
+- Only a subset of the e2e scenarios are passing because some cloud provider interfaces are not yet implemented for Azure, such as load balancer and instance information. To this end, we seek community input and help to define an Azure implementation of the cloudprovider interface (pkg/cloudprovider/). These interfaces will enable features such as Kubernetes pods being exposed to the external network and cluster DNS.
+- Azure has new APIs for interacting with the service. The submitted scripts currently use the Azure Service Management APIs, [which are deprecated](https://azure.microsoft.com/en-us/documentation/articles/azure-classic-rm/). The Azure Resource Manager APIs should be used in the deployment scripts.
+
+The team at AppFormix is pleased to contribute support for Azure to the Kubernetes community. We look forward to feedback about how we can work together to improve Kubernetes on Azure.
+
+
+
+_Editor's Note: Want to contribute to Kubernetes? Get involved [here](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Ahelp-wanted). Have your own Kubernetes story you’d like to tell? [Let us know](https://docs.google.com/a/google.com/forms/d/1cHiRdmBCEmUH9ekHY2G-KDySk5YXRzALHcMNgzwXtPM/viewform)!_
+
+
+Part II is available [here](http://blog.kubernetes.io/2016/07/bringing-end-to-end-kubernetes-testing-to-azure-2.html).
diff --git a/blog/_posts/2016-06-00-Container-Design-Patterns.md b/blog/_posts/2016-06-00-Container-Design-Patterns.md
new file mode 100644
index 00000000000..cf42a9ccd12
--- /dev/null
+++ b/blog/_posts/2016-06-00-Container-Design-Patterns.md
@@ -0,0 +1,26 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Container Design Patterns "
+date: Wednesday, June 21, 2016
+pagination:
+ enabled: true
+---
+Kubernetes automates deployment, operations, and scaling of applications, but our goals in the Kubernetes project extend beyond system management -- we want Kubernetes to help developers, too. Kubernetes should make it easy for them to write the distributed applications and services that run in cloud and datacenter environments. To enable this, Kubernetes defines not only an API for administrators to perform management actions, but also an API for containerized applications to interact with the management platform.
+
+Our work on the latter is just beginning, but you can already see it manifested in a few features of Kubernetes. For example:
+
+
+- The “[graceful termination](http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_podspec)” mechanism provides a callback into the container a configurable amount of time before it is killed (due to a rolling update, node drain for maintenance, etc.). This allows the application to cleanly shut down, e.g. persist in-memory state and cleanly conclude open connections.
+- [Liveness and readiness probes](http://kubernetes.io/docs/user-guide/production-pods/#liveness-and-readiness-probes-aka-health-checks) check a configurable application HTTP endpoint (other probe types are supported as well) to determine if the container is alive and/or ready to receive traffic. The response determines whether Kubernetes will restart the container, include it in the load-balancing pool for its Service, etc.
+- [ConfigMap](http://kubernetes.io/docs/user-guide/configmap/) allows applications to read their configuration from a Kubernetes resource rather than using command-line flags (see the sketch after this list).
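+
+As a quick illustration of the last point, a ConfigMap can be created straight from literal values (a sketch; the names are placeholders, and it assumes a kubectl version with the create configmap subcommand):
+
+```
+$ kubectl create configmap my-config --from-literal=greeting=hello
+```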
+
+More generally, we see Kubernetes enabling a new generation of design patterns, similar to [object oriented design patterns](https://en.wikipedia.org/wiki/Object-oriented_programming#Design_patterns), but this time for containerized applications. That design patterns would emerge from containerized architectures is not surprising -- containers provide many of the same benefits as software objects, in terms of modularity/packaging, abstraction, and reuse. Even better, because containers generally interact with each other via HTTP and widely available data formats like JSON, the benefits can be provided in a language-independent way.
+
+This week Kubernetes co-founder Brendan Burns is presenting a [**paper**](https://www.usenix.org/conference/hotcloud16/workshop-program/presentation/burns) outlining our thoughts on this topic at the [8th Usenix Workshop on Hot Topics in Cloud Computing](https://www.usenix.org/conference/hotcloud16) (HotCloud ‘16), a venue where academic researchers and industry practitioners come together to discuss ideas at the forefront of research in private and public cloud technology. The paper describes three classes of patterns: management patterns (such as those described above), patterns involving multiple cooperating containers running on the same node, and patterns involving containers running across multiple nodes. We don’t want to spoil the fun of reading the paper, but we will say that you’ll see that the [Pod](http://kubernetes.io/docs/user-guide/pods/) abstraction is a key enabler for the last two types of patterns.
+
+As the Kubernetes project continues to bring our decade of experience with [Borg](https://queue.acm.org/detail.cfm?id=2898444) to the open source community, we aim not only to make application deployment and operations at scale simple and reliable, but also to make it easy to create “cloud-native” applications in the first place. Our work on documenting our ideas around design patterns for container-based services, and Kubernetes’s enabling of such patterns, is a first step in this direction. We look forward to working with the academic and practitioner communities to identify and codify additional patterns, with the aim of helping containers fulfill the promise of bringing increased simplicity and reliability to the entire software lifecycle, from development, to deployment, to operations.
+
+To learn more about the Kubernetes project visit [kubernetes.io](http://kubernetes.io/) or chat with us on Slack at [slack.kubernetes.io](http://slack.kubernetes.io/).
+
+_-- Brendan Burns and David Oppenheimer, Software Engineers, Google_
diff --git a/blog/_posts/2016-06-00-Illustrated-Childrens-Guide-To-Kubernetes.md b/blog/_posts/2016-06-00-Illustrated-Childrens-Guide-To-Kubernetes.md
new file mode 100644
index 00000000000..904b5879865
--- /dev/null
+++ b/blog/_posts/2016-06-00-Illustrated-Childrens-Guide-To-Kubernetes.md
@@ -0,0 +1,20 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " The Illustrated Children's Guide to Kubernetes "
+date: Friday, June 09, 2016
+pagination:
+ enabled: true
+---
+_Kubernetes is an open source project with a growing community. We love seeing the ways that our community innovates inside and on top of Kubernetes. Deis is an excellent example of a company that understands the strategic impact of strong container orchestration. They contribute directly to the project; in associated subprojects; and, delightfully, with a creative endeavor to help our user community understand more about what Kubernetes is. Want to contribute to Kubernetes? One way is to get involved [here](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Ahelp-wanted) and help us with code. But please don’t consider that the only way to contribute. This little adventure that Deis takes us on is an example of how open source isn’t only code._
+
+_Have your own Kubernetes story you’d like to tell, [let us know](https://docs.google.com/a/google.com/forms/d/1cHiRdmBCEmUH9ekHY2G-KDySk5YXRzALHcMNgzwXtPM/viewform)!_
+_-- @sarahnovotny Community Wonk, Kubernetes project._
+
+_Guest post is by Beau Vrolyk, CEO of Deis, the open source Kubernetes-native PaaS._
+
+Over at [Deis](https://deis.com/), we’ve been busy building open source tools for Kubernetes. We’re just about to finish up moving our easy-to-use application platform to Kubernetes and couldn’t be happier with the results. In the Kubernetes project we’ve found not only a growing and vibrant community but also a well-architected system, informed by years of experience running containers at scale.
+
+But that’s not all! As we’ve decomposed, ported, and reborn our PaaS as a Kubernetes citizen; we found a need for tools to help manage all of the ephemera that comes along with building and running Kubernetes-native applications. The result has been open sourced as [Helm](https://github.com/kubernetes/helm) and we’re excited to see increasing adoption and growing excitement around the project.
+
+There’s fun in the Deis offices too -- we like to add some character to our architecture diagrams and pull requests. This time, literally. Meet Phippy--the intrepid little PHP app--and her journey to Kubernetes. What better way to talk to your parents, friends, and co-workers about this Kubernetes thing you keep going on about, than a little story time. We give to you The Illustrated Children's Guide to Kubernetes, conceived of and narrated by our own Matt Butcher and lovingly illustrated by Bailey Beougher. Join the fun on YouTube and tweet [@opendeis](https://twitter.com/Opendeis) to win your own copy of the book or a squishy little Phippy of your own.
diff --git a/blog/_posts/2016-07-00-Automation-Platform-At-Wercker-With-Kubernetes.md b/blog/_posts/2016-07-00-Automation-Platform-At-Wercker-With-Kubernetes.md
new file mode 100644
index 00000000000..0c1ecdd5e6a
--- /dev/null
+++ b/blog/_posts/2016-07-00-Automation-Platform-At-Wercker-With-Kubernetes.md
@@ -0,0 +1,69 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Steering an Automation Platform at Wercker with Kubernetes "
+date: Saturday, July 15, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: today’s guest post is by Andy Smith, the CTO of Wercker, sharing how Kubernetes helps them save time and speed up development. _
+
+At [Wercker](http://wercker.com/) we run millions of containers that execute our users’ CI/CD jobs. The vast majority of them are ephemeral and only last as long as builds, tests and deploys take to run; the rest are ephemeral too (aren’t we all?), but tend to last a bit longer and run our infrastructure. As we are running many containers across many nodes, we were in need of a highly scalable scheduler that would make our lives easier, and as such, decided to implement Kubernetes.
+
+Wercker is a container-centric automation platform that helps developers build, test and deploy their applications. We support any number of pipelines, ranging from building code, testing API-contracts between microservices, to pushing containers to registries, and deploying to schedulers. All of these pipeline jobs run inside Docker containers and each artifact can be a Docker container.
+
+And of course we use Wercker to build Wercker, and deploy itself onto Kubernetes!
+
+**Overview**
+
+Because we are a platform for running multi-service cloud-native code we've made many design decisions around isolation. On the base level we use [CoreOS](http://coreos.com/) and [cloud-init](https://coreos.com/os/docs/latest/cloud-config.html) to bootstrap a cluster of heterogeneous nodes, which I have named Patricians and Peasants, as well as controller nodes that don't have a cool name and are just called Controllers. Maybe we should switch to Constables.
+
+
+ {: .big-img}
+
+
+
+
+Patrician nodes are where the bulk of our infrastructure runs. These nodes have appropriate network interfaces to communicate with our backend services as well as be routable by various load balancers. This is where our logging is aggregated and sent off to logging services, our many microservices for reporting and processing the results of job runs, and our many microservices for handling API calls.
+
+
+
+On the other end of the spectrum are the Peasant nodes where the public jobs are run. Public jobs consist of worker pods reading from a job queue and dynamically generating new runner pods to handle execution of the job. The job itself is an incarnation of our open source [CLI tool](http://github.com/wercker/wercker), the same one you can run on your laptop with Docker installed. These nodes have very limited access to the rest of the infrastructure and the containers the jobs themselves run in are even further isolated.
+
+
+
+Controllers are controllers; I bet ours look exactly the same as yours.
+
+
+
+**Dynamic Pods**
+
+Our heaviest use of the Kubernetes API is definitely our system of creating dynamic pods to serve as the runtime environment for our actual job execution. After pulling job descriptions from the queue we define a new pod containing all the relevant environment for checking out code, managing a cache, executing a job and uploading artifacts. We launch the pod, monitor its progress, and destroy it when the job is done.
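+
+The post doesn't include one of these pod definitions, but a rough sketch of what submitting a dynamic runner pod could look like is below. The pod name, image, and environment variable are hypothetical, and a real system like this would drive the creation through a Kubernetes API client rather than kubectl:
+
+```
+# Hypothetical one-shot runner pod, created per job and deleted when done:
+kubectl create -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: runner-job-12345       # hypothetical per-job name
+spec:
+  restartPolicy: Never         # one-shot; the pod is not restarted on completion
+  containers:
+  - name: runner
+    image: example/wercker-cli # hypothetical image wrapping the open source CLI
+    env:
+    - name: JOB_ID             # hypothetical job parameter
+      value: "12345"
+EOF
+
+# Monitor progress, then clean up once the job is done:
+kubectl get pod runner-job-12345
+kubectl delete pod runner-job-12345
+```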
+
+
+
+**Ingresses**
+
+In order to provide a backend for HTTP API calls and allow self-registration of handlers we make use of the Ingress system in Kubernetes. It wasn't the clearest thing to set up, but reading through enough of the [nginx example](http://blog.kubernetes.io/2016/03/Kubernetes-1.2-and-simplifying-advanced-networking-with-Ingress.html) eventually got us to a good spot where it is easy to connect services to the frontend.
+
+
+
+**Upcoming Features in 1.3**
+
+
+
+While we generally treat all of our pods and containers as ephemeral and expect rapid restarts on failures, we are looking forward to Pet Sets and Init Containers as ways to optimize some of our processes. We are also pleased with official support for [Minikube](https://github.com/kubernetes/minikube) coming along as it improves our local testing and development.
+
+
+
+**Conclusion**
+
+
+
+Kubernetes saves us the non-trivial task of managing many, many containers across many nodes. It provides a robust API and tooling for introspecting these containers, and it includes a lot of built-in support for logging, metrics, monitoring and debugging. Service discovery and networking alone save us so much time and speed up development immensely.
+
+Cheers to you Kubernetes, keep up the good work :)
+
+
+
+_-- Andy Smith, CTO, Wercker_
diff --git a/blog/_posts/2016-07-00-Autoscaling-In-Kubernetes.md b/blog/_posts/2016-07-00-Autoscaling-In-Kubernetes.md
new file mode 100644
index 00000000000..f04f27baaf5
--- /dev/null
+++ b/blog/_posts/2016-07-00-Autoscaling-In-Kubernetes.md
@@ -0,0 +1,412 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Autoscaling in Kubernetes "
+date: Tuesday, July 12, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: this post is part of a [series of in-depth articles](http://blog.kubernetes.io/2016/07/five-days-of-kubernetes-1.3.html) on what's new in Kubernetes 1.3_
+
+Customers using Kubernetes respond to end user requests quickly and ship software faster than ever before. But what happens when you build a service that is even more popular than you planned for, and run out of compute? In [Kubernetes 1.3](http://blog.kubernetes.io/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads.html), we are proud to announce that we have a solution: autoscaling. On [Google Compute Engine](https://cloud.google.com/compute/) (GCE) and [Google Container Engine](https://cloud.google.com/container-engine/) (GKE) (and coming soon on [AWS](https://aws.amazon.com/)), Kubernetes will automatically scale up your cluster as soon as you need it, and scale it back down to save you money when you don’t.
+
+
+### Benefits of Autoscaling
+
+To understand better where autoscaling would provide the most value, let’s start with an example. Imagine you have a 24/7 production service whose load is variable in time: very busy during the day in the US, and relatively low at night. Ideally, we would want the number of nodes in the cluster and the number of pods in the deployment to dynamically adjust to the load to meet end user demand. The new Cluster Autoscaling feature together with the Horizontal Pod Autoscaler can handle this for you automatically.
+
+
+### Setting Up Autoscaling on GCE
+
+The following instructions apply to GCE. For GKE, please check the autoscaling section of the cluster operations manual, available [here](https://cloud.google.com/container-engine/docs/clusters/operations#create_a_cluster_with_autoscaling).
+
+Before we begin, we need to have an active GCE project with Google Cloud Monitoring, Google Cloud Logging and Stackdriver enabled. For more information on project creation, please read our [Getting Started Guide](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/gce.md#prerequisites). We also need to download a recent version of the Kubernetes project (version 1.3.0 or later).
+
+First, we set up a cluster with Cluster Autoscaler turned on. The number of nodes in the cluster will start at 2, and autoscale up to a maximum of 5. To implement this, we’ll export the following environment variables:
+
+
+
+
+
+```
+export NUM_NODES=2
+export KUBE_AUTOSCALER_MIN_NODES=2
+export KUBE_AUTOSCALER_MAX_NODES=5
+export KUBE_ENABLE_CLUSTER_AUTOSCALER=true
+```
+
+
+and start the cluster by running:
+
+
+
+```
+./cluster/kube-up.sh
+ ```
+
+
+
+The kube-up.sh script creates a cluster together with the Cluster Autoscaler add-on. The autoscaler will try to add new nodes to the cluster if there are pending pods which could be scheduled on a new node.
+
+
+
+Let’s look at our cluster; it should have two nodes (plus the master):
+
+
+
+
+```
+$ kubectl get nodes
+NAME                           STATUS                     AGE
+kubernetes-master              Ready,SchedulingDisabled   2m
+kubernetes-minion-group-de5q   Ready                      2m
+kubernetes-minion-group-yhdx   Ready                      1m
+```
+
+
+
+#### Run & Expose PHP-Apache Server
+
+
+
+To demonstrate autoscaling, we will use a custom Docker image based on the php-apache server. The image can be found [here](https://github.com/kubernetes/kubernetes/blob/8caeec429ee1d2a9df7b7a41b21c626346b456fb/docs/user-guide/horizontal-pod-autoscaling/image). It defines an [index.php](https://github.com/kubernetes/kubernetes/blob/8caeec429ee1d2a9df7b7a41b21c626346b456fb/docs/user-guide/horizontal-pod-autoscaling/image/index.php) page which performs some CPU-intensive computations.
+
+
+
+First, we’ll start a deployment running the image and expose it as a service:
+
+
+
+
+```
+$ kubectl run php-apache \
+    --image=gcr.io/google_containers/hpa-example \
+    --requests=cpu=500m,memory=500M --expose --port=80
+service "php-apache" created
+deployment "php-apache" created
+```
+
+
+
+
+
+Now, we will wait some time and verify that both the deployment and the service were correctly created and are running:
+
+
+
+
+
+```
+$ kubectl get deployment
+NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
+php-apache   1         1         1            1           49s
+
+$ kubectl get pods
+NAME                          READY     STATUS    RESTARTS   AGE
+php-apache-2046965998-z65jn   1/1       Running   0          30s
+```
+
+
+
+
+
+We may now check that the php-apache server works correctly by calling wget with the service's address:
+
+
+
+
+
+```
+$ kubectl run -i --tty service-test --image=busybox /bin/sh
+Hit enter for command prompt
+$ wget -q -O- http://php-apache.default.svc.cluster.local
+OK!
+```
+
+
+
+#### Starting Horizontal Pod Autoscaler
+
+
+Now that the deployment is running, we will create a Horizontal Pod Autoscaler for it using the kubectl autoscale command, which looks like this:
+
+
+
+
+
+```
+$ kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
+ ```
+
+
+
+
+
+This defines a Horizontal Pod Autoscaler that maintains between 1 and 10 replicas of the Pods controlled by the php-apache deployment we created in the first step of these instructions. Roughly speaking, the horizontal autoscaler will increase and decrease the number of replicas (via the deployment) so as to maintain an average CPU utilization across all Pods of 50% (since each pod requests 500 milli-cores by [kubectl run](https://github.com/kubernetes/kubernetes/blob/8caeec429ee1d2a9df7b7a41b21c626346b456fb/docs/user-guide/horizontal-pod-autoscaling/README.md#kubectl-run), this means an average CPU usage of 250 milli-cores). See [here](https://github.com/kubernetes/kubernetes/blob/8caeec429ee1d2a9df7b7a41b21c626346b456fb/docs/design/horizontal-pod-autoscaler.md#autoscaling-algorithm) for more details on the algorithm.
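+
+For those who prefer declarative configuration, the command above corresponds roughly to the following HPA manifest. This is a sketch using the autoscaling/v1 schema, with field values mirroring the command:
+
+```
+# Declarative equivalent (sketch) of the kubectl autoscale command above:
+kubectl create -f - <<EOF
+apiVersion: autoscaling/v1
+kind: HorizontalPodAutoscaler
+metadata:
+  name: php-apache
+spec:
+  scaleTargetRef:
+    apiVersion: extensions/v1beta1
+    kind: Deployment
+    name: php-apache
+  minReplicas: 1
+  maxReplicas: 10
+  targetCPUUtilizationPercentage: 50
+EOF
+```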
+
+
+
+We may check the current status of the autoscaler by running:
+
+
+
+
+```
+$ kubectl get hpa
+NAME         REFERENCE                     TARGET    CURRENT   MINPODS   MAXPODS   AGE
+php-apache   Deployment/php-apache/scale   50%       0%        1         10        14s
+```
+
+
+
+
+
+Please note that the current CPU consumption is 0%, as we are not sending any requests to the server (the CURRENT column shows the average across all the pods controlled by the corresponding deployment).
+
+#### Raising the Load
+
+
+Now, we will see how our autoscalers (Cluster Autoscaler and Horizontal Pod Autoscaler) react to increased load on the server. We will start two infinite loops of queries to our server (please run them in different terminals):
+
+
+
+
+
+```
+$ kubectl run -i --tty load-generator --image=busybox /bin/sh
+Hit enter for command prompt
+$ while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
+ ```
+
+
+
+
+
+We need to wait a moment (about one minute) for the stats to propagate. Afterwards, we will examine the status of the Horizontal Pod Autoscaler:
+
+
+
+
+```
+$ kubectl get hpa
+NAME         REFERENCE                     TARGET    CURRENT   MINPODS   MAXPODS   AGE
+php-apache   Deployment/php-apache/scale   50%       310%      1         10        2m
+
+$ kubectl get deployment php-apache
+NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
+php-apache   7         7         7            3           4m
+```
+
+
+
+
+
+The Horizontal Pod Autoscaler has increased the number of pods in our deployment to 7. Let’s now check if all the pods are running:
+
+
+
+
+
+```
+jsz@jsz-desk2:~/k8s-src$ kubectl get pods
+php-apache-2046965998-3ewo6   0/1       Pending   0          1m
+php-apache-2046965998-8m03k   1/1       Running   0          1m
+php-apache-2046965998-ddpgp   1/1       Running   0          5m
+php-apache-2046965998-lrik6   1/1       Running   0          1m
+php-apache-2046965998-nj465   0/1       Pending   0          1m
+php-apache-2046965998-tmwg1   1/1       Running   0          1m
+php-apache-2046965998-xkbw1   0/1       Pending   0          1m
+```
+
+
+
+
+As we can see, some pods are pending. Let’s describe one of the pending pods to find the reason for its pending state:
+
+
+
+
+
+```
+$ kubectl describe pod php-apache-2046965998-3ewo6
+Name:       php-apache-2046965998-3ewo6
+Namespace:  default
+...
+Events:
+  FirstSeen   From                  SubobjectPath   Type      Reason             Message
+
+  1m          {default-scheduler }                  Warning   FailedScheduling   pod (php-apache-2046965998-3ewo6) failed to fit in any node
+fit failure on node (kubernetes-minion-group-yhdx): Insufficient CPU
+fit failure on node (kubernetes-minion-group-de5q): Insufficient CPU
+
+  1m          {cluster-autoscaler }                 Normal    TriggeredScaleUp   pod triggered scale-up, mig: kubernetes-minion-group, sizes (current/new): 2/3
+```
+
+
+
+The pod is pending because there was not enough CPU in the system for it. We also see a TriggeredScaleUp event associated with the pod. This means that the pod triggered a reaction from Cluster Autoscaler, and a new node will be added to the cluster. Now we’ll wait for the reaction (about 3 minutes) and list all nodes:
+
+
+
+
+
+```
+$ kubectl get nodes
+NAME                           STATUS                     AGE
+kubernetes-master              Ready,SchedulingDisabled   9m
+kubernetes-minion-group-6z5i   Ready                      43s
+kubernetes-minion-group-de5q   Ready                      9m
+kubernetes-minion-group-yhdx   Ready                      9m
+```
+
+
+
+As we can see, a new node, kubernetes-minion-group-6z5i, was added by Cluster Autoscaler. Let’s verify that all pods are now running:
+
+
+
+
+```
+$ kubectl get pods
+NAME                          READY     STATUS    RESTARTS   AGE
+php-apache-2046965998-3ewo6   1/1       Running   0          3m
+php-apache-2046965998-8m03k   1/1       Running   0          3m
+php-apache-2046965998-ddpgp   1/1       Running   0          7m
+php-apache-2046965998-lrik6   1/1       Running   0          3m
+php-apache-2046965998-nj465   1/1       Running   0          3m
+php-apache-2046965998-tmwg1   1/1       Running   0          3m
+php-apache-2046965998-xkbw1   1/1       Running   0          3m
+```
+
+
+
+After the node addition all php-apache pods are running!
+
+
+
+#### Stop Load
+
+
+We will finish our example by stopping the user load. We’ll terminate both infinite while loops sending requests to the server and verify the resulting state:
+
+
+
+
+```
+$ kubectl get hpa
+NAME         REFERENCE                     TARGET    CURRENT   MINPODS   MAXPODS   AGE
+php-apache   Deployment/php-apache/scale   50%       0%        1         10        16m
+
+$ kubectl get deployment php-apache
+NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
+php-apache   1         1         1            1           14m
+```
+
+
+
+
+
+As we can see, CPU utilization dropped to 0, and the number of replicas dropped to 1.
+
+
+
+After the pods are deleted, most of the cluster resources are unused. Scaling the cluster down may take more time than scaling up, because Cluster Autoscaler makes sure that a node is really not needed, so that short periods of inactivity (due to a pod upgrade, etc.) won’t trigger node deletion (see the [cluster autoscaler doc](https://github.com/kubernetes/kubernetes.github.io/blob/release-1.3/docs/admin/cluster-management.md#cluster-autoscaling)). After approximately 10-12 minutes you can verify that the number of nodes in the cluster dropped:
+
+
+
+
+```
+$ kubectl get nodes
+NAME                           STATUS                     AGE
+kubernetes-master              Ready,SchedulingDisabled   37m
+kubernetes-minion-group-de5q   Ready                      36m
+kubernetes-minion-group-yhdx   Ready                      36m
+```
+
+
+
+The number of nodes in our cluster is now two again, as node kubernetes-minion-group-6z5i was removed by Cluster Autoscaler.
+
+
+
+### Other use cases
+
+
+
+As we have shown, it is very easy to dynamically adjust the number of pods to the load using a combination of Horizontal Pod Autoscaler and Cluster Autoscaler.
+
+
+
+However, Cluster Autoscaler alone can also be quite helpful whenever there are irregularities in cluster load. For example, clusters used for development or continuous integration tests may be less needed on weekends or at night. Batch processing clusters may have periods when all jobs are over and new ones will only start in a couple of hours. Having machines that do nothing is a waste of money.
+
+
+
+In all of these cases Cluster Autoscaler can reduce the number of unused nodes and yield quite significant savings, because you will only pay for the nodes that you actually need to run your pods. It also makes sure that you always have enough compute power to run your tasks.
+
+
+
+_-- Jerzy Szczepkowski and Marcin Wielgus, Software Engineers, Google_
diff --git a/blog/_posts/2016-07-00-Bringing-End-To-End-Kubernetes-Testing-To-Azure-2.md b/blog/_posts/2016-07-00-Bringing-End-To-End-Kubernetes-Testing-To-Azure-2.md
new file mode 100644
index 00000000000..1f178a4a71c
--- /dev/null
+++ b/blog/_posts/2016-07-00-Bringing-End-To-End-Kubernetes-Testing-To-Azure-2.md
@@ -0,0 +1,55 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Bringing End-to-End Kubernetes Testing to Azure (Part 2) "
+date: Monday, July 18, 2016
+pagination:
+ enabled: true
+---
+_Editor’s Note: Today’s guest post is Part II from a [series](http://blog.kubernetes.io/2016/06/bringing-end-to-end-testing-to-azure.html) by Travis Newhouse, Chief Architect at AppFormix, writing about their contributions to Kubernetes._
+
+Historically, Kubernetes testing has been hosted by Google, running e2e tests on [Google Compute Engine](https://cloud.google.com/compute/) (GCE) and [Google Container Engine](https://cloud.google.com/container-engine/) (GKE). In fact, the gating checks for the submit-queue are a subset of tests executed on these test platforms. Federated testing aims to expand test coverage by enabling organizations to host test jobs for a variety of platforms and contribute test results to benefit the Kubernetes project. Members of the Kubernetes test team at Google and SIG-Testing have created a [Kubernetes test history dashboard](http://storage.googleapis.com/kubernetes-test-history/static/index.html) that publishes the results from all federated test jobs (including those hosted by Google).
+
+In this blog post, we describe extending the e2e test jobs for Azure, and show how to contribute a federated test to the Kubernetes project.
+
+**END-TO-END INTEGRATION TESTS FOR AZURE**
+
+After successfully implementing [“development distro” scripts to automate deployment of Kubernetes on Azure](http://blog.kubernetes.io/2016/06/bringing-end-to-end-testing-to-azure.html), our next goal was to run e2e integration tests and share the results with the Kubernetes community.
+
+We automated our workflow for executing e2e tests of Kubernetes on Azure by defining a nightly job in our private Jenkins server. Figure 2 shows the workflow that uses kube-up.sh to deploy Kubernetes on Ubuntu virtual machines running in Azure, then executes the e2e tests. On completion of the tests, the job uploads the test results and logs to a Google Cloud Storage directory, in a format that can be processed by the [scripts that produce the test history dashboard](https://github.com/kubernetes/test-infra/tree/master/jenkins/test-history). Our Jenkins job uses the hack/jenkins/e2e-runner.sh and hack/jenkins/upload-to-gcs.sh scripts to produce the results in the correct format.
+
+
+_Figure 2 - Nightly test job workflow_
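+
+A sketch of what the nightly job body might run, following the workflow above. The environment variable shown is an illustrative assumption, not the canonical list; the federated tests documentation has the authoritative setup:
+
+```
+# Illustrative nightly job body (variable name/value are assumptions):
+export KUBERNETES_PROVIDER=azure   # hypothetical provider setting
+./hack/jenkins/e2e-runner.sh       # deploys Kubernetes and runs the e2e suite
+./hack/jenkins/upload-to-gcs.sh    # uploads results and logs to Google Cloud Storage
+```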
+
+
+**HOW TO CONTRIBUTE AN E2E TEST**
+
+Throughout our work to create the Azure e2e test job, we have collaborated with members of [SIG-Testing](https://github.com/kubernetes/community/tree/master/sig-testing) to find a way to publish the results to the Kubernetes community. The results of this collaboration are documentation and a streamlined process to contribute results from a federated test job. Contributing e2e test results can be summarized in four steps.
+
+
+1. Create a [Google Cloud Storage](https://cloud.google.com/storage/) bucket in which to publish the results.
+2. Define an automated job to run the e2e tests. By setting a few environment variables, hack/jenkins/e2e-runner.sh deploys Kubernetes binaries and executes the tests.
+3. Upload the results using hack/jenkins/upload-to-gcs.sh.
+4. Incorporate the results into the test history dashboard by submitting a pull-request with modifications to a few files in [kubernetes/test-infra](https://github.com/kubernetes/test-infra).
+
+The federated tests documentation describes these steps in more detail. The scripts to run e2e tests and upload results simplify the work of contributing a new federated test job. The specific steps to set up an automated test job and an appropriate environment in which to deploy Kubernetes are left to the reader’s preferences. For organizations using Jenkins, the jenkins-job-builder configurations for GCE and GKE tests may provide helpful examples.
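+
+For step 1, assuming the gsutil CLI is installed and authenticated, creating a world-readable results bucket could look like this minimal sketch (the bucket name is hypothetical):
+
+```
+# Create the results bucket (the name is hypothetical)...
+gsutil mb gs://my-kubernetes-e2e-results
+# ...and make newly uploaded objects publicly readable by default.
+gsutil defacl set public-read gs://my-kubernetes-e2e-results
+```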
+
+**RETROSPECTIVE**
+
+The e2e tests on Azure have been running for several weeks now. During this period, we have found two issues in Kubernetes. Weixu Zhuang immediately published fixes that have been merged into the Kubernetes master branch.
+
+The first issue happened when we wanted to bring up the Kubernetes cluster using SaltStack on Azure with Ubuntu VMs. A commit (07d7cfd3) modified the OpenVPN certificate generation script to use a variable that was only initialized by scripts in the cluster/ubuntu directory. Strict checking on the existence of parameters by the certificate generation script caused other platforms that use the script to fail (e.g. our changes to support Azure). We submitted a [pull-request that fixed the issue](https://github.com/kubernetes/kubernetes/pull/21357) by initializing the variable with a default value, making the certificate generation scripts more robust across all platform types.
+
+The second [pull-request cleaned up an unused import](https://github.com/kubernetes/kubernetes/pull/22321) in the Daemonset unit test file. The import statement broke the unit tests with golang 1.4. Our nightly Jenkins job helped us find this error and we promptly pushed a fix for it.
+
+**CONCLUSION AND FUTURE WORK**
+
+The addition of a nightly e2e test job for Kubernetes on Azure has helped to define the process to contribute a federated test to the Kubernetes project. During the course of the work, we also saw the immediate benefit of expanding test coverage to more platforms when our Azure test job identified compatibility issues.
+
+We want to thank Aaron Crickenberger, Erick Fejta, Joe Finney, and Ryan Hutchinson for their help to incorporate the results of our Azure e2e tests into the Kubernetes test history. If you’d like to get involved with testing to create stable, high-quality releases of Kubernetes, join us in the [Kubernetes Testing SIG (sig-testing)](https://github.com/kubernetes/community/tree/master/sig-testing).
+
+
+
+
+_--Travis Newhouse, Chief Architect at AppFormix_
diff --git a/blog/_posts/2016-07-00-Citrix-Netscaler-And-Kubernetes.md b/blog/_posts/2016-07-00-Citrix-Netscaler-And-Kubernetes.md
new file mode 100644
index 00000000000..763446f8c94
--- /dev/null
+++ b/blog/_posts/2016-07-00-Citrix-Netscaler-And-Kubernetes.md
@@ -0,0 +1,31 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Citrix + Kubernetes = A Home Run "
+date: Thursday, July 14, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: today’s guest post is by Mikko Disini, a Director of Product Management at Citrix Systems, sharing their collaboration experience on a Kubernetes integration._
+
+Technical collaboration is like sports. If you work together as a team, you can go down the homestretch and pull through for a win. That’s our experience with the Google Cloud Platform team.
+
+Recently, we approached Google Cloud Platform (GCP) to collaborate on behalf of Citrix customers and the broader enterprise market looking to migrate workloads. This migration required including the [NetScaler Docker load balancer](https://www.citrix.com/blogs/2016/06/20/the-best-docker-load-balancer-at-dockercon-in-seattle-this-week/), CPX, into Kubernetes nodes and resolving any issues with getting traffic into the CPX proxies.
+
+**Why NetScaler and Kubernetes?**
+
+
+1. Citrix customers want the same Layer 4 to Layer 7 capabilities from NetScaler that they have on-prem as they move to the cloud and begin deploying their container and microservices architecture with Kubernetes
+2. Kubernetes provides a proven infrastructure for running containers and VMs with automated workload delivery
+3. NetScaler CPX provides Layer 4 to Layer 7 services and highly efficient telemetry data to a logging and analytics platform, [NetScaler Management and Analytics System](https://www.citrix.com/blogs/2016/05/24/introducing-the-next-generation-netscaler-management-and-analytics-system/)
+
+I wish all our experiences working with a technical partner were as good as working with GCP. We had a list of issues to enable our use cases and were able to collaborate swiftly on a solution. To resolve these, the GCP team offered in-depth technical assistance, working with Citrix so that NetScaler CPX can spin up and take over as a client-side proxy running on each host.
+
+Next, NetScaler CPX needed to be inserted into the data path of the GCP ingress load balancer so that NetScaler CPX can spread traffic to front-end web servers. The NetScaler team made modifications so that NetScaler CPX listens to API server events and configures itself to create a VIP, IP table rules and server rules to take ingress traffic and load balance it across front-end applications. The Google Cloud Platform team provided feedback and assistance to verify the modifications made to overcome the technical hurdles. Done!
+
+The NetScaler CPX use case is supported in [Kubernetes 1.3](http://blog.kubernetes.io/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads.html). Citrix customers and the broader enterprise market will have the opportunity to leverage NetScaler with Kubernetes, thereby lowering the friction of moving workloads to the cloud.
+
+You can learn more about NetScaler CPX [here](https://www.citrix.com/networking/microservices.html).
+
+
+_ -- Mikko Disini, Director of Product Management - NetScaler, Citrix Systems_
diff --git a/blog/_posts/2016-07-00-Cross-Cluster-Services.md b/blog/_posts/2016-07-00-Cross-Cluster-Services.md
new file mode 100644
index 00000000000..d87040f92f6
--- /dev/null
+++ b/blog/_posts/2016-07-00-Cross-Cluster-Services.md
@@ -0,0 +1,344 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Cross Cluster Services - Achieving Higher Availability for your Kubernetes Applications "
+date: Thursday, July 14, 2016
+pagination:
+ enabled: true
+---
+
+_Editor’s note: this post is part of a [series of in-depth articles](http://blog.kubernetes.io/2016/07/five-days-of-kubernetes-1.3.html) on what's new in Kubernetes 1.3_
+
+As Kubernetes users scale their production deployments we’ve heard a clear desire to deploy services across zone, region, cluster and cloud boundaries. Services that span clusters provide geographic distribution, enable hybrid and multi-cloud scenarios and improve the level of high availability beyond single cluster multi-zone deployments. Customers who want their services to span one or more (possibly remote) clusters need them to be reachable in a consistent manner from both within and outside their clusters.
+
+In Kubernetes 1.3, our goal was to minimize the friction points and reduce the management/operational overhead associated with deploying a service with geographic distribution to multiple clusters. This post explains how to do this.
+
+Note: Though the examples used here leverage Google Container Engine ([GKE](https://cloud.google.com/container-engine/)) to provision Kubernetes clusters, they work anywhere you want to deploy Kubernetes.
+
+Let’s get started. The first step is to create Kubernetes clusters in four Google Cloud Platform (GCP) regions using GKE.
+
+
+- asia-east1-b
+- europe-west1-b
+- us-east1-b
+- us-central1-b
+
+Let’s run the following commands to build the clusters:
+
+
+
+
+```
+gcloud container clusters create gce-asia-east1 \
+  --scopes cloud-platform \
+  --zone=asia-east1-b
+
+gcloud container clusters create gce-europe-west1 \
+  --scopes cloud-platform \
+  --zone=europe-west1-b
+
+gcloud container clusters create gce-us-east1 \
+  --scopes cloud-platform \
+  --zone=us-east1-b
+
+gcloud container clusters create gce-us-central1 \
+  --scopes cloud-platform \
+  --zone=us-central1-b
+```
+
+
+Let’s verify the clusters are created:
+
+
+
+```
+gcloud container clusters list
+
+NAME              ZONE            MASTER_VERSION  MASTER_IP        NUM_NODES  STATUS
+gce-asia-east1    asia-east1-b    1.2.4           104.XXX.XXX.XXX  3          RUNNING
+gce-europe-west1  europe-west1-b  1.2.4           130.XXX.XX.XX    3          RUNNING
+gce-us-central1   us-central1-b   1.2.4           104.XXX.XXX.XX   3          RUNNING
+gce-us-east1      us-east1-b      1.2.4           104.XXX.XX.XXX   3          RUNNING
+```
+
+
+
+
+
+
+The next step is to bootstrap the clusters and deploy the federation control plane on one of the clusters that has been provisioned. If you’d like to follow along, refer to Kelsey Hightower’s [tutorial](https://github.com/kelseyhightower/kubernetes-cluster-federation) which walks through the steps involved.
+
+
+
+**Federated Services**
+
+
+
+Requests to create [Federated Services](https://github.com/kubernetes/kubernetes/blob/release-1.3/docs/design/federated-services.md) are directed to the Federation API endpoint and specify the desired properties of your service.
+
+
+
+Once created, the Federated Service automatically:
+
+- creates matching Kubernetes Services in every cluster underlying your cluster federation,
+- monitors the health of those service "shards" (and the clusters in which they reside), and
+- manages a set of DNS records in a public DNS provider (like Google Cloud DNS, or AWS Route 53), thus ensuring that clients of your federated service can seamlessly locate an appropriate healthy service endpoint at all times, even in the event of cluster, availability zone or regional outages.
+
+Clients inside your federated Kubernetes clusters (i.e. Pods) will automatically find the local shard of the federated service in their cluster if it exists and is healthy, or the closest healthy shard in a different cluster if it does not.
+
+
+
+Federations of Kubernetes Clusters can include clusters running in different cloud providers (e.g. GCP, AWS), and on-premise (e.g. on OpenStack). All you need to do is create your clusters in the appropriate cloud providers and/or locations, and register each cluster's API endpoint and credentials with your Federation API Server.
+
+
+
+In our example, we have clusters created in 4 regions, along with a federated control plane API deployed in one of our clusters, which we’ll be using to provision our service. See the diagram below for a visual representation.
+
+
+
+ 
+
+
+
+**Creating a Federated Service**
+
+
+
+Let’s list out all the clusters in our federation:
+
+
+
+```
+kubectl --context=federation-cluster get clusters
+
+NAME STATUS VERSION AGE
+gce-asia-east1 Ready 1m
+gce-europe-west1 Ready 57s
+gce-us-central1 Ready 47s
+gce-us-east1 Ready 34s
+ ```
+
+
+
+Let’s create a federated service object:
+
+
+
+
+```
+kubectl --context=federation-cluster create -f services/nginx.yaml
+ ```
+
+
+
+The '--context=federation-cluster' flag tells kubectl to submit the request to the Federation API endpoint, with the appropriate credentials. The federated service will automatically create and maintain matching Kubernetes services in all of the clusters underlying your federation.
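+
+The post doesn't reproduce services/nginx.yaml. Based on the describe output shown later in this section (run=nginx labels and selector, LoadBalancer type, an http port on 80), a plausible minimal version is sketched here:
+
+```
+# Sketch of services/nginx.yaml, inferred from the service output in this post:
+mkdir -p services
+cat > services/nginx.yaml <<EOF
+apiVersion: v1
+kind: Service
+metadata:
+  name: nginx
+  labels:
+    run: nginx
+spec:
+  type: LoadBalancer
+  ports:
+  - name: http
+    port: 80
+    protocol: TCP
+  selector:
+    run: nginx
+EOF
+```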
+
+
+
+You can verify this by checking in each of the underlying clusters, for example:
+
+
+
+
+```
+kubectl --context=gce-asia-east1a get svc nginx
+NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+nginx 10.63.250.98 104.199.136.89 80/TCP 9m
+ ```
+
+
+
+The above assumes that you have a context named 'gce-asia-east1a' configured in your client for your cluster in that zone. The name and namespace of the underlying services will automatically match those of the federated service that you created above.
+
+
+
+The status of your Federated Service will automatically reflect the real-time status of the underlying Kubernetes services, for example:
+
+
+
+
+```
+kubectl --context=federation-cluster describe services nginx
+
+Name:                   nginx
+Namespace:              default
+Labels:                 run=nginx
+Selector:               run=nginx
+Type:                   LoadBalancer
+IP:
+LoadBalancer Ingress:   104.XXX.XX.XXX, 104.XXX.XX.XXX, 104.XXX.XX.XXX, 104.XXX.XXX.XX
+Port:                   http    80/TCP
+Endpoints:
+Session Affinity:       None
+No events.
+```
+
+
+
+The 'LoadBalancer Ingress' addresses of your federated service correspond to the 'LoadBalancer Ingress' addresses of all of the underlying Kubernetes services. For inter-cluster and inter-cloud-provider networking between service shards to work correctly, your services need to have an externally visible IP address. Service Type: LoadBalancer is typically used here.
+
+
+
+Note also that we have not yet provisioned any backend Pods to receive the network traffic directed to these addresses (i.e. 'Service Endpoints'), so the federated service does not yet consider these to be healthy service shards, and has accordingly not yet added their addresses to the DNS records for this federated service.
+
+
+
+**Adding Backend Pods**
+
+
+
+To render the underlying service shards healthy, we need to add backend Pods behind them. This is currently done directly against the API endpoints of the underlying clusters (although in the future the Federation server will be able to do all this for you with a single command, to save you the trouble). For example, to create backend Pods in our underlying clusters:
+
+
+
+
+```
+for CLUSTER in asia-east1-a europe-west1-a us-east1-a us-central1-a
+do
+kubectl --context=$CLUSTER run nginx --image=nginx:1.11.1-alpine --port=80
+done
+ ```
+
+
+
+**Verifying Public DNS Records**
+
+
+
+Once the Pods have successfully started and begun listening for connections, Kubernetes in each cluster (via automatic health checks) will report them as healthy endpoints of the service in that cluster. The cluster federation will in turn consider each of these service 'shards' to be healthy, and place them in service by automatically configuring corresponding public DNS records. You can use your preferred interface to your configured DNS provider to verify this. For example, if your Federation is configured to use Google Cloud DNS and a managed DNS domain 'example.com':
+
+
+
+
+```
+$ gcloud dns managed-zones describe example-dot-com
+creationTime: '2016-06-26T18:18:39.229Z'
+description: Example domain for Kubernetes Cluster Federation
+dnsName: example.com.
+id: '3229332181334243121'
+kind: dns#managedZone
+name: example-dot-com
+nameServers:
+- ns-cloud-a1.googledomains.com.
+- ns-cloud-a2.googledomains.com.
+- ns-cloud-a3.googledomains.com.
+- ns-cloud-a4.googledomains.com.
+
+$ gcloud dns record-sets list --zone example-dot-com
+NAME                                                            TYPE   TTL    DATA
+example.com.                                                    NS     21600  ns-cloud-e1.googledomains.com., ns-cloud-e2.googledomains.com.
+example.com.                                                    SOA    21600  ns-cloud-e1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 1209600 300
+nginx.mynamespace.myfederation.svc.example.com.                 A      180    104.XXX.XXX.XXX, 130.XXX.XX.XXX, 104.XXX.XX.XXX, 104.XXX.XXX.XX
+nginx.mynamespace.myfederation.svc.us-central1-a.example.com.   A      180    104.XXX.XXX.XXX
+nginx.mynamespace.myfederation.svc.us-central1.example.com.     A      180    104.XXX.XXX.XXX, 104.XXX.XXX.XXX, 104.XXX.XXX.XXX
+nginx.mynamespace.myfederation.svc.asia-east1-a.example.com.    A      180    130.XXX.XX.XXX
+nginx.mynamespace.myfederation.svc.asia-east1.example.com.      A      180    130.XXX.XX.XXX, 130.XXX.XX.XXX
+nginx.mynamespace.myfederation.svc.europe-west1.example.com.    CNAME  180    nginx.mynamespace.myfederation.svc.example.com.
+... etc.
+```
+
+
+
+Note: If your Federation is configured to use AWS Route53, you can use one of the equivalent AWS tools, for example:
+
+
+
+
+```
+$ aws route53 list-hosted-zones
+
+$ aws route53 list-resource-record-sets --hosted-zone-id Z3ECL0L9QLOVBX
+```
+
+
+
+Whatever DNS provider you use, any DNS query tool (for example 'dig' or 'nslookup') will of course also allow you to see the records created by the Federation for you.
+
+
+
+**Discovering a Federated Service from pods Inside your Federated Clusters**
+
+
+
+By default, Kubernetes clusters come preconfigured with a cluster-local DNS server ('KubeDNS'), as well as an intelligently constructed DNS search path, which together ensure that DNS queries like "myservice", "myservice.mynamespace", "bobsservice.othernamespace", etc., issued by your software running inside Pods, are automatically expanded and resolved correctly to the appropriate service IP of services running in the local cluster.
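+
+For example, from a shell inside any Pod, a short service name and its expanded forms resolve to the same cluster IP (the names here are illustrative):
+
+```
+# Inside a Pod: the DNS search path expands short service names automatically.
+nslookup myservice                                # resolved via the search path
+nslookup myservice.mynamespace                    # namespace-qualified form
+nslookup myservice.mynamespace.svc.cluster.local  # fully qualified form
+```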
+
+
+
+With the introduction of Federated Services and Cross-Cluster Service Discovery, this concept is extended to cover Kubernetes services running in any other cluster across your Cluster Federation, globally. To take advantage of this extended range, you use a slightly different DNS name (e.g. myservice.mynamespace.myfederation) to resolve federated services. Using a different DNS name also prevents your existing applications from accidentally traversing cross-zone or cross-region networks, and from incurring possibly unwanted network charges or latency, unless you explicitly opt in to this behavior.
+
+
+
+So, using our NGINX example service above, and the federated service DNS name form just described, let's consider an example: a Pod in a cluster in the us-central1-a availability zone needs to contact our NGINX service. Rather than use the service's traditional cluster-local DNS name ("nginx.mynamespace", which is automatically expanded to "nginx.mynamespace.svc.cluster.local"), it can now use the service's Federated DNS name, which is "nginx.mynamespace.myfederation". This will be automatically expanded and resolved to the closest healthy shard of my NGINX service, wherever in the world that may be. If a healthy shard exists in the local cluster, that service's cluster-local (typically 10.x.y.z) IP address will be returned (by the cluster-local KubeDNS). This is exactly equivalent to non-federated service resolution.
+
+
+
+If the service does not exist in the local cluster (or it exists but has no healthy backend pods), the DNS query is automatically expanded to "nginx.mynamespace.myfederation.svc.us-central1-a.example.com". Behind the scenes, this finds the external IP of one of the shards closest to my availability zone. This expansion is performed automatically by KubeDNS, which returns the associated CNAME record. This results in a traversal of the hierarchy of DNS records in the above example, and ends up at one of the external IPs of the Federated Service in the local us-central1 region.
+
+
+
+It is also possible to target service shards in availability zones and regions other than the ones local to a Pod by specifying the appropriate DNS names explicitly, and not relying on automatic DNS expansion. For example, "nginx.mynamespace.myfederation.svc.europe-west1.example.com" will resolve to all of the currently healthy service shards in Europe, even if the Pod issuing the lookup is located in the U.S., and irrespective of whether or not there are healthy shards of the service in the U.S. This is useful for remote monitoring and other similar applications.
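+
+Any standard DNS tool can demonstrate this explicit targeting; for example, with dig (continuing the example.com zone used above):
+
+```
+# Resolve the European shards explicitly, regardless of where the client runs:
+dig +short nginx.mynamespace.myfederation.svc.europe-west1.example.com A
+```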
+
+
+
+**Discovering a Federated Service from Other Clients Outside your Federated Clusters**
+
+
+
+For external clients, the automatic DNS expansion described above is not possible. External clients need to specify one of the fully qualified DNS names of the federated service, be that a zonal, regional or global name. For convenience, it is often a good idea to manually configure additional static CNAME records for your service, for example:
+
+
+
+
+```
+eu.nginx.acme.com CNAME nginx.mynamespace.myfederation.svc.europe-west1.example.com.
+us.nginx.acme.com CNAME nginx.mynamespace.myfederation.svc.us-central1.example.com.
+nginx.acme.com CNAME nginx.mynamespace.myfederation.svc.example.com.
+ ```
+
+
+
+That way your clients can always use the short form on the left, and always be automatically routed to the closest healthy shard on their home continent. All of the required failover is handled for you automatically by Kubernetes Cluster Federation.
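+
+If, for instance, the acme.com zone is also managed in Google Cloud DNS, a record like the first one above could be added with a transaction. The managed zone name below is a hypothetical stand-in:
+
+```
+# Add a convenience CNAME in a hypothetical 'acme-dot-com' managed zone.
+gcloud dns record-sets transaction start --zone=acme-dot-com
+gcloud dns record-sets transaction add --zone=acme-dot-com \
+  --name=eu.nginx.acme.com. --type=CNAME --ttl=300 \
+  "nginx.mynamespace.myfederation.svc.europe-west1.example.com."
+gcloud dns record-sets transaction execute --zone=acme-dot-com
+```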
+
+
+
+**Handling Failures of Backend Pods and Whole Clusters**
+
+
+
+Standard Kubernetes service cluster-IPs already ensure that non-responsive individual Pod endpoints are automatically taken out of service with low latency. The Kubernetes cluster federation system automatically monitors the health of clusters and the endpoints behind all of the shards of your federated service, taking shards in and out of service as required. Due to the latency inherent in DNS caching (the cache timeout, or TTL, for federated service DNS records is configured to 3 minutes by default, but can be adjusted), it may take up to that long for all clients to completely fail over to an alternative cluster in the case of catastrophic failure. However, given the number of discrete IP addresses which can be returned for each regional service endpoint (see e.g. us-central1 above, which has three alternatives), many clients will fail over automatically to one of the alternative IPs in less time than that, given appropriate configuration.
+
+
+
+**Community**
+
+
+
+We'd love to hear feedback on Kubernetes Cross Cluster Services. To join the community:
+
+- Post issues or feature requests on [GitHub](https://github.com/kubernetes/kubernetes/tree/master/federation)
+- Join us in the #federation channel on [Slack](https://kubernetes.slack.com/messages/sig-federation)
+- Participate in the [Cluster Federation SIG](https://groups.google.com/forum/#!forum/kubernetes-sig-federation)
+
+
+Please give Cross Cluster Services a try, and let us know how it goes!
+
+
+
+
+
+_-- Quinton Hoole, Engineering Lead, Google and Allan Naim, Product Manager, Google_
diff --git a/blog/_posts/2016-07-00-Dashboard-Web-Interface-For-Kubernetes.md b/blog/_posts/2016-07-00-Dashboard-Web-Interface-For-Kubernetes.md
new file mode 100644
index 00000000000..abb04a8857f
--- /dev/null
+++ b/blog/_posts/2016-07-00-Dashboard-Web-Interface-For-Kubernetes.md
@@ -0,0 +1,76 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Dashboard - Full Featured Web Interface for Kubernetes "
+date: Friday, July 15, 2016
+pagination:
+ enabled: true
+---
+
+_Editor’s note: this post is part of a [series of in-depth articles](http://blog.kubernetes.io/2016/07/five-days-of-kubernetes-1.3.html) on what's new in Kubernetes 1.3_
+
+[Kubernetes Dashboard](http://github.com/kubernetes/dashboard) is a project that aims to bring a general purpose monitoring and operational web interface to the Kubernetes world. Three months ago we [released](http://blog.kubernetes.io/2016/04/building-awesome-user-interfaces-for-kubernetes.html) the first production ready version, and since then the dashboard has made massive improvements. In a single UI, you’re able to perform the majority of possible interactions with your Kubernetes clusters without ever leaving your browser. This blog post breaks down the new features introduced in the latest release and outlines the roadmap for the future.
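+
+If you haven't tried Dashboard yet, one common way to reach it is through the API server proxy; the /ui path below was the conventional redirect to the Dashboard service at the time of this release:
+
+```
+# Proxy the cluster API server to localhost...
+kubectl proxy
+# ...then browse to http://localhost:8001/ui
+```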
+
+**Full-Featured Dashboard**
+
+Thanks to a large number of contributions from the community and project members, we were able to deliver many new features for [Kubernetes 1.3 release](http://blog.kubernetes.io/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads.html). We have been carefully listening to all the great feedback we have received from our users (see the [summary infographics](http://static.lwy.io/img/kubernetes_dashboard_infographic.png)) and addressed the highest priority requests and pain points.
+
+The Dashboard UI now handles all workload resources. This means that no matter what workload type you run, it is visible in the web interface and you can make operational changes to it. For example, you can modify your stateful MySQL installation with [Pet Sets](http://kubernetes.io/docs/user-guide/petset/), do a rolling update of your web server with Deployments, or install cluster monitoring with DaemonSets.
+
+
+
+ [{:.big-img} ](https://lh3.googleusercontent.com/p9bMGxPx4jE6_Z2KB-MktmyuAxyFst-bEk29M_Bn0Bj5ul7uzinH6u5WjHsMmqhGvBwlABZt06dwQ5qkBZiLq_EM1oddCmpwChvXDNXZypaS5l8uzkKuZj3PBUmzTQT4dgDxSXgz)
+
+
+
+In addition to viewing resources, you can create, edit, update, and delete them. This feature enables many use cases. For example, you can kill a failed Pod, do a rolling update on a Deployment, or just organize your resources. You can also export and import YAML configuration files of your cloud apps and store them in a version control system.
+
+
+
+ {: .big-img}
+
+
+
+The release includes a beta view of cluster nodes for administration and operational use cases. The UI lists all nodes in the cluster to allow for overview analysis and quick screening for problematic nodes. The details view shows all information about the node and links to pods running on it.
+
+
+
+ {: .big-img}
+
+
+
+There are also many smaller new features that we shipped with the release, namely: support for namespaced resources, internationalization, performance improvements, and many bug fixes (find out more in the [release notes](https://github.com/kubernetes/dashboard/releases/tag/v1.1.0)). All these improvements result in a better and simpler user experience of the product.
+
+
+
+**Future Work**
+
+
+
+The team has ambitious plans for the future spanning multiple use cases. We are also open to all feature requests, which you can post on our [issue tracker](https://github.com/kubernetes/dashboard/issues).
+
+
+
+Here is a list of our focus areas for the following months:
+
+- [Handle more Kubernetes resources](https://github.com/kubernetes/dashboard/issues/961) - To show all resources that a cluster user may potentially interact with. Once done, Dashboard can act as a complete replacement for the CLI.
+- [Monitoring and troubleshooting](https://github.com/kubernetes/dashboard/issues/962) - To add resource usage statistics/graphs to the objects shown in Dashboard. This focus area will allow for actionable debugging and troubleshooting of cloud applications.
+- [Security, auth and logging in](https://github.com/kubernetes/dashboard/issues/964) - Make Dashboard accessible from networks external to a Cluster and work with custom authentication systems.
+
+
+
+**Connect With Us**
+
+
+
+We would love to talk with you and hear your feedback!
+
+- Email us at the [SIG-UI mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-ui)
+- Chat with us on the Kubernetes Slack [#SIG-UI channel](https://kubernetes.slack.com/messages/sig-ui/)
+- Join our meetings: 4PM CEST. See the [SIG-UI calendar](https://calendar.google.com/calendar/embed?src=google.com_52lm43hc2kur57dgkibltqc6kc%40group.calendar.google.com&ctz=Europe/Warsaw) for details.
+
+
+
+
+
+_-- Piotr Bryk, Software Engineer, Google_
diff --git a/blog/_posts/2016-07-00-Five-Days-Of-Kubernetes-1.3.md b/blog/_posts/2016-07-00-Five-Days-Of-Kubernetes-1.3.md
new file mode 100644
index 00000000000..619893c9d01
--- /dev/null
+++ b/blog/_posts/2016-07-00-Five-Days-Of-Kubernetes-1.3.md
@@ -0,0 +1,61 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Five Days of Kubernetes 1.3 "
+date: Monday, July 11, 2016
+pagination:
+ enabled: true
+---
+Last week we [released Kubernetes 1.3](http://blog.kubernetes.io/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads.html), two years from the day when the first Kubernetes commit was pushed to GitHub. Now, 30,000+ commits later from over 800 contributors, this 1.3 release is jam-packed with updates driven by feedback from users.
+
+While many new improvements and features have been added in the latest release, we’ll be highlighting several that stand out. Follow along and read these in-depth posts on what’s new and how we continue to make Kubernetes the best way to manage containers at scale.
+
+
+**Day 1**
+
+- [Minikube: easily run Kubernetes locally](http://blog.kubernetes.io/2016/07/minikube-easily-run-kubernetes-locally.html)
+- [rktnetes: brings rkt container engine to Kubernetes](http://blog.kubernetes.io/2016/07/rktnetes-brings-rkt-container-engine-to-Kubernetes.html)
+
+**Day 2**
+
+- [Autoscaling in Kubernetes](http://blog.kubernetes.io/2016/07/autoscaling-in-kubernetes.html)
+- _Partner post: [Kubernetes in Rancher, the further evolution](http://blog.kubernetes.io/2016/07/kubernetes-in-rancher-further-evolution.html)_
+
+**Day 3**
+
+- [Deploying thousand instances of Cassandra using Pet Set](http://blog.kubernetes.io/2016/07/thousand-instances-of-cassandra-using-kubernetes-pet-set.html)
+- _Partner post: [Stateful Applications in Containers, by Diamanti](http://blog.kubernetes.io/2016/07/stateful-applications-in-containers-kubernetes.html)_
+
+**Day 4**
+
+- [Cross Cluster Services](http://blog.kubernetes.io/2016/07/cross-cluster-services.html)
+- _Partner post: [Citrix and NetScaler CPX](http://blog.kubernetes.io/2016/07/Citrix-NetScaler-and-Kubernetes.html)_
+
+**Day 5**
+
+- [Dashboard - Full Featured Web Interface for Kubernetes](http://blog.kubernetes.io/2016/07/dashboard-web-interface-for-kubernetes.html)
+- _Partner post: [Steering an Automation Platform at Wercker with Kubernetes](http://blog.kubernetes.io/2016/07/automation-platform-at-wercker-with-kubernetes.html)_
+
+**Bonus**
+
+- [Updates to Performance and Scalability](http://blog.kubernetes.io/2016/07/kubernetes-updates-to-performance-and-scalability-in-1.3.html)
+
+
+
+**Connect**
+
+
+We’d love to hear from you and see you participate in this growing community:
+
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stackoverflow](https://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.kubernetes.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-07-00-Kubernetes-1.3-Bridging-Cloud-Native-And-Enterprise-Workloads.md b/blog/_posts/2016-07-00-Kubernetes-1.3-Bridging-Cloud-Native-And-Enterprise-Workloads.md
new file mode 100644
index 00000000000..b96c53060c2
--- /dev/null
+++ b/blog/_posts/2016-07-00-Kubernetes-1.3-Bridging-Cloud-Native-And-Enterprise-Workloads.md
@@ -0,0 +1,67 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes 1.3: Bridging Cloud Native and Enterprise Workloads "
+date: Wednesday, July 06, 2016
+pagination:
+ enabled: true
+---
+Nearly two years ago, when we officially kicked off the Kubernetes project, we wanted to simplify distributed systems management and make the core technology required to do so available to everyone. The community’s response to this effort has blown us away. Today, thousands of customers, partners and developers are running clusters in production using Kubernetes and have joined the cloud native revolution.
+
+Thanks to the help of over 800 contributors, we are pleased to announce today the availability of Kubernetes 1.3, our most robust and feature-rich release to date.
+
+As our users scale their production deployments we’ve heard a clear desire to deploy services across cluster, zone and cloud boundaries. We’ve also heard a desire to run more workloads in containers, including stateful services. In this release, we’ve worked hard to address these two problems, while making it easier for new developers and enterprises to use Kubernetes to manage distributed systems at scale.
+
+Product highlights in Kubernetes 1.3 include the ability to bridge services across multiple clouds (including on-prem), support for multiple node types, integrated support for stateful services (such as key-value stores and databases), and greatly simplified cluster setup and deployment on your laptop. Now, developers at organizations of all sizes can build production scale apps more easily than ever before.
+
+
+
+**What’s new:**
+
+- **Increased scale and automation** - Customers want to scale their services up and down automatically in response to application demand. In 1.3 we have made it easier to autoscale clusters up and down while doubling the maximum number of nodes per cluster. Customers no longer need to think about cluster size, and can allow the underlying cluster to respond to demand.
+
+- **Cross-cluster federated services** - Customers want their services to span one or more (possibly remote) clusters, and for them to be reachable in a consistent manner from both within and outside their clusters. Services that span clusters have higher availability, provide geographic distribution and enable hybrid and multi-cloud scenarios. Kubernetes 1.3 introduces cross-cluster service discovery so containers, and external clients can consistently resolve to services irrespective of whether they are running partially or completely in other clusters.
+
+- **Stateful applications** - Customers looking to use containers for stateful workloads (such as databases or key value stores) will find a new ‘PetSet’ object with a raft of alpha features, including:
+
+ - Permanent hostnames that persist across restarts
+ - Automatically provisioned persistent disks per container that live beyond the life of a container
+ - Unique identities in a group to allow for clustering and leader election
+ - Initialization containers which are critical for starting up clustered applications
+- **Ease of use for local development** - Developers want an easy way to learn to use Kubernetes. In Kubernetes 1.3 we are introducing [Minikube](https://github.com/kubernetes/minikube), where with one command a developer can start a local Kubernetes cluster on their laptop that is API compatible with a full Kubernetes cluster. This enables developers to test locally and push to their Kubernetes clusters when they are ready (see the sketch after this list).
+- **Support for rkt and container standards OCI & CNI** - Kubernetes is an extensible and modular orchestration platform. Part of what has made Kubernetes successful is our commitment to giving customers access to the latest container technologies that best suit their environment. In Kubernetes 1.3 we support emerging standards such as the Container Network Interface ([CNI](https://github.com/containernetworking/cni)) natively, and have already taken steps toward the Open Container Initiative ([OCI](https://github.com/opencontainers)), which is still being ratified. We are also introducing [rkt](https://github.com/coreos/rkt) as an alternative container runtime on the Kubernetes node, with a first-class integration between rkt and the kubelet. This allows Kubernetes users to take advantage of some of rkt's unique features.
+- **Updated Kubernetes dashboard UI** - Customers can now use the Kubernetes open source dashboard for the majority of interactions with their clusters, rather than having to use the CLI. The updated UI lets users control, edit and create all workload resources (including Deployments and PetSets).
+- And many more. For a complete list of updates, see the [_release notes on GitHub_](https://github.com/kubernetes/kubernetes/releases/tag/v1.3.0).
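+
+Minikube's one-command workflow looks roughly like the sketch below (it assumes the minikube binary is installed; see the Minikube repository for the authoritative instructions):
+
+```
+# Start a local single-node cluster and point kubectl at it:
+minikube start
+kubectl get nodes   # the local node shows up like any other cluster node
+```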
+
+**Community**
+
+We could not have achieved this milestone without the tireless effort of countless people that are part of the Kubernetes community. We have [19 different Special Interest Groups](https://github.com/kubernetes/community/blob/master/README.md#special-interest-groups-sig), and over 100 meetups around the world. Kubernetes is a community project, built in the open, and it truly would not be possible without the over 233 person-years of effort the community has put in to date. Woot!
+
+
+
+**Availability**
+
+Kubernetes 1.3 is available for download at [get.k8s.io](http://get.k8s.io/) and via the open source repository hosted on [GitHub](http://github.com/kubernetes/kubernetes). To get started with Kubernetes try our [Hello World app](http://kubernetes.io/docs/hellonode/).
+
+
+
+To learn the latest about the project, we encourage everyone to [join the weekly community meeting](https://groups.google.com/forum/#!forum/kubernetes-community-video-chat) or [watch a recorded hangout](https://www.youtube.com/playlist?list=PL69nYSiGNLP1pkHsbPjzAewvMgGUpkCnJ).
+
+
+
+**Connect**
+
+We’d love to hear from you and see you participate in this growing community:
+
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stackoverflow](https://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.kubernetes.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+
+
+
+Thank you for your support!
+
+
+
+_-- Aparna Sinha, Product Manager, Google_
diff --git a/blog/_posts/2016-07-00-Kubernetes-In-Rancher-Further-Evolution.md b/blog/_posts/2016-07-00-Kubernetes-In-Rancher-Further-Evolution.md
new file mode 100644
index 00000000000..32c63c9f64c
--- /dev/null
+++ b/blog/_posts/2016-07-00-Kubernetes-In-Rancher-Further-Evolution.md
@@ -0,0 +1,197 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes in Rancher: the further evolution "
+date: Tuesday, July 12, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: today's guest post is from Alena Prokharchyk, Principal Software Engineer at Rancher Labs, who’ll share how they are incorporating new Kubernetes features into their platform._
+
+Kubernetes was the first external orchestration platform supported by [Rancher](http://rancher.com/kubernetes), and since its release, it has become one of the most widely used among our users, and it continues to grow rapidly in adoption. As Kubernetes has evolved, so has Rancher in terms of adopting new Kubernetes features. We started with supporting Kubernetes version 1.1, then switched to 1.2 as soon as it was released, and now we’re working on supporting the exciting new features in 1.3. I’d like to walk you through the features that we’ve been adding support for during each of these stages.
+
+
+### Rancher and Kubernetes 1.2
+
+Kubernetes 1.2 introduced the enhanced Ingress object to simplify allowing inbound connections to reach cluster services; here’s an excellent [blog post about ingress](http://blog.kubernetes.io/2016/03/Kubernetes-1.2-and-simplifying-advanced-networking-with-Ingress.html) policies. The Ingress resource allows users to define host name routing rules and TLS config for the Load Balancer in a user-friendly way. It then needs to be backed by an Ingress controller that configures a corresponding cloud provider’s Load Balancer with the Ingress rules. Since Rancher already included a software-defined Load Balancer based on HAProxy, we already supported all of the configuration requirements of the Ingress resource, and didn’t have to make any changes on the Rancher side to adopt Ingress. What we had to do was write an Ingress controller that would listen to Kubernetes ingress-specific events, configure the Rancher Load Balancer accordingly, and propagate the Load Balancer public entry point back to Kubernetes:
+
+
+
+
+Now, the Ingress controller is deployed as part of our Rancher Kubernetes system stack and is managed by Rancher. Rancher monitors the Ingress controller’s health and recreates it in case of any failures. In addition to standard Ingress features, Rancher also lets you horizontally scale the Load Balancer backing the Ingress service by specifying the scale via Ingress annotations. For example:
+
+
+```
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+  name: scalelb
+  annotations:
+    scale: "2"
+spec:
+  rules:
+  - host: foo.bar.com
+    http:
+      paths:
+      - path: /foo
+        backend:
+          serviceName: nginx-service
+          servicePort: 80
+```
+
+
+
+As a result of the above, two instances of the Rancher Load Balancer will be started on separate hosts, and the Ingress will be updated with two public IP addresses:
+
+
+
+
+
+```
+kubectl get ingress
+NAME      RULE          BACKEND            ADDRESS
+scalelb   -                                104.154.107.202, 104.154.107.203   // host IP addresses where Rancher LB instances are deployed
+          foo.bar.com
+          /foo          nginx-service:80
+```
+
+
+
+
+More details on Rancher Ingress Controller implementation for Kubernetes can be found here:
+
+- [Blog post](http://rancher.com/rancher-controller-for-the-kubernetes-ingress-feature/)
+- [Rancher documentation on Ingress](http://docs.rancher.com/rancher/latest/en/kubernetes/ingress/)
+- [Rancher ingress controller repo](https://github.com/rancher/ingress-controller)
+
+### Rancher and Kubernetes 1.3
+
+
+We’re very excited about the Kubernetes 1.3 release and all the new features that are included with it. There are two that we are especially interested in: Stateful Apps and Cluster Federation.
+
+
+#### Kubernetes Stateful Apps
+
+Stateful Apps is a new Kubernetes resource that represents a set of pods in a stateful application. It is an alternative to using Replication Controllers, which are best leveraged for running stateless apps. This feature is specifically useful for apps that rely on quorum with leader election (such as MongoDB, ZooKeeper, etcd) or a decentralized quorum (Cassandra). Stateful Apps create and maintain a set of pods, each of which has a stable network identity. In order to provide that network identity, it must be possible to have a resolvable DNS name for the pod that is tied to the pod identity, as per the [Kubernetes design doc](https://github.com/smarterclayton/kubernetes/blob/961f1f94c35d4979ac83bbad482090cb6c22781c/docs/proposals/petset.md):
+
+
+
+
+
+```
+# service mongo pointing to pods created by PetSet mdb, with identities mdb-1, mdb-2, mdb-3
+
+dig mongodb.namespace.svc.cluster.local +short A
+172.130.16.50
+
+dig mdb-1.mongodb.namespace.svc.cluster.local +short A
+# IP of pod created for mdb-1
+
+dig mdb-2.mongodb.namespace.svc.cluster.local +short A
+# IP of pod created for mdb-2
+
+dig mdb-3.mongodb.namespace.svc.cluster.local +short A
+# IP of pod created for mdb-3
+```
+
+
+
+The above is implemented via an annotation on pods, which is surfaced to endpoints, and finally surfaced as DNS on the service that exposes those pods. Currently Rancher simplifies DNS configuration by leveraging Rancher DNS as a drop-in replacement for SkyDNS. Rancher DNS is fast, stable, and scalable - every host in the cluster runs a DNS server. Kubernetes services get programmed into Rancher DNS and are resolved either to the service’s cluster IP from the 10.43.x.x address space, or to the set of pod IP addresses for a headless service. To make PetSet work with Kubernetes via Rancher, we’ll have to add support for pod identities to the Rancher DNS configuration. We’re working on this now and should have it supported in one of the upcoming Rancher releases.
+
+
+#### Cluster Federation
+Cluster Federation is the control plane for federating multiple Kubernetes clusters. It offers improved application availability by spreading applications across multiple clusters.
+
+
+
+
+Each Kubernetes cluster exposes an API endpoint and gets registered with Cluster Federation as part of a Federation object. Then, using the Cluster Federation API, you can create federated services. These objects are comprised of multiple equivalent underlying Kubernetes resources. Assuming that three clusters belong to the same Federation object, each Service created via Cluster Federation gets an equivalent service created in each of the clusters. Besides that, a Cluster Federation service gets a publicly resolvable DNS name that resolves to the Kubernetes services’ public IP addresses (the DNS record gets programmed into one of the supported public DNS providers).
+
+
+
+
+
+
+
+
+To support Cluster Federation via Kubernetes in Rancher, certain changes need to be made. Today each Kubernetes cluster is represented as a Rancher environment. In each Kubernetes environment, we create a full Kubernetes system stack comprised of several services: the Kubernetes API server, Scheduler, Ingress controller, persistent etcd, Controller manager, Kubelet, and Proxy (the last two run on every host). To set up Cluster Federation, we will create one extra environment where the Cluster Federation stack is going to run.
+
+
+
+
+
+
+
+
+Then every underlying Kubernetes cluster, represented by a Rancher environment, would be registered with a specific Cluster Federation. Potentially, each cluster can be auto-discovered by the Rancher Cluster Federation environment via a label on the Kubernetes cluster representing the federation name. We’re still finalizing our design, but we’re very excited by this feature and see a lot of use cases it can solve. Cluster Federation doc references:
+
+
+- Kubernetes [cluster federation design doc](https://github.com/kubernetes/kubernetes/blob/master/docs/design/federation-phase-1.md)
+- Kubernetes [blog post on multi zone clusters](http://blog.kubernetes.io/2016/03/building-highly-available-applications-using-Kubernetes-new-multi-zone-clusters-a.k.a-Ubernetes-Lite.html)
+- Kubernetes [federated services design doc](https://github.com/kubernetes/kubernetes/blob/master/docs/design/federated-services.md)
+
+
+### Plans for Kubernetes 1.4
+
+
+When we launched Kubernetes support in Rancher we decided to maintain our own distribution of Kubernetes in order to support Rancher’s native networking. We were aware that by having our own distribution, we’d need to update it every time there were changes made to Kubernetes, but we felt it was necessary to support the use cases we were working on for users. As part of our work for 1.4 we looked at our networking approach again, and re-analyzed the initial need for our own fork of Kubernetes. Other than the networking integration, all of the work we’ve done with Kubernetes has been developed as a Kubernetes plugin:
+
+- Rancher as a CloudProvider (to support Load Balancers).
+- Rancher as a CredentialProvider (to support Rancher private registries).
+- Rancher Ingress controller to back the Kubernetes Ingress resource.
+
+So we’ve decided to eliminate the need for a separate Rancher Kubernetes distribution and to upstream all our changes to the Kubernetes repo. To do that, we will rework our networking integration and support Rancher networking as a [CNI plugin for Kubernetes](http://kubernetes.io/docs/admin/network-plugins/#cni). More details on that will be shared as soon as the feature design is finalized, but expect it to come in the next 2-3 months. We will also continue investing in Rancher’s core capabilities integrated with Kubernetes, including, but not limited to:
+
+- Access rights management via the Rancher environment that represents a Kubernetes cluster
+- Credential management and easy web-based access to the standard kubectl CLI
+- Load Balancing support
+- Rancher internal DNS support
+- Catalog support for Kubernetes templates
+- Enhanced UI to represent even more Kubernetes objects, such as Deployment, Ingress, and DaemonSet
+
+All of that is to make the Kubernetes experience even more powerful and intuitive. We’re so excited by all of the progress in the Kubernetes community, and thrilled to be participating. Kubernetes 1.3 is an incredibly significant release, and you’ll be able to upgrade to it very soon within Rancher.
+
+
+
+
+
+_-- Alena Prokharchyk, Principal Software Engineer, Rancher Labs. [Twitter @lemonjet](https://twitter.com/Lemonjet) & [GitHub alena1108](https://github.com/alena1108)_
+
diff --git a/blog/_posts/2016-07-00-Minikube-Easily-Run-Kubernetes-Locally.md b/blog/_posts/2016-07-00-Minikube-Easily-Run-Kubernetes-Locally.md
new file mode 100644
index 00000000000..845d9ed9b94
--- /dev/null
+++ b/blog/_posts/2016-07-00-Minikube-Easily-Run-Kubernetes-Locally.md
@@ -0,0 +1,135 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Minikube: easily run Kubernetes locally "
+date: Tuesday, July 11, 2016
+pagination:
+ enabled: true
+---
+_Editor's note: This is the first post in a [series of in-depth articles](http://blog.kubernetes.io/2016/07/five-days-of-kubernetes-1.3.html) on what's new in Kubernetes 1.3_
+
+While Kubernetes is one of the best tools for managing containerized applications available today, and has been production-ready for over a year, it has been missing a great local development platform.
+
+For the past several months, several of us from the Kubernetes community have been working to fix this in the [Minikube](http://github.com/kubernetes/minikube) repository on GitHub. Our goal is to build an easy-to-use, high-fidelity Kubernetes distribution that can be run locally on Mac, Linux and Windows workstations and laptops with a single command.
+
+Thanks to lots of help from members of the community, we're proud to announce the official release of Minikube. This release comes with support for [Kubernetes 1.3](http://blog.kubernetes.io/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads.html), new commands to make interacting with your local cluster easier and experimental drivers for xhyve (on Mac OSX) and KVM (on Linux).
+
+**Using Minikube**
+
+Minikube ships as a standalone Go binary, so installing it is as simple as downloading Minikube and putting it on your path:
+
+Minikube currently requires that you have VirtualBox installed, which you can download [here](https://www.virtualbox.org/).
+
+
+
+_(This is for Mac; for Linux, substitute "minikube-darwin-amd64" with "minikube-linux-amd64".)_
+
+`curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/`
+
+
+
+
+To start a Kubernetes cluster in Minikube, use the `minikube start` command:
+
+
+
+
+
+```
+$ minikube start
+Starting local Kubernetes cluster...
+Kubernetes is available at https://192.168.99.100:443
+Kubectl is now configured to use the cluster
+```
+
+
+
+
+
+
+
+At this point, you have a running single-node Kubernetes cluster on your laptop! Minikube also configures `kubectl` for you, so you're also ready to run containers with no changes.
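+For example, a quick smoke test is to launch a container and check that its pod comes up. (The deployment name and image below are illustrative choices, not part of this release.)
+
+```
+# create a deployment running a single echo server pod
+kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
+
+# verify that the pod reaches the Running state
+kubectl get pods
+```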
+
+
+
+Minikube creates a Host-Only network interface that routes to your node. To interact with running pods or services, you should send traffic over this address. To find out this address, you can use the `minikube ip` command:
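+For example (the address shown is illustrative; yours may differ):
+
+```
+$ minikube ip
+192.168.99.100
+```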
+
+
+
+
+
+
+Minikube also comes with the Kubernetes Dashboard. To open this up in your browser, you can use the built-in `minikube dashboard` command:
+
+
+*(Screenshot: the Kubernetes Dashboard)*
+
+
+
+
+
+In general, Minikube supports everything you would expect from a Kubernetes cluster. You can use `kubectl exec` to get a bash shell inside a pod in your cluster. You can use the `kubectl port-forward` and `kubectl proxy` commands to forward traffic from localhost to a pod or the API server.
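+For instance, against the hypothetical deployment from the earlier example (pod names will differ on your cluster):
+
+```
+# open a shell inside a running pod (substitute a name from `kubectl get pods`)
+kubectl exec -it hello-minikube-3015430129-xyz12 -- /bin/bash
+
+# forward local port 8080 to port 8080 of that pod
+kubectl port-forward hello-minikube-3015430129-xyz12 8080:8080
+
+# or proxy the Kubernetes API server to localhost
+kubectl proxy
+```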
+
+
+
+
+Since Minikube is running locally instead of on a cloud provider, certain provider-specific features like LoadBalancers and PersistentVolumes will not work out of the box. However, you can use NodePort services in place of LoadBalancers, and hostPath PersistentVolumes.
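+As a sketch of the NodePort approach, continuing the earlier hypothetical example:
+
+```
+# expose the deployment as a NodePort service
+kubectl expose deployment hello-minikube --type=NodePort
+
+# hit the service at the VM's IP and the assigned node port
+curl $(minikube ip):$(kubectl get svc hello-minikube -o jsonpath='{.spec.ports[0].nodePort}')
+```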
+
+
+
+**Architecture**
+
+
+
+
+
+Minikube is built on top of Docker's [libmachine](https://github.com/docker/machine/tree/master/libmachine), and leverages the driver model to create, manage and interact with locally-run virtual machines.
+
+
+
+
+[RedSpread](https://redspread.com/) was kind enough to donate their [localkube](https://github.com/redspread/localkube) codebase to the Minikube repo, which we use to spin up a single-process Kubernetes cluster inside a VM. Localkube bundles etcd, DNS, the Kubelet and all the Kubernetes master components into a single Go binary, and runs them all via separate goroutines.
+
+
+
+**Upcoming Features**
+
+
+
+Minikube has been a lot of fun to work on so far, and we're always looking to improve Minikube to make the Kubernetes development experience better. If you have any ideas for features, don't hesitate to let us know in the [issue tracker](https://github.com/kubernetes/minikube/issues).
+
+
+
+Here's a list of some of the things we're hoping to add to Minikube soon:
+
+
+
+- Native hypervisor support for OSX and Windows
+  - We're planning to remove the dependency on VirtualBox, and integrate with the native hypervisors included in OSX and Windows (Hypervisor.framework and Hyper-V, respectively).
+- Improved support for Kubernetes features
+  - We're planning to increase the range of supported Kubernetes features, to include things like Ingress.
+- Configurable versions of Kubernetes
+  - Today Minikube only supports Kubernetes 1.3. We're planning to add support for user-configurable versions of Kubernetes, to make it easier to match what you have running in production on your laptop.
+
+
+
+
+**Community**
+
+
+
+We'd love to hear feedback on Minikube. To join the community:
+
+- Post issues or feature requests on [GitHub](https://github.com/kubernetes/minikube)
+- Join us in the #minikube channel on [Slack](https://kubernetes.slack.com/)
+
+Please give Minikube a try, and let us know how it goes!
+
+
+
+_--Dan Lorenc, Software Engineer, Google_
diff --git a/blog/_posts/2016-07-00-Oh-The-Places-You-Will-Go.md b/blog/_posts/2016-07-00-Oh-The-Places-You-Will-Go.md
new file mode 100644
index 00000000000..a9fa9f52f9d
--- /dev/null
+++ b/blog/_posts/2016-07-00-Oh-The-Places-You-Will-Go.md
@@ -0,0 +1,38 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Happy Birthday Kubernetes. Oh, the places you’ll go! "
+date: Friday, July 21, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: Today’s guest post is from Justin Santa Barbara, an independent Kubernetes contributor, sharing his reflections on the growth of the project from inception to its future._
+
+**Dear K8s,**
+
+_It’s hard to believe you’re only one - you’ve grown up so fast. On the occasion of your first birthday, I thought I would write a little note about why I was so excited when you were born, why I feel fortunate to be part of the group that is raising you, and why I’m eager to watch you continue to grow up!_
+
+_--Justin_
+
+You started with an excellent foundation - good declarative functionality, built around a solid API with a well defined schema and the machinery so that we could evolve going forwards. And sure enough, over your first year you grew so fast: autoscaling, HTTP load-balancing support (Ingress), support for persistent workloads including clustered databases (PetSets). You’ve made friends with more clouds (welcome Azure & OpenStack to the family), and even started to span zones and clusters (Federation). And these are just some of the most visible changes - there’s so much happening inside that brain of yours!
+
+I think it’s wonderful you’ve remained so open in all that you do - you seem to write down everything on GitHub - for better or worse. I think we’ve all learned a lot about that on the way, like the perils of having engineers make scaling statements that are then weighed against claims made without quite the same framework of precision and rigor. But I’m proud that you chose not to lower your standards, but rose to the challenge and just ran faster instead - it might not be the most realistic approach, but it is the only way to move mountains!
+
+And yet, somehow, you’ve managed to avoid a lot of the common dead-ends that other open source software has fallen into, particularly as those projects got bigger and the developers end up working on it more than they use it directly. How did you do that? There’s a probably-apocryphal story of an employee at IBM who makes a huge mistake and is summoned to meet with the big boss, expecting to be fired, only to be told “We just spent several million dollars training you. Why would we want to fire you?”. Despite all the investment Google is pouring into you (along with Red Hat and others), I sometimes wonder if the mistakes we are avoiding could be worth even more. There is a very open development process, yet there’s also an “oracle” that will sometimes course-correct by telling us what happens two years down the road if we make a particular design decision. This is a parent you should probably listen to!
+
+And so although you’re only a year old, you really have an [old soul](http://queue.acm.org/detail.cfm?id=2898444). I’m just one of the [many people raising you](http://blog.kubernetes.io/2016/07/happy-k8sbday-1.html), but it’s a wonderful learning experience for me to be able to work with the people that have built these incredible systems and have all this domain knowledge. Yet because we started from scratch (rather than taking the existing Borg code) we’re at the same level and can still have genuine discussions about how to raise you. Well, at least as close to the same level as we could ever be, but it’s to their credit that they are all far too nice ever to mention it!
+
+If I would pick just two of the wise decisions those brilliant people made:
+
+
+- Labels & selectors give us declarative “pointers”, so we can say “why” we want things, rather than listing the things directly. It’s the secret to how you can scale to [great heights](http://blog.kubernetes.io/2016/07/thousand-instances-of-cassandra-using-kubernetes-pet-set.html); not by naming each step, but saying “a thousand more steps just like that first one”.
+- Controllers are state-synchronizers: we specify the goals, and your controllers will indefatigably work to bring the system to that state. They work through that strongly-typed API foundation, and are used throughout the code, so Kubernetes is more of a set of a hundred small programs than one big one. It’s not enough to scale to thousands of nodes technically; the project also has to scale to thousands of developers and features; and controllers help us get there.
+
+And so on we will go! We’ll be replacing those controllers and building on more, and the API-foundation lets us build anything we can express in that way - with most things just a label or annotation away! But your thoughts will not be defined by language: with third party resources you can express anything you choose. Now we can build Kubernetes without building in Kubernetes, creating things that feel as much a part of Kubernetes as anything else. Many of the recent additions, like ingress, DNS integration, autoscaling and network policies were done or could be done in this way. Eventually it will be hard to imagine you before these things, but tomorrow’s standard functionality can start today, with no obstacles or gatekeeper, maybe even for an audience of one.
+
+So I’m looking forward to seeing more and more growth happen further and further from the core of Kubernetes. We had to work our way through those phases, starting with things that needed to happen in the kernel of Kubernetes - like replacing replication controllers with deployments. Now we’re starting to build things that don’t require core changes. But we’re still talking about infrastructure separately from applications. It’s what comes next that gets really interesting: when we start building applications that rely on the Kubernetes APIs. We’ve always had the Cassandra example that uses the Kubernetes API to self-assemble, but we haven’t really even started to explore this more widely yet. In the same way that the S3 APIs changed how we build things that remember, I think the k8s APIs are going to change how we build things that think.
+
+So I’m looking forward to your second birthday: I can try to predict what you’ll look like then, but I know you’ll surpass even the most audacious things I can imagine. Oh, the places you’ll go!
+
+
+_-- Justin Santa Barbara, Independent Kubernetes Contributor_
diff --git a/blog/_posts/2016-07-00-Rktnetes-Brings-Rkt-Container-Engine-To-Kubernetes.md b/blog/_posts/2016-07-00-Rktnetes-Brings-Rkt-Container-Engine-To-Kubernetes.md
new file mode 100644
index 00000000000..f7501505439
--- /dev/null
+++ b/blog/_posts/2016-07-00-Rktnetes-Brings-Rkt-Container-Engine-To-Kubernetes.md
@@ -0,0 +1,86 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " rktnetes brings rkt container engine to Kubernetes "
+date: Tuesday, July 11, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: this post is part of a [series of in-depth articles](http://blog.kubernetes.io/2016/07/five-days-of-kubernetes-1.3.html) on what's new in Kubernetes 1.3_
+
+As part of [Kubernetes 1.3](http://blog.kubernetes.io/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads.html), we’re happy to report that our work to bring interchangeable container engines to Kubernetes is bearing early fruit. What we affectionately call “rktnetes” is included in the version 1.3 Kubernetes release, and is ready for development use. rktnetes integrates support for [CoreOS rkt](https://coreos.com/rkt/) into Kubernetes as the container runtime on cluster nodes, and is now part of the mainline Kubernetes source code. Today it’s easier than ever for developers and ops pros with container portability in mind to try out running Kubernetes with a different container engine.
+
+"We find CoreOS’s rkt a compelling container engine in Kubernetes because of how rkt is composed with the underlying systemd,” said Mark Petrovic, senior MTS and architect at Xoom, a PayPal service. “The rkt runtime assumes only the responsibility it needs to, then delegates to other system services where appropriate. This separation of concerns is important to us.”
+
+
+### What’s rktnetes?
+
+rktnetes is the nickname given to the code that enables Kubernetes nodes to execute application containers with the rkt container engine, rather than with Docker. This change adds new abilities to Kubernetes, for instance running containers under flexible levels of isolation. rkt explores an alternative approach to container runtime architecture, aiming to reflect the Unix philosophy of cleanly separated, modular tools. Work done to support rktnetes also opens up future possibilities for Kubernetes, such as support for multiple container image formats and the integration of other container runtimes tailored for specific use cases or platforms.
+
+
+### Why does Kubernetes need rktnetes?
+
+rktnetes is about more than just rkt. It’s also about refining and exercising Kubernetes interfaces, and paving the way for other modular runtimes in the future. While the Docker container engine is well known, and is currently the default Kubernetes container runtime, a number of benefits derive from pluggable container environments. Some clusters may call for very specific container engine implementations, for example, and ensuring the Kubernetes design is flexible enough to support alternate runtimes, starting with rkt, helps keep the interfaces between components clean and simple.
+
+#### Separation of concerns: Decomposing the monolithic container daemon
+The current container runtime used by Kubernetes imposes a number of design decisions. Experimenting with other container execution architectures is worthwhile in such a rapidly evolving space. Today, when Kubernetes sends a request to a node to start running a pod, it communicates through the kubelet on each node with the default container runtime’s central daemon, responsible for managing all of the node’s containers.
+
+rkt does not implement a monolithic container management daemon. (It is worth noting that the [default container runtime is in the midst of refactoring its original monolithic architecture](https://blog.docker.com/2016/04/docker-engine-1-11-runc/).) The rkt design has from day one tried to apply the principle of modularity to the fullest, including reusing well-tested system components, rather than reimplementing them.
+
+The task of building container images is abstracted away from the container runtime core in rkt, and implemented by an independent utility. The same approach is taken to ongoing container lifecycle management. A single binary, rkt, configures the environment and prepares container images for execution, then sets the container application and its isolation environment running. At this point, the rkt program has done its “one job”, and the container isolator takes over.
+
+The API for querying container engine and pod state, used by Kubernetes to track cluster work on each node, is implemented in a separate service, isolating coordination and orchestration features from the core container runtime. While the API service does not fully implement all the API features of the current default container engine, it already helps isolate containers from failures and upgrades in the core runtime, and provides the read-only parts of the expected API for querying container metadata.
+
+#### Modular container isolation levels
+With rkt managing container execution, Kubernetes can take advantage of the CoreOS container engine’s modular _stage1_ isolation mechanism. The typical container runs under rkt in a software-isolated environment constructed from Linux kernel namespaces, cgroups, and other facilities. Containers isolated in this common way nevertheless share a single kernel with all the other containers on a system, making for lightweight isolation of running apps.
+
+However, rkt features pluggable isolation environments, referred to as stage1s, to change how containers are executed and isolated. For example, the [rkt fly stage1](https://coreos.com/rkt/docs/latest/running-fly-stage1.html) runs containers in the host namespaces (PID, mount, network, etc), granting containers greater power on the host system. Fly is used for containerizing lower-level system and network software, like the kubelet itself. At the other end of the isolation spectrum, the [KVM stage1](https://coreos.com/rkt/docs/latest/running-lkvm-stage1.html) runs standard app containers as individual virtual machines, each above its own Linux kernel, managed by the KVM hypervisor. This isolation level can be useful for high security and multi-tenant cluster workloads.
+
+
+
+[Diagram: rkt stage1 isolation options](https://1.bp.blogspot.com/-k3RRYf70fsg/V4a_-lVypxI/AAAAAAAAAl4/m9lVW0mxw7s35dzLlT4XJO5gdMzy_RBiQCLcB/s1600/rkt%2Bstages.png)
+
+
+
+
+Currently, rktnetes can use the KVM stage1 to execute all containers on a node with VM isolation by setting the kubelet’s `--rkt-stage1-image` option. Experimental work exists to choose the stage1 isolation regime on a per-pod basis, with a Kubernetes annotation declaring the pod’s appropriate stage1. KVM containers and standard Linux containers can be mixed together in the same cluster.
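+As a sketch, a kubelet invocation for a KVM-isolated node might look like the following (the stage1 image name is an assumption based on the rkt documentation linked above, and all other kubelet flags are omitted):
+
+```
+# hypothetical kubelet flags selecting rkt with the KVM stage1
+kubelet \
+  --container-runtime=rkt \
+  --rkt-stage1-image=coreos.com/rkt/stage1-kvm
+```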
+
+
+### How rkt works with Kubernetes
+
+
+
+Kubernetes today talks to the default container engine over an API provided by the Docker daemon. rktnetes communicates with rkt a little bit differently. First, there is a distinction between how Kubernetes changes the state of a node’s containers – how it starts and stops pods, or reschedules them for failover or scaling – and how the orchestrator queries pod metadata for regular, read-only bookkeeping. Two different facilities implement these two different cases.
+
+
+[Diagram: how rktnetes components interact](https://3.bp.blogspot.com/-Agx6uMnddDc/V4bAA2YH_-I/AAAAAAAAAl8/PbKRFjVy0JMqyZ_OJ4oqMtGyTmlFTh0bQCEw/s1600/rktnetes%2B%25281%2529.png)
+
+
+
+#### Managing microservice lifecycles
+The kubelet on each cluster node communicates with rkt to [prepare](https://coreos.com/rkt/docs/latest/subcommands/prepare.html) containers and their environments into pods, and with systemd, the Linux service management framework, to invoke and manage the pod processes. Pods are then managed as systemd services, and the kubelet sends systemd commands over dbus to manipulate them. Lifecycle management, such as restarting failed pods and killing completed processes, is handled by systemd, at the kubelet’s behest.
+
+
+#### The API service for reading pod data
+A discrete [rkt api-service](https://coreos.com/rkt/docs/latest/subcommands/api-service.html) implements the pod introspection mechanisms expected by Kubernetes. While each node’s kubelet uses systemd to start, stop, and restart pods as services, it contacts the API service to read container runtime metadata. This includes basic orchestration information such as the number of pods running on the node, the names and networks of those pods, and the details of pod configuration, resource limits and storage volumes (think of the information shown by the kubectl describe subcommand).
+
+Pod logs, having been written to journal files, are made available for kubectl logs and other forensic subcommands by the API service as well, which reads from log files to provide pod log data to the kubelet for answering control plane requests.
+
+This dual interface to the container environment is an area of very active development, and plans are for the API service to expand to provide methods for the pod manipulation commands. The underlying mechanism will continue to keep separation of concerns in mind, but will hide more of this from the kubelet. Over time, the methods the kubelet uses to control the rktnetes container engine will converge with the default container runtime interface.
+
+
+### Try rktnetes
+
+So what can you do with rktnetes today? Currently, rktnetes passes all of [the applicable Kubernetes “end-to-end” (aka “e2e”) tests](http://storage.googleapis.com/kubernetes-test-history/static/suite-rktnetes:kubernetes-e2e-gce.html), provides standard metrics to cAdvisor, manages networks using [CNI](https://github.com/containernetworking/cni), handles per-container/pod logs, and automatically garbage collects old containers and images. Kubernetes running on rkt already provides more than the basics of a modular, flexible container runtime for Kubernetes clusters, and it is already a functional part of our development environment at CoreOS.
+
+Developers and early adopters can follow the known issues in the [rktnetes notes](http://kubernetes.io/docs/getting-started-guides/rkt/notes/) to get an idea of the wrinkles and bumps test-drivers can expect to encounter. This list groups the high-level pieces required to bring rktnetes to feature parity with the existing container runtime and API. We hope you’ll try out rktnetes in your Kubernetes clusters, too.
+
+#### Use rkt with Kubernetes Today
+The introductory guide [_Running Kubernetes on rkt_](http://kubernetes.io/docs/getting-started-guides/rkt/) walks through the steps to spin up a rktnetes cluster, from `kubelet --container-runtime=rkt` to networking and starting pods. This intro also sketches the configuration you’ll need to start a cluster on GCE with the Kubernetes `kube-up.sh` script.
+
+Recent work aims to make rktnetes cluster creation much easier, too. While not yet merged, an [in-progress pull request creates a single rktnetes configuration toggle](https://github.com/coreos/coreos-kubernetes/pull/551) to select rkt as the container engine when deploying a Kubernetes cluster with the [coreos-kubernetes](https://github.com/coreos/coreos-kubernetes#kubernetes-on-coreos) configuration tools. You can also check out the [rktnetes workshop project](https://github.com/coreos/rkt8s-workshop), which launches a single-node rktnetes cluster on just about any developer workstation with a single `vagrant up` command.
+
+We’re excited to see the experiments the wider Kubernetes and CoreOS communities devise to put rktnetes to the test, and we welcome your input – and pull requests!
+
+
+_--Yifan Gu and Josh Wood, rktnetes Team, [CoreOS](https://coreos.com/). Twitter [@CoreOSLinux](https://twitter.com/coreoslinux)._
diff --git a/blog/_posts/2016-07-00-The-Bet-On-Kubernetes.md b/blog/_posts/2016-07-00-The-Bet-On-Kubernetes.md
new file mode 100644
index 00000000000..888e6f19d2c
--- /dev/null
+++ b/blog/_posts/2016-07-00-The-Bet-On-Kubernetes.md
@@ -0,0 +1,48 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " The Bet on Kubernetes, a Red Hat Perspective "
+date: Friday, July 21, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: Today’s guest post is from Kubernetes contributor Clayton Coleman, Architect on OpenShift at Red Hat, sharing Red Hat’s adoption of the project from its beginnings._
+
+Two years ago, Red Hat made a big bet on Kubernetes. We bet on a simple idea: that an open source community is the best place to build the future of application orchestration, and that only an open source community could successfully integrate the diverse range of capabilities necessary to succeed. As a Red Hatter, that idea is not far-fetched - we’ve seen it successfully applied in many communities, but we’ve also seen it fail, especially when a broad reach is not supported by solid foundations. On the one year anniversary of Kubernetes 1.0, two years after the first open-source commit to the Kubernetes project, it’s worth asking the question:
+
+**Was Kubernetes the right bet?**
+
+The success of software is measured by the successes of its users - whether that software enables new opportunities or efficiencies for them. In that regard, Kubernetes has succeeded beyond our wildest dreams. We know of hundreds of real production deployments of Kubernetes, in the enterprise through Red Hat’s multi-tenant enabled [OpenShift](https://github.com/openshift/origin) distribution, on [Google Container Engine](https://cloud.google.com/container-engine/) (GKE), in heavily customized versions run by some of the world's largest software companies, and through the education, entertainment, startup, and do-it-yourself communities. Those deployers report improved time to delivery, standardized application lifecycles, improved resource utilization, and more resilient and robust applications. And that’s just from customers or contributors to the community - I would not be surprised if there were now thousands of installations of Kubernetes managing tens of thousands of real applications out in the wild.
+
+I believe that reach to be a validation of the vision underlying Kubernetes: to build a platform for all applications by providing tools for each of the core patterns in distributed computing. Those patterns:
+
+
+- simple replicated web software
+- distributed load balancing and service discovery
+- immutable images run in containers
+- co-location of related software into pods
+- simplified consumption of network attached storage
+- flexible and powerful resource scheduling
+- running batch and scheduled jobs alongside service workloads
+- managing and maintaining clustered software like databases and message queues
+
+
+Together, these patterns allow developers and operators to move to the next scale of abstraction, just as they have enabled Google and others in the tech ecosystem to scale to datacenter computers and beyond. From Kubernetes 1.0 to 1.3 we have continually improved the power and flexibility of the platform while also improving performance, scalability, reliability, and usability. The explosion of integrations and tools that run on top of Kubernetes further validates core architectural decisions to be [composable](https://research.google.com/pubs/pub43438.html), to expose [open and flexible APIs](http://kubernetes.io/docs/api/), and to [deliberately limit the core platform](http://kubernetes.io/docs/whatisk8s/#kubernetes-is-not) and encourage extension.
+
+Today Kubernetes has one of the largest and most vibrant communities in the open source ecosystem, with almost a thousand contributors, one of the highest human-generated commit rates of any single-repository project on GitHub, over a thousand projects based around Kubernetes, and correspondingly active Stack Overflow and Slack channels. Red Hat is proud to be part of this ecosystem as the largest contributor to Kubernetes after Google, and every day more companies and individuals join us. The idea of Kubernetes found fertile ground, and you, the community, provided the excitement and commitment that made it grow.
+
+So, did we bet correctly? For all the reasons above, and hundreds more: **Yes**.
+
+**What’s next?**
+
+Happy as we are with the success of Kubernetes, this is no time to rest! While there are many more features and improvements we want to build into Kubernetes, I think there is a general consensus that we want to focus on the only long term goal that matters - a healthy, successful, and thriving technical community around Kubernetes. As John F. Kennedy probably said:
+
+> _Ask not what your community can do for you, but what you can do for your community_
+
+In a recent post to the kubernetes-dev list, Brian Grant [laid out a great set of near term goals](https://groups.google.com/d/topic/kubernetes-dev/MoyWB66vAKY/discussion) - goals that help grow the community, refine how we execute, and enable future expansion. In each of the [Kubernetes Special Interest Groups](https://github.com/kubernetes/community/blob/master/README.md#special-interest-groups-sig) we are trying to build sustainable teams that can execute across companies and communities, and we are actively working to ensure each of these SIGs is able to contribute, coordinate, and deliver across a diverse range of interests under one vision for the project.
+
+Of special interest to us is the story of extension - how the core of Kubernetes can become the beating heart of the datacenter operating system, and enable even more patterns for application management to build on top of Kubernetes, not just into it. Work done in the 1.2 and 1.3 releases around third party APIs, API discovery, flexible scheduler policy, external authorization and authentication (beyond those built into Kubernetes) is just the start. When someone has a need, we want them to easily find a solution, and we also want it to be easy for others to consume and contribute to that solution. Likewise, the best way to prove ideas is to prototype them against real needs and to iterate against real problems, which should be easy and natural.
+
+By Kubernetes’ second birthday, I hope to reflect back on a long year of refinement, user success, and community participation. It has been a privilege and an honor to contribute to Kubernetes, and it still feels like we are just getting started. Thank you, and I hope you come along for the ride!
+
+_-- Clayton Coleman, Contributor and Architect on Kubernetes and OpenShift at Red Hat. Follow him on Twitter and GitHub: @smarterclayton_
diff --git a/blog/_posts/2016-07-00-Thousand-Instances-Of-Cassandra-Using-Kubernetes-Pet-Set.md b/blog/_posts/2016-07-00-Thousand-Instances-Of-Cassandra-Using-Kubernetes-Pet-Set.md
new file mode 100644
index 00000000000..861f9d3e892
--- /dev/null
+++ b/blog/_posts/2016-07-00-Thousand-Instances-Of-Cassandra-Using-Kubernetes-Pet-Set.md
@@ -0,0 +1,377 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Thousand Instances of Cassandra using Kubernetes Pet Set "
+date: Thursday, July 13, 2016
+pagination:
+ enabled: true
+---
+
+_Editor’s note: this post is part of a [series of in-depth articles](http://blog.kubernetes.io/2016/07/five-days-of-kubernetes-1.3.html) on what's new in Kubernetes 1.3_
+
+
+## Running The Greek Pet Monster Races
+
+
+For the [Kubernetes 1.3 launch](http://blog.kubernetes.io/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads.html), we wanted to put the new Pet Set through its paces. By testing a thousand instances of [Cassandra](https://cassandra.apache.org/), we could make sure that Kubernetes 1.3 was production ready. Read on for how we adapted Cassandra to Kubernetes, and had our largest deployment ever.
+
+It’s fairly straightforward to use containers with basic stateful applications today. Using a persistent volume, you can mount a disk in a pod and ensure that your data lasts beyond the life of your pod. However, with deployments of distributed stateful applications, things can become trickier. With Kubernetes 1.3, the new [Pet Set](http://kubernetes.io/docs/user-guide/petset/) component makes everything much easier. To test this new feature out at scale, we decided to host the Greek Pet Monster Races! We raced Centaurs and other Ancient Greek Monsters over hundreds of thousands of races across multiple availability zones.
+
+[Image: Cassandra (Wikimedia Commons)](https://upload.wikimedia.org/wikipedia/commons/thumb/4/42/Cassandra1.jpeg/283px-Cassandra1.jpeg)
+As many of you know, Kubernetes comes from the Ancient Greek κυβερνήτης, meaning helmsman, pilot, steersman, or ship master. So, in order to keep track of race results, we needed a data store, and we chose Cassandra: Κασσάνδρα, the daughter of King Priam and Queen Hecuba of Troy. With multiple references to the ancient Greek language, we thought it would be appropriate to race ancient Greek monsters.
+
+
+
+From there the story kind of goes sideways, because Cassandra was actually the Pets as well. Read on and we will explain.
+
+
+
+One of the new exciting features in Kubernetes 1.3 is Pet Set. Kubernetes offers different mechanisms for organizing the deployment of containers; examples include Replication Controllers and Daemon Sets. Pet Set is a new feature that delivers the capability to deploy containers, as Pets, inside of Kubernetes. Pet Sets provide a guarantee of identity for various aspects of the pet / pod deployment: DNS name, consistent storage, and ordered pod indexing. Previously, deploying an application with components like Deployments and Replication Controllers would only give it a weak, uncoupled identity. A weak identity is great for managing applications such as microservices, where service discovery is important, the application is stateless, and the naming of individual pods does not matter. Many software applications do require strong identity, including many different types of distributed stateful systems. Cassandra is a great example of a distributed application that requires consistent network identity and stable storage.
+
+
+
+Pet Sets provide the following capabilities:
+
+
+
+- A stable hostname, available to others in DNS. The name is based on the Pet Set name plus an ordinal number that starts at zero; for example, cassandra-0.
+- An ordinal index of Pets. 0, 1, 2, 3, etc.
+- Stable storage linked to the ordinal and hostname of the Pet.
+- Peer discovery is available via DNS. With Cassandra the names of the peers are known before the Pets are created.
+- Startup and teardown ordering. It is known which numbered Pet will be created next, and which Pet will be destroyed when the Pet Set size is reduced. This feature is useful for admin tasks such as draining data from a Pet when reducing the size of a cluster.
+
+
+
+If your application has one or more of these requirements, then it may be a candidate for Pet Set.
+A relevant analogy is that a Pet Set is composed of Pet dogs. If you have a white, brown, or black dog and the brown dog runs away, you can replace it with another brown dog and no one would notice. If over time you keep replacing your dogs with only white dogs, then someone would notice. Pet Set allows your application to maintain the unique identity, or hair color, of your Pets.
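+To make the ordinal identity concrete, a Pet Set named cassandra with three replicas would produce pods like this (illustrative output):
+
+```
+$ kubectl get pods
+NAME          READY     STATUS    RESTARTS   AGE
+cassandra-0   1/1       Running   0          5m
+cassandra-1   1/1       Running   0          4m
+cassandra-2   1/1       Running   0          3m
+```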
+
+
+
+Example workloads for Pet Set:
+
+
+
+- Clustered software like Cassandra, Zookeeper, etcd, or Elastic, which requires stable membership.
+- Databases like MySQL or PostgreSQL that require a single instance attached to a persistent volume at any time.
+
+
+
+Only use Pet Set if your application requires some or all of these properties. Managing pods as stateless replicas is vastly easier.
+
+
+
+So back to our races!
+
+
+
+As we have mentioned, Cassandra was a perfect candidate to deploy via a Pet Set. A Pet Set is much like a [Replication Controller](http://kubernetes.io/docs/user-guide/replication-controller/) with a few new bells and whistles. Here's an example YAML manifest:
+
+
+
+
+```
+# Headless service to provide DNS lookup
+apiVersion: v1
+kind: Service
+metadata:
+  labels:
+    app: cassandra
+  name: cassandra
+spec:
+  clusterIP: None
+  ports:
+  - port: 9042
+  selector:
+    app: cassandra-data
+---
+# new API name
+apiVersion: "apps/v1alpha1"
+kind: PetSet
+metadata:
+  name: cassandra
+spec:
+  serviceName: cassandra
+  # replicas are the same as used by Replication Controllers,
+  # except pets are deployed in order 0, 1, 2, 3, etc.
+  replicas: 5
+  template:
+    metadata:
+      annotations:
+        pod.alpha.kubernetes.io/initialized: "true"
+      labels:
+        app: cassandra-data
+    spec:
+      # just as with any other component in Kubernetes, one
+      # or more containers are deployed
+      containers:
+      - name: cassandra
+        image: "cassandra-debian:v1.1"
+        imagePullPolicy: Always
+        ports:
+        - containerPort: 7000
+          name: intra-node
+        - containerPort: 7199
+          name: jmx
+        - containerPort: 9042
+          name: cql
+        resources:
+          limits:
+            cpu: "4"
+            memory: 11Gi
+          requests:
+            cpu: "4"
+            memory: 11Gi
+        securityContext:
+          privileged: true
+        env:
+        - name: MAX_HEAP_SIZE
+          value: 8192M
+        - name: HEAP_NEWSIZE
+          value: 2048M
+        # this relies on the guaranteed network identity of Pet Sets: we
+        # know the names of the Pets / Pods before they are created
+        - name: CASSANDRA_SEEDS
+          value: "cassandra-0.cassandra.default.svc.cluster.local,cassandra-1.cassandra.default.svc.cluster.local"
+        - name: CASSANDRA_CLUSTER_NAME
+          value: "OneKDemo"
+        - name: CASSANDRA_DC
+          value: "DC1-Data"
+        - name: CASSANDRA_RACK
+          value: "OneKDemo-Rack1-Data"
+        - name: CASSANDRA_AUTO_BOOTSTRAP
+          value: "false"
+        # this variable is used by the readiness probe looking
+        # for the IP address in a `nodetool status` command
+        - name: POD_IP
+          valueFrom:
+            fieldRef:
+              fieldPath: status.podIP
+        readinessProbe:
+          exec:
+            command:
+            - /bin/bash
+            - -c
+            - /ready-probe.sh
+          initialDelaySeconds: 15
+          timeoutSeconds: 5
+        # These volume mounts are persistent. They are like inline claims,
+        # but not exactly, because the names need to match exactly one of
+        # the pet volumes.
+        volumeMounts:
+        - name: cassandra-data
+          mountPath: /cassandra_data
+  # These are converted to volume claims by the controller
+  # and mounted at the paths mentioned above. Storage can be automatically
+  # created for the Pets depending on the cloud environment.
+  volumeClaimTemplates:
+  - metadata:
+      name: cassandra-data
+      annotations:
+        volume.alpha.kubernetes.io/storage-class: anything
+    spec:
+      accessModes: ["ReadWriteOnce"]
+      resources:
+        requests:
+          storage: 380Gi
+```
+
+
+
+You may notice that these containers are on the rather large size; it is not unusual to run Cassandra in production with 8 CPUs and 16GB of RAM. There are two key new features used above: dynamic volume provisioning and, of course, Pet Set. The above manifest will create five Cassandra Pets / Pods starting with the number 0: cassandra-0, cassandra-1, and so on.
+
+In order to generate data for the races, we used another Kubernetes feature called Jobs. Simple Python code was written to generate a random speed for each monster for every second of the race. Then that data, position information, winners, other data points, and metrics were stored in Cassandra. To visualize the data, we used JHipster to generate an AngularJS UI with Java services, and then used D3 for graphing.
+
+An example of one of the Jobs:
+
+
+```
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: pet-race-giants
+  labels:
+    name: pet-races
+spec:
+  parallelism: 2
+  completions: 4
+  template:
+    metadata:
+      name: pet-race-giants
+      labels:
+        name: pet-races
+    spec:
+      containers:
+      - name: pet-race-giants
+        image: py3numpy-job:v1.0
+        command: ["pet-race-job", "--length=100", "--pet=Giants", "--scale=3"]
+        resources:
+          limits:
+            cpu: "2"
+          requests:
+            cpu: "2"
+      restartPolicy: Never
+```
+
+
+
+[Image: Polyphemus (Wikimedia Commons)](https://upload.wikimedia.org/wikipedia/commons/0/0e/Polyphemus.gif)
+
+Since we are talking about Monsters, we had to go big. We deployed 1,009 minion nodes on [Google Compute Engine](https://cloud.google.com/compute/) (GCE), spread across 4 zones, running a custom version of the Kubernetes 1.3 beta. We ran this demo on beta code since the demo was being set up before the 1.3 release date. For the minion nodes, the GCE n1-standard-8 machine type was chosen: a VM with 8 virtual CPUs and 30GB of memory. This allows a single instance of Cassandra to run on each node, which is recommended for disk I/O.
+
+Then the Pets were deployed! One thousand of them, in two different Cassandra data centers. Cassandra’s distributed architecture is specifically tailored for multiple-data-center deployment. Often, multiple Cassandra data centers are deployed inside the same physical or virtual data center in order to separate workloads. Data is replicated across all data centers, but workloads can differ between data centers, and thus application tuning can differ as well. Data centers named 'DC1-Analytics' and 'DC1-Data' were deployed with 500 Pets each. The race data was created by the Python batch Jobs connected to DC1-Data, and the JHipster UI was connected to DC1-Analytics.
+
+Here are the final numbers:
+
+
+- 8,072 cores. The master used 24; the minion nodes used the rest
+- 1,009 IP addresses
+- 1,009 routes set up by Kubernetes on Google Cloud Platform
+- 100,510 GB of persistent disk used by the minions and the master
+- 380,020 GB of SSD persistent disk: 20 GB for the master and 380 GB per Cassandra Pet
+- 1,000 deployed instances of Cassandra
+
+Yes, we deployed 1,000 Pets, but one really did not want to join the party! Technically, with the Cassandra setup, we could have lost 333 nodes without service or data loss.
+
+
+
+### Limitations with Pet Sets in 1.3 Release
+
+
+- Pet Set is an alpha resource not available in any Kubernetes release prior to 1.3.
+- The storage for a given pet must either be provisioned by a dynamic storage provisioner based on the requested storage class, or pre-provisioned by an admin.
+- Deleting the Pet Set will not delete any pets or Pet storage. You will need to delete your Pets and possibly their storage by hand.
+- All Pet Sets currently require a "governing service", or a Service responsible for the network identity of the pets. The user is responsible for this Service.
+- Updating an existing Pet Set is currently a manual process. You either need to deploy a new Pet Set with the new image version, or orphan Pets one by one and update their image, which will join them back to the cluster.
+
+
+#### Resources and References
+
+
+- The source code for the demo is available on [GitHub](https://github.com/k8s-for-greeks/gpmr) (Pet Set examples will be merged into the Kubernetes Cassandra examples)
+- More information about [Jobs](http://kubernetes.io/docs/user-guide/jobs/)
+- [Documentation for Pet Set](https://github.com/kubernetes/kubernetes.github.io/blob/release-1.3/docs/user-guide/petset.md)
+- Image credits: Cassandra [image](https://commons.wikimedia.org/wiki/File:Cassandra1.jpeg) and Cyclops [image](https://commons.wikimedia.org/wiki/File:Polyphemus.gif)
+
+
+_-- Chris Love, Senior DevOps Open Source Consultant for [Datapipe](https://www.datapipe.com/). [Twitter @chrislovecnm](https://twitter.com/chrislovecnm/)_
diff --git a/blog/_posts/2016-07-00-Update-On-Kubernetes-For-Windows-Server-Containers.md b/blog/_posts/2016-07-00-Update-On-Kubernetes-For-Windows-Server-Containers.md
new file mode 100644
index 00000000000..f35c66a1546
--- /dev/null
+++ b/blog/_posts/2016-07-00-Update-On-Kubernetes-For-Windows-Server-Containers.md
@@ -0,0 +1,108 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Updates to Performance and Scalability in Kubernetes 1.3 -- 2,000 node 60,000 pod clusters "
+date: Friday, July 07, 2016
+pagination:
+ enabled: true
+---
+We are proud to announce that with the [release of version 1.3](http://blog.kubernetes.io/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads.html), Kubernetes now supports 2,000-node clusters with even better end-to-end pod startup time. The latency of our API calls is within our one-second [Service Level Objective (SLO)](https://en.wikipedia.org/wiki/Service_level_objective), and most of them are even an order of magnitude better than that. It is possible to run deployments larger than a 2,000-node cluster, but performance may be degraded and it may not meet our strict SLO.
+
+In this blog post we discuss the detailed performance results from Kubernetes 1.3 and what changes we made from version 1.2 to achieve these results. We also describe Kubemark, a performance testing tool that we’ve integrated into our continuous testing framework to detect performance and scalability regressions.
+
+**Evaluation Methodology**
+
+We have described our test scenarios in a [previous blog post](http://blog.kubernetes.io/2016/03/1000-nodes-and-beyond-updates-to-Kubernetes-performance-and-scalability-in-12.html). The biggest change since the 1.2 release is that in our API responsiveness tests we now create and use multiple namespaces. In particular, for the 2,000-node/60,000-pod cluster tests we create 8 namespaces. We made this change because we believe that users of such very large clusters are likely to use many namespaces - at least 8 in total.
+
+**Metrics from Kubernetes 1.3**
+
+So, what is the performance of Kubernetes version 1.3? The following graph shows the end-to-end pod startup latency with 2000- and 1000-node clusters. For comparison, we show the same metric from Kubernetes 1.2 with a 1000-node cluster.
+
+
+
+*(Figure: end-to-end pod startup latency for Kubernetes 1.3 at 2000 and 1000 nodes, and Kubernetes 1.2 at 1000 nodes)*
+The next graphs show API response latency for a v1.3 2000-node cluster.
+
+
+
+*(Figures: API response latency for a v1.3 2000-node cluster)*
+
+**How did we achieve these improvements?**
+
+The biggest change that we made for scalability in Kubernetes 1.3 was adding an efficient [Protocol Buffer](https://developers.google.com/protocol-buffers/)-based serialization format to the API as an alternative to JSON. It is primarily intended for communication between Kubernetes control plane components, but all API server clients can use this format. All Kubernetes control plane components now use it for their communication, but the system continues to support JSON for backward compatibility.
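+As a sketch of what this looks like on the wire, a client can ask the API server for Protocol Buffer output via content negotiation (the server address and credential files here are placeholders):
+
+```
+# request the node list serialized as Protocol Buffers instead of JSON;
+# the response body is a binary protobuf envelope, not text
+curl --cacert ca.crt --cert client.crt --key client.key \
+  -H 'Accept: application/vnd.kubernetes.protobuf' \
+  https://<apiserver>/api/v1/nodes
+```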
+
+We didn’t change the format in which we store cluster state in etcd to Protocol Buffers yet, as we’re still working on the upgrade mechanism. But we’re very close to having this ready, and we expect to switch the storage format to Protocol Buffers in Kubernetes 1.4. Our experiments show that this should reduce pod startup end-to-end latency by another 30%.
+
+**How do we test Kubernetes at scale?**
+
+Spawning clusters with 2000 nodes is expensive and time-consuming. While we need to do this at least once for each release to collect real-world performance and scalability data, we also need a lighter-weight mechanism that can allow us to quickly evaluate our ideas for different performance improvements, and that we can run continuously to detect performance regressions. To address this need we created a tool called “Kubemark”.
+
+**What is “Kubemark”?**
+
+Kubemark is a performance testing tool which allows users to run experiments on emulated clusters. We use it for measuring performance in large clusters.
+
+A Kubemark cluster consists of two parts: a real master node running the normal master components, and a set of “hollow” nodes. The prefix “hollow” means an implementation/instantiation of a component with some “moving parts” mocked out. The best example is hollow-kubelet, which pretends to be an ordinary Kubelet, but doesn’t start any containers or mount any volumes. It just claims it does, so from master components’ perspective it behaves like a real Kubelet.
+
+Since we want a Kubemark cluster to be as similar to a real cluster as possible, we use the real Kubelet code with an injected fake Docker client. Similarly, hollow-proxy (the KubeProxy equivalent) reuses the real KubeProxy code with an injected no-op Proxier interface (to avoid mutating iptables).
+
+
+
+Thanks to those changes:
+
+
+- many hollow-nodes can run on a single machine, because they do not modify the environment in which they are running
+- without real containers running, and without the need for a container runtime (e.g. Docker), we can run up to 14 hollow-nodes on a 1-core machine
+- hollow-nodes nevertheless generate roughly the same load on the API server as their “whole” counterparts, so they provide a realistic load for performance testing (the only fundamental difference is that we are not simulating any errors that can happen in reality, e.g. failing containers; adding support for this is a potential extension to the framework in the future)
+
+**How do we set up Kubemark clusters?**
+
+
+
+To create a Kubemark cluster we use the power that Kubernetes itself gives us: we run Kubemark clusters on Kubernetes. Let’s describe this in detail.
+
+In order to create an N-node Kubemark cluster, we:
+
+- create a regular Kubernetes cluster where we can run N hollow-nodes (e.g. to create a 2000-node Kubemark cluster, we create a regular Kubernetes cluster with 22 8-core nodes);
+- create a dedicated VM, where we start all master components for our Kubemark cluster (etcd, apiserver, controllers, scheduler, …);
+- schedule N “hollow-node” pods on the base Kubernetes cluster; those hollow-nodes are configured to talk to the Kubemark API server running on the dedicated VM;
+- finally, create addon pods (currently just Heapster) by scheduling them on the base cluster and configuring them to talk to the Kubemark API server.
+
+Once this is done, you have a usable Kubemark cluster that you can run your (performance) tests on. We have scripts for doing all of this on Google Compute Engine (GCE). For more details, take a look at our [guide](https://github.com/kubernetes/kubernetes/blob/release-1.3/docs/devel/kubemark-guide.md#starting-a-kubemark-cluster).
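+
+As a rough sketch of the workflow on GCE (script paths as in the release-1.3 tree linked above; flags and environment variables may differ between versions):
+
+```
+# size of the Kubemark cluster, i.e. the number of hollow-nodes
+export NUM_NODES=2000
+
+# bring up the base cluster, the Kubemark master VM and the hollow-node pods
+./test/kubemark/start-kubemark.sh
+
+# run performance tests against the Kubemark cluster
+./test/kubemark/run-e2e-tests.sh --ginkgo.focus="\[Performance\]"
+
+# tear everything down when finished
+./test/kubemark/stop-kubemark.sh
+```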
+
+
+One thing worth mentioning here is that running Kubemark also exercises Kubernetes correctness underneath: your Kubemark cluster will not work correctly if the base Kubernetes cluster under it doesn’t work.
+
+**Performance measured in real clusters vs Kubemark**
+
+
+Crucially, the performance of Kubemark clusters is mostly similar to the performance of real clusters. For the pod startup end-to-end latency, as shown in the graph below, the difference is negligible:
+
+*[Graph: pod startup end-to-end latency, Kubemark vs. a real cluster]*
+
+For the API-responsiveness, the differences are higher, though generally less than 2x. However, trends are exactly the same: an improvement/regression in a real cluster is visible as a similar percentage drop/increase in metrics in Kubemark.
+
+*[Graphs: API call latencies, Kubemark vs. a real cluster]*
+**Conclusion**
+
+We continue to improve the performance and scalability of Kubernetes. In this blog post we:
+
+- showed that the 1.3 release scales to 2000 nodes while meeting our responsiveness SLOs,
+- explained the major change we made to improve scalability from the 1.2 release, and
+- described Kubemark, our emulation framework that allows us to quickly evaluate the performance impact of code changes, both when experimenting with performance improvement ideas and to detect regressions as part of our continuous testing infrastructure.
+
+Please join our community and help us build the future of Kubernetes! If you’re particularly interested in scalability, participate by:
+
+
+- chatting with us on our [Slack channel](https://kubernetes.slack.com/messages/sig-scale/)
+- joining the scalability [Special Interest Group](https://github.com/kubernetes/community/blob/master/README.md#special-interest-groups-sig), which meets every Thursday at 9 AM Pacific Time on this [SIG-Scale Hangout](https://plus.google.com/hangouts/_/google.com/k8scale-hangout)
+
+For more information about the Kubernetes project, visit [kubernetes.io](http://kubernetes.io/) and follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio).
+
+
+
+_-- Wojciech Tyczynski, Software Engineer, Google_
diff --git a/blog/_posts/2016-07-00-happy-k8sbday-1.md b/blog/_posts/2016-07-00-happy-k8sbday-1.md
new file mode 100644
index 00000000000..92336c1cf38
--- /dev/null
+++ b/blog/_posts/2016-07-00-happy-k8sbday-1.md
@@ -0,0 +1,29 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " A Very Happy Birthday Kubernetes "
+date: Friday, July 21, 2016
+pagination:
+ enabled: true
+---
+Last year at OSCON, I got to reconnect with a bunch of friends and see what they have been working on. That turned out to be the [Kubernetes 1.0 launch event](https://www.youtube.com/playlist?list=PL69nYSiGNLP0Ljwa9J98xUd6UlM604Y-l). Even that day, it was clear the project was supported by a broad community -- a group that showed an ambitious vision for distributed computing.
+
+Today, on the first anniversary of the Kubernetes 1.0 launch, it’s amazing to see what a community of dedicated individuals can do. Kubernauts have collectively put in [237 person years of coding effort](https://www.openhub.net/p/kubernetes) since launch to bring forward our most recent [release 1.3](http://blog.kubernetes.io/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads.html). However the community is much more than simply coding effort. It is made up of people -- individuals that have given their expertise and energy to make this project flourish. With more than 830 diverse contributors, from independents to the largest companies in the world, it’s their work that makes Kubernetes stand out. Here are stories from a couple early contributors reflecting back on the project:
+
+
+- [Sam Ghods](https://www.box.com/blog/kubernetes-box-microservices-maximum-velocity/), services architect and co-founder at Box
+- [Justin Santa Barbara](http://blog.kubernetes.io/2016/07/oh-the-places-you-will-go.html), independent Kubernetes contributor
+- [Clayton Coleman](http://blog.kubernetes.io/2016/07/the-bet-on-kubernetes.html), contributor and architect on Kubernetes on OpenShift at Red Hat
+
+The community is also more than online GitHub and Slack conversation; year one saw the launch of KubeCon, the Kubernetes user conference, which started as a grassroots effort that brought together 1,000 individuals between two events in San Francisco and London. The advocacy continues with users globally. There are more than [130 Meetup groups](http://www.meetup.com/topics/kubernetes/) that mention Kubernetes, many of which are helping celebrate Kubernetes’ birthday. To join the celebration, participate at one of the 20 [**#k8sbday**](https://twitter.com/search?q=k8sbday&src=typd) parties worldwide: [Austin](http://www.meetup.com/Microservices-and-Containers-Austin/), [Bangalore](http://www.meetup.com/Bangalore-Kubernetes-Meetup/), [Beijing](http://www.meetup.com/Kubernetes-Meetup-Beijing/events/232537953/), [Boston](http://www.meetup.com/Boston-OpenShift-Meetup/), [Cape Town](http://www.meetup.com/Cape-Town-DevOps), [Charlotte](http://www.meetup.com/ccog-meetup/events/231626855/), [Cologne](http://www.meetup.com/de-DE/Kubernetes-Meetup-Cologne/), [Geneva](http://www.meetup.com/Kubernetes-Geneva/), [Karlsruhe](http://www.meetup.com/inovex-karlsruhe/events/232561446/), [Kisumu](http://www.meetup.com/Docker-Kisumu/events/232595339/), [Montreal](http://www.meetup.com/Kubernetes-Montreal/events/232726956/), [Portland](http://www.meetup.com/Cloud-Native-PDX), [Raleigh](http://www.meetup.com/Raleigh-Openshift-Meetup/), [Research Triangle](http://www.meetup.com/Triangle-Kubernetes-Meetup/), [San Francisco](https://www.eventbrite.com/e/kubernetes-birthday-bash-tickets-26250411688), [Seattle](http://www.meetup.com/Seattle-Kubernetes-Meetup/), [Singapore](http://www.meetup.com/GCPUGSG/events/232659329/), [SF Bay Area](http://www.meetup.com/Bay-Area-Kubernetes-Meetup/events/232623207/), or [Washington DC](http://www.meetup.com/DC-Kubernetes-Meetup/).
+
+The Kubernetes community continues to work to make our project more welcoming and open to our _kollaborators_. This spring, Kubernetes and KubeCon moved to the Cloud Native Computing Foundation ([CNCF](https://cncf.io/)), a Linux Foundation Project, to accelerate the collaborative vision outlined only a year ago at OSCON. Here’s lifting a glass to another great year.
+
+
+
+[Kubernetes commit infographic](https://1.bp.blogspot.com/-Wn9QJb6wQ7w/V5Cm1Y2iKhI/AAAAAAAAAnc/SZ3yFFcxjmoqAmz9chp8o2KJJUoKI0KQwCLcB/s1600/k8s%2BCommit%2BInfographic.png)
+
+
+
+
+_-- Sarah Novotny, Kubernetes Community Wonk_
diff --git a/blog/_posts/2016-07-00-openstack-kubernetes-communities.md b/blog/_posts/2016-07-00-openstack-kubernetes-communities.md
new file mode 100644
index 00000000000..d2cfd3a59b7
--- /dev/null
+++ b/blog/_posts/2016-07-00-openstack-kubernetes-communities.md
@@ -0,0 +1,34 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Why OpenStack's embrace of Kubernetes is great for both communities "
+date: Wednesday, July 26, 2016
+pagination:
+ enabled: true
+---
+Today, [Mirantis](https://www.mirantis.com/), the leading contributor to [OpenStack](http://stackalytics.com/?release=mitaka), [announced](https://techcrunch.com/2016/07/25/openstack-will-soon-be-able-to-run-on-top-of-kubernetes/) that it will re-write its private cloud platform to use Kubernetes as its underlying orchestration engine. We think this is a great step forward for both the OpenStack and Kubernetes communities. With Kubernetes under the hood, OpenStack users will benefit from the tremendous efficiency, manageability and resiliency that Kubernetes brings to the table, while positioning their applications to use more cloud-native patterns. The Kubernetes community, meanwhile, can feel confident in their choice of orchestration framework, while gaining the ability to manage both container- and VM-based applications from a single platform.
+
+**The Path to Cloud Native**
+
+Google spent over ten years developing, applying and refining the principles of cloud native computing. A cloud-native application is:
+
+- Container-packaged. Applications are composed of hermetically sealed, reusable units across diverse environments;
+- Dynamically scheduled, for increased infrastructure efficiency and decreased operational overhead; and
+- Microservices-based. Loosely coupled components significantly increase the overall agility, resilience and maintainability of applications.
+
+These principles have enabled us to build the largest, most efficient, most powerful cloud infrastructure in the world, which anyone can access via [Google Cloud Platform](http://cloud.google.com/). They are the same principles responsible for the recent surge in popularity of Linux containers. Two years ago, we open-sourced Kubernetes to spur adoption of containers and scalable, microservices-based applications, and the recently released [Kubernetes version 1.3](http://blog.kubernetes.io/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads.html) introduces a number of features to bridge enterprise and cloud native workloads. We expect that adoption of cloud-native principles will drive the same benefits within the OpenStack community, as well as smoothing the path between OpenStack and the public cloud providers that embrace them.
+
+**Making OpenStack better**
+
+We hear from enterprise customers that they want to move towards cloud-native infrastructure and application patterns. Thus, it is hardly surprising that OpenStack would also move in this direction [1], with large OpenStack users such as [eBay](http://fortune.com/2016/04/23/ebay-parlays-new-age-tools/) and [GoDaddy](http://thenewstack.io/tns-analysts-show-95-consider-containerizing-openstack/) adopting Kubernetes as a key component of their stack. Kubernetes and cloud-native patterns will improve OpenStack lifecycle management by enabling rolling updates, versioning, and canary deployments of new components and features. In addition, OpenStack users will benefit from self-healing infrastructure, making OpenStack easier to manage and more resilient to the failure of core services and individual compute nodes. Finally, OpenStack users will realize the developer and resource efficiencies that come with a container-based infrastructure.
+
+**OpenStack is a great tool for Kubernetes users**
+
+Conversely, incorporating Kubernetes into OpenStack will give Kubernetes users access to a robust framework for deploying and managing applications built on virtual machines. As users move to the cloud-native model, they will be faced with the challenge of managing hybrid application architectures that contain some mix of virtual machines and Linux containers. The combination of Kubernetes and OpenStack means that they can do so on the same platform using a common set of tools.
+
+We are excited by the ever-increasing momentum of the cloud-native movement as embodied by Kubernetes and related projects, and look forward to working with Mirantis, its partner Intel, and others within the OpenStack community to bring the benefits of cloud-native to their applications and infrastructure.
+
+
+_--Martin Buhr, Product Manager, Strategic Initiatives, Google_
+
+[1] Check out the announcement of Kubernetes-OpenStack Special Interest Group [here](http://blog.kubernetes.io/2016/04/introducing-kubernetes-openstack-sig.html), and a great talk about OpenStack on Kubernetes by CoreOS CEO Alex Polvi at the most recent OpenStack summit [here](https://www.youtube.com/watch?v=e-j9FOO-i84).
diff --git a/blog/_posts/2016-07-00-stateful-applications-in-containers-kubernetes.md b/blog/_posts/2016-07-00-stateful-applications-in-containers-kubernetes.md
new file mode 100644
index 00000000000..91c51041f81
--- /dev/null
+++ b/blog/_posts/2016-07-00-stateful-applications-in-containers-kubernetes.md
@@ -0,0 +1,44 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Stateful Applications in Containers!? Kubernetes 1.3 Says “Yes!” "
+date: Thursday, July 13, 2016
+pagination:
+ enabled: true
+---
+
+_Editor's note: today’s guest post is from Mark Balch, VP of Products at Diamanti, who’ll share more about the contributions they’ve made to Kubernetes._
+
+Congratulations to the Kubernetes community on another [value-packed release](http://blog.kubernetes.io/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads.html). A focus on stateful applications and federated clusters are two reasons why I’m so excited about 1.3. Kubernetes support for stateful apps such as Cassandra, Kafka, and MongoDB is critical. Important services rely on databases, key value stores, message queues, and more. Additionally, relying on one data center or container cluster simply won’t work as apps grow to serve millions of users around the world. Cluster federation allows users to deploy apps across multiple clusters and data centers for scale and resiliency.
+
+You may have [heard me say before](https://www.diamanti.com/blog/the-next-great-application-platform/) that containers are the next great application platform. Diamanti is accelerating container adoption for stateful apps in production - where performance and ease of deployment really matter.
+
+**Apps Need More Than Cattle**
+
+Beyond stateless containers like web servers (so-called “cattle” because they are interchangeable), users are increasingly deploying stateful workloads with containers to benefit from “build once, run anywhere” and to improve bare metal efficiency/utilization. These “pets” (so-called because each requires special handling) bring new requirements including longer life cycle, configuration dependencies, stateful failover, and performance sensitivity. Container orchestration must address these needs to successfully deploy and scale apps.
+
+Enter [Pet Set](http://kubernetes.io/docs/user-guide/petset/), a new object in Kubernetes 1.3 for improved stateful application support. Pet Set sequences through the startup phase of each database replica (for example), ensuring orderly master/slave configuration. Pet Set also simplifies service discovery by leveraging ubiquitous DNS SRV records, a well-recognized and long-understood mechanism.
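+
+As a rough, hypothetical sketch of what such an object can look like (using the 1.3-era apps/v1alpha1 API group; the name, image and sizes below are illustrative, see the Pet Set docs linked above for complete examples):
+
+```
+apiVersion: apps/v1alpha1
+kind: PetSet
+metadata:
+  name: db
+spec:
+  serviceName: "db"   # headless service giving each pet a stable DNS identity
+  replicas: 3         # pets are created in order: db-0, db-1, db-2
+  template:
+    metadata:
+      labels:
+        app: db
+    spec:
+      containers:
+      - name: db
+        image: example/db:3.0   # illustrative image
+        ports:
+        - containerPort: 5432
+```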
+
+Diamanti’s [FlexVolume contribution](https://github.com/kubernetes/kubernetes/pull/13840) to Kubernetes enables stateful workloads by providing persistent volumes with low-latency storage and guaranteed performance, including enforced quality-of-service from container to media.
+
+**A Federalist**
+
+Users who are planning for application availability must contend with issues of failover and scale across geography. Cross-cluster federated services allow containerized apps to be deployed easily across multiple clusters, and tackle challenges such as managing multiple container clusters and coordinating service deployment and discovery across federated clusters.
+
+Like a strictly centralized model, federation provides a common app deployment interface. With each cluster retaining autonomy, however, federation adds flexibility to manage clusters locally during network outages and other events. Cross-cluster federated services also applies consistent service naming and adoption across container clusters, simplifying DNS resolution.
+
+It’s easy to imagine powerful multi-cluster use cases with cross-cluster federated services in future releases. An example is scheduling containers based on governance, security, and performance requirements. Diamanti’s scheduler extension was developed with this concept in mind. Our [first implementation](https://github.com/kubernetes/kubernetes/pull/13580) makes the Kubernetes scheduler aware of network and storage resources local to each cluster node. Similar concepts can be applied in the future to broader placement controls with cross-cluster federated services.
+
+**Get Involved**
+
+With interest growing in stateful apps, work has already started to further enhance Kubernetes storage. The Storage Special Interest Group is discussing proposals to support local storage resources. Diamanti is looking forward to extending FlexVolume to include richer APIs that enable local storage and storage services including data protection, replication, and reduction. We’re also working on proposals for improved app placement, migration, and failover across container clusters through Kubernetes cross-cluster federated services.
+
+Join the conversation and contribute! Here are some places to get started:
+
+
+- Product Management [group](https://groups.google.com/forum/#!forum/kubernetes-pm)
+- Kubernetes [Storage SIG](https://groups.google.com/forum/#!forum/kubernetes-sig-storage)
+- Kubernetes [Cluster Federation SIG](https://groups.google.com/forum/#!forum/kubernetes-sig-federation)
+
+
+_-- Mark Balch, VP Products, [Diamanti](https://diamanti.com/). Twitter [@markbalch](https://twitter.com/markbalch)_
diff --git a/blog/_posts/2016-08-00-Challenges-Remotely-Managed-Onpremise-Kubernetes-Cluster.md b/blog/_posts/2016-08-00-Challenges-Remotely-Managed-Onpremise-Kubernetes-Cluster.md
new file mode 100644
index 00000000000..9c3f65a2453
--- /dev/null
+++ b/blog/_posts/2016-08-00-Challenges-Remotely-Managed-Onpremise-Kubernetes-Cluster.md
@@ -0,0 +1,61 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Challenges of a Remotely Managed, On-Premises, Bare-Metal Kubernetes Cluster "
+date: Wednesday, August 02, 2016
+pagination:
+ enabled: true
+---
+
+_Today's post is written by Bich Le, chief architect at Platform9, describing how their engineering team overcame challenges in remotely managing bare-metal Kubernetes clusters._
+
+**Introduction**
+
+The recently announced [Platform9 Managed Kubernetes](https://platform9.com/press/platform9-makes-easy-deploy-docker-containers-production-scale/) (PMK) is an on-premises enterprise Kubernetes solution with an unusual twist: while clusters run on a user’s internal hardware, their provisioning, monitoring, troubleshooting and overall life cycle are managed remotely from the Platform9 SaaS application. While users love the intuitive experience and ease of use of this deployment model, this approach poses interesting technical challenges. In this article, we will first describe the motivation and deployment architecture of PMK, and then present an overview of the technical challenges we faced and how our engineering team addressed them.
+
+**Multi-OS bootstrap model**
+
+Like its predecessor, [Managed OpenStack](https://platform9.com/products/kvm/), PMK aims to make it as easy as possible for an enterprise customer to deploy and operate a “private cloud”, which, in the current context, means one or more Kubernetes clusters. To accommodate customers who standardize on a specific Linux distro, our installation process uses a “bare OS” or “bring your own OS” model, which means that an administrator deploys PMK to existing Linux nodes by installing a simple RPM or Deb package on their favorite OS (Ubuntu-14, CentOS-7, or RHEL-7). The package, which the administrator downloads from their Platform9 SaaS portal, starts an agent which is preconfigured with all the information and credentials needed to securely connect to and register itself with the customer’s Platform9 SaaS controller running on the WAN.
+
+**Node management**
+
+The first challenge was configuring Kubernetes nodes in the absence of a bare-metal cloud API and SSH access into nodes. We solved it using the _node pool_ concept and configuration management techniques. Every node running the agent automatically shows up in the SaaS portal, which allows the user to _authorize_ the node for use with Kubernetes. A newly authorized node automatically enters a _node pool_, indicating that it is available but not used in any clusters. Independently, the administrator can create one or more Kubernetes clusters, which start out empty. At any later time, he or she can request one or more nodes to be attached to any cluster. PMK fulfills the request by transferring the specified number of nodes from the pool to the cluster. When a node is authorized, its agent becomes a configuration management agent, polling for instructions from a CM server running in the SaaS application and capable of downloading and configuring software.
+
+Cluster creation and node attach/detach operations are exposed to administrators via a REST API, a CLI utility named _qb_, and the SaaS-based Web UI. The following screenshot shows the Web UI displaying one 3-node cluster named clus100, one empty cluster clus101, and the three nodes.
+
+
+
+*[Screenshot: Web UI showing the 3-node cluster clus100, the empty cluster clus101, and the three nodes]*
+
+
+**Cluster initialization**
+
+The first time one or more nodes are attached to a cluster, PMK configures the nodes to form a complete Kubernetes cluster. Currently, PMK automatically decides the number and placement of Master and Worker nodes. In the future, PMK will give administrators an “advanced mode” option allowing them to override and customize those decisions. Through the CM server, PMK then sends to each node a configuration and a set of scripts to initialize each node according to the configuration. This includes installing or upgrading Docker to the required version; starting two Docker daemons (bootstrap and main); creating the etcd K/V store; establishing the flannel network layer; starting kubelet; and running the Kubernetes components appropriate for the node’s role (master vs. worker). The following diagram shows the component layout of a fully formed cluster.
+
+
+
+*[Diagram: component layout of a fully formed cluster]*
+
+
+**Containerized kubelet?**
+
+Another hurdle we encountered resulted from our original decision to run kubelet inside a container, as recommended by the [Multi-node Docker Deployment Guide](http://kubernetes.io/docs/getting-started-guides/docker-multinode/). We discovered that this approach introduces complexities that led to many difficult-to-troubleshoot bugs that were sensitive to the combined versions of Kubernetes, Docker, and the node OS. One example: kubelet needs to mount directories containing secrets into containers to support the [Service Accounts](http://kubernetes.io/docs/user-guide/service-accounts/) mechanism. It turns out that [doing this from inside of a container is tricky](https://github.com/kubernetes/kubernetes/issues/6848), and requires a [complex sequence of steps](https://github.com/kubernetes/kubernetes/blob/release-1.0/pkg/util/mount/nsenter_mount.go#L37) that turned out to be fragile. After fixing a continuing stream of issues, we finally decided to run kubelet as a native program on the host OS, resulting in significantly better stability.
+
+**Overcoming networking hurdles**
+
+The Beta release of PMK currently uses [flannel with UDP back-end](https://github.com/coreos/flannel) for the network layer. In a Kubernetes cluster, many infrastructure services need to communicate across nodes using a variety of ports (443, 4001, etc.) and protocols (TCP and UDP). Often, customer nodes intentionally or unintentionally block some or all of the traffic, or run existing services that conflict with the required ports, resulting in non-obvious failures. To address this, we try to detect configuration problems early and inform the administrator immediately. PMK runs a “preflight” check on all nodes participating in a cluster before installing the Kubernetes software. This means running small test programs on each node to verify that (1) the required ports are available for binding and listening; and (2) nodes can connect to each other using all required ports and protocols. These checks run in parallel and take less than a couple of seconds before cluster initialization.
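+
+To give a feel for what such a check involves, the same idea can be approximated with standard tooling (illustrative only, BSD netcat syntax; PMK uses its own small test programs, and `<peer-node-ip>` is a placeholder):
+
+```
+# can this node bind and listen on a required port?
+nc -l 4001 &
+
+# from another node: is that port reachable over TCP?
+nc -z -w 2 <peer-node-ip> 4001 && echo "tcp/4001 reachable"
+
+# UDP reachability (e.g. for flannel's UDP backend on port 8285)
+nc -z -u -w 2 <peer-node-ip> 8285 && echo "udp/8285 reachable"
+```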
+
+**Monitoring**
+
+One of the values of a SaaS-managed private cloud is constant monitoring and early detection of problems by the SaaS team. Issues that can be addressed without intervention by the customer are handled automatically, while others trigger proactive communication with the customer via UI alerts, email, or real-time channels. Kubernetes monitoring is a huge topic worthy of its own blog post, so we’ll just briefly touch upon it. We broadly classify the problem into layers: (1) hardware & OS, (2) Kubernetes core (e.g. API server, controllers and kubelets), (3) add-ons (e.g. [SkyDNS](https://github.com/skynetservices/skydns) & [ServiceLoadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer)) and (4) applications. We are currently focused on layers 1-3. A major source of issues we’ve seen is add-on failures. If either DNS or the ServiceLoadbalancer reverse http proxy (soon to be upgraded to an [Ingress Controller](https://github.com/kubernetes/contrib/tree/master/ingress/controllers)) fails, application services will start failing. One way we detect such failures is by monitoring the components using the Kubernetes API itself, which is proxied into the SaaS controller, allowing the PMK support team to monitor any cluster resource. To detect service failure, one metric we pay attention to is _pod restarts_. A high restart count indicates that a service is continually failing.
+
+**Future topics**
+
+We faced complex challenges in other areas that deserve their own posts: (1) _Authentication and authorization with [Keystone](http://docs.openstack.org/developer/keystone/)_, the identity manager used by Platform9 products; (2) _Software upgrades_, i.e. how to make them brief and non-disruptive to applications; and (3) _Integration with customer’s external load-balancers_ (in the absence of good automation APIs).
+
+**Conclusion**
+
+[Platform9 Managed Kubernetes](https://platform9.com/products/docker/) uses a SaaS-managed model to try to hide the complexity of deploying, operating and maintaining bare-metal Kubernetes clusters in customers’ data centers. These requirements led to the development of a unique cluster deployment and management architecture, which in turn led to unique technical challenges. This article presented an overview of some of those challenges and how we solved them. For more information on the motivation behind PMK, feel free to view Madhura Maskasky's [blog post](https://platform9.com/blog/containers-as-a-service-kubernetes-docker/).
+
+
+_--Bich Le, Chief Architect, Platform9_
diff --git a/blog/_posts/2016-08-00-Create-Couchbase-Cluster-Using-Kubernetes.md b/blog/_posts/2016-08-00-Create-Couchbase-Cluster-Using-Kubernetes.md
new file mode 100644
index 00000000000..fea6a34dfd2
--- /dev/null
+++ b/blog/_posts/2016-08-00-Create-Couchbase-Cluster-Using-Kubernetes.md
@@ -0,0 +1,427 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Create a Couchbase cluster using Kubernetes "
+date: Tuesday, August 15, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: today’s guest post is by Arun Gupta, Vice President Developer Relations at Couchbase, showing how to setup a Couchbase cluster with Kubernetes._
+
+[Couchbase Server](http://www.couchbase.com/nosql-databases/couchbase-server) is an open source, distributed NoSQL document-oriented database. It exposes a fast key-value store with managed cache for submillisecond data operations, purpose-built indexers for fast queries and a query engine for executing SQL queries. For mobile and Internet of Things (IoT) environments, [Couchbase Lite](http://developer.couchbase.com/mobile) runs native on-device and manages sync to Couchbase Server.
+
+Couchbase Server 4.5 was [recently announced](http://blog.couchbase.com/2016/june/announcing-couchbase-server-4.5), bringing [many new features](http://developer.couchbase.com/documentation/server/4.5/introduction/whats-new.html), including [production certified support for Docker](http://www.couchbase.com/press-releases/couchbase-announces-support-for-docker-containers). Couchbase is supported on a wide variety of orchestration frameworks for Docker containers, such as Kubernetes, Docker Swarm and Mesos; for full details, visit [this page](http://couchbase.com/containers).
+
+This blog post will explain how to create a Couchbase cluster using Kubernetes. This setup is tested using Kubernetes 1.3.3, Amazon Web Services, and Couchbase 4.5 Enterprise Edition.
+
+Like all good things, this post is standing on the shoulders of giants. The design pattern used in this blog was defined in a [Friday afternoon hack](https://twitter.com/arungupta/status/703378246432231424) with [@saturnism](https://twitter.com/saturnism). A working version of the configuration files was [contributed](https://twitter.com/arungupta/status/759059647680552962) by [@r\_schmiddy](http://twitter.com/r_schmiddy).
+
+
+
+**Couchbase Cluster**
+
+
+
+A cluster of Couchbase Servers is typically deployed on commodity servers. Couchbase Server has a peer-to-peer topology where all the nodes are equal and communicate with each other on demand. There is no concept of master nodes, slave nodes, config nodes, name nodes, head nodes, etc., and all the software loaded on each node is identical. This allows nodes to be added or removed without considering their “type”, a model that works particularly well with cloud infrastructure in general. For Kubernetes, this means that we can use the exact same container image for all Couchbase nodes.
+
+
+
+A typical Couchbase cluster creation process looks like:
+
+- Start Couchbase: Start n Couchbase servers
+- Create cluster: Pick any server, and add all other servers to it to create the cluster
+- Rebalance cluster: Rebalance the cluster so that data is distributed across the cluster
+
+
+
+In order to automate this using Kubernetes, the cluster creation is split into “master” and “worker” Replication Controllers (RCs).
+
+
+
+The master RC has only one replica and is also published as a Service. This provides a single reference point to start the cluster creation. By default, services are visible only from inside the cluster, so this service is additionally exposed as a load balancer, which allows the [Couchbase Web Console](http://developer.couchbase.com/documentation/server/current/admin/ui-intro.html) to be accessible from outside the cluster.
+
+
+
+The worker RC uses the exact same image as the master RC. This keeps the cluster homogeneous, which makes it easy to scale.
+
+
+
+*[Diagram: Couchbase master and worker Replication Controllers on Kubernetes]*
+
+
+
+
+Configuration files used in this blog are available [here](http://github.com/arun-gupta/couchbase-kubernetes/tree/master/cluster). Let’s create the Kubernetes resources to create the Couchbase cluster.
+
+
+
+
+
+**Create Couchbase “master” Replication Controller**
+
+
+
+Couchbase master RC can be created using the following configuration file:
+
+```
+apiVersion: v1
+kind: ReplicationController
+metadata:
+ name: couchbase-master-rc
+spec:
+ replicas: 1
+ selector:
+ app: couchbase-master-pod
+ template:
+ metadata:
+ labels:
+ app: couchbase-master-pod
+ spec:
+ containers:
+ - name: couchbase-master
+ image: arungupta/couchbase:k8s
+ env:
+ - name: TYPE
+ value: MASTER
+ ports:
+ - containerPort: 8091
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: couchbase-master-service
+ labels:
+ app: couchbase-master-service
+spec:
+ ports:
+ - port: 8091
+ selector:
+ app: couchbase-master-pod
+ type: LoadBalancer
+ ```
+
+
+
+This configuration file creates a couchbase-master-rc Replication Controller. This RC has one replica of the pod created using the arungupta/couchbase:k8s image. This image is created using the Dockerfile [here](http://github.com/arun-gupta/couchbase-kubernetes/blob/master/cluster/Dockerfile). This Dockerfile uses a [configuration script](https://github.com/arun-gupta/couchbase-kubernetes/blob/master/cluster/configure-node.sh) to configure the base Couchbase Docker image. First, it uses the [Couchbase REST API](http://developer.couchbase.com/documentation/server/current/rest-api/rest-endpoints-all.html) to set up the memory quota; set up the index, data and query services; set security credentials; and load a sample data bucket. Then, it invokes the appropriate [Couchbase CLI](http://developer.couchbase.com/documentation/server/current/cli/cbcli-intro.html) commands to add the Couchbase node to the cluster, or add the node and rebalance the cluster. This is based upon three environment variables:
+
+- TYPE: Defines whether the joining pod is worker or master
+- AUTO\_REBALANCE: Defines whether the cluster needs to be rebalanced
+- COUCHBASE\_MASTER: Name of the master service
+
+
+For this first configuration file, the TYPE environment variable is set to MASTER and so no additional configuration is done on the Couchbase image.
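+
+For the worker path described above, the join-and-rebalance logic amounts to a couple of couchbase-cli calls. A simplified, illustrative sketch (credentials as baked into the image; see the linked configure-node.sh for the real script):
+
+```
+#!/bin/bash
+# simplified sketch of the worker-side cluster join
+if [ "$TYPE" = "WORKER" ]; then
+  couchbase-cli server-add \
+    --cluster=$COUCHBASE_MASTER:8091 \
+    --user=Administrator --password=password \
+    --server-add=$(hostname -i):8091 \
+    --server-add-username=Administrator --server-add-password=password
+
+  if [ "$AUTO_REBALANCE" = "true" ]; then
+    couchbase-cli rebalance \
+      --cluster=$COUCHBASE_MASTER:8091 \
+      --user=Administrator --password=password
+  fi
+fi
+```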
+
+
+
+Let’s create and verify the artifacts.
+
+
+
+Create Couchbase master RC:
+
+
+
+```
+kubectl create -f cluster-master.yml
+replicationcontroller "couchbase-master-rc" created
+service "couchbase-master-service" created
+ ```
+
+
+
+List all the services:
+
+```
+kubectl get svc
+NAME                       CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
+couchbase-master-service   10.0.57.201                 8091/TCP   30s
+kubernetes                 10.0.0.1      <none>        443/TCP    5h
+ ```
+
+
+
+Output shows that couchbase-master-service is created.
+
+
+
+Get all the pods:
+
+```
+kubectl get po
+NAME READY STATUS RESTARTS AGE
+couchbase-master-rc-97mu5 1/1 Running 0 1m
+ ```
+
+
+
+A pod is created using the Docker image specified in the configuration file.
+
+Check the RC:
+
+
+
+```
+kubectl get rc
+NAME DESIRED CURRENT AGE
+couchbase-master-rc 1 1 1m
+ ```
+
+
+
+It shows that the desired and current number of pods in the RC match.
+
+
+
+Describe the service:
+
+
+
+```
+kubectl describe svc couchbase-master-service
+Name:                   couchbase-master-service
+Namespace:              default
+Labels:                 app=couchbase-master-service
+Selector:               app=couchbase-master-pod
+Type:                   LoadBalancer
+IP:                     10.0.57.201
+LoadBalancer Ingress:   a94f1f286590c11e68e100283628cd6c-1110696566.us-west-2.elb.amazonaws.com
+Port:                   <unset> 8091/TCP
+NodePort:               <unset> 30019/TCP
+Endpoints:              10.244.2.3:8091
+Session Affinity:       None
+Events:
+  FirstSeen LastSeen Count From                  SubobjectPath Type    Reason                Message
+  --------- -------- ----- ----                  ------------- ------  ------                -------
+  2m        2m       1     {service-controller }               Normal  CreatingLoadBalancer  Creating load balancer
+  2m        2m       1     {service-controller }               Normal  CreatedLoadBalancer   Created load balancer
+ ```
+
+
+
+Among other details, the address shown next to LoadBalancer Ingress is relevant for us. This address is used to access the Couchbase Web Console.
+
+
+
+Wait for ~3 mins for the load balancer to be ready to receive requests. The Couchbase Web Console is then accessible at `http://<loadbalancer-ingress>:8091` and looks like:
+
+
+
+*[Screenshot: Couchbase Web Console login screen]*
+
+
+The image used in the configuration file is configured with the username Administrator and the password password. Enter the credentials to see the console:
+
+
+
+*[Screenshot: Couchbase Web Console cluster overview]*
+
+
+
+Click on Server Nodes to see how many Couchbase nodes are part of the cluster. As expected, it shows only one node:
+
+
+*[Screenshot: Server Nodes tab showing a single node]*
+
+Click on Data Buckets to see a sample bucket that was created as part of the image:
+
+*[Screenshot: Data Buckets tab showing the travel-sample bucket]*
+
+
+This shows the travel-sample bucket is created and has 31,591 JSON documents.
+
+**Create Couchbase “worker” Replication Controller**
+
+Now, let’s create a worker Replication Controller. It can be created using the following configuration file:
+
+```
+apiVersion: v1
+kind: ReplicationController
+metadata:
+ name: couchbase-worker-rc
+spec:
+ replicas: 1
+ selector:
+ app: couchbase-worker-pod
+ template:
+ metadata:
+ labels:
+ app: couchbase-worker-pod
+ spec:
+ containers:
+ - name: couchbase-worker
+ image: arungupta/couchbase:k8s
+ env:
+ - name: TYPE
+ value: "WORKER"
+      - name: COUCHBASE_MASTER
+ value: "couchbase-master-service"
+      - name: AUTO_REBALANCE
+ value: "false"
+ ports:
+ - containerPort: 8091
+ ```
+
+
+
+This RC also creates a single replica of Couchbase using the same arungupta/couchbase:k8s image. The key differences here are:
+
+- The TYPE environment variable is set to WORKER, so the node joins the cluster as a worker.
+- The COUCHBASE\_MASTER environment variable is passed the value couchbase-master-service. This uses the service discovery mechanism built into Kubernetes so that the worker and master pods can communicate.
+- The AUTO\_REBALANCE environment variable is set to false. This ensures that the node is only added to the cluster but the cluster itself is not rebalanced. Rebalancing is required to re-distribute data across multiple nodes of the cluster. This is the recommended way, as multiple nodes can be added first and the cluster can then be manually rebalanced using the Web Console.
+
+Let’s create a worker:
+
+
+
+```
+kubectl create -f cluster-worker.yml
+replicationcontroller "couchbase-worker-rc" created
+ ```
+
+
+
+
+
+
+
+Check the RC:
+
+
+
+```
+kubectl get rc
+NAME DESIRED CURRENT AGE
+couchbase-master-rc 1 1 6m
+couchbase-worker-rc 1 1 22s
+ ```
+
+
+
+
+
+A new couchbase-worker-rc is created where the desired and current number of instances match.
+
+
+
+Get all pods:
+
+```
+kubectl get po
+NAME READY STATUS RESTARTS AGE
+couchbase-master-rc-97mu5 1/1 Running 0 6m
+couchbase-worker-rc-4ik02 1/1 Running 0 46s
+ ```
+
+
+
+An additional pod is now created. Each pod’s name is prefixed with the corresponding RC’s name. For example, a worker pod is prefixed with couchbase-worker-rc.
+
+
+
+The Couchbase Web Console gets updated to show that a new Couchbase node is added. This is evident from the red circle with the number 1 on the Pending Rebalance tab.
+
+
+
+*[Screenshot: Couchbase Web Console with one server pending rebalance]*
+
+
+
+Clicking on the tab shows the IP address of the node that needs to be rebalanced:
+
+*[Screenshot: Pending Rebalance tab showing the IP address of the new node]*
+
+
+
+
+
+**Scale Couchbase cluster**
+
+
+Now, let’s scale the Couchbase cluster by scaling the replicas for worker RC:
+
+
+```
+kubectl scale rc couchbase-worker-rc --replicas=3
+replicationcontroller "couchbase-worker-rc" scaled
+ ```
+
+
+
+
+
+The updated state of the RC shows that 3 worker pods have been created:
+
+
+
+```
+kubectl get rc
+NAME DESIRED CURRENT AGE
+couchbase-master-rc 1 1 8m
+couchbase-worker-rc 3 3 2m
+ ```
+
+
+
+
+
+This can be verified again by getting the list of pods:
+
+```
+kubectl get po
+NAME READY STATUS RESTARTS AGE
+couchbase-master-rc-97mu5 1/1 Running 0 8m
+couchbase-worker-rc-4ik02 1/1 Running 0 2m
+couchbase-worker-rc-jfykx 1/1 Running 0 53s
+couchbase-worker-rc-v8vdw 1/1 Running 0 53s
+ ```
+
+
+
+The Pending Rebalance tab of the Couchbase Web Console shows that 3 servers have now been added to the cluster and need to be rebalanced.
+
+
+
+*[Screenshot: Pending Rebalance tab with three servers awaiting rebalance]*
+
+**Rebalance Couchbase Cluster**
+
+Finally, click on the Rebalance button to rebalance the cluster. A message window showing the current state of the rebalance is displayed:
+
+
+
+*[Screenshot: rebalance progress]*
+
+
+
+Once all the nodes are rebalanced, the Couchbase cluster is ready to serve your requests:
+
+
+
+*[Screenshot: fully rebalanced Couchbase cluster]*
+
+
+
+In addition to creating a cluster, Couchbase Server supports a range of [high availability and disaster recovery](http://developer.couchbase.com/documentation/server/current/ha-dr/ha-dr-intro.html) (HA/DR) strategies. Most HA/DR strategies rely on a multi-pronged approach of maximizing availability, increasing redundancy within and across data centers, and performing regular backups.
+
+Now that your Couchbase cluster is ready, you can run your first [sample application](http://developer.couchbase.com/documentation/server/current/travel-app/index.html).
+
+For further information check out the Couchbase [Developer Portal](http://developer.couchbase.com/server) and [Forums](https://forums.couchbase.com/), or see questions on [Stack Overflow](http://stackoverflow.com/questions/tagged/couchbase).
+
+
+
+
+
+
+_--Arun Gupta, Vice President Developer Relations at Couchbase_
+
+
+
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow [@Kubernetesio](https://twitter.com/kubernetesio) on Twitter for latest updates
diff --git a/blog/_posts/2016-08-00-Kubernetes-Namespaces-Use-Cases-Insights.md b/blog/_posts/2016-08-00-Kubernetes-Namespaces-Use-Cases-Insights.md
new file mode 100644
index 00000000000..1765fe16c6c
--- /dev/null
+++ b/blog/_posts/2016-08-00-Kubernetes-Namespaces-Use-Cases-Insights.md
@@ -0,0 +1,148 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes Namespaces: use cases and insights "
+date: Wednesday, August 16, 2016
+pagination:
+ enabled: true
+---
+
+_“Who's on first, What's on second, I Don't Know's on third”_
+
+_[Who's on First?](https://www.youtube.com/watch?v=kTcRRaXV-fg) by Abbott and Costello_
+
+
+
+**Introduction**
+
+Kubernetes is a system with several concepts. Many of these concepts get manifested as “objects” in the RESTful API (often called “resources” or “kinds”). One of these concepts is [Namespaces](http://kubernetes.io/docs/user-guide/namespaces/). In Kubernetes, Namespaces are the way to partition a single Kubernetes cluster into multiple virtual clusters. In this post we’ll highlight examples of how our customers are using Namespaces.
+
+But first, a metaphor: Namespaces are like human family names. A family name, e.g. Wong, identifies a family unit. Within the Wong family, one of its members, e.g. Sam Wong, is readily identified as just “Sam” by the family. Outside of the family, and to avoid “Which Sam?” problems, Sam would usually be referred to as “Sam Wong”, perhaps even “Sam Wong from San Francisco”.
+
+Namespaces are a logical partitioning capability that enable one Kubernetes cluster to be used by multiple users, teams of users, or a single user with multiple applications without concern for undesired interaction. Each user, team of users, or application may exist within its Namespace, isolated from every other user of the cluster and operating as if it were the sole user of the cluster. (Furthermore, [Resource Quotas](http://kubernetes.io/docs/admin/resourcequota/) provide the ability to allocate a subset of a Kubernetes cluster’s resources to a Namespace.)
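+
+For reference, a Namespace is itself just another API object; a minimal manifest (the namespace and file names here are illustrative) looks like:
+
+```
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: development
+```
+
+Create it with kubectl create -f namespace-dev.yml, and list all namespaces with kubectl get namespaces.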
+
+For all but the most trivial uses of Kubernetes, you will benefit by using Namespaces. In this post, we’ll cover the most common ways that we’ve seen Kubernetes users on Google Cloud Platform use Namespaces, but our list is not exhaustive and we’d be interested to learn other examples from you.
+
+**Use-cases covered**
+
+- Roles and Responsibilities in an enterprise for namespaces
+- Partitioning landscapes: dev vs. test vs. prod
+- Customer partitioning for non-multi-tenant scenarios
+- When not to use namespaces
+
+**Use-case #1: Roles and Responsibilities in an Enterprise**
+
+
+
+A typical enterprise contains multiple business/technology entities that operate independently of each other with some form of overarching layer of controls managed by the enterprise itself. Operating Kubernetes clusters in such an environment can be done effectively when roles and responsibilities pertaining to Kubernetes are defined.
+
+Below are a few recommended roles and their responsibilities that can make managing Kubernetes clusters in a large scale organization easier.
+
+
+- Designer/Architect role: This role will define the overall namespace strategy, taking into account product/location/team/cost-center and determining how best to map these to Kubernetes Namespaces. Investing in such a role prevents namespace proliferation and “snowflake” Namespaces.
+- Admin role: This role has admin access to all Kubernetes clusters. Admins can create/delete clusters and add/remove nodes to scale the clusters. This role is responsible for patching, securing and maintaining the clusters, as well as implementing quotas between the different entities in the organization. The Kubernetes Admin is responsible for implementing the namespace strategy defined by the Designer/Architect.
+
+
+
+These two roles and the actual developers using the clusters will also receive support and feedback from the enterprise security and network teams on issues such as security isolation requirements and how namespaces fit this model, or assistance with networking subnets and load-balancers setup.
+
+
+
+Anti-patterns
+
+1. Isolated Kubernetes usage “islands” without centralized control: Without an initial investment in establishing a centralized control structure around Kubernetes management, there is a risk of ending up with a “mushroom farm” topology, i.e. no defined size/shape/structure of clusters within the org. The result is an environment that is difficult to manage, carries higher risk, and has elevated cost due to underutilization of resources.
+2. Old-world IT controls choking usage and innovation: A common tendency is to try to transpose existing on-premises controls/procedures onto new dynamic frameworks. This weighs down the agile nature of these frameworks and nullifies the benefits of rapid dynamic deployments.
+3. Omni-cluster: Delaying the effort of creating the structure/mechanism for namespace management can result in one large omni-cluster that is hard to peel back into smaller usage groups.
+
+**Use-case #2: Using Namespaces to partition development landscapes**
+
+
+
+Software development teams customarily partition their development pipelines into discrete units. These units take various forms and use various labels but will tend to result in a discrete dev environment, a testing|QA environment, possibly a staging environment and finally a production environment. The resulting layouts are ideally suited to Kubernetes Namespaces. Each environment or stage in the pipeline becomes a unique namespace.
+
+
+
+The above works well as each namespace can be templated and mirrored to the next subsequent environment in the dev cycle, e.g. dev-\>qa-\>prod. The fact that each namespace is logically discrete allows the development teams to work within an isolated “development” namespace. DevOps (the closest role at Google is called [Site Reliability Engineering](https://landing.google.com/sre/interview/ben-treynor.html), “SRE”) is responsible for migrating code through the pipelines and ensuring that appropriate teams are assigned to each environment. Ultimately, DevOps is solely responsible for the final, production environment where the solution is delivered to the end-users.
+
+
+
+A major benefit of applying namespaces to the development cycle is that the naming of software components (e.g. micro-services/endpoints) can be maintained without collision across the different environments. This is due to the isolation of the Kubernetes namespaces, e.g. serviceX in dev would be referred to as such across all the other namespaces; but, if necessary, could be uniquely referenced using its full qualified name serviceX.development.mycluster.com in the development namespace of mycluster.com.
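+
+For example, assuming the default in-cluster DNS domain of cluster.local, the same short service name resolves independently per namespace:
+
+```
+# same short name, different namespaces, no collision
+kubectl --namespace=development get service serviceX
+kubectl --namespace=production get service serviceX
+
+# in-cluster DNS fully qualifies the namespace:
+#   serviceX.development.svc.cluster.local
+#   serviceX.production.svc.cluster.local
+```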
+
+
+
+Anti-patterns
+
+1. Abusing the namespace benefit, resulting in unnecessary environments in the development pipeline. So, if you don’t do staging deployments, don’t create a “staging” namespace.
+2. Overcrowding namespaces, e.g. having all your development projects in one huge “development” namespace. Since namespaces attempt to partition, use them to partition by your projects as well. Since namespaces are flat, you may wish to use something like projectA-dev and projectA-prod as projectA’s namespaces.
+
+**Use-case #3: Partitioning of your Customers**
+
+
+
+If you are, for example, a consulting company that wishes to manage separate applications for each of your customers, the partitioning provided by Namespaces aligns well. You could create a separate Namespace for each customer, customer project or customer business unit to keep these distinct while not needing to worry about reusing the same names for resources across projects.
+
+
+
+An important consideration here is that Kubernetes does not currently provide a mechanism to enforce access controls across namespaces and so we recommend that you do not expose applications developed using this approach externally.
+
+
+
+Anti-patterns
+
+1. Multi-tenant applications that already enforce their own partitioning don’t need the additional complexity of Kubernetes namespaces.
+2. Inconsistent mapping of customers to namespaces. For example, if you win business at a global corporation, you may initially consider one namespace for the whole enterprise, not taking into account that this customer may prefer further partitioning, e.g. BigCorp Accounting and BigCorp Engineering. In this case, the customer’s departments may each warrant a namespace.
+
+**When Not to use Namespaces**
+
+
+In some circumstances Kubernetes Namespaces will not provide the isolation that you need. This may be due to geographical, billing or security factors. For all the benefits of the logical partitioning of namespaces, there is currently no ability to enforce the partitioning. Any user or resource in a Kubernetes cluster may access any other resource in the cluster regardless of namespace. So, if you need to protect or isolate resources, the ultimate namespace is a separate Kubernetes cluster against which you may apply your regular security|ACL controls.
+
+
+
+Another time when you may consider not using namespaces is when you wish to reflect a geographically distributed deployment. If you wish to deploy close to US, EU and Asia customers, a Kubernetes cluster deployed locally in each region is recommended.
+
+
+
+When fine-grained billing is required perhaps to chargeback by cost-center or by customer, the recommendation is to leave the billing to your infrastructure provider. For example, in Google Cloud Platform (GCP), you could use a separate GCP [Project](https://cloud.google.com/compute/docs/projects) or [Billing Account](https://support.google.com/cloud/answer/6288653) and deploy a Kubernetes cluster to a specific-customer’s project(s).
+
+
+
+In situations where confidentiality or compliance require complete opaqueness between customers, a Kubernetes cluster per customer/workload will provide the desired level of isolation. Once again, you should delegate the partitioning of resources to your provider.
+
+
+
+Work is underway to provide (a) ACLs on Kubernetes Namespaces to be able to enforce security, and (b) Kubernetes [Cluster Federation](http://blog.kubernetes.io/2016/07/cross-cluster-services.html). Both mechanisms will address the reasons for the separate Kubernetes clusters in these anti-patterns.
+
+
+
+An easy to grasp **anti-pattern** for Kubernetes namespaces is versioning. You should not use Namespaces as a way to disambiguate versions of your Kubernetes resources. Support for versioning is present in containers and container registries, as well as in the Kubernetes Deployment resource. Multiple versions should coexist by utilizing the Kubernetes container model, which also provides for automatic migration between versions with Deployments. Furthermore, version-scoped namespaces would cause massive proliferation of namespaces within a cluster, making it hard to manage.
+
+
+
+**Caveat Gubernator**
+
+
+
+You may wish to, but you cannot create a hierarchy of namespaces. Namespaces cannot be nested within one another. You can’t, for example, create my-team.my-org as a namespace but could perhaps have team-org.
+
+
+
+Namespaces are easy to create and use but it’s also easy to deploy code inadvertently into the wrong namespace. Good DevOps hygiene suggests documenting and automating processes where possible and this will help. The other way to avoid using the wrong namespace is to set a [kubectl context](http://kubernetes.io/docs/user-guide/kubectl/kubectl_config_set-context/).
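+
+Concretely, a per-namespace context (the cluster and user names here are illustrative) helps keep deployments pointed at the right place:
+
+```
+# bind a context to the "development" namespace
+kubectl config set-context dev --namespace=development \
+  --cluster=mycluster --user=mydeveloper
+
+# switch to it; subsequent kubectl commands default to that namespace
+kubectl config use-context dev
+```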
+
+
+
+As mentioned previously, Kubernetes does not (currently) provide a mechanism to enforce security across Namespaces. You should only use Namespaces within trusted domains (e.g. internal use), and not use Namespaces when you need to guarantee that a user of the Kubernetes cluster or one of its resources cannot access any other Namespace’s resources. This enhanced security functionality is being discussed in the Kubernetes Special Interest Group for Authentication and Authorization; get involved at [SIG-Auth](https://github.com/kubernetes/kubernetes/wiki/SIG-Auth).
+
+
+
+_--Mike Altarace & Daz Wilkin, Strategic Customer Engineers, Google Cloud Platform_
+
+
+
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-08-00-Security-Best-Practices-Kubernetes-Deployment.md b/blog/_posts/2016-08-00-Security-Best-Practices-Kubernetes-Deployment.md
new file mode 100644
index 00000000000..8c728388caf
--- /dev/null
+++ b/blog/_posts/2016-08-00-Security-Best-Practices-Kubernetes-Deployment.md
@@ -0,0 +1,239 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Security Best Practices for Kubernetes Deployment "
+date: Thursday, August 31, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: today’s post is by Amir Jerbi and Michael Cherny of Aqua Security, describing security best practices for Kubernetes deployments, based on data they’ve collected from various use-cases seen in both on-premises and cloud deployments._
+
+Kubernetes provides many controls that can greatly improve your application security. Configuring them requires intimate knowledge of Kubernetes and of the deployment’s security requirements. The best practices we highlight here are aligned to the container lifecycle: build, ship and run, and are specifically tailored to Kubernetes deployments. We adopted these best practices in [our own SaaS deployment](http://blog.aquasec.com/running-a-security-service-in-google-cloud-real-world-example) that runs Kubernetes on Google Cloud Platform.
+
+The following are our recommendations for deploying a secured Kubernetes application:
+
+**Ensure That Images Are Free of Vulnerabilities**
+Running containers with vulnerabilities opens your environment to the risk of being easily compromised. Many attacks can be mitigated simply by making sure that there are no software components with known vulnerabilities.
+
+
+- **Implement Continuous Security Vulnerability Scanning** -- Containers might include outdated packages with known vulnerabilities (CVEs). This cannot be a ‘one off’ process, as new vulnerabilities are published every day. An ongoing process, where images are continuously assessed, is crucial to ensure a required security posture.
+
+- **Regularly Apply Security Updates to Your Environment** -- Once vulnerabilities are found in running containers, you should always update the source image and redeploy the containers. Try to avoid direct updates (e.g. running ‘apt-get upgrade’ inside the container), as this can break the image-container relationship. Upgrading containers is extremely easy with the Kubernetes rolling updates feature, which allows gradually updating a running application by upgrading its images to the latest version, as sketched below.
+
+**Ensure That Only Authorized Images are Used in Your Environment**
+Without a process that ensures that only images adhering to the organization’s policy are allowed to run, the organization is open to the risk of running vulnerable or even malicious containers. Downloading and running images from unknown sources is dangerous. It is equivalent to running software from an unknown vendor on a production server. Don’t do that.
+
+Use private registries to store your approved images - make sure you only push approved images to these registries. This alone already narrows the playing field, reducing the number of potential images that enter your pipeline to a fraction of the hundreds of thousands of publicly available images. Build a CI pipeline that integrates security assessment (like vulnerability scanning), making it part of the build process.
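+
+For instance, one way to wire a private registry into the cluster is an image pull secret; every name and credential below is a placeholder:
+
+```
+kubectl create secret docker-registry regcred \
+  --docker-server=registry.example.com \
+  --docker-username=deployer \
+  --docker-password=<password> \
+  --docker-email=deployer@example.com
+```
+
+Pods can then reference the secret through imagePullSecrets in their specification.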
+
+The CI pipeline should ensure that only vetted code (approved for production) is used for building the images. Once an image is built, it should be scanned for security vulnerabilities, and only if no issues are found should the image be pushed to a private registry, from which deployment to production is done. A failure in the security assessment should create a failure in the pipeline, preventing images with bad security quality from being pushed to the image registry.
+
+Work is in progress in Kubernetes on image authorization plugins (expected in Kubernetes 1.4), which will make it possible to prevent the shipping of unauthorized images. For more info see this [pull request](https://github.com/kubernetes/kubernetes/pull/27129).
+
+**Limit Direct Access to Kubernetes Nodes**
+You should limit SSH access to Kubernetes nodes, reducing the risk of unauthorized access to host resources. Instead, ask users to use "kubectl exec", which provides direct access to the container environment without the ability to access the host.
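+
+For example, rather than SSH-ing to a node, an operator can open a shell directly inside a container (the pod name here is a placeholder):
+
+```
+kubectl exec -it my-app-pod -- /bin/sh
+```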
+
+You can use Kubernetes [Authorization Plugins](http://kubernetes.io/docs/admin/authorization/) to further control user access to resources. This allows defining fine-grained access control rules for specific namespaces, containers and operations.
+
+**Create Administrative Boundaries between Resources**
+Limiting the scope of user permissions can reduce the impact of mistakes or malicious activities. A Kubernetes namespace allows you to partition created resources into logically named groups. Resources created in one namespace can be hidden from other namespaces. By default, each resource created by a user in a Kubernetes cluster runs in the default namespace, called default. You can create additional namespaces and attach resources and users to them. You can use Kubernetes Authorization plugins to create policies that segregate access to namespace resources between different users.
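+
+Creating an additional namespace is a single command; the name below matches the policy example that follows:
+
+```
+kubectl create namespace fronto
+```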
+
+For example: the following policy will allow ‘alice’ to read pods from namespace ‘fronto’.
+
+
+```
+{
+  "apiVersion": "abac.authorization.kubernetes.io/v1beta1",
+  "kind": "Policy",
+  "spec": {
+    "user": "alice",
+    "namespace": "fronto",
+    "resource": "pods",
+    "readonly": true
+  }
+}
+```
+
+
+**Define Resource Quota**
+Running resource-unbound containers puts your system at risk of DoS or “noisy neighbor” scenarios. To prevent and minimize those risks you should define resource quotas. By default, all resources in a Kubernetes cluster are created with unbounded CPU and memory requests/limits. You can create resource quota policies, attached to a Kubernetes namespace, in order to limit the CPU and memory a pod is allowed to consume.
+
+The following example namespace resource quota limits the namespace to 4 pods, caps total CPU requests at 1 core (with a 2-core limit), and caps total memory requests at 1GiB (with a 2GiB limit).
+
+compute-resources.yaml:
+
+
+
+```
+apiVersion: v1
+kind: ResourceQuota
+metadata:
+  name: compute-resources
+spec:
+  hard:
+    pods: "4"
+    requests.cpu: "1"
+    requests.memory: 1Gi
+    limits.cpu: "2"
+    limits.memory: 2Gi
+```
+
+
+Assign the resource quota to a namespace:
+
+
+
+
+
+```
+kubectl create -f ./compute-resources.yaml --namespace=myspace
+```
+
+
+
+**Implement Network Segmentation**
+
+Running different applications on the same Kubernetes cluster creates a risk of one compromised application attacking a neighboring application. Network segmentation is important to ensure that containers can communicate only with those they are supposed to.
+
+One of the challenges in Kubernetes deployments is creating network segmentation between pods, services and containers. This is a challenge due to the “dynamic” nature of container network identities (IPs), along with the fact that containers can communicate both within the same node and between nodes.
+
+
+
+Users of Google Cloud Platform can benefit from automatic firewall rules, preventing cross-cluster communication. A similar implementation can be deployed on-premises using network firewalls or SDN solutions. There is work being done in this area by the Kubernetes [Network SIG](https://github.com/kubernetes/community/wiki/SIG-Network), which will greatly improve the pod-to-pod communication policies. A new network policy API should address the need to create firewall rules around pods, limiting the network access that a containerized application can have.
+
+
+
+The following is an example of a network policy that controls the network for “backend” pods, only allowing inbound network access from “frontend” pods:
+
+
+
+
+
+```
+POST /apis/net.alpha.kubernetes.io/v1alpha1/namespaces/tenant-a/networkpolicys
+{
+  "kind": "NetworkPolicy",
+  "metadata": {
+    "name": "pol1"
+  },
+  "spec": {
+    "allowIncoming": {
+      "from": [
+        { "pods": { "segment": "frontend" } }
+      ],
+      "toPorts": [
+        { "port": 80, "protocol": "TCP" }
+      ]
+    },
+    "podSelector": {
+      "segment": "backend"
+    }
+  }
+}
+```
+
+
+
+Read more about Network policies [here](http://blog.kubernetes.io/2016/04/Kubernetes-Network-Policy-APIs.html).
+
+
+
+**Apply Security Context to Your Pods and Containers**
+
+When designing your containers and pods, make sure that you configure the security context for your pods, containers and volumes. A security context is a property defined in the deployment yaml. It controls the security parameters that will be assigned to the pod/container/volume. Some of the important parameters are:
+
+
+| Security Context Setting | Description |
+| :------------: | :------------: |
+| SecurityContext->runAsNonRoot | Indicates that containers should run as non-root user |
+| SecurityContext->Capabilities | Controls the Linux capabilities assigned to the container. |
+| SecurityContext->readOnlyRootFilesystem | Controls whether a container will be able to write into the root filesystem. |
+| PodSecurityContext->runAsNonRoot | Prevents running a container with 'root' user as part of the pod |
+{: .post-table}
+
+
+
+The following is an example of a pod definition with security context parameters:
+
+
+
+
+
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: hello-world
+spec:
+  containers:
+  # specification of the pod’s containers
+  # ...
+  securityContext:
+    readOnlyRootFilesystem: true
+    runAsNonRoot: true
+```
+
+
+
+Reference [here](http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_podsecuritycontext).
+
+
+
+If you are running containers with elevated privileges (--privileged), you should consider using the “DenyEscalatingExec” admission control. This control denies exec and attach commands to pods that run with escalated privileges that allow host access. This includes pods that run as privileged, have access to the host IPC namespace, and have access to the host PID namespace. For more details on admission controls, see the Kubernetes [documentation](http://kubernetes.io/docs/admin/admission-controllers/); a sketch of enabling it follows.
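+
+Such a control is enabled through the API server's admission-control flag of this era; the other plugins listed here are only a typical baseline, not a recommendation from this post:
+
+```
+# Append DenyEscalatingExec to the admission control chain
+kube-apiserver --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DenyEscalatingExec
+```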
+
+
+
+**Log Everything**
+
+Kubernetes supplies cluster-based logging, allowing you to log container activity into a central log hub. When a cluster is created, the standard output and standard error of each container can be ingested, using a Fluentd agent running on each node, into either Google Stackdriver Logging or Elasticsearch, and viewed with Kibana.
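+
+Independently of the central hub, the logs of any single container remain one command away (the pod name is a placeholder):
+
+```
+kubectl logs my-app-pod
+# After a crash, the previous instance's logs are still retrievable
+kubectl logs --previous my-app-pod
+```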
+
+
+
+**Summary**
+
+Kubernetes supplies many options to create a secured deployment. There is no one-size-fits-all solution that can be used everywhere, so a certain degree of familiarity with these options is required, as well as an understanding of how they can enhance your application’s security.
+
+
+We recommend implementing the best practices highlighted in this blog, and using Kubernetes’ flexible configuration capabilities to incorporate security processes into the continuous integration pipeline, automating the entire process with security seamlessly “baked in”.
+
+
+
+
+
+_--Michael Cherny, Head of Security Research, and Amir Jerbi, CTO and co-founder [Aqua Security](https://www.aquasec.com/)_
+
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-08-00-Sig-Apps-Running-Apps-In-Kubernetes.md b/blog/_posts/2016-08-00-Sig-Apps-Running-Apps-In-Kubernetes.md
new file mode 100644
index 00000000000..19ab9a3bb34
--- /dev/null
+++ b/blog/_posts/2016-08-00-Sig-Apps-Running-Apps-In-Kubernetes.md
@@ -0,0 +1,44 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " SIG Apps: build apps for and operate them in Kubernetes "
+date: Wednesday, August 16, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: This post is by the Kubernetes SIG-Apps team sharing how they focus on the developer and devops experience of running applications in Kubernetes._
+
+Kubernetes is an incredible manager for containerized applications. Because of this, [numerous](http://blog.kubernetes.io/2016/02/sharethis-kubernetes-in-production.html) [companies](https://blog.box.com/blog/kubernetes-box-microservices-maximum-velocity/) [have](http://techblog.yahoo.co.jp/infrastructure/os_n_k8s/) [started](http://www.nextplatform.com/2015/11/12/inside-ebays-shift-to-kubernetes-and-containers-atop-openstack/) to run their applications in Kubernetes.
+
+Kubernetes Special Interest Groups ([SIGs](https://github.com/kubernetes/community/blob/master/README.md#special-interest-groups-sig)) have been around to support the community of developers and operators since around the 1.0 release. People organized around networking, storage, scaling and other operational areas.
+
+As Kubernetes took off, so did the need for tools, best practices, and discussions around building and operating cloud native applications. To fill that need the Kubernetes [SIG Apps](https://github.com/kubernetes/community/tree/master/sig-apps) came into existence.
+
+SIG Apps is a place where companies and individuals can:
+
+
+- see and share demos of the tools being built to enable app operators
+- learn about and discuss needs of app operators
+- organize around efforts to improve the experience
+
+Since the inception of SIG Apps we’ve had demos of projects like [KubeFuse](https://github.com/opencredo/kubefuse), [KPM](https://github.com/kubespray/kpm), and [StackSmith](https://stacksmith.bitnami.com/). We’ve also conducted a survey of those operating apps in Kubernetes.
+
+From the survey results we’ve learned a number of things including:
+
+
+- 81% of respondents want some form of autoscaling
+- To store secret information, 47% of respondents use built-in secrets. At rest these are not currently encrypted. (If you want to help add encryption there is an [issue](https://github.com/kubernetes/kubernetes/issues/10439) for that.)
+- The questions with the most responses had to do with third-party tools and debugging
+- For third-party tools to manage applications there were no clear winners; there are a wide variety of practices
+- There was an overall complaint about a lack of useful documentation. (Help contribute to the docs [here](https://github.com/kubernetes/kubernetes.github.io).)
+- There’s a lot of data. Many of the responses were optional, so we were surprised that 93.5% of all questions across all candidates were filled in. If you want to look at the data yourself it’s [available online](https://docs.google.com/spreadsheets/d/15SUL7QTpR4Flrp5eJ5TR8A5ZAFwbchfX2QL4MEoJFQ8/edit?usp=sharing).
+
+When it comes to application operation there’s still a lot to be figured out and shared. If you've got opinions about running apps, tooling to make the experience better, or just want to lurk and learn about what's going on, please come join us.
+
+
+- Chat with us on SIG-Apps [Slack channel](https://kubernetes.slack.com/messages/sig-apps)
+- Email us at the SIG-Apps [mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-apps)
+- Join our open meetings: weekly at 9AM PT on Wednesdays, [full details here](https://github.com/kubernetes/community/blob/master/sig-apps/README.md#meeting).
+
+
+_--Matt Farina, Principal Engineer, Hewlett Packard Enterprise_
diff --git a/blog/_posts/2016-08-00-Stateful-Applications-Using-Kubernetes-Datera.md b/blog/_posts/2016-08-00-Stateful-Applications-Using-Kubernetes-Datera.md
new file mode 100644
index 00000000000..16201981472
--- /dev/null
+++ b/blog/_posts/2016-08-00-Stateful-Applications-Using-Kubernetes-Datera.md
@@ -0,0 +1,606 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Scaling Stateful Applications using Kubernetes Pet Sets and FlexVolumes with Datera Elastic Data Fabric "
+date: Tuesday, August 29, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: today’s guest post is by Shailesh Mittal, Software Architect and Ashok Rajagopalan, Sr Director Product at Datera Inc, talking about Stateful Application provisioning with Kubernetes on Datera Elastic Data Fabric._
+
+**Introduction**
+
+Persistent volumes in Kubernetes are foundational as customers move beyond stateless workloads to run stateful applications. While Kubernetes has supported stateful applications such as MySQL, Kafka, Cassandra, and Couchbase for a while, the introduction of Pet Sets has significantly improved this support. In particular, [Pet Sets](http://kubernetes.io/docs/user-guide/petset/) provide the ability to sequence provisioning and startup, to scale, and to durably associate storage, making it possible to automate the scaling of “Pets” (applications that require consistent handling and durable placement).
+
+Datera, elastic block storage for cloud deployments, has [seamlessly integrated with Kubernetes](http://datera.io/blog-library/8/19/datera-simplifies-stateful-containers-on-kubernetes-13) through the [FlexVolume](http://kubernetes.io/docs/user-guide/volumes/#flexvolume) framework. Based on the first principles of containers, Datera allows application resource provisioning to be decoupled from the underlying physical infrastructure. This brings clean contracts (aka, no dependency or direct knowledge of the underlying physical infrastructure), declarative formats, and eventually portability to stateful applications.
+
+While Kubernetes allows for great flexibility to define the underlying application infrastructure through yaml configurations, Datera allows that configuration to be passed to the storage infrastructure to provide persistence. Through the notion of Datera AppTemplates, stateful applications can be automatically scaled in a Kubernetes environment.
+
+
+
+
+
+**Deploying Persistent Storage**
+
+
+
+Persistent storage is defined using the Kubernetes [PersistentVolume](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistent-volumes) subsystem. PersistentVolumes are volume plugins that define volumes living independently of the lifecycle of the pod that uses them. They are implemented as NFS, iSCSI, or cloud-provider-specific storage systems. Datera has developed a volume plugin for PersistentVolumes that can provision iSCSI block storage on the Datera Data Fabric for Kubernetes pods.
+
+
+
+The Datera volume plugin gets invoked by kubelets on minion nodes and relays the calls to the Datera Data Fabric over its REST API. Below is a sample deployment of a PersistentVolume with the Datera plugin:
+
+
+
+```
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: pv-datera-0
+spec:
+  capacity:
+    storage: 100Gi
+  accessModes:
+    - ReadWriteOnce
+  persistentVolumeReclaimPolicy: Retain
+  flexVolume:
+    driver: "datera/iscsi"
+    fsType: "xfs"
+    options:
+      volumeID: "kube-pv-datera-0"
+      size: "100"
+      replica: "3"
+      backstoreServer: "tlx170.tlx.daterainc.com:7717"
+```
+
+
+
+This manifest defines a PersistentVolume of 100 GB to be provisioned in the Datera Data Fabric, should a pod request the persistent storage.
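+
+A minimal way to submit such manifests, assuming they are saved under hypothetical file names like pv-datera-0.yaml:
+
+```
+kubectl create -f pv-datera-0.yaml
+```
+
+Once submitted, the volumes show up as Available: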
+
+
+
+
+```
+[root@tlx241 /]# kubectl get pv
+NAME          CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
+pv-datera-0   100Gi      RWO           Available                       8s
+pv-datera-1   100Gi      RWO           Available                       2s
+pv-datera-2   100Gi      RWO           Available                       7s
+pv-datera-3   100Gi      RWO           Available                       4s
+```
+
+
+
+**Configuration**
+
+
+
+The Datera PersistentVolume plugin is installed on all minion nodes. When a pod lands on a minion node with a valid claim bound to the persistent storage provisioned earlier, the Datera plugin forwards the request to create the volume on the Datera Data Fabric. All the options that are specified in the PersistentVolume manifest are sent to the plugin upon the provisioning request.
+
+
+
+Once a volume is provisioned in the Datera Data Fabric, volumes are presented as an iSCSI block device to the minion node, and kubelet mounts this device for the containers (in the pod) to access it.
+
+
+
+
+**Using Persistent Storage**
+
+
+
+Kubernetes PersistentVolumes are consumed by pods through PersistentVolume Claims. Once a claim is defined, it is bound to a PersistentVolume matching the claim’s specification. A typical claim for the PersistentVolume defined above looks like this:
+
+
+
+
+```
+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+  name: pv-claim-test-petset-0
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 100Gi
+```
+
+
+
+Once this claim is defined and bound to a PersistentVolume, the resources can be used with the pod specification:
+
+
+
+
+```
+[root@tlx241 /]# kubectl get pv
+NAME          CAPACITY   ACCESSMODES   STATUS      CLAIM                            REASON    AGE
+pv-datera-0   100Gi      RWO           Bound       default/pv-claim-test-petset-0             6m
+pv-datera-1   100Gi      RWO           Bound       default/pv-claim-test-petset-1             6m
+pv-datera-2   100Gi      RWO           Available                                              7s
+pv-datera-3   100Gi      RWO           Available                                              4s
+
+[root@tlx241 /]# kubectl get pvc
+NAME                     STATUS    VOLUME        CAPACITY   ACCESSMODES   AGE
+pv-claim-test-petset-0   Bound     pv-datera-0   0                        3m
+pv-claim-test-petset-1   Bound     pv-datera-1   0                        3m
+```
+
+
+
+A pod can use a PersistentVolume Claim like below:
+
+
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: kube-pv-demo
+spec:
+  containers:
+    - name: data-pv-demo
+      image: nginx
+      volumeMounts:
+        - name: test-kube-pv1
+          mountPath: /data
+      ports:
+        - containerPort: 80
+  volumes:
+    - name: test-kube-pv1
+      persistentVolumeClaim:
+        claimName: pv-claim-test-petset-0
+```
+
+
+
+The result is a pod using a PersistentVolume Claim as a volume, which in turn sends the request to the Datera volume plugin to provision storage in the Datera Data Fabric.
+
+
+
+
+```
+[root@tlx241 /]# kubectl describe pods kube-pv-demo
+Name:           kube-pv-demo
+Namespace:      default
+Node:           tlx243/172.19.1.243
+Start Time:     Sun, 14 Aug 2016 19:17:31 -0700
+Labels:         <none>
+Status:         Running
+IP:             10.40.0.3
+Controllers:    <none>
+Containers:
+  data-pv-demo:
+    Container ID:   docker://ae2a50c25e03143d0dd721cafdcc6543fac85a301531110e938a8e0433f74447
+    Image:          nginx
+    Image ID:       docker://sha256:0d409d33b27e47423b049f7f863faa08655a8c901749c2b25b93ca67d01a470d
+    Port:           80/TCP
+    State:          Running
+      Started:      Sun, 14 Aug 2016 19:17:34 -0700
+    Ready:          True
+    Restart Count:  0
+    Environment Variables:  <none>
+Conditions:
+  Type          Status
+  Initialized   True
+  Ready         True
+  PodScheduled  True
+Volumes:
+  test-kube-pv1:
+    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
+    ClaimName:  pv-claim-test-petset-0
+    ReadOnly:   false
+  default-token-q3eva:
+    Type:       Secret (a volume populated by a Secret)
+    SecretName: default-token-q3eva
+QoS Tier:       BestEffort
+Events:
+  FirstSeen  LastSeen  Count  From                 SubobjectPath                  Type    Reason     Message
+  ---------  --------  -----  ----                 -------------                  ----    ------     -------
+  43s        43s       1      {default-scheduler}                                 Normal  Scheduled  Successfully assigned kube-pv-demo to tlx243
+  42s        42s       1      {kubelet tlx243}     spec.containers{data-pv-demo}  Normal  Pulling    pulling image "nginx"
+  40s        40s       1      {kubelet tlx243}     spec.containers{data-pv-demo}  Normal  Pulled     Successfully pulled image "nginx"
+  40s        40s       1      {kubelet tlx243}     spec.containers{data-pv-demo}  Normal  Created    Created container with docker id ae2a50c25e03
+  40s        40s       1      {kubelet tlx243}     spec.containers{data-pv-demo}  Normal  Started    Started container with docker id ae2a50c25e03
+```
+
+
+
+The persistent volume is presented as an iSCSI device on the minion node (tlx243 in this case):
+
+
+
+```
+[root@tlx243 ~]# lsscsi
+[0:2:0:0]    disk    SMC      SMC2208   3.24   /dev/sda
+[11:0:0:0]   disk    DATERA   IBLOCK    4.0    /dev/sdb
+
+[root@tlx243 datera~iscsi]# mount | grep sdb
+/dev/sdb on /var/lib/kubelet/pods/6b99bd2a-628e-11e6-8463-0cc47ab41442/volumes/datera~iscsi/pv-datera-0 type xfs (rw,relatime,attr2,inode64,noquota)
+```
+
+
+
+Containers running in the pod see this device mounted at /data as specified in the manifest:
+
+
+
+```
+[root@tlx241 /]# kubectl exec kube-pv-demo -c data-pv-demo -it bash
+root@kube-pv-demo:/# mount | grep data
+/dev/sdb on /data type xfs (rw,relatime,attr2,inode64,noquota)
+```
+
+
+
+**Using Pet Sets**
+
+
+
+Typically, pods are treated as stateless units, so if one of them is unhealthy or gets superseded, Kubernetes just disposes of it. In contrast, a PetSet is a group of stateful pods that has a stronger notion of identity. The goal of a PetSet is to decouple this dependency by assigning identities to individual instances of an application that are not anchored to the underlying physical infrastructure.
+
+
+
+A PetSet consists of {0..n-1} Pets. Each Pet has a deterministic name, PetSetName-Ordinal, and a unique identity. Each Pet has at most one pod, and each PetSet has at most one Pet with a given identity. A PetSet ensures that a specified number of “pets” with unique identities are running at any given time. The identity of a Pet comprises:
+
+- a stable hostname, available in DNS
+- an ordinal index
+- stable storage: linked to the ordinal & hostname
+
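+For example, under the headless service and PetSet defined just below (test-service and test-petset, in the default namespace), each Pet gets a stable, resolvable DNS name of the form:
+
+```
+# <petset-name>-<ordinal>.<service-name>.<namespace>.svc.cluster.local
+test-petset-0.test-service.default.svc.cluster.local
+test-petset-1.test-service.default.svc.cluster.local
+```
+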
+A typical PetSet definition using a PersistentVolume Claim looks like below:
+
+
+
+```
+# A headless service to create DNS records
+apiVersion: v1
+kind: Service
+metadata:
+  name: test-service
+  labels:
+    app: nginx
+spec:
+  ports:
+  - port: 80
+    name: web
+  clusterIP: None
+  selector:
+    app: nginx
+---
+apiVersion: apps/v1alpha1
+kind: PetSet
+metadata:
+  name: test-petset
+spec:
+  serviceName: "test-service"
+  replicas: 2
+  template:
+    metadata:
+      labels:
+        app: nginx
+      annotations:
+        pod.alpha.kubernetes.io/initialized: "true"
+    spec:
+      terminationGracePeriodSeconds: 0
+      containers:
+      - name: nginx
+        image: gcr.io/google_containers/nginx-slim:0.8
+        ports:
+        - containerPort: 80
+          name: web
+        volumeMounts:
+        - name: pv-claim
+          mountPath: /data
+  volumeClaimTemplates:
+  - metadata:
+      name: pv-claim
+      annotations:
+        volume.alpha.kubernetes.io/storage-class: anything
+    spec:
+      accessModes: ["ReadWriteOnce"]
+      resources:
+        requests:
+          storage: 100Gi
+```
+
+
+
+We have the following PersistentVolume Claims available:
+
+
+
+```
+[root@tlx241 /]# kubectl get pvc
+NAME                     STATUS    VOLUME        CAPACITY   ACCESSMODES   AGE
+pv-claim-test-petset-0   Bound     pv-datera-0   0                        41m
+pv-claim-test-petset-1   Bound     pv-datera-1   0                        41m
+pv-claim-test-petset-2   Bound     pv-datera-2   0                        5s
+pv-claim-test-petset-3   Bound     pv-datera-3   0                        2s
+```
+
+
+
+When this PetSet is provisioned, two pods get instantiated:
+
+
+
+```
+[root@tlx241 /]# kubectl get pods
+NAMESPACE   NAME            READY     STATUS    RESTARTS   AGE
+default     test-petset-0   1/1       Running   0          7s
+default     test-petset-1   1/1       Running   0          3s
+```
+
+
+
+Here is what the PetSet test-petset instantiated earlier looks like:
+
+
+
+
+```
+[root@tlx241 /]# kubectl describe petset test-petset
+Name:                   test-petset
+Namespace:              default
+Image(s):               gcr.io/google_containers/nginx-slim:0.8
+Selector:               app=nginx
+Labels:                 app=nginx
+Replicas:               2 current / 2 desired
+Annotations:            <none>
+CreationTimestamp:      Sun, 14 Aug 2016 19:46:30 -0700
+Pods Status:            2 Running / 0 Waiting / 0 Succeeded / 0 Failed
+No volumes.
+No events.
+```
+
+
+
+Once a PetSet such as test-petset is instantiated, increasing the number of replicas (i.e. the number of pods started with that PetSet) causes more pods to be instantiated and more PersistentVolume Claims to be bound to the new pods:
+
+
+
+```
+[root@tlx241 /]# kubectl patch petset test-petset -p'{"spec":{"replicas":"3"}}'
+"test-petset" patched
+
+[root@tlx241 /]# kubectl describe petset test-petset
+Name:                   test-petset
+Namespace:              default
+Image(s):               gcr.io/google_containers/nginx-slim:0.8
+Selector:               app=nginx
+Labels:                 app=nginx
+Replicas:               3 current / 3 desired
+Annotations:            <none>
+CreationTimestamp:      Sun, 14 Aug 2016 19:46:30 -0700
+Pods Status:            3 Running / 0 Waiting / 0 Succeeded / 0 Failed
+No volumes.
+No events.
+
+[root@tlx241 /]# kubectl get pods
+NAME            READY     STATUS    RESTARTS   AGE
+test-petset-0   1/1       Running   0          29m
+test-petset-1   1/1       Running   0          28m
+test-petset-2   1/1       Running   0          9s
+```
+
+
+
+After the patch is applied, the PetSet is running 3 pods.
+
+
+
+When the above PetSet definition is patched to have one more replica, it introduces one more pod in the system. This in turn results in one more volume getting provisioned on the Datera Data Fabric. So volumes get dynamically provisioned and attached to a pod as the PetSet scales up.
+
+
+
+To support the notion of durability and consistency, if a pod moves from one minion to another, volumes do get attached (mounted) to the new minion node and detached (unmounted) from the old minion to maintain persistent access to the data.
+
+
+
+**Conclusion**
+
+
+
+This demonstrates Kubernetes with Pet Sets orchestrating stateful and stateless workloads. While the Kubernetes community is working on expanding the FlexVolume framework’s capabilities, we are excited that this solution makes it possible for Kubernetes to be run more widely in datacenters.
+
+
+
+Join and contribute: Kubernetes [Storage SIG](https://groups.google.com/forum/#!forum/kubernetes-sig-storage).
+
+
+
+- [Download Kubernetes](http://get.k8s.io/)
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on the [k8s Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-09-00-Cloud-Native-Application-Interfaces.md b/blog/_posts/2016-09-00-Cloud-Native-Application-Interfaces.md
new file mode 100644
index 00000000000..9cc4c782198
--- /dev/null
+++ b/blog/_posts/2016-09-00-Cloud-Native-Application-Interfaces.md
@@ -0,0 +1,61 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Cloud Native Application Interfaces "
+date: Friday, September 01, 2016
+pagination:
+ enabled: true
+---
+
+
+**Standard Interfaces (or, the Thirteenth Factor)**
+
+_--by Brian Grant and Craig Mcluckie, Google_
+
+
+
+When you say we need ‘software standards’ in erudite company, you get some interesting looks. Most concede that software standards have been central to the success of the boldest and most successful projects out there (like the Internet). Most are also skeptical about how they apply to the innovative world we live in today. Our projects are executed in week increments, not years. Getting bogged down behind mega-software-corporation-driven standards practices would be the death knell in this fluid, highly competitive world.
+
+This isn’t about ‘those’ standards. The ones that emerge after years of deep consideration and negotiation that are eventually published by a body with a four-letter acronym for a name. This is about a different approach: finding what is working in the real world, and acting as a community to embrace it.
+
+Let’s go back to first principles. To describe Cloud Native in one word, we'd choose "automatable".
+
+Most existing applications are not.
+
+Applications have many interfaces with their environment, whether with management infrastructure, shared services, or other applications. For us to remove the operator from patching, scaling, migrating an app from one environment to another, changing out dependencies, and handling failure conditions, a set of well-structured common interfaces is essential. It goes without saying that these interfaces must be designed for machines, not just humans. Machine-friendly interfaces allow automation systems to understand the systems under management, and create the loose coupling needed for applications to live in automated environments.
+
+
+
+As containerized infrastructure gets built there are a set of critical interfaces available to applications that go far beyond what is available to a single node today. The adoption of ‘serverless patterns’ (meaning ephemeral, event driven function execution) will further compound the need to make sense of running code in an environment that is completely decoupled from the node. The services needed will start with application configuration and extend to monitoring, logging, autoscaling and beyond. The set of capabilities will only grow as applications continue to adapt to be fuller citizens in a "cloud native" world.
+
+
+
+Exploring one example a little further, a number of service-discovery solutions have been developed but are often tied to a particular storage implementation, a particular programming language, a non-standard protocol, and/or are opinionated in some other way (e.g., dictating application naming structure). This makes them unsuitable for general-purpose use. While DNS has limitations (that will eventually need to be addressed), it's at least a standard protocol with room for innovation in its implementation. This is demonstrated by CoreDNS and other cloud-native DNS implementations.
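+
+As a small illustration of DNS as a standard interface (the service and namespace names are placeholders), any pod in a Kubernetes cluster can discover a service with a plain DNS query, no client library required:
+
+```
+# Resolved by the cluster DNS service from inside any pod
+nslookup my-service.my-namespace.svc.cluster.local
+```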
+
+
+
+When we look inside the systems at Google, we have been able to achieve very high levels of automation without formal interface definitions, thanks to a largely homogeneous software and hardware environment. Adjacent systems can safely make assumptions about interfaces, and by providing a set of universally used libraries we can skirt the issue. A good example of this is that our log format doesn’t need to be formally specified, because the libraries that generate logs are maintained by the teams that maintain the log-processing systems. This means that we have been able to get by to date without something like fluentd (which solves the problem of interfacing with logging systems in the community).
+
+
+
+Even though Google has managed to get by this way, it hurts us. One way is when we acquire a company. Porting their technology to run in our automation systems requires a spectacular amount of work. Doing that work while continuing to innovate is particularly tough. Even more significant though, there’s a lot of innovation happening in the open source world that isn’t easy for us to tap into. When new technology emerges, we would like to be able to experiment with it, adopt it piecemeal, and perhaps contribute back to it. When you run a vertically integrated, bespoke stack, that is a hard thing to do.
+
+
+The lack of standard interfaces leaves customers with three choices:
+
+- Live with high operations cost (the status quo), and accept that your developers in many cases will spend the majority of their time dealing with the care and feeding of applications.
+- Sign up to be like Google (build your own everything, down to the concrete in the floor).
+- Rely on a single, or a small collection of vendors to provide a complete solution and accept some degree of lock-in. Few in companies of any size (from enterprise to startup) find this appealing.
+
+It is our belief that an open community is more powerful and that customers benefit when there is competition at every layer of the stack. It should be possible to pull together a stack with best-of-breed capabilities at every level -- logging, monitoring, orchestration, container runtime environment, block and file-system storage, SDN technology, etc.
+
+
+
+Standardizing interfaces (at least by convention) between the management system and applications is critical. One might consider the use of common conventions for interfaces as a thirteenth factor (expanding on the [12-factor methodology](https://12factor.net/)) in creating modern systems that work well in the cloud and at scale.
+
+
+
+Kubernetes and the Cloud Native Computing Foundation ([CNCF](https://cncf.io/)) represent a great opportunity to support the emergence of standard interfaces, and to support the emergence of a fully automated software world. We’d love to see this community embrace the ideal of promoting standard interfaces from working technology. The obvious first step is to identify the immediate set of critical interfaces, and establish working groups in CNCF to start assessing what exists in this area as candidates, and to sponsor work to start developing standard interfaces that work across container formats, orchestrators, developer tools and the myriad other systems that are needed to deliver on the Cloud Native vision.
+
+
+
+_--Brian Grant and Craig Mcluckie, Google_
diff --git a/blog/_posts/2016-09-00-Creating-Postgresql-Cluster-Using-Helm.md b/blog/_posts/2016-09-00-Creating-Postgresql-Cluster-Using-Helm.md
new file mode 100644
index 00000000000..1229119db86
--- /dev/null
+++ b/blog/_posts/2016-09-00-Creating-Postgresql-Cluster-Using-Helm.md
@@ -0,0 +1,148 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Creating a PostgreSQL Cluster using Helm "
+date: Saturday, September 09, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: Today’s guest post is by Jeff McCormick, a developer at Crunchy Data, showing how to deploy a PostgreSQL cluster using Helm, a Kubernetes package manager._
+
+[Crunchy Data](http://www.crunchydata.com/) supplies a set of open source PostgreSQL and PostgreSQL-related containers. The Crunchy PostgreSQL Container Suite includes containers that deploy, monitor, and administer the open source PostgreSQL database; for more details, view this GitHub [repository](https://github.com/crunchydata/crunchy-containers).
+
+In this post we’ll show you how to deploy a PostgreSQL cluster using [Helm](https://github.com/kubernetes/helm), a Kubernetes package manager. For reference, the Crunchy Helm Chart examples used within this post are located [here](https://github.com/CrunchyData/crunchy-containers/tree/master/examples/kubehelm/crunchy-postgres), and the pre-built containers can be found on DockerHub at [this location](https://hub.docker.com/u/crunchydata/dashboard/).
+
+This example will create the following in your Kubernetes cluster:
+
+- postgres master service
+- postgres replica service
+- postgres 9.5 master database (pod)
+- postgres 9.5 replica database (replication controller)
+
+
+
+
+
+
+
+This example creates a simple Postgres streaming replication deployment with a master (read-write), and a single asynchronous replica (read-only). You can scale up the number of replicas dynamically.
+
+
+
+**Contents**
+
+
+
+The example is made up of various Chart files as follows:
+
+
+| File | Description |
+| :------------: | :------------: |
+|values.yaml |This file contains values which you can reference within the database templates, allowing you to specify in one place values like database passwords.|
+|templates/master-pod.yaml|The postgres master database pod definition. This file causes a single postgres master pod to be created.|
+|templates/master-service.yaml|The postgres master database has a service created to act as a proxy. This file causes a single service to be created to proxy calls to the master database.|
+|templates/replica-rc.yaml|The postgres replica database is defined by this file. This file causes a replication controller to be created, which allows the postgres replica containers to be scaled up on-demand.|
+|templates/replica-service.yaml|This file causes the service proxy for the replica database container(s) to be created.|
+
+
+
+
+**Installation**
+
+
+
+[Install Helm](https://github.com/kubernetes/helm#install) according to their GitHub documentation and then install the examples as follows:
+
+
+
+
+```
+helm init
+cd crunchy-containers/examples/kubehelm
+helm install ./crunchy-postgres
+```
+
+
+
+**Testing**
+
+
+
+After installing the Helm chart, you will see the following services:
+
+
+
+```
+kubectl get services
+NAME              CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
+crunchy-master    10.0.0.171   <none>        5432/TCP   1h
+crunchy-replica   10.0.0.31    <none>        5432/TCP   1h
+kubernetes        10.0.0.1     <none>        443/TCP    1h
+```
+
+
+
+It takes about a minute for the replica to begin replicating with the master. To test replication, check whether it is underway with the following command, entering "password" when prompted for the password:
+
+
+
+```
+psql -h crunchy-master -U postgres postgres -c 'table pg_stat_replication'
+```
+
+
+
+If you see a line returned from that query, it means the master is replicating to the slave. Try creating some data on the master:
+
+
+
+
+```
+psql -h crunchy-master -U postgres postgres -c 'create table foo (id int)'
+psql -h crunchy-master -U postgres postgres -c 'insert into foo values (1)'
+```
+
+
+
+
+Then verify that the data is replicated to the slave:
+
+
+
+
+```
+psql -h crunchy-replica -U postgres postgres -c 'table foo'
+```
+
+
+
+You can scale up the number of read-only replicas by running the following Kubernetes command:
+
+
+
+```
+kubectl scale rc crunchy-replica --replicas=2
+```
+
+
+It takes about 60 seconds for the new replica to start and begin replicating from the master.
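+
+A quick way to confirm the scale-up, using the replication controller named in the scale command above:
+
+```
+kubectl get rc crunchy-replica
+kubectl get pods
+```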
+
+
+
+The Kubernetes Helm and Charts projects provide a streamlined way to package up complex applications and deploy them on a Kubernetes cluster. Deploying PostgreSQL clusters can sometimes prove challenging, but the task is greatly simplified using Helm and Charts.
+
+
+
+_--Jeff McCormick, Developer, Crunchy Data_
+
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-09-00-Deploying-To-Multiple-Kubernetes-With-Kit.md b/blog/_posts/2016-09-00-Deploying-To-Multiple-Kubernetes-With-Kit.md
new file mode 100644
index 00000000000..c7503760ea0
--- /dev/null
+++ b/blog/_posts/2016-09-00-Deploying-To-Multiple-Kubernetes-With-Kit.md
@@ -0,0 +1,68 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Deploying to Multiple Kubernetes Clusters with kit "
+date: Wednesday, September 06, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: today’s guest post is by Chesley Brown, Full-Stack Engineer at InVision, talking about how they built and open-sourced kit to help them continuously deploy updates to multiple clusters._
+
+Our Docker journey at InVision may sound familiar. We started with Docker in our development environments, trying to get consistency there first. We wrangled our legacy monolith application into Docker images and streamlined our Dockerfiles to minimize size and amp up efficiency. Things were looking good. Did we learn a lot along the way? For sure. But at the end of it all, we had our entire engineering team working with Docker locally for their development environments. Mission accomplished! Well, not quite. Development was one thing, but moving to production was a whole other ballgame.
+
+**Along Came Kubernetes**
+
+Kubernetes came into our lives during our evaluation of orchestrators and schedulers last December. AWS ECS was still fresh and Docker had just released 1.9 (networking overlay release). We spent the month evaluating our choices, narrowing it down to native Docker tooling (Machine, Swarm, Compose), ECS and Kubernetes. Well, needless to say, Kubernetes was our clear winner and we started the new year moving headlong to leverage Kubernetes to get us to production. But it wasn't long before we ran into a tiny complication...
+
+**Automated Deployments With A Catch**
+
+Here at [InVision](https://www.invisionapp.com/), we have a unique challenge. We don’t have just a single production environment running Kubernetes, but several, all needing automated updates via our CI/CD process. And although the code running on these environments was similar, the configurations were not. Things needed to work smoothly, automatically, as we couldn’t afford to add friction to the deploy process or encumber our engineering teams.
+
+Having several near duplicate clusters could easily turn into a Kubernetes manifest nightmare. Anti-patterns galore, as we copy and paste 95% of the manifests to get a new cluster. Scalable? No. Headache? Yes. Keeping those manifests up-to-date and accurate would be a herculean (and error-prone) task. We needed something easier, something that allows reuse, keeping the maintenance low, and that we could incorporate into our CI/CD system.
+
+So after looking for a project or tooling that could fit our needs, we came up empty. At InVision, we love to create tools to help us solve problems, and figuring we may not be the only team in this situation, we decided to roll up our sleeves and create something of our own. The result is our open-source tool, kit! (short for Kubernetes + git)
+
+**Hello kit!**
+
+![kit logo](https://raw.githubusercontent.com/InVisionApp/kit/master/media/kit-logo-horz-sm.png)
+
+[kit](https://github.com/InVisionApp/kit) is a suite of components that, when plugged into your CI/CD system and source control, allows you to continuously deploy updates (or entirely new services!) to as many clusters as needed, all leveraging webhooks and without having to host an external service.
+
+Using kit’s templating format, you can define your service files once and have them reused across multiple clusters. It works by building on top of your usual Kubernetes manifest files, allowing them to be defined once and then reused across clusters, with only the unique configuration needed for a specific cluster defined separately. This lets you easily build the orchestration for your application and deploy it to as many clusters as needed. It also lets you group variations of your application, so you could have clusters that run the “development” version of your application while others run the “production” version, and so on.
+
+Developers simply commit code to their branches as normal and kit deploys to all clusters running that service. Kit then commits the updated image and tag used for a given service directly to the repository containing all your kit manifest templates. This means any and all changes to your clusters, from environment variables or configurations to image updates, are tracked under source control history, providing you with an audit trail for every cluster you have.
+
+We made all of this Open Source so you can [check out the kit repo](https://github.com/InVisionApp/kit)!
+
+**Is kit Right For Us?**
+
+If you are running Kubernetes across several clusters (or namespaces) all needing to continuously deploy, you bet! Because using kit doesn’t require hosting any external server, your team can leverage the webhooks you probably already have with GitHub and your CI/CD system to get started. From there you create a repo to host your Kubernetes manifest files, which tells kit what services are deployed to which clusters. The complexity of these files is greatly simplified thanks to kit’s templating engine. The kit-image-deployer component is incorporated into the CI/CD process, and whenever a developer commits code to master and the build passes, it’s automatically deployed to all configured clusters.
+
+**So What Are The Components?**
+![kit component flow](https://4.bp.blogspot.com/-BdD0AgQKFWY/V87u5p7uw2I/AAAAAAAAArM/Z6_279MSn2AVDmO192GtPPTuVBbLgsHCQCLcB/s1600/kit.png)
+kit is comprised of several components each building on the next. The general flow is a developer commits code to their repository, an image is built and then kit-image-deployer commits the new image and tag to your manifests repository. From there the kit-deploymentizer runs, parsing all your manifest templates to generate the raw Kubernetes manifest files. Finally the kit-deployer runs and takes all the built manifest files and deploys them to all the appropriate clusters. Here is a summary of the components and the flow:
+
+**[kit-image-deployer](https://github.com/InVisionApp/kit-image-deployer)**
+A service that can be used to update given yaml files within a git repository with a new Docker image path. This can be used in collaboration with kit-deploymentizer and kit-deployer to automatically update the images used for a service across multiple clusters.
+
+[**kit-deploymentizer**](https://github.com/InVisionApp/kit-deploymentizer)
+This service intelligently builds deployment files as to allow reusability of environment variables and other forms of configuration. It also supports aggregating these deployments for multiple clusters. In the end, it generates a list of clusters and a list of deployment files for each of these clusters. Best used in collaboration with kit-deployer and kit-image-deployer to achieve a continuous deployment workflow.
+
+[**kit-deployer**](https://github.com/InVisionApp/kit-deployer)
+Use this service to deploy files to multiple Kubernetes clusters. Just organize your manifest files into directories that match the names of your clusters (the name defined in your kubeconfig files). Then you provide a directory of kubeconfig files and the kit-deployer will asynchronously send all manifests up to their corresponding clusters.
+
+**So What's Next?**
+
+In the near future, we want to make deployments even smarter so as to handle updating things like Mongo replica sets. We also want to add in smart monitoring to further improve on the self-healing nature of Kubernetes. We’re also working on adding additional integrations (such as Slack) and notification methods. And most importantly we’re working towards shifting more control to the individual developers of each service by allowing the kit manifest templates to exist in each individual service repository instead of a single master manifest repository. This will allow them to manage their service completely from development straight to production across all clusters.
+
+We hope you take a closer look at [kit](https://github.com/InVisionApp/kit) and tell us what you think! Check out our [InVision Engineering](http://engineering.invisionapp.com/) blog for more posts about the cool things we are up to at InVision. If you want to work on kit or other interesting things like this, click through to [our jobs page](https://www.invisionapp.com/company#jobs). We'd love to hear from you!
+
+
+_--Chesley Brown, Full-Stack Engineer, at [InVision](https://www.invisionapp.com/)._
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-09-00-High-Performance-Network-Policies-Kubernetes.md b/blog/_posts/2016-09-00-High-Performance-Network-Policies-Kubernetes.md
new file mode 100644
index 00000000000..c7f49433350
--- /dev/null
+++ b/blog/_posts/2016-09-00-High-Performance-Network-Policies-Kubernetes.md
@@ -0,0 +1,194 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " High performance network policies in Kubernetes clusters "
+date: Thursday, September 21, 2016
+pagination:
+ enabled: true
+---
+_Editor's note: today’s post is by Juergen Brendel, Pritesh Kothari and Chris Marino, co-founders of Pani Networks, the sponsor of the Romana project, the network policy software used for these benchmark tests._
+
+
+
+**Network Policies**
+
+
+
+Since the release of Kubernetes 1.3 back in July, users have been able to define and enforce network policies in their clusters. These policies are firewall rules that specify permissible types of traffic to, from and between pods. If requested, Kubernetes blocks all traffic that is not explicitly allowed. Policies are applied to groups of pods identified by common labels. Labels can then be used to mimic traditional segmented networks often used to isolate layers in a multi-tier application: You might identify your front-end and back-end pods by a specific “segment” label, for example. Policies control traffic between those segments and even traffic to or from external sources.
+
+
+
+**Segmenting traffic**
+
+
+
+What does this mean for the application developer? At last, Kubernetes has gained the necessary capabilities to provide "[defence in depth](https://en.wikipedia.org/wiki/Defense_in_depth_(computing))". Traffic can be segmented and different parts of your application can be secured independently. For example, you can very easily protect each of your services via specific network policies: All the pods identified by a [Replication Controller](http://kubernetes.io/docs/user-guide/replication-controller/) behind a service are already identified by a specific label. Therefore, you can use this same label to apply a policy to those pods.
+
+
+
+Defense in depth has long been recommended as best [practice](http://blog.kubernetes.io/2016/08/security-best-practices-kubernetes-deployment.html). This kind of isolation between different parts or layers of an application is easily achieved on AWS and OpenStack by applying security groups to VMs.
+
+
+
+However, prior to network policies, this kind of isolation for containers was not possible. VXLAN overlays can provide simple network isolation, but application developers need more fine grained control over the traffic accessing pods. As you can see in this simple example, Kubernetes network policies can manage traffic based on source and origin, protocol and port.
+
+
+
+
+
+```
+apiVersion: extensions/v1beta1
+kind: NetworkPolicy
+metadata:
+  name: pol1
+spec:
+  podSelector:
+    matchLabels:
+      role: backend
+  ingress:
+  - from:
+    - podSelector:
+        matchLabels:
+          role: frontend
+    ports:
+    - protocol: TCP
+      port: 80
+```
+
+
+
+
+
+**Not all network backends support policies**
+
+
+
+Network policies are an exciting feature, which the Kubernetes community has worked on for a long time. However, it requires a networking backend that is capable of applying the policies. By themselves, simple routed networks or the commonly used [flannel](https://github.com/coreos/flannel) network driver, for example, cannot apply network policy.
+
+
+
+There are only a few policy-capable networking backends available for Kubernetes today: [Romana](http://romana.io/), [Calico](http://projectcalico.org/), and [Canal](https://github.com/tigera/canal); with [Weave](http://www.weave.works/) indicating support in the near future. Red Hat’s OpenShift includes network policy features as well.
+
+
+
+We chose Romana as the back-end for these tests because it configures pods to use natively routable IP addresses in a full L3 configuration. Network policies, therefore, can be applied directly by the host in the Linux kernel using iptables rules. This results in a high-performance, easy-to-manage network.
+
+
+
+**Testing performance impact of network policies**
+
+
+
+After network policies have been applied, network packets need to be checked against those policies to verify that this type of traffic is permissible. But what is the performance penalty for applying a network policy to every packet? Can we use all the great policy features without impacting application performance? We decided to find out by running some tests.
+
+
+
+Before we dive deeper into these tests, it is worth mentioning that ‘performance’ is a tricky thing to measure, network performance especially so.
+
+
+
+_Throughput_ (i.e. data transfer speed measured in Gbps) and _latency_ (time to complete a request) are common measures of network performance. The performance impact of running an overlay network on throughput and latency has been examined previously [here](https://smana.kubespray.io/index.php/posts/kubernetes-net-bench) and [here](http://machinezone.github.io/research/networking-solutions-for-kubernetes/). What we learned from these tests is that Kubernetes networks are generally pretty fast, and servers have no trouble saturating a 1G link, with or without an overlay. It's only when you have 10G networks that you need to start thinking about the overhead of encapsulation.
+
+
+
+This is because during a typical network performance benchmark, there’s no application logic for the host CPU to perform, leaving it available for whatever network processing is required. **_For this reason we ran our tests in an operating range that did not saturate the link, or the CPU. This has the effect of isolating the impact of processing network policy rules on the host_**. For these tests we decided to measure latency as measured by the average time required to complete an HTTP request across a range of response sizes.
+
+
+
+
+
+**Test setup**
+
+- Hardware: Two servers with Intel Core i5-5250U CPUs (2 core, 2 threads per core) running at 1.60GHz, 16GB RAM and 512GB SSD. NIC: Intel Ethernet Connection I218-V (rev 03)
+- Ubuntu 14.04.5
+- Kubernetes 1.3 for data collection (verified samples on v1.4.0-beta.5)
+- [Romana v0.9.3.1](https://github.com/romana/romana)
+- Client and server load test [software](https://github.com/paninetworks/testing-tools)
+
+For the tests we had a client pod send 2,000 HTTP requests to a server pod. HTTP requests were sent by the client pod at a rate that ensured that neither the server nor network ever saturated. We also made sure each request started a new TCP session by disabling persistent connections (i.e. HTTP [keep-alive](https://en.wikipedia.org/wiki/HTTP_persistent_connection)). We ran each test with different response sizes and measured the average request duration time (how long does it take to complete a request of that size). Finally, we repeated each set of measurements with different policy configurations.
+
+
+
+Romana detects Kubernetes network policies when they’re created, translates them to Romana’s own policy format, and then applies them on all hosts. Currently, Kubernetes network policies only apply to ingress traffic. This means that outgoing traffic is not affected.
+
+First, we conducted the test without any policies to establish a baseline. We then ran the test again with increasing numbers of policies for the test's network segment. The policies were of the common “allow traffic for a given protocol and port” format. To ensure packets had to traverse all the policies, we created a number of policies that did not match the packet, and finally a policy that would result in acceptance of the packet.
+
+
+
+The table below shows the results, measured in milliseconds for different request sizes and numbers of policies:
+
+
+
+Response Size
+
+|Policies |.5k |1k |10k |100k |1M |
+|--|--|--|--|--|--|
+|0 |0.732 |0.738 |1.077 |2.532 |10.487 |
+|10 |0.744 |0.742 |1.084 |2.570 |10.556 |
+|50 |0.745 |0.755 |1.086 |2.580 |10.566 |
+|100 |0.762 |0.770 |1.104 |2.640 |10.597 |
+|200 |0.783 |0.783 |1.147 |2.652 |10.677 |
+{: .post-table}
+
+
+What we see here is that, as the number of policies increases, processing network policies introduces a very small delay, never more than 0.2ms, even after applying 200 policies. For all practical purposes, no meaningful delay is introduced when network policy is applied. Also worth noting is that doubling the response size from 0.5k to 1.0k had virtually no effect. This is because for very small responses, the fixed overhead of creating a new connection dominates the overall response time (i.e. the same number of packets are transferred).
+
+
+
+_(Chart: average request duration vs. response size for 0 to 200 policies; the .5k and 1k lines overlap at ~0.8ms.)_
+
+
+
+Even as a percentage of baseline performance, the impact is still very small. The table below shows that for the smallest response sizes, the worst-case delay remains 7% or less, up to 200 policies. For the larger response sizes the delay drops to about 1%.
+
+
+
+
+
+Response Size
+
+|Policies | .5k | 1k | 10k | 100k | 1M |
+|--|--|--|--|--|---|
+| 0 | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% |
+| 10 | -1.6% | -0.5% | -0.6% | -1.5% | -0.7% |
+| 50 | -1.8% | -2.3% | -0.8% | -1.9% | -0.8% |
+| 100 | -4.1% | -4.3% | -2.5% | -4.3% | -1.0% |
+| 200 | -7.0% | -6.1% | -6.5% | -4.7% | -1.8% |
+{: .post-table}
+
+
+
+
+
+
+
+
+
+Also interesting in these results is that, as the number of policies increases, larger requests experience a smaller relative (i.e. percentage) performance degradation.
+
+
+
+This is because when Romana installs iptables rules, it ensures that packets belonging to established connections are evaluated first. The full list of policies only needs to be traversed for the first packets of a connection. After that, the connection is considered ‘established’ and the connection’s state is stored in a fast lookup table. For larger requests, therefore, most packets of the connection are processed with a quick lookup in the ‘established’ table, rather than a full traversal of all rules. This iptables optimization results in performance that is largely independent of the number of network policies.
+
+
+
+Such ‘flow tables’ are common optimizations in network equipment and it seems that iptables uses the same technique quite effectively.
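+
+As a rough sketch of the underlying idiom (generic iptables, not Romana's exact rule set):
+
+```
+# Packets of already-established connections are accepted by a conntrack
+# match before any policy rules are evaluated:
+iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
+# Only a connection's first packets fall through to the policy rules below,
+# so the cost of rule traversal is paid once per connection, not per packet.
+```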
+
+
+
+It’s also worth noting that in practice, a reasonably complex application may configure a few dozen rules per segment. It is also true that common network optimization techniques like WebSockets and persistent connections will improve the performance of network policies even further (especially for small request sizes), since connections are held open longer and can therefore benefit from the established connection optimization.
+
+
+
+These tests were performed using Romana as the backend policy provider, and other network policy implementations may yield different results. However, what these tests show is that for almost every application deployment scenario, network policies can be applied using Romana as a network backend without any negative impact on performance.
+
+
+
+If you wish to try it for yourself, we invite you to check out [Romana](http://romana.io/). In our [GitHub repo](https://github.com/romana/romana) you can find an easy-to-use installer, which works with AWS, Vagrant VMs or any other servers. You can use it to get started quickly with a Romana-powered Kubernetes or OpenStack cluster.
diff --git a/blog/_posts/2016-09-00-How-Qbox-Saved-50-Percent-On-Aws-Bills.md b/blog/_posts/2016-09-00-How-Qbox-Saved-50-Percent-On-Aws-Bills.md
new file mode 100644
index 00000000000..21d3adf1acd
--- /dev/null
+++ b/blog/_posts/2016-09-00-How-Qbox-Saved-50-Percent-On-Aws-Bills.md
@@ -0,0 +1,46 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " How Qbox Saved 50% per Month on AWS Bills Using Kubernetes and Supergiant "
+date: Wednesday, September 27, 2016
+pagination:
+ enabled: true
+---
+_Editor’s Note: Today’s post is by the team at Qbox, a hosted Elasticsearch provider, sharing their experience with Kubernetes and how it helped them cut their cloud bill in half._
+
+A little over a year ago, we at Qbox faced an existential problem. Just about all of the major IaaS providers either launched or acquired services that competed directly with our [Hosted Elasticsearch](https://qbox.io/) service, and many of them started offering it for free. The race to zero was afoot unless we could re-engineer our infrastructure to be more performant, more stable, and less expensive than our previous VM-based approach, the same approach used by our IaaS brethren. With the help of Kubernetes, Docker, and Supergiant (our own hand-rolled layer for managing distributed and stateful data), we were able to deliver 50% savings, a mid-five figure sum. At the same time, support tickets plummeted. We were so pleased with the results that we decided to [open source Supergiant](https://github.com/supergiant/supergiant) as its own standalone product. This post will demonstrate how we accomplished it.
+
+Back in 2013, when not many were even familiar with Elasticsearch, we launched our as-a-service offering with a dedicated, direct VM model. We hand-selected certain instance types optimized for Elasticsearch, and users configured single-tenant, multi-node clusters running on isolated virtual machines in any region. We added a markup on the per-compute-hour price for the DevOps support and monitoring, and all was right with the world for a while as Elasticsearch became the global phenomenon that it is today.
+
+**Background**
+As we grew to thousands of clusters, and many more thousands of nodes, it wasn’t just our AWS bill getting out of hand. We had 4 engineers replacing dead nodes and answering support tickets all hours of the day, every day. What made matters worse was the volume of resources allocated compared to the usage. We had thousands of servers with a collective CPU utilization under 5%. We were spending too much on processors that were doing absolutely nothing.
+
+How we got there was no great mystery. VMs are a finite resource, and with a very compute-intensive, burstable application like Elasticsearch, we were constantly juggling users who would either undersize their clusters to save money or over-provision and overspend. When the aforementioned competitive pressures forced our hand, we had to re-evaluate everything.
+
+**Adopting Docker and Kubernetes**
+Our team avoided Docker for a while, probably on the vague assumption that the network and disk performance we had with VMs wouldn't be possible with containers. That assumption turned out to be entirely wrong.
+
+To run performance tests, we had to find a system that could manage networked containers and volumes. That's when we discovered Kubernetes. It was alien to us at first, but by the time we had familiarized ourselves and built a performance testing tool, we were sold. It was not just as good as before, it was better.
+
+The performance improvement we observed was due to the number of containers we could “pack” on a single machine. Ironically, we began the Docker experiment wanting to avoid “noisy neighbor,” which we assumed was inevitable when several containers shared the same VM. However, that isolation also acted as a bottleneck, both in performance and cost. To use a real-world example, if a machine has 2 cores and you need 3 cores, you have a problem. It’s rare to come across a public-cloud VM with 3 cores, so the typical solution is to buy 4 cores and not utilize them fully.
+
+This is where Kubernetes really starts to shine. It has the concept of [requests and limits](http://kubernetes.io/docs/user-guide/compute-resources/), which provides granular control over resource sharing. Multiple containers can share an underlying host VM without the fear of “noisy neighbors”. They can request exclusive control over an amount of RAM, for example, and they can define a limit in anticipation of overflow. It’s practical, performant, and cost-effective multi-tenancy. We were able to deliver the best of both the single-tenant and multi-tenant worlds.
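+
+As a minimal sketch of what this looks like on a container spec (names and values are illustrative, not Qbox's actual settings):
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: es-node
+spec:
+  containers:
+  - name: elasticsearch
+    image: elasticsearch:2.4
+    resources:
+      requests:            # guaranteed share, used for scheduling
+        cpu: "1"
+        memory: "4Gi"
+      limits:              # ceiling for bursting
+        cpu: "2"
+        memory: "6Gi"
+```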
+
+**Kubernetes + Supergiant**
+We built [Supergiant](https://supergiant.io/) originally for our own Elasticsearch customers. Supergiant solves Kubernetes complications by allowing pre-packaged and re-deployable application topologies. In more specific terms, Supergiant lets you use Components, which are somewhat similar to a microservice. Components represent an almost-uniform set of Instances of software (e.g., Elasticsearch, MongoDB, your web application, etc.). They roll up all the various Kubernetes and cloud operations needed to deploy a complex topology into a compact entity that is easy to manage.
+
+For Qbox, we went from needing 1:1 nodes to approximately 1:11 nodes. Sure, the nodes were larger, but the utilization made a substantial difference. As in the picture below, we could cram a whole bunch of little instances onto one big instance and not lose any performance. Smaller users would get the added benefit of higher network throughput by virtue of being on bigger resources, and they would also get greater CPU and RAM bursting.
+
+_(Image: many small instances packed onto one large instance.)_
+
+**Adding Up the Cost Savings**
+The [packing algorithm](https://supergiant.io/blog/supergiant-packing-algorithm-unique-save-money) in Supergiant, with its increased utilization, resulted in an immediate 25% drop in our infrastructure footprint. Remember, this came with better performance and fewer support tickets. We could dial up the packing algorithm and probably save even more money. Meanwhile, because our nodes were larger and far more predictable, we could much more fully leverage the economic goodness that is AWS Reserved Instances. We went with 1-year partial RIs, which cut the remaining costs by 40%, give or take. Our customers still had the flexibility to spin up, down, and out their Elasticsearch nodes, without forcing us to constantly juggle, combine, split, and recombine our reservations. At the end of the day, we saved 50%. That is $600k per year that can go towards engineering salaries instead of enriching our IaaS provider.
+
+
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-09-00-How-We-Made-Kubernetes-Easy-To-Install.md b/blog/_posts/2016-09-00-How-We-Made-Kubernetes-Easy-To-Install.md
new file mode 100644
index 00000000000..841ccc49072
--- /dev/null
+++ b/blog/_posts/2016-09-00-How-We-Made-Kubernetes-Easy-To-Install.md
@@ -0,0 +1,63 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " How we made Kubernetes insanely easy to install "
+date: Thursday, September 28, 2016
+pagination:
+ enabled: true
+---
+
+_Editor's note: Today’s post is by [Luke Marsden](https://twitter.com/lmarsden), Head of Developer Experience, at Weaveworks, showing the Special Interest Group Cluster-Lifecycle’s recent work on kubeadm, a tool to make installing Kubernetes much simpler._
+
+Over at [SIG-cluster-lifecycle](https://github.com/kubernetes/community/blob/master/sig-cluster-lifecycle/README.md), we've been hard at work the last few months on kubeadm, a tool that makes Kubernetes dramatically easier to install. We've heard from users that installing Kubernetes is harder than it should be, and we want folks to be focused on writing great distributed apps, not wrangling with infrastructure!
+
+There are three stages in setting up a Kubernetes cluster, and we decided to focus on the second two (to begin with):
+
+1. **Provisioning** : getting some machines
+2. **Bootstrapping** : installing Kubernetes on them and configuring certificates
+3. **Add-ons** : installing necessary cluster add-ons like DNS and monitoring services, a pod network, etc
+
+We realized early on that there's enormous variety in the way that users want to **provision** their machines.
+
+They use lots of different cloud providers, private clouds, bare metal, or even Raspberry Pi's, and almost always have their own preferred tools for automating provisioning machines: Terraform or CloudFormation, Chef, Puppet or Ansible, or even PXE booting bare metal. So we made an important decision: **kubeadm would not provision machines**. Instead, the only assumption it makes is that the user has some [computers running Linux](http://kubernetes.io/docs/getting-started-guides/kubeadm/#prerequisites).
+
+Another important constraint was we didn't want to just build another tool that "configures Kubernetes from the outside, by poking all the bits into place". There are many external projects out there for doing this, but we wanted to aim higher. We chose to actually improve the Kubernetes core itself to make it easier to install. Luckily, a lot of the groundwork for making this happen had already been started.
+
+We realized that if we made Kubernetes insanely easy to install manually, it should be obvious to users how to automate that process using any tooling.
+
+So, enter [kubeadm](http://kubernetes.io/docs/getting-started-guides/kubeadm/). It has no infrastructure dependencies, and satisfies the requirements above. It's easy to use and should be easy to automate. It's still in **alpha**, but it works like this:
+
+- You install Docker and the official Kubernetes packages for your distribution.
+- Select a master host, run kubeadm init.
+- This sets up the control plane and outputs a kubeadm join [...] command which includes a secure token.
+- On each host selected to be a worker node, run the kubeadm join [...] command from above.
+- Install a pod network. [Weave Net](https://github.com/weaveworks/weave-kube) is a great place to start here. Install it using just kubectl apply -f https://git.io/weave-kube
+
+Presto! You have a working Kubernetes cluster! [Try kubeadm today](http://kubernetes.io/docs/getting-started-guides/kubeadm/).
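+
+Put together, the flow looks roughly like this (a sketch; the exact join command, including its secure token, is printed by kubeadm init):
+
+```
+# On the master:
+$ kubeadm init
+# ...which prints a join command along the lines of:
+#   kubeadm join --token <token> <master-ip>
+
+# On each worker node, paste the printed command:
+$ kubeadm join --token <token> <master-ip>
+
+# Finally, install a pod network, e.g. Weave Net:
+$ kubectl apply -f https://git.io/weave-kube
+```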
+
+For a video walkthrough, check this out:
+
+
+
+Follow the [kubeadm getting started guide](http://kubernetes.io/docs/getting-started-guides/kubeadm/) to try it yourself, and please give us [feedback on GitHub](https://github.com/kubernetes/kubernetes/issues/new), mentioning **@kubernetes/sig-cluster-lifecycle**!
+
+Finally, I want to give a huge shout-out to so many people in the SIG-cluster-lifecycle, without whom this wouldn't have been possible. I'll mention just a few here:
+
+
+- [Joe Beda](https://twitter.com/jbeda) kept us focused on keeping things simple for the user.
+- [Mike Danese](https://twitter.com/mikedanese) at Google has been an incredible technical lead and always knows what's happening. Mike also tirelessly kept up on the many code reviews necessary.
+- [Ilya Dmitrichenko](https://twitter.com/errordeveloper), my colleague at Weaveworks, wrote most of the kubeadm code and also kindly helped other folks contribute.
+- [Lucas Käldström](https://twitter.com/kubernetesonarm) from Finland has got to be the youngest contributor in the group and was merging last-minute pull requests on the Sunday night before his school math exam.
+- [Brandon Philips](https://twitter.com/brandonphilips) and his team at CoreOS led the development of TLS bootstrapping, an essential component which we couldn't have done without.
+- [Devan Goodwin](https://twitter.com/dgood) from Red Hat built the JWS discovery service that Joe imagined and sorted out our RPMs.
+- [Paulo Pires](https://twitter.com/el_ppires) from Portugal jumped in to help out with external etcd support and picked up lots of other bits of work.
+- And many other contributors!
+
+This truly has been an excellent cross-company and cross-timezone achievement, with a lovely bunch of people. There's lots more work to do in SIG-cluster-lifecycle, so if you’re interested in these challenges join our SIG. Looking forward to collaborating with you all!
+
+_--[Luke Marsden](https://twitter.com/lmarsden), Head of Developer Experience at [Weaveworks](https://twitter.com/weaveworks)_
+
+
+- Try [kubeadm](http://kubernetes.io/docs/getting-started-guides/kubeadm/) to install Kubernetes today
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-09-00-Kubernetes-1.4-Making-It-Easy-To-Run-On-Kuberentes-Anywhere.md b/blog/_posts/2016-09-00-Kubernetes-1.4-Making-It-Easy-To-Run-On-Kuberentes-Anywhere.md
new file mode 100644
index 00000000000..936226e19a1
--- /dev/null
+++ b/blog/_posts/2016-09-00-Kubernetes-1.4-Making-It-Easy-To-Run-On-Kuberentes-Anywhere.md
@@ -0,0 +1,88 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes 1.4: Making it easy to run on Kubernetes anywhere "
+date: Tuesday, September 26, 2016
+
+---
+Today we’re happy to announce the release of Kubernetes 1.4.
+
+Since the release to general availability just over 15 months ago, Kubernetes has continued to grow and achieve broad adoption across the industry. From brand new startups to large-scale businesses, users have described how big a difference Kubernetes has made in building, deploying and managing distributed applications. However, one of our top user requests has been making Kubernetes itself easier to install and use. We’ve taken that feedback to heart, and 1.4 has several major improvements.
+
+These setup and usability enhancements are the result of concerted, coordinated work across the community - more than 20 contributors from SIG-Cluster-Lifecycle came together to greatly simplify the Kubernetes user experience, covering improvements to installation, startup, certificate generation, discovery, networking, and application deployment.
+
+Additional product highlights in this release include simplified cluster deployment on any cloud, easy installation of stateful apps, and greatly expanded Cluster Federation capabilities, enabling a straightforward deployment across multiple clusters, and multiple clouds.
+
+**What’s new:**
+
+**Cluster creation with two commands -** To get started with Kubernetes, a user must provision nodes, install Kubernetes and bootstrap the cluster. A common request from users is to have an easy, portable way to do this on any cloud (public, private, or bare metal).
+
+
+- Kubernetes 1.4 introduces ‘[kubeadm](http://kubernetes.io/docs/getting-started-guides/kubeadm/)’ which reduces bootstrapping to two commands, with no complex scripts involved. Once Kubernetes is installed, kubeadm init starts the master while kubeadm join joins the nodes to the cluster.
+- Installation is also streamlined by packaging Kubernetes with its dependencies, for most major Linux distributions including Red Hat and Ubuntu Xenial. This means users can now install Kubernetes using familiar tools such as apt-get and yum.
+- Add-on deployments, such as for an overlay network, can be reduced to one command by using a [DaemonSet](http://kubernetes.io/docs/admin/daemons/).
+- Enabling this simplicity is a new certificates API and its use for kubelet [TLS bootstrap](http://kubernetes.io/docs/admin/master-node-communication/#kubelet-tls-bootstrap), as well as a new discovery API.
+
+**Expanded stateful application support -** While cloud-native applications are built to run in containers, many existing applications need additional features to make it easy to adopt containers. Most commonly, these include stateful applications such as batch processing, databases and key-value stores. In Kubernetes 1.4, we have introduced a number of features simplifying the deployment of such applications, including:
+
+
+- [ScheduledJob](http://kubernetes.io/docs/user-guide/scheduled-jobs/) is introduced as Alpha so users can run batch jobs at regular intervals (see the sketch after this list).
+- Init-containers are Beta, addressing the need to run one or more containers before starting the main application, for example to sequence dependencies when starting a database or multi-tier app.
+- [Dynamic PVC Provisioning](http://kubernetes.io/docs/user-guide/persistent-volumes/) moved to Beta. This feature now enables cluster administrators to expose multiple storage provisioners and allows users to select them using a new Storage Class API object.
+- Curated and pre-tested [Helm charts](https://github.com/kubernetes/charts) for common stateful applications such as MariaDB, MySQL and Jenkins will be available for one-command launches using version 2 of the Helm Package Manager.
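+
+As a sketch of the new ScheduledJob alpha resource, here is a job that runs once a minute (the name and command are illustrative):
+
+```
+apiVersion: batch/v2alpha1
+kind: ScheduledJob
+metadata:
+  name: hello
+spec:
+  schedule: "*/1 * * * *"        # standard cron syntax
+  jobTemplate:
+    spec:
+      template:
+        spec:
+          containers:
+          - name: hello
+            image: busybox
+            command: ["/bin/sh", "-c", "date; echo Hello from Kubernetes"]
+          restartPolicy: OnFailure
+```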
+
+**Cluster federation API additions -** One of the most requested capabilities from our global customers has been the ability to build applications with clusters that span regions and clouds.
+
+
+- [Federated Replica Sets](http://kubernetes.io/docs/user-guide/federation/replicasets/) Beta - replicas can now span some or all clusters enabling cross region or cross cloud replication. The total federated replica count and relative cluster weights / replica counts are continually reconciled by a federated replica-set controller to ensure you have the pods you need in each region / cloud.
+- Federated Services are now Beta, and [secrets](http://kubernetes.io/docs/user-guide/federation/secrets/), [events](http://kubernetes.io/docs/user-guide/federation/events) and [namespaces](http://kubernetes.io/docs/user-guide/federation/namespaces) have also been added to the federation API.
+- [Federated Ingress](http://kubernetes.io/docs/user-guide/federation/federated-ingress/) Alpha - starting with Google Cloud Platform (GCP), users can create a single L7 globally load balanced VIP that spans services deployed across a federation of clusters within GCP. With Federated Ingress in GCP, external clients point to a single IP address and are sent to the closest cluster with usable capacity in any region or zone of the federation in GCP.
+
+**Container security support -** Administrators of multi-tenant clusters require the ability to provide varying sets of permissions among tenants, infrastructure components, and end users of the system.
+
+
+- [Pod Security Policy](http://kubernetes.io/docs/user-guide/pod-security-policy/) is a new object that enables cluster administrators to control the creation and validation of security contexts for pods/containers. Admins can associate service accounts, groups, and users with a set of constraints to define a security context.
+- [AppArmor](http://kubernetes.io/docs/admin/apparmor/) support is added, enabling admins to run a more secure deployment, and provide better auditing and monitoring of their systems. Users can configure a container to run in an AppArmor profile by setting a single field (see the sketch below).
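+
+In the beta implementation, that field is a pod annotation keyed by container name; a minimal sketch (the profile name is illustrative and must already be loaded on the node):
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: hello-apparmor
+  annotations:
+    # The "hello" container runs under the referenced AppArmor profile:
+    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
+spec:
+  containers:
+  - name: hello
+    image: busybox
+    command: ["sh", "-c", "echo 'Hello AppArmor!' && sleep 1h"]
+```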
+
+**Infrastructure enhancements -** We continue adding to the scheduler, storage and client capabilities in Kubernetes based on user and ecosystem needs.
+
+
+- Scheduler - introducing [inter-pod affinity and anti-affinity](http://kubernetes.io/docs/user-guide/node-selection/) Alpha for users who want to customize how Kubernetes co-locates or spreads their pods. Also [priority scheduling capability for cluster add-ons](http://kubernetes.io/docs/admin/rescheduler/#guaranteed-scheduling-of-critical-add-on-pods) such as DNS, Heapster, and the Kube Dashboard.
+- Disruption SLOs - Pod Disruption Budget is introduced to limit impact of pods deleted by cluster management operations (such as node upgrade) at any one time.
+- Storage - New [volume plugins](http://kubernetes.io/docs/user-guide/volumes/) for Quobyte and Azure Data Disk have been added.
+- Clients - Swagger 2.0 support is added, enabling non-Go clients.
+
+**Kubernetes Dashboard UI -** lastly, a great looking Kubernetes [Dashboard UI](https://github.com/kubernetes/dashboard#kubernetes-dashboard) with 90% CLI parity for at-a-glance management.
+
+For a complete list of updates see the [release notes](https://github.com/kubernetes/kubernetes/pull/33410) on GitHub. Apart from features, the most impressive aspect of Kubernetes development is the community of contributors. This is particularly true of the 1.4 release, the full breadth of which will unfold in upcoming weeks.
+
+**Availability**
+Kubernetes 1.4 is available for download at [get.k8s.io](http://get.k8s.io/) and via the open source repository hosted on [GitHub](http://github.com/kubernetes/kubernetes). To get started with Kubernetes try the [Hello World app](http://kubernetes.io/docs/hellonode/).
+
+To get involved with the project, join the [weekly community meeting](https://groups.google.com/forum/#!forum/kubernetes-community-video-chat) or start contributing to the project by picking up GitHub issues marked as needing help.
+
+**Users and Case Studies**
+Over the past fifteen months since the Kubernetes 1.0 GA release, the [adoption and enthusiasm](http://kubernetes.io/case-studies/) for this project has surpassed everyone's imagination. Kubernetes runs in production at hundreds of organizations and thousands more are in development. Here are a few unique highlights of companies running Kubernetes:
+
+
+- **[Box](https://www.box.com/) --** cut the time to launch a new service from six months to less than a week. [Read more](https://blog.box.com/blog/kubernetes-box-microservices-maximum-velocity/) on how Box runs mission critical production services on Kubernetes.
+- **[Pearson](https://www.pearson.com/) --** minimized complexity and increased their engineer productivity. [Read how](http://kubernetes.io/case-studies/pearson) Pearson is using Kubernetes to reinvent the world’s largest educational company.
+- **[OpenAI](https://openai.com/) --** a non-profit artificial intelligence research company, built [infrastructure for deep learning](https://openai.com/blog/infrastructure-for-deep-learning/) with Kubernetes to maximize productivity for researchers allowing them to focus on the science.
+
+We’re very grateful to our community of over 900 contributors who contributed more than 5,000 commits to make this release possible. To get a closer look on how the community is using Kubernetes, join us at the user conference [KubeCon](http://events.linuxfoundation.org/events/kubecon) to hear directly from users and contributors.
+
+**Connect**
+
+
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+
+Thank you for your support!
+
+_-- Aparna Sinha, Product Manager, Google_
+
+
+
+
+
diff --git a/blog/_posts/2016-10-00-Dynamic-Provisioning-And-Storage-In-Kubernetes.md b/blog/_posts/2016-10-00-Dynamic-Provisioning-And-Storage-In-Kubernetes.md
new file mode 100644
index 00000000000..c59f53c269a
--- /dev/null
+++ b/blog/_posts/2016-10-00-Dynamic-Provisioning-And-Storage-In-Kubernetes.md
@@ -0,0 +1,192 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Dynamic Provisioning and Storage Classes in Kubernetes "
+date: Saturday, October 07, 2016
+pagination:
+ enabled: true
+---
+
+Storage is a critical part of running containers, and Kubernetes offers some powerful primitives for managing it. Dynamic volume provisioning, a feature unique to Kubernetes, allows storage volumes to be created on-demand. Without dynamic provisioning, cluster administrators have to manually make calls to their cloud or storage provider to create new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes. The dynamic provisioning feature eliminates the need for cluster administrators to pre-provision storage. Instead, it automatically provisions storage when it is requested by users. This feature was introduced as alpha in Kubernetes 1.2, and has been improved and promoted to beta in the [latest release, 1.4](http://blog.kubernetes.io/2016/09/kubernetes-1.4-making-it-easy-to-run-on-kuberentes-anywhere.html). This release makes dynamic provisioning far more flexible and useful.
+
+**What’s New?**
+
+The alpha version of dynamic provisioning only allowed a single, hard-coded provisioner to be used in a cluster at once. This meant that when Kubernetes determined storage needed to be dynamically provisioned, it always used the same volume plugin to do provisioning, even if multiple storage systems were available on the cluster. The provisioner to use was inferred based on the cloud environment - EBS for AWS, Persistent Disk for Google Cloud, Cinder for OpenStack, and vSphere Volumes on vSphere. Furthermore, the parameters used to provision new storage volumes were fixed: only the storage size was configurable. This meant that all dynamically provisioned volumes would be identical, except for their storage size, even if the storage system exposed other parameters (such as disk type) for configuration during provisioning.
+
+Although the alpha version of the feature was limited in utility, it allowed us to “get some miles” on the idea, and helped determine the direction we wanted to take.
+
+The beta version of dynamic provisioning, new in Kubernetes 1.4, introduces a [new API object](http://kubernetes.io/docs/user-guide/persistent-volumes/#storageclasses), StorageClass. Multiple StorageClass objects can be defined, each specifying a volume plugin (aka provisioner) to use to provision a volume and the set of parameters to pass to that provisioner when provisioning. This design allows cluster administrators to define and expose multiple flavors of storage (from the same or different storage systems) within a cluster, each with a custom set of parameters. This design also ensures that end users don’t have to worry about the complexity and nuances of how storage is provisioned, but still have the ability to select from multiple storage options.
+
+**How Do I use It?**
+
+Below is an example of how a cluster administrator would expose two tiers of storage, and how a user would select and use one. For more details, see the [reference](http://kubernetes.io/docs/user-guide/persistent-volumes/#storageclasses) and [example](https://github.com/kubernetes/kubernetes/tree/release-1.4/examples/experimental/persistent-volume-provisioning) docs.
+
+**Admin Configuration**
+
+The cluster admin defines and deploys two StorageClass objects to the Kubernetes cluster:
+
+```
+kind: StorageClass
+apiVersion: storage.k8s.io/v1beta1
+metadata:
+  name: slow
+provisioner: kubernetes.io/gce-pd
+parameters:
+  type: pd-standard
+```
+
+
+This creates a storage class called “slow” which will provision standard disk-like Persistent Disks.
+
+```
+kind: StorageClass
+apiVersion: storage.k8s.io/v1beta1
+metadata:
+  name: fast
+provisioner: kubernetes.io/gce-pd
+parameters:
+  type: pd-ssd
+```
+
+
+
+This creates a storage class called “fast” which will provision SSD-like Persistent Disks.
+
+
+
+**User Request**
+
+
+
+Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. For the beta version of this feature, this is done via the volume.beta.kubernetes.io/storage-class annotation. The value of this annotation must match the name of a StorageClass configured by the administrator.
+
+
+
+To select the “fast” storage class, for example, a user would create the following PersistentVolumeClaim:
+
+
+
+```
+{
+  "kind": "PersistentVolumeClaim",
+  "apiVersion": "v1",
+  "metadata": {
+    "name": "claim1",
+    "annotations": {
+      "volume.beta.kubernetes.io/storage-class": "fast"
+    }
+  },
+  "spec": {
+    "accessModes": [
+      "ReadWriteOnce"
+    ],
+    "resources": {
+      "requests": {
+        "storage": "30Gi"
+      }
+    }
+  }
+}
+```
+
+
+
+
+This claim will result in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume will be destroyed.
+
+
+
+
+**Defaulting Behavior**
+
+
+
+Dynamic Provisioning can be enabled for a cluster such that all claims are dynamically provisioned without a storage class annotation. A cluster administrator enables this behavior by marking one StorageClass object as the “default”. A StorageClass can be marked as default by adding the storageclass.beta.kubernetes.io/is-default-class annotation to it.
+
+
+
+When a default StorageClass exists and a user creates a PersistentVolumeClaim without a storage-class annotation, the new [DefaultStorageClass](https://github.com/kubernetes/kubernetes/pull/30900) admission controller (also introduced in v1.4), automatically adds the class annotation pointing to the default storage class.
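+
+For example, the “fast” class defined earlier could be marked as the default like so (a sketch; the annotation can equally be set directly in the object's manifest):
+
+```
+$ kubectl annotate storageclass fast storageclass.beta.kubernetes.io/is-default-class="true"
+```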
+
+
+
+**Can I Still Use the Alpha Version?**
+
+
+
+
+Kubernetes 1.4 maintains backwards compatibility with the alpha version of the dynamic provisioning feature to allow for a smoother transition to the beta version. The alpha behavior is triggered by the existence of the alpha dynamic provisioning annotation (volume.**alpha**.kubernetes.io/storage-class). Keep in mind that if the beta annotation (volume.**beta**.kubernetes.io/storage-class) is present, it takes precedence, and triggers the beta behavior.
+
+
+
+Support for the [alpha version](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api_changes.md#alpha-beta-and-stable-versions) is deprecated and will be removed in a future release.
+
+
+
+**What’s Next?**
+
+
+
+Dynamic Provisioning and Storage Classes will continue to evolve and be refined in future releases. Below are some areas under consideration for further development.
+
+
+
+**Standard Cloud Provisioners**
+
+For deployment of Kubernetes to cloud providers, we are [considering](https://github.com/kubernetes/kubernetes/pull/31617/files) automatically creating a provisioner for the cloud’s native storage system. This means that a standard deployment on AWS would result in a StorageClass that provisions EBS volumes, and a standard deployment on Google Cloud would result in a StorageClass that provisions GCE PDs. It is also being debated whether these provisioners should be marked as default, which would make dynamic provisioning the default behavior (no annotation required).
+
+
+
+**Out-of-Tree Provisioners**
+
+There has been ongoing discussion about whether Kubernetes storage plugins should live “in-tree” or “out-of-tree”. While the details of how to implement out-of-tree plugins are still up in the air, there is [a proposal](https://github.com/kubernetes/kubernetes/pull/30285) introducing a standardized way to implement out-of-tree dynamic provisioners.
+
+
+
+**How Do I Get Involved?**
+
+
+
+If you’re interested in getting involved with the design and development of Kubernetes Storage, join the [Kubernetes Storage Special-Interest-Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG). We’re rapidly growing and always welcome new contributors.
+
+
+
+_-- Saad Ali, Software Engineer, Google_
+
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-10-00-Globally-Distributed-Services-Kubernetes-Cluster-Federation.md b/blog/_posts/2016-10-00-Globally-Distributed-Services-Kubernetes-Cluster-Federation.md
new file mode 100644
index 00000000000..31dd69b5805
--- /dev/null
+++ b/blog/_posts/2016-10-00-Globally-Distributed-Services-Kubernetes-Cluster-Federation.md
@@ -0,0 +1,435 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Building Globally Distributed Services using Kubernetes Cluster Federation "
+date: Saturday, October 14, 2016
+pagination:
+ enabled: true
+---
+_Editor's note: Today’s post is by Allan Naim, Product Manager, and Quinton Hoole, Staff Engineer at Google, showing how to deploy a multi-homed service behind a global load balancer and have requests sent to the closest cluster._
+
+In Kubernetes 1.3, we announced Kubernetes Cluster Federation and introduced the concept of Cross Cluster Service Discovery, enabling developers to deploy a service that was sharded across a federation of clusters spanning different zones, regions or cloud providers. This enables developers to achieve higher availability for their applications, without sacrificing quality of service, as detailed in our [previous](http://blog.kubernetes.io/2016/07/cross-cluster-services.html) blog post.
+
+In the latest release, [Kubernetes 1.4](http://blog.kubernetes.io/2016/09/kubernetes-1.4-making-it-easy-to-run-on-kuberentes-anywhere.html), we've extended Cluster Federation to support Replica Sets, Secrets, Namespaces and Ingress objects. This means that you no longer need to deploy and manage these objects individually in each of your federated clusters. Just create them once in the federation, and its built-in controllers will automatically handle that for you.
+
+[**Federated Replica Sets**](http://kubernetes.io/docs/user-guide/federation/replicasets/) leverage the same configuration as non-federated Kubernetes Replica Sets and automatically distribute Pods across one or more federated clusters. By default, replicas are evenly distributed across all clusters, but for cases where that is not the desired behavior, we've introduced Replica Set preferences, which allow replicas to be distributed across only some clusters, or in non-equal proportions ([define annotations](https://github.com/kubernetes/kubernetes/blob/master/federation/apis/federation/types.go#L114)).
+
+Starting with Google Cloud Platform (GCP), we’ve introduced [**Federated Ingress**](http://kubernetes.io/docs/user-guide/federation/federated-ingress/) as a Kubernetes 1.4 alpha feature which enables external clients to point to a single IP address and have requests sent to the closest cluster with usable capacity in any region or zone of the Federation.
+
+[**Federated Secrets**](http://kubernetes.io/docs/user-guide/federation/secrets/) automatically create and manage secrets across all clusters in a Federation, automatically ensuring that these are kept globally consistent and up-to-date, even if some clusters are offline when the original updates are applied.
+
+[**Federated Namespaces**](http://kubernetes.io/docs/user-guide/federation/namespaces/) are similar to the traditional [Kubernetes Namespaces](http://kubernetes.io/docs/user-guide/namespaces/) providing the same functionality. Creating them in the Federation control plane ensures that they are synchronized across all the clusters in Federation.
+
+[**Federated Events**](http://kubernetes.io/docs/user-guide/federation/events/) are similar to the traditional Kubernetes Events, providing the same functionality. Federated Events are stored only in the Federation control plane and are not passed on to the underlying Kubernetes clusters.
+
+Let’s walk through how all this stuff works. We’re going to provision 3 clusters per region, spanning 3 continents (Europe, North America and Asia).
+
+
+
+![](https://2.bp.blogspot.com/-Gj83DdcKqTI/WAE8pwAEZYI/AAAAAAAAAwI/9dbyBFipvDIGkPQWRB1dRxNwkrvzlcYMwCLcB/s1600/k8s%2Bfed%2Bmap.png)
+
+
+
+The next step is to federate these clusters. Kelsey Hightower developed a [tutorial](https://github.com/kelseyhightower/kubernetes-cluster-federation) for setting up a Kubernetes Cluster Federation. Follow the tutorial to configure a Cluster Federation with clusters in 3 zones in each of the 3 GCP regions, us-central1, europe-west1 and asia-east1. For the purpose of this blog post, we’ll provision the Federation Control Plane in the us-central1-b zone. Note that more highly available, multi-cluster deployments are also available, but not used here in the interests of simplicity.
+
+
+
+The rest of the blog post assumes that you have a running Kubernetes Cluster Federation provisioned.
+
+
+
+Let’s verify that we have 9 clusters in 3 regions running.
+
+
+
+ ```
+$ kubectl --context=federation-cluster get clusters
+
+
+NAME STATUS AGE
+gce-asia-east1-a Ready 17m
+gce-asia-east1-b Ready 15m
+gce-asia-east1-c Ready 10m
+gce-europe-west1-b Ready 7m
+gce-europe-west1-c Ready 7m
+gce-europe-west1-d Ready 4m
+gce-us-central1-a Ready 1m
+gce-us-central1-b Ready 53s
+gce-us-central1-c Ready 39s
+ ```
+
+
+
+You can download the source used in this blog post [here](https://github.com/allannaim/federated-ingress-sample). The source consists of the following files:
+
+|File |Description |
+|--|--|
+|configmaps/zonefetch.yaml |retrieves the zone from the instance metadata server and concatenates it into the volume mount path|
+|replicasets/nginx-rs.yaml |deploys a Pod consisting of an nginx and busybox container|
+|ingress/ingress.yaml |creates a load balancer with a global VIP that distributes requests to the closest nginx backend|
+|services/nginx.yaml |exposes the nginx backend as an external service|
+{: .post-table}
+
+
+
+In our example, we’ll be deploying the service and ingress object using the federated control plane. The [ConfigMap](http://kubernetes.io/docs/user-guide/configmap/) object isn’t currently supported by Federation, so we’ll be deploying it manually in each of the underlying Federation clusters. Our cluster deployment will look as follows:
+
+
+
+We’re going to deploy a Service that is sharded across our 9 clusters. The backend deployment will consist of a Pod with 2 containers:
+
+- busybox container that fetches the zone and outputs an HTML with the zone embedded in it into a Pod volume mount path
+- nginx container that reads from that Pod volume mount path and serves an HTML containing the zone it’s running in
+
+
+Let’s start by creating a federated service object in the federation-cluster context.
+
+
+
+```
+$ kubectl --context=federation-cluster create -f services/nginx.yaml
+ ```
+
+
+
+It will take a few minutes for the service to propagate across the 9 clusters.
+
+
+
+
+```
+$ kubectl --context=federation-cluster describe services nginx
+
+
+Name: nginx
+Namespace: default
+Labels: app=nginx
+Selector: app=nginx
+Type: LoadBalancer
+IP:
+LoadBalancer Ingress: 108.59.xx.xxx, 104.199.xxx.xxx, ...
+Port: http 80/TCP
+
+NodePort: http 30061/TCP
+Endpoints:
+Session Affinity: None
+ ```
+
+
+
+Let’s now create a Federated Ingress. Federated Ingresses are created in much the same way as traditional [Kubernetes Ingresses](http://kubernetes.io/docs/user-guide/ingress/): by making an API call which specifies the desired properties of your logical ingress point. In the case of Federated Ingress, this API call is directed to the Federation API endpoint, rather than a Kubernetes cluster API endpoint. The API for Federated Ingress is 100% compatible with the API for traditional Kubernetes Ingresses.
+
+
+
+
+```
+$ cat ingress/ingress.yaml
+
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: nginx
+spec:
+ backend:
+ serviceName: nginx
+ servicePort: 80
+ ```
+
+
+
+
+```
+$ kubectl --context=federation-cluster create -f ingress/ingress.yaml
+ingress "nginx" created
+ ```
+
+
+
+Once created, the Federated Ingress controller automatically:
+
+1. Creates matching Kubernetes Ingress objects in every cluster underlying your Cluster Federation
+2. Ensures that all of these in-cluster ingress objects share the same logical global L7 (i.e. HTTP(S)) load balancer and IP address
+3. Monitors the health and capacity of the service “shards” (i.e. your Pods) behind this ingress in each cluster
+4. Ensures that all client connections are routed to an appropriate healthy backend service endpoint at all times, even in the event of Pod, cluster, availability zone or regional outages
+
+We can verify that the ingress objects match in the underlying clusters, and notice that the ingress IP address is the same for all 9 clusters.
+
+
+
+
+```
+$ for c in $(kubectl config view -o jsonpath='{.contexts[*].name}'); do kubectl --context=$c get ingress; done
+
+NAME      HOSTS     ADDRESS          PORTS     AGE
+nginx     *                          80        1h
+NAME      HOSTS     ADDRESS          PORTS     AGE
+nginx     *         130.211.40.xxx   80        40m
+NAME      HOSTS     ADDRESS          PORTS     AGE
+nginx     *         130.211.40.xxx   80        1h
+NAME      HOSTS     ADDRESS          PORTS     AGE
+nginx     *         130.211.40.xxx   80        26m
+NAME      HOSTS     ADDRESS          PORTS     AGE
+nginx     *         130.211.40.xxx   80        1h
+NAME      HOSTS     ADDRESS          PORTS     AGE
+nginx     *         130.211.40.xxx   80        25m
+NAME      HOSTS     ADDRESS          PORTS     AGE
+nginx     *         130.211.40.xxx   80        38m
+NAME      HOSTS     ADDRESS          PORTS     AGE
+nginx     *         130.211.40.xxx   80        3m
+NAME      HOSTS     ADDRESS          PORTS     AGE
+nginx     *         130.211.40.xxx   80        57m
+NAME      HOSTS     ADDRESS          PORTS     AGE
+nginx     *         130.211.40.xxx   80        56m
+ ```
+
+
+
+Note that in the case of Google Cloud Platform, the logical L7 load balancer is not a single physical device (which would present both a single point of failure, and a single global network routing choke point), but rather a [truly global, highly available load balancing managed service](https://cloud.google.com/load-balancing/), globally reachable via a single, static IP address.
+
+
+
+Clients inside your federated Kubernetes clusters (i.e. Pods) will be automatically routed to the cluster-local shard of the Federated Service backing the Ingress in their cluster if it exists and is healthy, or the closest healthy shard in a different cluster if it does not. Note that this involves a network trip to the HTTP(S) load balancer, which resides outside your local Kubernetes cluster but inside the same GCP region.
+
+
+
+The next step is to schedule the service backends. Let’s first create the ConfigMap in each cluster in the Federation.
+
+
+
+We do this by submitting the ConfigMap to each cluster in the Federation.
+
+
+
+
+```
+$ for c in $(kubectl config view -o jsonpath='{.contexts[*].name}'); do kubectl --context=$c create -f configmaps/zonefetch.yaml; done
+ ```
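+
+The ConfigMap itself wraps a small shell script (the real file is in the sample repo linked above; the contents below are an illustrative sketch):
+
+```
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: zone-fetch
+data:
+  zonefetch.sh: |
+    #!/bin/sh
+    # Ask the GCE metadata server for this instance's zone and write it
+    # into the HTML document served by the nginx container.
+    zone=$(wget -qO- --header "Metadata-Flavor: Google" \
+      http://metadata.google.internal/computeMetadata/v1/instance/zone | cut -d/ -f4)
+    echo "Welcome to the global site! You are being served from $zone" \
+      > /usr/share/nginx/html/index.html
+    while true; do sleep 3600; done
+```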
+
+
+
+Let’s have a quick peek at our Replica Set:
+
+
+
+
+```
+$ cat replicasets/nginx-rs.yaml
+
+
+apiVersion: extensions/v1beta1
+kind: ReplicaSet
+metadata:
+ name: nginx
+ labels:
+ app: nginx
+ type: demo
+spec:
+ replicas: 9
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - image: nginx
+ name: frontend
+ ports:
+ - containerPort: 80
+ volumeMounts:
+ - name: html-dir
+ mountPath: /usr/share/nginx/html
+ - image: busybox
+ name: zone-fetcher
+ command:
+ - "/bin/sh"
+ - "-c"
+ - "/zonefetch/zonefetch.sh"
+ volumeMounts:
+ - name: zone-fetch
+ mountPath: /zonefetch
+ - name: html-dir
+ mountPath: /usr/share/nginx/html
+ volumes:
+ - name: zone-fetch
+ configMap:
+ defaultMode: 0777
+ name: zone-fetch
+ - name: html-dir
+ emptyDir:
+ medium: ""
+ ```
+
+
+
+The Replica Set consists of 9 replicas, spread evenly across 9 clusters within the Cluster Federation. Annotations can also be used to control which clusters Pods are scheduled to. This is accomplished by adding annotations to the Replica Set spec, as follows:
+
+
+
+
+```
+apiVersion: extensions/v1beta1
+kind: ReplicaSet
+metadata:
+ name: nginx-us
+ annotations:
+    federation.kubernetes.io/replica-set-preferences: |
+ {
+ "rebalance": true,
+ "clusters": {
+ "gce-us-central1-a": {
+ "minReplicas": 2,
+ "maxReplicas": 4,
+ "weight": 1
+ },
+          "gce-us-central1-b": {
+ "minReplicas": 2,
+ "maxReplicas": 4,
+ "weight": 1
+ }
+ }
+ }
+ ```
+
+
+
+For the purpose of our demo, we’ll keep things simple and spread our Pods evenly across the Cluster Federation.
+
+
+
+Let’s create the federated Replica Set:
+
+
+
+
+```
+$ kubectl --context=federation-cluster create -f replicasets/nginx-rs.yaml
+ ```
+
+
+
+Verify the Replica Sets and Pods were created in each cluster:
+
+
+
+
+```
+$ for c in $(kubectl config view -o jsonpath='{.contexts[*].name}'); do kubectl --context=$c get rs; done
+
+NAME DESIRED CURRENT READY AGE
+nginx 1 1 1 42s
+NAME DESIRED CURRENT READY AGE
+nginx 1 1 1 14m
+NAME DESIRED CURRENT READY AGE
+nginx 1 1 1 45s
+NAME DESIRED CURRENT READY AGE
+nginx 1 1 1 46s
+NAME DESIRED CURRENT READY AGE
+nginx 1 1 1 47s
+NAME DESIRED CURRENT READY AGE
+nginx 1 1 1 48s
+NAME DESIRED CURRENT READY AGE
+nginx 1 1 1 49s
+NAME DESIRED CURRENT READY AGE
+nginx 1 1 1 49s
+NAME DESIRED CURRENT READY AGE
+nginx 1 1 1 49s
+
+
+$ for c in $(kubectl config view -o jsonpath='{.contexts[*].name}'); do kubectl --context=$c get po; done
+
+NAME READY STATUS RESTARTS AGE
+nginx-ph8zx 2/2 Running 0 25s
+NAME READY STATUS RESTARTS AGE
+nginx-sbi5b 2/2 Running 0 27s
+NAME READY STATUS RESTARTS AGE
+nginx-pf2dr 2/2 Running 0 28s
+NAME READY STATUS RESTARTS AGE
+nginx-imymt 2/2 Running 0 30s
+NAME READY STATUS RESTARTS AGE
+nginx-9cd5m 2/2 Running 0 31s
+NAME READY STATUS RESTARTS AGE
+nginx-vxlx4 2/2 Running 0 33s
+NAME READY STATUS RESTARTS AGE
+nginx-itagl 2/2 Running 0 33s
+NAME READY STATUS RESTARTS AGE
+nginx-u7uyn 2/2 Running 0 33s
+NAME READY STATUS RESTARTS AGE
+nginx-i0jh6 2/2 Running 0 34s
+ ```
+
+
+
+Below is an illustration of how the nginx service and associated ingress are deployed. To summarize, we have a global VIP (130.211.23.176) exposed using a Global L7 load balancer that forwards requests to the closest cluster with available capacity.
+
+![](https://1.bp.blogspot.com/-vDz5dEG_-yI/WAE81YPVlYI/AAAAAAAAAwM/jvt46qwIViQbsbftCqFenUocGfssuLbjwCLcB/s1600/Copy%2Bof%2BFederation%2BBlog%2BDrawing%2B%25281%2529.png)
+
+
+To test this out, we’re going to spin up 2 Google Compute Engine (GCE) instances, one in us-west1-b and the other in asia-east1-a. All client requests are automatically routed, via the shortest network path, to a healthy Pod in the closest cluster to the origin of the request. So for example, HTTP(S) requests from Asia will be routed directly to the closest cluster in Asia that has available capacity. If there are no such clusters in Asia, the request will be routed to the next closest cluster (in this case the U.S.). This works irrespective of whether the requests originate from a GCE instance or anywhere else on the internet. We only use a GCE instance for simplicity in the demo.
+
+
+
+
+
+
+
+
+
+We can SSH directly into the VMs using the Cloud Console or by issuing a gcloud SSH command.
+
+
+
+
+```
+$ gcloud compute ssh test-instance-asia --zone asia-east1-a
+
+-----
+
+user@test-instance-asia:~$ curl 130.211.40.186
+
+
+
+Welcome to the global site!
+
+
+Welcome to the global site! You are being served from asia-east1-b
+Congratulations!
+
+
+user@test-instance-asia:~$ exit
+
+----
+
+
+$ gcloud compute ssh test-instance-us --zone us-west1-b
+
+----
+
+user@test-instance-us:~$ curl 130.211.40.186
+
+
+
+Welcome to the global site!
+
+
+Welcome to the global site! You are being served from us-central1-b
+Congratulations!
+
+
+----
+ ```
+
+
+
+Federations of Kubernetes Clusters can include clusters running in different cloud providers (e.g. GCP, AWS), and on-premises (e.g. on OpenStack). However, in Kubernetes 1.4, Federated Ingress is only supported across Google Cloud Platform clusters. In future versions we intend to support hybrid cloud Ingress-based deployments.
+
+
+
+To summarize, we walked through leveraging the Kubernetes 1.4 Federated Ingress alpha feature to deploy a multi-homed service behind a global load balancer. External clients point to a single IP address and are sent to the closest cluster with usable capacity in any region or zone of the Federation, providing higher levels of availability without sacrificing latency or ease of operation.
+
+
+
+We'd love to hear feedback on Kubernetes Cross Cluster Services. To join the community:
+
+- Post issues or feature requests on [GitHub](https://github.com/kubernetes/kubernetes/tree/master/federation)
+- Join us in the #federation channel on [Slack](https://kubernetes.slack.com/messages/sig-federation)
+- Participate in the [Cluster Federation SIG](https://groups.google.com/forum/#!forum/kubernetes-sig-federation)
+- [Download](http://get.k8s.io/) Kubernetes
+- Follow Kubernetes on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-10-00-Helm-Charts-Making-It-Simple-To-Package-And-Deploy-Apps-On-Kubernetes.md b/blog/_posts/2016-10-00-Helm-Charts-Making-It-Simple-To-Package-And-Deploy-Apps-On-Kubernetes.md
new file mode 100644
index 00000000000..f8e471003a5
--- /dev/null
+++ b/blog/_posts/2016-10-00-Helm-Charts-Making-It-Simple-To-Package-And-Deploy-Apps-On-Kubernetes.md
@@ -0,0 +1,126 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Helm Charts: making it simple to package and deploy common applications on Kubernetes "
+date: Tuesday, October 10, 2016
+pagination:
+ enabled: true
+---
+There are thousands of people and companies packaging their applications for deployment on Kubernetes. This usually involves crafting a few different Kubernetes resource definitions that configure the application runtime, as well as defining the mechanism that users and other apps leverage to communicate with the application. There are some very common applications that users regularly look for guidance on deploying, such as databases, CI tools, and content management systems. These types of applications are usually not ones that are developed and iterated on by end users, but rather their configuration is customized to fit a specific use case. Once that application is deployed users can link it to their existing systems or leverage their functionality to solve their pain points.
+
+For best practices on how these applications should be configured, users could look at the many resources available such as: the [examples folder](https://github.com/kubernetes/kubernetes/tree/master/examples) in the Kubernetes repository, the Kubernetes [contrib repository](https://github.com/kubernetes/contrib), the [Helm Charts repository](https://github.com/helm/charts), and the [Bitnami Charts repository](https://github.com/bitnami/charts). While these different locations provided guidance, it was not always formalized or consistent enough for users to leverage similar installation procedures across different applications.
+
+So what do you do when there are too many places for things to be found?
+
+
+
+![xkcd Standards](https://lh5.googleusercontent.com/l6CowJsfGRoH2wgWHlxtId4Foil2Fcs7AZ0NbOT7jGrXliESRSc6jNH8bdMmfpU-_gDRqy9UDSYCj7WaSKF1ZLK1a7t2qNo5JaIOglozee2SDIPteuOZ6aHzNMyBBJXukBv0zF9x)
+
+[xkcd Standards](https://xkcd.com/927/)
+
+
+
+In this case, we’re not creating Yet Another Place for Applications, but rather promoting an existing one as the canonical location. As part of the Special Interest Group Apps ([SIG Apps](https://github.com/kubernetes/community/tree/master/sig-apps)) work for the [Kubernetes 1.4 release](http://blog.kubernetes.io/2016/09/kubernetes-1.4-making-it-easy-to-run-on-kuberentes-anywhere.html), we began to provide a home for these Kubernetes-deployable applications that provides continuous releases of well documented and user friendly packages. These packages are being created as Helm [**Charts**](https://github.com/kubernetes/helm/blob/master/docs/charts.md) and can be installed using the Helm tool. **[Helm](https://github.com/kubernetes/helm)** allows users to easily templatize their Kubernetes manifests and provide a set of configuration parameters that allows users to customize their deployment.
+
+**Helm is the package manager** (analogous to yum and apt) and **Charts are packages** (analogous to debs and rpms). The home for these Charts is the [Kubernetes Charts repository](https://github.com/kubernetes/charts) which provides continuous integration for pull requests, as well as automated releases of Charts in the master branch.
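+
+Carrying the analogy through, once Helm is set up, installing one of these packaged applications is a single command, for example (chart name taken from the table below):
+
+```
+$ helm install stable/mariadb
+```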
+
+There are two main folders where charts reside. The [stable folder](https://github.com/kubernetes/charts/tree/master/stable) hosts those applications which meet minimum requirements such as proper documentation and inclusion of only Beta or higher Kubernetes resources. The [incubator folder](https://github.com/kubernetes/charts/tree/master/incubator) provides a place for charts to be submitted and iterated on until they’re ready for promotion to stable at which time they will automatically be pushed out to the default repository. For more information on the repository structure and requirements for being in stable, have a look at [this section in the README](https://github.com/kubernetes/charts#repository-structure).
+
+The following applications are now available:
+
+
+
+
+|Stable repository | Incubating repository |
+|--|--|
+|[Drupal](https://github.com/kubernetes/charts/tree/master/stable/drupal) | [Consul](https://github.com/kubernetes/charts/tree/master/incubator/consul) |
+|[Jenkins](https://github.com/kubernetes/charts/tree/master/stable/jenkins)|[Elasticsearch](https://github.com/kubernetes/charts/tree/master/incubator/elasticsearch) |
+| [MariaDB](https://github.com/kubernetes/charts/tree/master/stable/mariadb) | [etcd](https://github.com/kubernetes/charts/tree/master/incubator/etcd) |
+| [MySQL](https://github.com/kubernetes/charts/tree/master/stable/mysql) | [Grafana](https://github.com/kubernetes/charts/tree/master/incubator/grafana) |
+| [Redmine](https://github.com/kubernetes/charts/tree/master/stable/redmine)|[MongoDB](https://github.com/kubernetes/charts/tree/master/incubator/mongodb)|
+| [Wordpress](https://github.com/kubernetes/charts/tree/master/stable/wordpress)|[Patroni](https://github.com/kubernetes/charts/tree/master/incubator/patroni) |
+||[Prometheus](https://github.com/kubernetes/charts/tree/master/incubator/prometheus)|
+| | [Spark](https://github.com/kubernetes/charts/tree/master/incubator/spark)|
+| | [ZooKeeper](https://github.com/kubernetes/charts/tree/master/incubator/zookeeper) |
+{: .post-table}
+
+**Example workflow for a Chart developer**
+
+
+1. [Create a chart](https://github.com/kubernetes/helm/blob/master/docs/charts.md)
+2. The developer provides parameters via the [values.yaml](https://github.com/kubernetes/helm/blob/master/docs/charts.md#values-files) file, allowing users to customize their deployment. This can be seen as the API between chart devs and chart users (a minimal sketch follows this list).
+3. A [README](https://github.com/kubernetes/charts/tree/master/stable/mariadb) is written to help describe the application and its parameterized values.
+4. Once the application installs properly and the values customize the deployment appropriately, the developer adds a [NOTES.txt](https://github.com/kubernetes/helm/blob/master/docs/charts.md#chart-license-readme-and-notes) file that is shown as soon as the user installs. This file generally points out the next steps for the user to connect to or use the application.
+5. If the application requires persistent storage, the developer adds a mechanism to store the data such that pod restarts do not lose data. Most charts requiring this today are using [dynamic volume provisioning](http://blog.kubernetes.io/2016/10/dynamic-provisioning-and-storage-in-kubernetes.html) to abstract away underlying storage details from the user which allows a single configuration to work against Kubernetes installations.
+6. Submit a [Pull Request to the Kubernetes Charts repo](https://github.com/kubernetes/charts/pulls). Once tested and reviewed, the PR will be merged.
+7. Once merged to the master branch, the chart will be packaged and released to Helm’s default repository and available for users to install.
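+
+To make steps 1-4 concrete, here’s a minimal sketch of a chart’s layout and of how values.yaml feeds the templates. The chart name, parameters and template excerpt are hypothetical rather than taken from a real chart in the repository:
+
+```
+mychart/
+  Chart.yaml        # chart name, version and description
+  values.yaml       # default parameters that users may override
+  templates/
+    deployment.yaml # manifests templatized with {{ .Values.* }} references
+    NOTES.txt       # post-install next steps shown to the user
+
+# values.yaml (hypothetical defaults)
+image: "myapp:1.0"
+replicas: 2
+persistence:
+  enabled: true
+
+# templates/deployment.yaml (excerpt)
+#   image: {{ .Values.image }}
+#   replicas: {{ .Values.replicas }}
+```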
+
+**Example workflow for a Chart user**
+
+
+1. [Install Helm](https://github.com/kubernetes/helm/blob/master/docs/quickstart.md#install-helm)
+2. [Initialize Helm](https://github.com/kubernetes/helm/blob/master/docs/quickstart.md#install-an-example-chart)
+3. [Search for a chart](https://github.com/kubernetes/helm/blob/master/docs/using_helm.md#helm-search-finding-charts)
+
+```
+$ helm search
+NAME              VERSION  DESCRIPTION
+stable/drupal     0.3.1    One of the most versatile open source content m...
+stable/jenkins    0.1.0    A Jenkins Helm chart for Kubernetes.
+stable/mariadb    0.4.0    Chart for MariaDB
+stable/mysql      0.1.0    Chart for MySQL
+stable/redmine    0.3.1    A flexible project management web application.
+stable/wordpress  0.3.0    Web publishing platform for building blogs and ...
+```
+
+4. [Install the chart](https://github.com/kubernetes/helm/blob/master/docs/using_helm.md#helm-install-installing-a-package)
+
+```
+$ helm install stable/jenkins
+ ```
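+
+The chart’s defaults can also be overridden at install time by passing your own values file. A small sketch, assuming a hypothetical config.yaml that overrides parameters documented in the chart’s README:
+
+```
+$ helm install -f config.yaml stable/jenkins
+```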
+
+5. After the install
+
+```
+Notes:
+
+1. Get your 'admin' user password by running:
+
+   printf $(printf '\%o' `kubectl get secret --namespace default brawny-frog-jenkins -o jsonpath="{.data.jenkins-admin-password[*]}"`);echo
+
+2. Get the Jenkins URL to visit by running these commands in the same shell:
+
+   **** NOTE: It may take a few minutes for the LoadBalancer IP to be available. ****
+   **** You can watch its status by running 'kubectl get svc -w brawny-frog-jenkins' ****
+
+   export SERVICE_IP=$(kubectl get svc --namespace default brawny-frog-jenkins -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+   echo http://$SERVICE_IP:8080/login
+```
+
+
+3. Log in with the password from step 1 and the username: admin
+
+
+
+For more information on running Jenkins on Kubernetes, visit [here](https://cloud.google.com/solutions/jenkins-on-container-engine).
+
+
+
+**Conclusion**
+
+Now that you’ve seen workflows for both developers and users, we hope that you’ll join us in consolidating the breadth of application deployment knowledge into a more centralized place. Together we can raise the quality bar for both developers and users of Kubernetes applications. We’re always looking for feedback on how we can better our process. Additionally, we’re looking for contributions of new charts or updates to existing ones. Join us in the following places to get engaged:
+
+
+- SIG Apps - [Slack Channel](https://kubernetes.slack.com/messages/sig-apps/)
+- SIG Apps - [Weekly Meeting](https://github.com/kubernetes/community/tree/master/sig-apps#meeting)
+- [Submit a Kubernetes Charts Issue](https://github.com/kubernetes/charts/issues)
+
+A big thank you to the folks at Bitnami, Deis, Google and the [other contributors](https://github.com/kubernetes/charts/graphs/contributors) who have helped get the Charts repository to where it is today. We still have a lot of work to do, but it's been wonderful working together as a community to move this effort forward.
+
+_--Vic Iglesias, Cloud Solutions Architect, Google_
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-10-00-Kubernetes-And-Openstack-At-Yahoo-Japan.md b/blog/_posts/2016-10-00-Kubernetes-And-Openstack-At-Yahoo-Japan.md
new file mode 100644
index 00000000000..38210aae85e
--- /dev/null
+++ b/blog/_posts/2016-10-00-Kubernetes-And-Openstack-At-Yahoo-Japan.md
@@ -0,0 +1,197 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " How We Architected and Run Kubernetes on OpenStack at Scale at Yahoo! JAPAN "
+date: Tuesday, October 24, 2016
+pagination:
+ enabled: true
+---
+
+_Editor’s note: today’s post is by the Infrastructure Engineering team at Yahoo! JAPAN, talking about how they run OpenStack on Kubernetes. This post has been translated and edited for context with permission -- originally published on the [Yahoo! JAPAN engineering blog](http://techblog.yahoo.co.jp/infrastructure/os_n_k8s/). _
+
+
+**Intro**
+This post outlines how Yahoo! JAPAN, with help from Google and Solinea, built an automation tool chain for “one-click” code deployment to Kubernetes running on OpenStack.
+
+We’ll also cover the basic security, networking, storage, and performance needs to ensure production readiness.
+
+Finally, we will discuss the ecosystem tools used to build the CI/CD pipeline, Kubernetes as a deployment platform on VMs/bare metal, and an overview of Kubernetes architecture to help you architect and deploy your own clusters.
+
+**Preface**
+Since our company started using OpenStack in 2012, our internal environment has changed quickly. Our initial goal of virtualizing hardware was achieved with OpenStack. However, due to the progress of cloud and container technology, we needed the capability to launch services on various platforms. This post will provide our example of taking applications running on OpenStack and porting them to Kubernetes.
+
+**Coding Lifecycle**
+The goal of this project is to create images for all required platforms from one application code, and deploy those images onto each platform. For example, when code is changed at the code registry, bare metal images, Docker containers and VM images are created by CI (continuous integration) tools, pushed into our image registry, then deployed to each infrastructure platform.
+
+
+
+ 
+
+
+
+We use the following products in our CI/CD pipeline:
+
+
+
+
+| Function | Product |
+|--|--|
+| Code registry | GitHub Enterprise |
+| CI tools | Jenkins |
+| Image registry | Artifactory |
+| Bug tracking system | JIRA |
+| Bare metal deployment platform | OpenStack Ironic |
+| VM deployment platform | OpenStack |
+| Container deployment platform | Kubernetes |
+{: .post-table}
+
+
+**Image Creation.** Each image creation workflow is shown in the diagrams below.
+
+
+
+**VM Image Creation:**
+
+
+
+[](https://4.bp.blogspot.com/-saBA4FKmJEM/WAppk0keRfI/AAAAAAAAAxM/7Y3uw-H3I0Ae_p6IqUu429pJqtwqTGxIgCLcB/s1600/Untitled%2Bdrawing.png)
+
+
+
+1. Push code to GitHub
+2. Hook to Jenkins master
+3. Launch job at Jenkins slave
+4. Check out Packer repository
+5. Run Service Job
+6. Execute Packer by build script
+7. Packer starts a VM for OpenStack Glance
+8. Configure VM and install required applications
+9. Create snapshot and register it to Glance
+10. Download the newly created image from Glance
+11. Upload the image to Artifactory
+
+**Bare Metal Image Creation:**
+
+
+
+[](https://1.bp.blogspot.com/-0aPKFfhF33k/WApqIabmf1I/AAAAAAAAAxQ/jR33xg1OoMolm9T2Jt3FFixZt6294zUsACLcB/s1600/Untitled%2Bdrawing%2B%25281%2529.png)
+
+1. Push code to GitHub
+2. Hook to Jenkins master
+3. Launch job at Jenkins slave
+4. Check out Packer repository
+5. Run Service Job
+6. Download base bare metal image by build script
+7. Build script executes diskimage-builder with Packer to create bare metal image
+8. Upload newly created image to Glance
+9. Upload the image to Artifactory
+
+**Container Image Creation:**
+
+
+
+[](https://2.bp.blogspot.com/-5su8_2KmuYw/WApqvvw0k8I/AAAAAAAAAxU/36NZG0lTQ1whl-JcCuKCb-kjuISR-PSGwCLcB/s1600/Untitled%2Bdrawing%2B%25282%2529.png)
+
+1. Push code to GitHub
+2. Hook to Jenkins master
+3. Launch job at Jenkins slave
+4. Check out Dockerfile repository
+5. Run Service Job
+6. Download base Docker image from Artifactory
+7. If no Docker image is found at Artifactory, download it from Docker Hub
+8. Execute docker build and create image
+9. Upload the image to Artifactory
+
+**Platform Architecture.**
+
+
+
+Let’s focus on the container workflow to walk through how we use Kubernetes as a deployment platform. This platform architecture is as below.
+
+
+[](https://2.bp.blogspot.com/-qiqHdUwASOU/WApsUZF7fRI/AAAAAAAAAxc/26b1XqOnybwWiqDoFUXW9QOxoG3ub7nDACLcB/s1600/Untitled%2Bdrawing%2B%25284%2529.png)
+
+
+
+
+
+
+
+
+
+
+
+|Function |Product |
+|--|--|
+|Infrastructure Services |OpenStack |
+|Container Host |CentOS |
+|Container Cluster Manager |Kubernetes |
+|Container Networking |Project Calico |
+|Container Engine |Docker |
+|Container Registry |Artifactory |
+|Service Registry |etcd|
+|Source Code Management |GitHub Enterprise |
+|CI tool |Jenkins |
+|Infrastructure Provisioning |Terraform |
+|Logging |Fluentd, Elasticsearch, Kibana |
+|Metrics |Heapster, Influxdb, Grafana |
+|Service Monitoring |Prometheus |
+{: .post-table}
+
+
+We use CentOS for the Container Host (OpenStack instances) and install Docker, Kubernetes, Calico, etcd and so on. Of course, it is possible to run various container applications on Kubernetes. In fact, we run OpenStack as one of those applications. That's right: OpenStack on Kubernetes on OpenStack. We currently have more than 30 OpenStack clusters, which quickly become hard to manage and operate. As such, we wanted to create a simple, base OpenStack cluster to provide the basic functionality needed for Kubernetes and make our OpenStack environment easier to manage.
+
+
+
+**Kubernetes Architecture**
+
+
+
+Let me explain Kubernetes architecture in some more detail. The architecture diagram is below.
+
+[](https://s.yimg.jp/images/tecblog/2016-1H/os_n_k8s/kubernetes.png)
+
+
+
+
+
+
+
+|Product |Description |
+|--|--|
+|OpenStack Keystone|Kubernetes Authentication and Authorization |
+|OpenStack Cinder |External volume used from Pod (grouping of multiple containers) |
+|kube-apiserver |Configure and validate objects like Pod or Services (definition of access to services in container) through REST API|
+|kube-scheduler |Allocate Pods to each node |
+|kube-controller-manager |Execute Status management, manage replication controller |
+|kubelet |Run on each node as agent and manage Pod |
+|calico |Enable inter-Pod connection using BGP |
+|kube-proxy |Configure iptable NAT tables to configure IP and load balance (ClusterIP) |
+|etcd |Distribute KVS to store Kubernetes and Calico information |
+|etcd-proxy |Run on each node and transfer client request to etcd clusters|
+{: .post-table}
+
+**Tenant Isolation**
+
+To enable multi-tenant usage like OpenStack, we utilize OpenStack Keystone for authentication and authorization.
+
+**Authentication**
+
+With a Kubernetes plugin, OpenStack Keystone can be used for authentication. By adding the authURL of Keystone when starting the Kubernetes API server, we can use the OpenStack OS\_USERNAME and OS\_PASSWORD for authentication.
+
+**Authorization**
+
+We currently use the ABAC (Attribute-Based Access Control) mode of Kubernetes authorization. We worked with a consulting company, Solinea, who helped create a utility that converts OpenStack Keystone user and tenant information into a Kubernetes JSON policy file, mapping Kubernetes ABAC users and namespaces to OpenStack tenants. We then specify that policy file when launching the Kubernetes API server. The utility also creates namespaces from tenant information. These configurations enable Kubernetes to authenticate with OpenStack Keystone and operate in authorized namespaces.
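+
+For illustration, a Kubernetes ABAC policy file contains one JSON policy object per line. A generated entry mapping a Keystone user to the namespace derived from their tenant might look like the following sketch (the user and namespace names are made up):
+
+```
+{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "tenant-a", "resource": "*", "apiGroup": "*"}}
+```
+
+The API server is then launched with --authorization-mode=ABAC and --authorization-policy-file pointing at this file, confining each user to the namespaces derived from their tenants.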
+
+**Volumes and Data Persistence**
+
+Kubernetes provides the “Persistent Volumes” subsystem, which works as persistent storage for Pods. Since Persistent Volumes can be backed by cloud-provider storage, it is possible to utilize OpenStack cinder-volume by using OpenStack as the cloud provider.
+
+**Networking**
+
+Flannel and various other options exist as networking models for Kubernetes; we chose Project Calico for this project. Yahoo! JAPAN prefers to build data centers with pure L3 networking (for example, redistributing ARP validation or IP CLOS networking), and Project Calico matches this direction. With an overlay model like Flannel, a Pod IP cannot be accessed from outside the Kubernetes cluster, but Project Calico makes this possible. We also use Project Calico for the load balancing we describe later.
+
+[](https://s.yimg.jp/images/tecblog/2016-1H/os_n_k8s/network.png)
+
+In Project Calico, production IPs are broadcast over BGP by BIRD containers (OSS routing software) launched on each Kubernetes node. By default, routes are broadcast only within the cluster; by also peering with routers outside the cluster, Pods become reachable from outside the cluster.
+
+**External Service Load Balancing**
+
+There are multiple choices of external service load balancing (access to services from outside the cluster) for Kubernetes, such as NodePort, LoadBalancer and Ingress. None of them exactly matched our requirements. However, we found a close fit: broadcasting the ClusterIP used for internal service load balancing (access to services from inside the cluster) with Project Calico BGP, which enables Layer 4 external load balancing from outside the cluster.
+
+
+
+ 
+
+**Service Discovery**
+
+Service Discovery is possible in Kubernetes using the SkyDNS addon. It is provided as a cluster-internal service and, like any ClusterIP, is accessible from within the cluster. By broadcasting the ClusterIP via BGP, name resolution also works from outside the cluster.
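+
+As a quick illustration of the naming scheme (assuming the default cluster.local domain, a hypothetical service my-service in namespace my-team, and a SkyDNS ClusterIP of 10.254.0.10):
+
+```
+# resolves from inside the cluster; works externally once the ClusterIP range is routed via BGP
+$ nslookup my-service.my-team.svc.cluster.local 10.254.0.10
+```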
+
+By combining the image creation workflow with Kubernetes, we built the following tool chain, which makes it easy to go from code push to deployment.
+
+[](https://s.yimg.jp/images/tecblog/2016-1H/os_n_k8s/workflow_k8s_all.png)
+
+**Summary**
+
+In summary, by combining image creation workflows and [Kubernetes](http://www.kubernetes.io/), Yahoo! JAPAN, with help from [Google](https://cloud.google.com/) and [Solinea](http://www.solinea.com/), successfully built an automated tool chain that makes it easy to go from code push to deployment, while taking into account multi-tenancy, authn/authz, storage, networking, service discovery and the other factors necessary for production deployment. We hope the discussion of the ecosystem tools used to build the CI/CD pipeline, Kubernetes as a deployment platform on VMs/bare-metal, and the overview of Kubernetes architecture helps you architect and deploy your own clusters. Thank you to all of the people who helped with this project.
+
+_--Norifumi Matsuya, Hirotaka Ichikawa, Masaharu Miyamoto and Yuta Kinoshita._
+
+_This post has been translated and edited for context with permission -- originally published on the [Yahoo! JAPAN engineer blog](http://techblog.yahoo.co.jp/infrastructure/os_n_k8s/), where this was one in a series of posts focused on Kubernetes._
diff --git a/blog/_posts/2016-10-00-Kubernetes-Service-Technology-Partners-Program.md b/blog/_posts/2016-10-00-Kubernetes-Service-Technology-Partners-Program.md
new file mode 100644
index 00000000000..a2f23d0fdfb
--- /dev/null
+++ b/blog/_posts/2016-10-00-Kubernetes-Service-Technology-Partners-Program.md
@@ -0,0 +1,26 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Introducing Kubernetes Service Partners program and a redesigned Partners page "
+date: Tuesday, October 31, 2016
+pagination:
+ enabled: true
+---
+Kubernetes has become a leading container orchestration system by offering a powerful and flexible way to run distributed systems at scale. Through our very active open source community, equating to hundreds of person-years of work, Kubernetes achieved four major releases in just one year to become a critical part of thousands of companies’ infrastructures. However, even with all that momentum, adopting cloud native computing is a significant transition for many organizations. It can be challenging to adopt a new methodology, and many teams are looking for advice and support through that journey.
+
+Today, we’re excited to launch the Kubernetes **Service Partners** program. A Service Partner is a company that provides support and consulting for customers building applications on Kubernetes. This program is an addition to our existing Kubernetes **Technology Partners** who provide software and offer support services for their software.
+
+The Service Partners provide hands-on best practice guidance for running your apps on Kubernetes, and are available to work with companies of all sizes to get started; the first batch of participants includes: Apprenda, Container Solutions, Deis, Livewyer, ReactiveOps and Samsung SDS. You’ll find their listings along with our existing Technology Partners on the newly redesigned [Partners Page](http://kubernetes.io/partners/), giving you a single view into the Kubernetes ecosystem.
+
+The list of partners will grow weekly, and we look forward to collaborating with the community to build a vibrant Kubernetes ecosystem.
+
+
+_--Allan Naim, Product Manager, Google, on behalf of the Kubernetes team._
+
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-10-00-Production-Kubernetes-Dashboard-UI-1.4-improvements_3.md b/blog/_posts/2016-10-00-Production-Kubernetes-Dashboard-UI-1.4-improvements_3.md
new file mode 100644
index 00000000000..3f6572cfe40
--- /dev/null
+++ b/blog/_posts/2016-10-00-Production-Kubernetes-Dashboard-UI-1.4-improvements_3.md
@@ -0,0 +1,82 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " How we improved Kubernetes Dashboard UI in 1.4 for your production needs "
+date: Tuesday, October 03, 2016
+pagination:
+ enabled: true
+---
+With the release of [Kubernetes 1.4](http://blog.kubernetes.io/2016/09/kubernetes-1.4-making-it-easy-to-run-on-kuberentes-anywhere.html) last week, Dashboard – the official web UI for Kubernetes – has a number of exciting updates and improvements of its own. The past three months have been busy ones for the Dashboard team, and we’re excited to share the resulting features of that effort here. If you’re not familiar with Dashboard, the [GitHub repo](https://github.com/kubernetes/dashboard#kubernetes-dashboard) is a great place to get started.
+
+A quick recap before unwrapping our shiny new features: Dashboard was initially released in March 2016. One of the focuses for Dashboard throughout its lifetime has been the onboarding experience; it’s a less intimidating way for Kubernetes newcomers to get started, and by showing multiple resources at once, it provides contextualization lacking in [kubectl](http://kubernetes.io/docs/user-guide/kubectl-overview/) (the CLI). After that initial release, though, the product team realized that fine-tuning for a beginner audience was getting ahead of ourselves: there were still fundamental product requirements that Dashboard needed to satisfy in order to have a productive UX to onboard new users to. That became our mission for this release: closing the gap between Dashboard and kubectl by showing more resources, leveraging a web UI’s strengths in monitoring and troubleshooting, and architecting this all in a user-friendly way.
+
+**Monitoring Graphs**
+Real-time visualization is a strength that UIs have over CLIs, and with 1.4 we’re happy to capitalize on that capability with the introduction of real-time CPU and memory usage graphs for all workloads running on your cluster. Even with the numerous third-party solutions for monitoring, Dashboard should include at least some basic out-of-the-box functionality in this area. Next up on the roadmap for graphs is extending the timespan the graph represents, adding drill-down capabilities to reveal more details, and improving the UX of correlating data between different graphs.
+
+
+[](https://lh5.googleusercontent.com/q2xNqiQkdcaAY9UdAlxXJkhofpb-AwMKoxE8Jdd3qRB0v8qffi4_s8GUaszmYGclNemAWCrEmbTqegKPfRoUgYHy9aRAYILXqRX1BCdLBQCUGHd-Euv0PuT5VI9viT3iSXBRHshv){:.big-img}
+
+
+
+**Logs**
+Based on user research with Kubernetes’ predecessor [Borg](http://research.google.com/pubs/pub43438.html) and continued community feedback, we know logs are tremendously important to users. For this reason we’re constantly looking for ways to improve these features in Dashboard. This release includes a fix for an issue wherein large numbers of logs would crash the system, as well as the introduction of the ability to view logs by date.
+
+**Showing More Resources**
+The previous release brought all workloads to Dashboard: Pods, Pet Sets, Daemon Sets, Replication Controllers, Replica Sets, Services, & Deployments. With 1.4, we expand upon that set of objects by including Services, Ingresses, Persistent Volume Claims, Secrets, & Config Maps. We’ve also introduced an “Admin” section with the Namespace-independent global objects of Namespaces, Nodes, and Persistent Volumes. With the addition of roles, these will be shown only to cluster operators, and developers’ side nav will begin with the Namespace dropdown.
+
+Like glue binding together a loose stack of papers into a book, we needed some way to impose order on these resources for their value to be realized, so one of the features we’re most excited to announce in 1.4 is navigation.
+
+**Navigation**
+In 1.1, all resources were simply stacked on top of each other in a single page. The introduction of a side nav provides quick access to any aspect of your cluster you’d like to check out. Arriving at this solution meant a lot of time put toward thinking about the hierarchy of Kubernetes objects – a difficult task since by design things fit together more like a living organism than a nested set of linear relationships. The solution we’ve arrived at balances the organizational need for grouping and desire to retain a bird’s-eye view of as much relevant information as possible. The design of the side nav is simple and flexible, in order to accommodate more resources in the future. Its top level objects (e.g. “Workloads”, “Services and Discovery”) roll up their child objects and will eventually include aggregated data for said objects.
+
+
+
+[{:.big-img}](https://lh4.googleusercontent.com/wam1i4Y3GGLwNFxynWYK17me9UDCaw3yo0dDqqTt7Y79bJ5YK7uHd3yreRnftPOtRkOvo-CjlWNPEx2raBdCN5JTxG2fU3fwqeIPsDaeuqhnWl0IrSYQ32uC7cVt2q51LQNhialX)
+
+
+
+**Closer Alignment with Material Design**
+Dashboard follows Google’s [Material design](https://material.google.com/) system, and the implementation of those principles is refined in the new UI: the global create options have been reduced from two choices to one initial “Create” button, the official Kubernetes logo is displayed as an SVG rather than simply as text, and cards were introduced to help better group different types of content (e.g. a table of Replication Controllers and a table of Pods on your “Workloads” page). Material’s guidelines around desktop-focused enterprise-level software are currently limited (and instead focus on a mobile-first context), so we’ve had to improvise with some aspects of the UI and have worked closely with the UX team at Google Cloud Platform to do this – drawing on their expertise in implementing Material in a more information-dense setting.
+
+**Sample Use Case**
+To showcase Dashboard 1.4’s new suite of features and how they’ll make users’ lives better in the real world, let’s imagine the following scenario:
+
+I am a cluster operator and a customer pings me warning that their app, Kubernetes Dashboard, is suffering performance issues. My first step in addressing the issue is to switch to the correct Namespace, kube-system, to examine what could be going on.
+
+
+
+ {:.big-img}
+
+Once in the relevant Namespace, I check out my Deployments to see if anything seems awry. Sure enough, I notice a spike in CPU usage.
+
+ {:.big-img}
+
+I realize we need to perform a rolling update to a newer version of that app that can handle the increased requests it’s evidently getting, so I update this Deployment’s image, which in turn creates a new [Replica Set](http://kubernetes.io/docs/user-guide/replicasets/).
+
+
+
+ {:.big-img}
+
+Now that that Replica Set’s been created, I can open the logs for one of its pods to confirm that it’s been successfully connected to the API server.
+
+
+
+ {:.big-img}
+
+Easy as that, we’ve debugged our issue. Dashboard provided us a centralized location to scan for the origin of the problem, and once we had that identified we were able to drill down and address the root of the problem.
+
+**Why the Skipped Versions?**
+If you’ve been following along with Dashboard since 1.0, you may have been confused by the jump in our versioning; we went 1.0, 1.1...1.4. We did this to synchronize with the main Kubernetes distro, and hopefully going forward this will make that relationship easier to understand.
+
+**There’s a Lot More Where That Came From**
+Dashboard is gaining momentum, and these early stages are a very exciting and rewarding time to be involved. If you’d like to learn more about contributing, check out [SIG UI](https://github.com/kubernetes/community/blob/master/sig-ui/README.md). Chat with us on the Kubernetes Slack [#sig-ui channel](https://kubernetes.slack.com/messages/sig-ui/).
+
+_--Dan Romlein, UX designer, Apprenda_
+
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-10-00-Tail-Kubernetes-With-Stern.md b/blog/_posts/2016-10-00-Tail-Kubernetes-With-Stern.md
new file mode 100644
index 00000000000..32a126ad155
--- /dev/null
+++ b/blog/_posts/2016-10-00-Tail-Kubernetes-With-Stern.md
@@ -0,0 +1,165 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Tail Kubernetes with Stern "
+date: Tuesday, October 31, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: today’s post is by Antti Kupila, Software Engineer, at Wercker, about building a tool to tail multiple pods and containers on Kubernetes._
+
+We love Kubernetes here at [Wercker](http://wercker.com/) and build all our infrastructure on top of it. When deploying anything, you need good visibility into what's going on, and logs are a first view into the inner workings of your application. Good old tail -f has been around for a long time, and Kubernetes has this too, built right into [kubectl](http://kubernetes.io/docs/user-guide/kubectl-overview/).
+
+I should say that tail is by no means the tool to use for debugging issues but instead you should feed the logs into a more persistent place, such as [Elasticsearch](https://www.elastic.co/products/elasticsearch). However, there's still a place for tail where you need to quickly debug something or perhaps you don't have persistent logging set up yet (such as when developing an app in [Minikube](https://github.com/kubernetes/minikube)).
+
+**Multiple Pods**
+
+Kubernetes has the concept of [Replication Controllers](http://kubernetes.io/docs/user-guide/replication-controller/) which ensure that n pods are running at the same time. This allows rolling updates and redundancy. Considering they're quite easy to set up, there's really no reason not to do so.
+
+However, now there are multiple pods running, and they all have a unique id. One issue here is that you'll need to know the exact pod id (kubectl get pods), but that changes every time a pod is created, so you'll need to look it up each time. Another consideration is the fact that Kubernetes load balances the traffic, so you won't know which pod the request ends up at. If you're tailing pod A but the traffic ends up at pod B, you'll miss what happened.
+
+Let's say we have a pod called service with 3 replicas. Here's what that would look like:
+
+
+```
+$ kubectl get pods # get pods to find pod ids
+
+$ kubectl log -f service-1786497219-2rbt1 # pod 1
+
+$ kubectl log -f service-1786497219-8kfbp # pod 2
+
+$ kubectl log -f service-1786497219-lttxd # pod 3
+ ```
+
+
+**Multiple containers**
+
+
+
+We're heavy users of [gRPC](http://www.grpc.io/) for internal services and expose the gRPC endpoints over REST using [gRPC Gateway](https://github.com/grpc-ecosystem/grpc-gateway). Typically we have the server and the gateway living as two containers in the same pod (the same binary, with the mode set by a CLI flag). The gateway talks to the server in the same pod, and both ports are exposed to Kubernetes. For internal services we can talk directly to the gRPC endpoint, while our website communicates using standard REST to the gateway.
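+
+A rough sketch of that pod shape (the image name, ports and mode flag are illustrative, not Wercker's actual manifests):
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: service
+spec:
+  containers:
+  - name: server              # gRPC endpoint used by internal services
+    image: example/service    # hypothetical image; same binary as the gateway
+    args: ["--mode=server"]   # hypothetical CLI flag selecting the mode
+    ports:
+    - containerPort: 50051
+  - name: gateway             # REST facade, proxies to the server over localhost
+    image: example/service
+    args: ["--mode=gateway"]
+    ports:
+    - containerPort: 80
+```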
+
+
+
+This poses a problem though; not only do we now have multiple pods, but we also have multiple containers within each pod. When this is the case, the built-in logging of kubectl requires you to specify which container you want logs from.
+
+
+
+If we have 3 replicas of a pod and 2 containers in the pod, you'll need 6 kubectl log -f \<pod id\> \<container id\> commands running. We work with big monitors but this quickly gets out of hand…
+
+If our service pod has a server and gateway container we'd be looking at something like this:
+
+
+
+```
+$ kubectl get pods # get pods to find pod ids
+
+$ kubectl describe pod service-1786497219-2rbt1 # get containers in pod
+
+$ kubectl log -f service-1786497219-2rbt1 server # pod 1
+
+$ kubectl log -f service-1786497219-2rbt1 gateway # pod 1
+
+$ kubectl log -f service-1786497219-8kfbp server # pod 2
+
+$ kubectl log -f service-1786497219-8kfbp gateway # pod 2
+
+$ kubectl log -f service-1786497219-lttxd server # pod 3
+
+$ kubectl log -f service-1786497219-lttxd gateway # pod 3
+ ```
+
+
+
+**Stern**
+
+
+
+To get around this we built [Stern](https://github.com/wercker/stern). It's a super simple utility that allows you to specify both the pod id and the container id as regular expressions. Any match will be followed and the output is multiplexed together, prefixed with the pod and container id, and color-coded for human consumption (colors are stripped if piping to a file).
+
+
+
+Here's how the service example would look:
+
+
+
+```
+$ stern service
+```
+This will match any pod containing the word service and listen to all containers within it. If you only want to see traffic to the server container you could do stern --container server service and it'll stream the logs of all the server containers from the 3 pods.
+
+The output would look something like this:
+```
+$ stern service
+
++ service-1786497219-2rbt1 › server
+
++ service-1786497219-2rbt1 › gateway
+
++ service-1786497219-8kfbp › server
+
++ service-1786497219-8kfbp › gateway
+
++ service-1786497219-lttxd › server
+
++ service-1786497219-lttxd › gateway
+
++ service-1786497219-8kfbp server Log message from server
+
++ service-1786497219-2rbt1 gateway Log message from gateway
+
++ service-1786497219-8kfbp gateway Log message from gateway
+
++ service-1786497219-lttxd gateway Log message from gateway
+
++ service-1786497219-lttxd server Log message from server
+
++ service-1786497219-2rbt1 server Log message from server
+ ```
+
+
+
+In addition, if a pod is killed and recreated during a deployment, Stern will stop listening to the old pod and automatically hook into the new one. There's no more need to figure out the id of that newly created pod.
+
+
+
+**Configuration options**
+
+
+
+Stern was deliberately designed to be minimal, so there's not much to it. However, there are still a couple of configuration options we can highlight here. They're very similar to the ones built into kubectl, so if you're familiar with that you should feel right at home.
+
+- timestamps adds the timestamp to each line
+- since shows log entries since a certain time (for instance --since 15min)
+- kube-config allows you to specify another Kubernetes config; defaults to ~/.kube/config
+- namespace allows you to limit the search to a certain namespace
+
+Run stern --help for all options.
+
+**Examples**
+
+
+
+Tail the gateway container running inside of the envvars pod on staging
+
+ + stern --context staging --container gateway envvars
+
+Show auth activity from 15min ago with timestamps
+
+ + stern -t --since 15m auth
+
+Follow the development of some-new-feature in minikube
+
+ + stern --context minikube some-new-feature
+
+View pods from another namespace
+
+ + stern --namespace kube-system kubernetes-dashboard
+
+
+
+**Get Stern**
+
+
+
+Stern is open source and [available on GitHub](https://github.com/wercker/stern); we'd love your contributions or ideas. If you don't want to build from source, you can also download a precompiled binary from [GitHub releases](https://github.com/wercker/stern/releases).
+
+
+[](https://4.bp.blogspot.com/-oNscZEvpzVw/WBeWc4cW4zI/AAAAAAAAAyw/71okg07IPHM6dtBOubO_0kxdYxzwoUGOACLcB/s1600/stern-long.gif)
diff --git a/blog/_posts/2016-11-00-Bringing-Kubernetes-Support-To-Azure.md b/blog/_posts/2016-11-00-Bringing-Kubernetes-Support-To-Azure.md
new file mode 100644
index 00000000000..6dd03b73b05
--- /dev/null
+++ b/blog/_posts/2016-11-00-Bringing-Kubernetes-Support-To-Azure.md
@@ -0,0 +1,36 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Bringing Kubernetes Support to Azure Container Service "
+date: Tuesday, November 07, 2016
+
+---
+_Editor's note: Today’s post is by Brendan Burns, Partner Architect, at Microsoft & Kubernetes co-founder talking about bringing Kubernetes to Azure Container Service._
+
+With more than a thousand people coming to [KubeCon](http://events.linuxfoundation.org/events/kubecon) in my hometown of Seattle, nearly three years after I helped start the Kubernetes project, it’s amazing and humbling to see what a small group of people and a radical idea have become after three years of hard work from a large and growing community. In July of 2014, scarcely a month after Kubernetes became publicly available, Microsoft announced its initial support for Azure. The release of [Kubernetes 1.4](http://blog.kubernetes.io/2016/09/kubernetes-1.4-making-it-easy-to-run-on-kuberentes-anywhere.html) brought support for native Microsoft networking, [load-balancer](https://github.com/kubernetes/kubernetes/pull/28821) and [disk integration](https://github.com/kubernetes/kubernetes/pull/29836).
+
+Today, Microsoft [announced](https://azure.microsoft.com/en-us/blog/azure-container-service-the-cloud-s-most-open-option-for-containers/) the next step in Kubernetes on Azure: the introduction of Kubernetes as a supported orchestrator in Azure Container Service (ACS). It’s been really exciting for me to join the ACS team and help build this new addition. The integration of Kubernetes into ACS means that with a few clicks in the Azure portal, or by running a single command in the new python-based Azure command line tool, you will be able to create a fully functional Kubernetes cluster that is integrated with the rest of your Azure resources.
+
+Kubernetes is available in public preview in Azure Container Service today. Community participation has always been an important part of the Kubernetes experience. Over the next few months, I hope you’ll join us and provide your feedback on the experience as we bring it to general availability.
+
+In the spirit of community, we are also excited to announce a new open source project: [ACS Engine](https://github.com/azure/acs-engine). The goal of ACS Engine is to provide an open, community driven location to develop and share best practices for orchestrating containers on Azure. All of our knowledge of running containers in Azure has been captured in that repository, and we look forward to improving and extending it as we move forward with the community. Going forward, the templates in ACS Engine will be the basis for clusters deployed via the ACS API, and thus community driven improvements, features and more will have a natural path into the Azure Container Service. We’re excited to invite you to join us in improving ACS. Prior to the creation of ACS Engine, customers with unique requirements not supported by the ACS API needed to maintain variations on our templates. While these differences start small, they grew larger over time as the mainline template was improved and users also iterated their templates. These differences and drift really impact the ability for users to collaborate, since their templates are all different. Without the ability to share and collaborate, it’s difficult to form a community since every user is siloed in their own variant.
+
+To solve this problem, the core of ACS Engine is a template processor, built in Go, that enables you to dynamically combine different pieces of configuration together to form a final template that can be used to build up your cluster. Thus, each user can mix and match the pieces to build the final container cluster that suits their needs. At the same time, each piece can be built and maintained collaboratively by the community. We’ve been beta testing this approach with some customers and the feedback we’ve gotten so far has been really positive.
+
+Beyond services to help you run containers on Azure, I think it’s incredibly important to improve the experience of developing and deploying containerized applications to Kubernetes. To that end, I’ve been doing a bunch of work lately to build a Kubernetes extension for the really excellent, open source, [Visual Studio Code](https://code.visualstudio.com/). The Kubernetes extension enables you to quickly deploy JSON or YAML files you are editing onto a Kubernetes cluster. Additionally, it enables you to import existing Kubernetes objects into Code for easy editing. Finally, it enables synchronization between your running containers and the source code that you are developing for easy debugging of issues you are facing in production.
+
+But really, a demo is worth a thousand words, so please have a look at this [video](https://www.youtube.com/watch?v=nhY9XdzNbbY):
+
+
+
+
+
+Of course, like everything else in Kubernetes it’s released as open source, and I look forward to working on it further with the community. Thanks again, I look forward to seeing everyone at the OpenShift Gathering today, as well as at the Microsoft Azure booth during KubeCon tomorrow and Wednesday. Welcome to Seattle!
+
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-11-00-Kompose-Tool-Go-From-Docker-Compose-To-Kubernetes.md b/blog/_posts/2016-11-00-Kompose-Tool-Go-From-Docker-Compose-To-Kubernetes.md
new file mode 100644
index 00000000000..9f361998ace
--- /dev/null
+++ b/blog/_posts/2016-11-00-Kompose-Tool-Go-From-Docker-Compose-To-Kubernetes.md
@@ -0,0 +1,193 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kompose: a tool to go from Docker-compose to Kubernetes "
+date: Wednesday, November 22, 2016
+
+---
+_Editor's note: Today’s post is by Sebastien Goasguen, Founder of Skippbox, showing a new tool to move from ‘docker-compose’ to Kubernetes._
+
+At [Skippbox](http://www.skippbox.com/), we developed **kompose**, a tool to automatically transform your Docker Compose application into Kubernetes manifests, allowing you to start a Compose application on a Kubernetes cluster with a single kompose up command. We’re extremely happy to have donated kompose to the [Kubernetes Incubator](https://github.com/kubernetes-incubator). So here’s a quick introduction to it and some of the motivating factors that got us to develop it.
+
+Docker is terrific for developers. It allows everyone to get started quickly with an application that has been packaged in a Docker image and is available on a Docker registry. To build a multi-container application, Docker has developed Docker-compose (aka Compose). Compose takes in a YAML-based manifest of your multi-container application and starts all the required containers with a single command, docker-compose up. However, Compose only works locally or with a Docker Swarm cluster.
+
+But what if you wanted to use something other than Swarm? Like Kubernetes, of course.
+
+The Compose format is not a standard for defining distributed applications. Hence you are left re-writing your application manifests in your container orchestrator of choice.
+
+We see kompose as a terrific way to expose Kubernetes principles to Docker users as well as to easily migrate from Docker Swarm to Kubernetes to operate your applications in production.
+
+Over the summer, Kompose has found a new gear with help from Tomas Kral and Suraj Deshmukh from Red Hat, and Janet Kuo from Google. Together with our own lead kompose developer Nguyen An-Tu they are making kompose even more exciting. We proposed Kompose to the Kubernetes Incubator within the SIG-apps and we received approval from the general Kubernetes community; you can now find kompose in the [Kubernetes Incubator](https://github.com/kubernetes-incubator/kompose).
+
+Kompose now supports the Docker Compose v2 format; persistent volume claims have been added recently, as well as multiple containers per pod. It can also be used to target OpenShift deployments by specifying a provider other than the default Kubernetes. Kompose is also now available as a Fedora package, and we look forward to seeing it in CentOS distributions in the coming weeks.
+
+kompose is a single Golang binary that you build or install from the [release on GitHub](https://github.com/kubernetes-incubator/kompose). Let’s skip the build instructions and dive straight into an example.
+
+Let's take it for a spin!
+
+**Guestbook application with Docker**
+
+The Guestbook application has become the canonical example for Kubernetes. In Docker-compose format, the **guestbook** can be started with this minimal file:
+
+
+```
+version: "2"
+
+services:
+  redis-master:
+    image: gcr.io/google_containers/redis:e2e
+    ports:
+      - "6379"
+  redis-slave:
+    image: gcr.io/google_samples/gb-redisslave:v1
+    ports:
+      - "6379"
+    environment:
+      - GET_HOSTS_FROM=dns
+  frontend:
+    image: gcr.io/google-samples/gb-frontend:v4
+    ports:
+      - "80:80"
+    environment:
+      - GET_HOSTS_FROM=dns
+```
+
+
+It consists of three services: a redis-master node, a set of redis-slaves that can be scaled and that find the redis-master via its DNS name, and a PHP frontend that exposes itself on port 80. The resulting application allows you to leave short messages, which are stored in the redis cluster.
+
+To get it started with docker-compose on a vanilla Docker host do:
+
+
+```
+$ docker-compose -f docker-guestbook.yml up -d
+
+Creating network "examples_default" with the default driver
+Creating examples_redis-slave_1
+Creating examples_frontend_1
+Creating examples_redis-master_1
+```
+
+
+So far so good, this is plain Docker usage. Now let’s see how to get this on Kubernetes without having to re-write anything.
+
+**Guestbook with 'kompose'**
+
+Kompose currently has three main commands: up, down and convert. Here, for simplicity, we will show a single usage to bring up the Guestbook application.
+
+Similarly to docker-compose, we can use the kompose up command pointing to the Docker-compose file representing the Guestbook application. Like so:
+
+
+
+
+
+
+
+```
+$ kompose -f ./examples/docker-guestbook.yml up
+
+We are going to create Kubernetes deployment and service for your dockerized application.
+
+If you need more kind of controllers, use 'kompose convert' and 'kubectl create -f' instead.
+
+
+
+INFO[0000] Successfully created service: redis-master
+
+INFO[0000] Successfully created service: redis-slave
+
+INFO[0000] Successfully created service: frontend
+
+INFO[0000] Successfully created deployment: redis-master
+
+INFO[0000] Successfully created deployment: redis-slave
+
+INFO[0000] Successfully created deployment: frontend
+
+
+
+Application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc' for details.
+ ```
+
+
+kompose automatically converted the Docker Compose file into Kubernetes objects. By default, it created one deployment and one service per **compose** service. In addition, it automatically detected your current Kubernetes endpoint and created the resources on it. A set of flags can be used to generate Replication Controllers, Replica Sets or Daemon Sets instead of Deployments.
+
+And that's it! Nothing else to do; the conversion happened automatically.
+Now, if you already know Kubernetes a bit, you’re familiar with the client kubectl and you can check what was created on your cluster.
+
+
+
+
+
+```
+$ kubectl get pods,svc,deployments
+NAME                            READY     STATUS    RESTARTS   AGE
+frontend-3780173733-0ayyx       1/1       Running   0          1m
+redis-master-3028862641-8miqn   1/1       Running   0          1m
+redis-slave-3788432149-t3ejp    1/1       Running   0          1m
+
+NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
+frontend       10.0.0.34    <none>        80/TCP     1m
+redis-master   10.0.0.219   <none>        6379/TCP   1m
+redis-slave    10.0.0.84    <none>        6379/TCP   1m
+
+NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
+frontend       1         1         1            1           1m
+redis-master   1         1         1            1           1m
+redis-slave    1         1         1            1           1m
+```
+
+Indeed you see the three services, the three deployments and the resulting three pods. To access the application quickly, access the _frontend_ service locally and enjoy the Guestbook application, but this time started from a Docker-compose file.
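+
+If you’d rather inspect or version the generated manifests instead of deploying them immediately, the convert command writes them to disk. A quick sketch (the exact output file names may differ):
+
+```
+$ kompose convert -f ./examples/docker-guestbook.yml
+$ ls *.yaml
+frontend-deployment.yaml    redis-master-deployment.yaml   redis-slave-deployment.yaml
+frontend-service.yaml       redis-master-service.yaml      redis-slave-service.yaml
+$ kubectl create -f .
+```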
+
+ 
+
+Hopefully this gave you a quick tour of kompose and got you excited. There are more exciting features, like creating different types of resources, creating Helm charts, and even using the experimental Docker bundle format as input. Check out Lachlan Evenson’s blog on [using a Docker bundle with Kubernetes](https://deis.com/blog/2016/push-docker-dab-kubernetes-cluster/). For an overall demo, see our talk from [KubeCon](https://www.youtube.com/watch?v=zqUfPPNVjI8&index=42&list=PLj6h78yzYM2PqgIGU1Qmi8nY7dqn9PCr4).
+
+
+
+Head over to the [Kubernetes Incubator](https://github.com/kubernetes-incubator/kompose) and check out kompose, it will help you move easily from your Docker compose applications to Kubernetes clusters in production.
+
+
+
+
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-11-00-Kubernetes-Certification-Training-And-Managed-Service-Provider-Program.md b/blog/_posts/2016-11-00-Kubernetes-Certification-Training-And-Managed-Service-Provider-Program.md
new file mode 100644
index 00000000000..3fcfb3d8fe5
--- /dev/null
+++ b/blog/_posts/2016-11-00-Kubernetes-Certification-Training-And-Managed-Service-Provider-Program.md
@@ -0,0 +1,21 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " CNCF Partners With The Linux Foundation To Launch New Kubernetes Certification, Training and Managed Service Provider Program "
+date: Wednesday, November 08, 2016
+
+---
+Today the CNCF is pleased to launch a new training, certification and Kubernetes Managed Service Provider (KMSP) program.
+
+The goal of the program is to ensure enterprises get the support they’re looking for to get up to speed and roll out new applications more quickly and more efficiently. The Linux Foundation, in partnership with CNCF, will develop and operate the Kubernetes training and certification.
+
+Interested in this course? Sign up [here](https://training.linuxfoundation.org/linux-courses/system-administration-training/kubernetes-fundamentals) to pre-register. The course, expected to be available in early 2017, is open now at the discounted price of $99 (regularly $199) for a limited time, and the certification program is expected to be available in the second quarter of 2017.
+
+The KMSP program is a pre-qualified tier of highly vetted service providers who have deep experience helping enterprises successfully adopt Kubernetes. The KMSP partners offer SLA-backed Kubernetes support, consulting, professional services and training for organizations embarking on their Kubernetes journey. In contrast to the Kubernetes Service Partners program outlined recently in [this blog](http://blog.kubernetes.io/2016/10/kubernetes-service-technology-partners-program.html), to become a Kubernetes Managed Service Provider the following additional requirements must be met: three or more certified engineers, an active contributor to Kubernetes, and a business model to support enterprise end users.
+
+As part of the program, a new CNCF Certification Working Group is starting up now. The group will help define the program's open source curriculum, which will be available under the [Creative Commons By Attribution 4.0 International license](https://creativecommons.org/licenses/by/4.0/) for anyone to use. Any Kubernetes expert can join the working group via this [link](https://lists.cncf.io/mailman/listinfo/cncf-kubernetescertwg). Google has committed to assist, and many others, including Apprenda, Container Solutions, CoreOS, Deis and Samsung SDS, have expressed interest in participating in the Working Group.
+
+To learn more about the new program and the first round of KMSP partners that we expect to grow weekly, check out today's announcement [here](https://www.cncf.io/announcement/2016/11/08/cloud-native-computing-foundation-launches-certification-training-managed-service-provider-program-kubernetes).
+
+
+
diff --git a/blog/_posts/2016-11-00-Kubernetes-Containers-Logging-Monitoring-With-Sematext.md b/blog/_posts/2016-11-00-Kubernetes-Containers-Logging-Monitoring-With-Sematext.md
new file mode 100644
index 00000000000..073cbbe110c
--- /dev/null
+++ b/blog/_posts/2016-11-00-Kubernetes-Containers-Logging-Monitoring-With-Sematext.md
@@ -0,0 +1,236 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes Containers Logging and Monitoring with Sematext "
+date: Saturday, November 18, 2016
+pagination:
+ enabled: true
+---
+_Editor's note: Today’s post is by Stefan Thies, Developer Evangelist, at Sematext, showing key Kubernetes metrics and log elements to help you troubleshoot and tune Docker and Kubernetes._
+
+
+Managing microservices in containers is typically done with Cluster Managers and Orchestration tools. Each container platform has a slightly different set of options to deploy containers or schedule tasks on each cluster node. Because we do [container monitoring and logging](http://sematext.com/kubernetes) at Sematext, part of our job is to share our knowledge of these tools, especially as it pertains to container observability and devops. Today we’ll show a tutorial for Container Monitoring and Log Collection on Kubernetes.
+
+**Dynamic Deployments Require Dynamic Monitoring**
+
+The high level of automation for the container and microservice lifecycle makes monitoring Kubernetes more challenging than monitoring more traditional, more static deployments. Any static setup to monitor specific application containers would not work, because Kubernetes makes its own scheduling decisions according to the defined deployment rules.
+
+It is not only the deployed microservices that need to be monitored. It is equally important to watch metrics and logs for the Kubernetes core services themselves, such as the Kubernetes Master running etcd, controller-manager, scheduler and apiserver, and the Kubernetes Workers (fka minions) running kubelet and the proxy service. Having a centralized place to keep an eye on all these services, their metrics and logs helps one spot problems in the cluster infrastructure. Kubernetes core services could be installed on bare metal, in virtual machines or as containers using Docker. Deploying Kubernetes core services in containers could be helpful with deployment and monitoring operations - tools for container monitoring would cover both core services and application containers. So how does one monitor such a complex and dynamic environment?
+
+**Agent for Kubernetes Metrics and Logs**
+
+There are a number of [open source docker monitoring and logging projects](https://sematext.com/blog/2016/07/19/open-source-docker-monitoring-logging/) one can cobble together to build a monitoring and log collection system (or systems). The advantage is that the code is all free. The downside is that this takes times - both initially when setting it up and later when maintaining. That’s why we built [Sematext Docker Agent](http://sematext.com/docker) - a modern, Docker-aware metrics, events, and log collection agent. It runs as a tiny container on every Docker host and collects logs, metrics and events for all cluster nodes and all containers. It discovers all containers (one pod might contain multiple containers) including containers for Kubernetes core services, if core services are deployed in Docker containers. Let’s see how to deploy this agent.
+
+**Deploying the Agent to all Kubernetes Nodes**
+
+Kubernetes provides [DaemonSets](http://kubernetes.io/v1.1/docs/admin/daemons.html), which ensure pods are added to nodes as nodes are added to the cluster. We can use this to easily deploy the Sematext Agent to each cluster node!
+
+**Configure Sematext Docker Agent for Kubernetes**
+
+Let’s assume you’ve created an SPM app for your Kubernetes metrics and events, and a Logsene app for your Kubernetes logs, each of which comes with its own token. The Sematext Docker Agent [README](https://github.com/sematext/sematext-agent-docker) lists all configuration options (e.g. filtering for specific pods/images/containers), but we’ll keep it simple here.
+
+
+- Grab the latest sematext-agent-daemonset.yml (raw plain-text) template (also shown below)
+- Save it somewhere on disk
+- Replace the SPM\_TOKEN and LOGSENE\_TOKEN placeholders with your SPM and Logsene App tokens
+
+```
+apiVersion: extensions/v1beta1
+kind: DaemonSet
+metadata:
+  name: sematext-agent
+spec:
+  template:
+    metadata:
+      labels:
+        app: sematext-agent
+    spec:
+      selector: {}
+      dnsPolicy: "ClusterFirst"
+      restartPolicy: "Always"
+      containers:
+      - name: sematext-agent
+        image: sematext/sematext-agent-docker:latest
+        imagePullPolicy: "Always"
+        env:
+        - name: SPM_TOKEN
+          value: "REPLACE THIS WITH YOUR SPM TOKEN"
+        - name: LOGSENE_TOKEN
+          value: "REPLACE THIS WITH YOUR LOGSENE TOKEN"
+        - name: KUBERNETES
+          value: "1"
+        volumeMounts:
+        - mountPath: /var/run/docker.sock
+          name: docker-sock
+        - mountPath: /etc/localtime
+          name: localtime
+      volumes:
+      - name: docker-sock
+        hostPath:
+          path: /var/run/docker.sock
+      - name: localtime
+        hostPath:
+          path: /etc/localtime
+```
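+
+Once the template is saved, the token placeholders can be filled in by hand or scripted; here is a minimal sketch using sed (the token values below are placeholders, not real tokens):
+
+```
+# fill in the two token placeholders in the saved template (example values only)
+sed -i 's/REPLACE THIS WITH YOUR SPM TOKEN/your-spm-app-token/' sematext-agent-daemonset.yml
+sed -i 's/REPLACE THIS WITH YOUR LOGSENE TOKEN/your-logsene-app-token/' sematext-agent-daemonset.yml
+```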
+
+
+
+**Run Agent as DaemonSet**
+
+
+
+Activate Sematext Docker Agent with _kubectl_:
+
+
+
+```
+$ kubectl create -f sematext-agent-daemonset.yml
+
+daemonset "sematext-agent" created
+ ```
+
+
+
+Now let’s check if the agent got deployed to all nodes:
+
+
+
+```
+$ kubectl get pods
+
+NAME READY STATUS RESTARTS AGE
+
+sematext-agent-nh4ez 0/1 ContainerCreating 0 6s
+
+sematext-agent-s47vz 0/1 ImageNotReady 0 6s
+ ```
+
+
+
+The status “ImageNotReady” or “ContainerCreating” might be visible for a short time, because Kubernetes must first download the image for sematext/sematext-agent-docker. The setting imagePullPolicy: "Always" specified in sematext-agent-daemonset.yml makes sure that Sematext Agent gets updated automatically using the image from Docker Hub.
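+
+If a pod stays in one of these states for longer than expected, the image pull and container start events can be inspected with _kubectl_; for example (using one of the pod names from the listing above):
+
+```
+# show recent events for one of the agent pods, including image pull progress
+kubectl describe pod sematext-agent-nh4ez
+```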
+
+If we check again we’ll see Sematext Docker Agent got deployed to (all) cluster nodes:
+
+
+
+```
+$ kubectl get pods -l app=sematext-agent
+
+NAME READY STATUS RESTARTS AGE
+
+sematext-agent-nh4ez 1/1 Running 0 8s
+
+sematext-agent-s47vz 1/1 Running 0 8s
+ ```
+
+
+
+Less than a minute after the deployment you should see your Kubernetes metrics and logs! Below are screenshots of various out-of-the-box reports and explanations of the various metrics’ meanings.
+
+
+
+**Interpretation of Kubernetes Metrics**
+
+
+
+The metrics from all Kubernetes nodes are collected in a single SPM App, which aggregates metrics on several levels:
+
+- Cluster - metrics aggregated over all nodes displayed in SPM overview
+- Host / node level - metrics aggregated per node
+- Docker Image level - metrics aggregated by image name, e.g. all nginx webserver containers
+- Docker Container level - metrics aggregated for a single container
+
+
+
+_Host and Container Metrics from the Kubernetes Cluster_
+
+
+
+Each detailed chart has filter options for Node, Docker Image, and Docker Container. As Kubernetes uses the pod name in the names of the Docker containers, a search by pod name in the Docker Container filter makes it easy to select all containers for a specific pod.
+
+
+
+Let’s have a look at a few Kubernetes (and Docker) key metrics provided by SPM.
+
+
+
+Host Metrics such as CPU, Memory and Disk space usage. Docker images and containers consume more disk space than regular processes installed on a host. For example, an application image might include a Linux operating system and might have a size of 150-700 MB, depending on the size of the base image and the tools installed in the container. Data containers consume disk space on the host as well. In our experience, watching disk space and using cleanup tools is essential for the continuous operation of Docker hosts.
+
+
+
+
+
+
+Container count - represents the number of running containers per host
+
+
+
+_Container Counters per Kubernetes Node over time_
+
+
+
+Container Memory and Memory Fail Counters. These metrics are important to watch, and very important for tuning applications. Memory limits should fit the footprint of the deployed pod (application) to avoid situations where Kubernetes uses default limits (e.g. defined for a namespace), which could lead to OOM kills of containers. Memory fail counters reflect the number of failed memory allocations in a container, and in the case of OOM kills a Docker Event is triggered. This event is then displayed in SPM, because [Sematext Docker Agent](https://github.com/sematext/sematext-agent-docker) collects all Docker Events. The best practice is to tune memory settings in a few iterations:
+
+- Monitor memory usage of the application container
+- Set memory limits according to the observations
+- Continue monitoring of memory, memory fail counters, and Out-Of-Memory events. If OOM events happen, the container memory limits may need to be increased, or debugging is required to find the reason for the high memory consumption (see the manifest sketch below the chart).
+
+_Container memory usage, limits and fail counters_
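+
+As a concrete illustration of the second step, here is a minimal, hypothetical pod manifest that pins explicit memory requests and limits after observing the container’s footprint (all names and values are placeholders, not recommendations):
+
+```
+# hypothetical example: memory request/limit sized from observed usage
+apiVersion: v1
+kind: Pod
+metadata:
+  name: my-app
+spec:
+  containers:
+  - name: my-app
+    image: my-app:1.0        # placeholder image
+    resources:
+      requests:
+        memory: "128Mi"      # typical observed footprint
+      limits:
+        memory: "256Mi"      # headroom above the observed peak
+```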
+
+
+Container CPU usage and throttled CPU time. CPU usage can be limited by CPU shares - unlike memory, CPU shares are not a hard limit. Containers might use more CPU as long as the resource is available, but in situations where other containers need the CPU, limits apply and the CPU gets throttled to the limit.
+
+
+
+
+
+
+There are more [docker metrics to watch](https://sematext.com/blog/2016/06/28/top-docker-metrics-to-watch/), like disk I/O throughput, network throughput and network errors for containers, but let’s continue by looking at Kubernetes Logs next.
+
+
+
+**Understand Kubernetes Logs**
+
+Kubernetes containers’ logs are not much different from Docker container logs. However, Kubernetes users need to view logs for the deployed pods. That’s why it is very useful to have Kubernetes-specific information available for log search, such as:
+
+- Kubernetes namespace
+- Kubernetes pod name
+- Kubernetes container name
+- Docker image name
+- Kubernetes UID
+
+Sematext Docker Agent extracts this information from the Docker container names and tags all logs with the information mentioned above. Having this data extracted into individual fields makes it very easy to watch logs of deployed pods, build reports from logs, quickly narrow down to problematic pods while troubleshooting, and so on! If Kubernetes core components (such as kubelet, proxy, api server) are deployed via Docker, the Sematext Docker Agent will collect Kubernetes core components logs as well.
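+
+For context, the Docker container names generated by Kubernetes at the time encoded roughly the following fields - the exact layout varies by Kubernetes version, so treat this as an approximation:
+
+```
+# approximate shape of a Kubernetes-generated Docker container name (version-dependent)
+k8s_<container-name>.<hash>_<pod-name>_<namespace>_<pod-uid>_<attempt>
+```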
+
+
+
+_All logs from Kubernetes containers in Logsene_
+
+There are many other useful features Logsene and Sematext Docker Agent give you out of the box, such as:
+
+- Automatic format detection and parsing of logs
+
+ - Sematext Docker Agent includes patterns to recognize and parse many log formats
+- Custom pattern definitions for specific images and application types
+- [Automatic Geo-IP enrichment for container logs](https://sematext.com/blog/2016/04/11/automatic-geo-ip-enrichment-for-docker-logs-2/)
+- Filtering logs e.g. to exclude noisy services
+- Masking of sensitive data in specific log fields (phone numbers, payment information, authentication tokens)
+- Alerts and scheduled reports based on logs
+- Analytics for structured logs e.g. in Kibana or Grafana
+
+Most of those topics are described in our [Docker Log Management](https://sematext.com/blog/2015/08/12/docker-log-management/) post and are relevant for Kubernetes log management as well. If you want to learn more about [Docker monitoring](http://blog.sematext.com/2016/01/12/docker-swarm-collecting-metrics-events-logs/), read more on our [blog](https://sematext.com/blog/tag/docker,kubernetes).
+
+
+
+_--Stefan Thies, Developer Evangelist, at Sematext_
+
+
+
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-11-00-Skytap-Modernizing-Microservice-Architecture-With-Kubernetes.md b/blog/_posts/2016-11-00-Skytap-Modernizing-Microservice-Architecture-With-Kubernetes.md
new file mode 100644
index 00000000000..ce3f5920928
--- /dev/null
+++ b/blog/_posts/2016-11-00-Skytap-Modernizing-Microservice-Architecture-With-Kubernetes.md
@@ -0,0 +1,177 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Modernizing the Skytap Cloud Micro-Service Architecture with Kubernetes "
+date: Tuesday, November 07, 2016
+pagination:
+ enabled: true
+---
+_Editor's note: Today’s guest post is by the Tools and Infrastructure Engineering team at Skytap, a public cloud provider focused on empowering DevOps workflows, sharing their experience on adopting Kubernetes._
+
+[Skytap](https://www.skytap.com/) is a global public cloud that provides our customers the ability to save and clone complex virtualized environments in any given state. Our customers include enterprise organizations running applications in a hybrid cloud, educational organizations providing [virtual training labs](https://www.skytap.com/solutions/virtual-training/), users who need easy-to-maintain development and test labs, and a variety of organizations with diverse DevOps workflows.
+
+Some time ago, we started growing our business at an accelerated pace — our user base and our engineering organization continue to grow simultaneously. These are exciting, rewarding challenges! However, it's difficult to scale applications and organizations smoothly, and we’re approaching the task carefully. When we first began looking at improvements to scale our toolset, it was very clear that traditional OS virtualization was not going to be an effective way to achieve our scaling goals. We found that the persistent nature of VMs encouraged engineers to build and maintain bespoke ‘pet’ VMs; this did not align well with our desire to build reusable runtime environments with a stable, predictable state. Fortuitously, growth in the Docker and Kubernetes communities has aligned with our growth, and the concurrent explosion in community engagement has (from our perspective) helped these tools mature.
+
+In this article we’ll explore how Skytap uses Kubernetes as a key component in services that handle production workloads growing the Skytap Cloud.
+
+As we add engineers, we want to maintain our agility and continue enabling ownership of components throughout the software development lifecycle. This requires a lot of modularization and consistency in key aspects of our process. Previously, we drove reuse with systems-level packaging through our VM and environment templates, but as we scale, containers have become increasingly important as a packaging mechanism due to their comparatively lightweight and precise control of the runtime environment.
+
+In addition to this packaging flexibility, containers help us establish more efficient resource utilization, and they head off growing complexity arising from the natural inclination of teams to mix resources into large, highly-specialized VMs. For example, our operations team would install tools for monitoring health and resource utilization, a development team would deploy a service, and the security team might install traffic monitoring; combining all of that into a single VM greatly increases the test burden and often results in surprises—oops, you pulled in a new system-level Ruby gem!
+
+Containerization of individual components in a service is pretty trivial with Docker. Getting started is easy, but as anyone who has built a distributed system with more than a handful of components knows, the real difficulties are deployment, scaling, availability, consistency, and communication between each unit in the cluster.
+
+**Let’s containerize!**
+
+We’d begun to trade a lot of our heavily-loved pet VMs for, [as the saying goes](https://ericsysmin.com/2016/03/07/pets-vs-cattle/), cattle.
+
+```
+ _____
+/ Moo \
+\-----/
+       \   ^__^
+        \  (oo)\_______
+           (__)\       )\/\
+               ||----w |
+               ||     ||
+```
+
+The challenges of distributed systems aren’t simplified by creating a large herd of free-range containers, though. When we started using containers, we recognized the need for a container management framework. We evaluated Docker Swarm, Mesosphere, and Kubernetes, but we found that the Mesosphere usage model didn’t match our needs — we need the ability to manage discrete VMs; this doesn’t match the Mesosphere ‘distributed operating system’ model — and Docker Swarm was still not mature enough. So, we selected Kubernetes.
+
+
+
+Launching Kubernetes and building a new distributed service is relatively easy (inasmuch as this can be said for such a service: you can’t beat [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem)). However, we need to integrate container management with our existing platform and infrastructure. Some components of the platform are better served by VMs, and we need the ability to containerize services iteratively.
+
+We broke this integration problem down into four categories:
+
+1. Service control and deployment
+2. Inter-service communication
+3. Infrastructure integration
+4. Engineering support and education
+
+**Service Control and Deployment**
+
+
+
+We use a custom extension of [Capistrano](https://github.com/capistrano/capistrano) (we call it ‘Skycap’) to deploy services and manage those services at runtime. It is important for us to manage both containerized and classic services through a single, well-established framework. We also need to isolate Skycap from the inevitable breaking changes inherent in an actively-developed tool like Kubernetes.
+
+
+
+To handle this, we use wrappers in our service control framework that isolate kubectl behind Skycap and handle issues like ignoring spurious log messages.
+
+
+
+Deployment adds a layer of complexity for us. Docker images are a great way to package software, but historically, we’ve deployed from source, not packages. Our engineering team expects that making changes to source is sufficient to get their work released; devs don’t expect to handle additional packaging steps. Rather than rebuild our entire deployment and orchestration framework for the sake of containerization, we use a continuous integration pipeline for our containerized services. We automatically build a new Docker image for every commit to a project, and then we tag it with the Mercurial (Hg) changeset number of that commit. On the Skycap side, a deployment from a specific Hg revision will then pull the Docker images that are tagged with that same revision number.
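+
+A hedged sketch of what such a CI step can look like (the registry and image names are hypothetical, not Skytap's actual pipeline):
+
+```
+# tag and publish an image with the current Mercurial changeset id
+CHANGESET=$(hg id -i)
+docker build -t registry.example.com/my-service:${CHANGESET} .
+docker push registry.example.com/my-service:${CHANGESET}
+```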
+
+
+
+We reuse container images across multiple environments. This requires environment-specific configuration to be injected into each container instance. Until recently, we used similar source-based principles to inject these configuration values: each container would copy relevant configuration files from Hg by cURL-ing raw files from the repo at run time. Network availability and variability are a challenge best avoided, though, so we now load the configuration into Kubernetes’ [**ConfigMap**](http://blog.kubernetes.io/2016/04/configuration-management-with-containers.html) feature. This not only simplifies our Docker images, but it also makes pod startup faster and more predictable (because containers don’t have to download files from Hg).
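+
+As a rough illustration, environment-specific settings can be loaded into a ConfigMap straight from a directory of files (the names and path below are hypothetical):
+
+```
+# create a ConfigMap from a directory of environment-specific config files
+kubectl create configmap my-service-config --from-file=config/production/
+```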
+
+
+
+**Inter-service communication**
+
+
+
+Our services communicate using two primary methods. The first, message brokering, is typical for process-to-process communication within the Skytap platform. The second is through direct point-to-point TCP connections, which are typical for services that communicate with the outside world (such as web services). We’ll discuss the TCP method in the next section, as a component of infrastructure integration.
+
+
+
+Managing direct connections between pods in a way that services can understand is complicated. Additionally, our containerized services need to communicate with classic VM-based services. To mitigate this complexity, we primarily use our existing message queueing system. This helped us avoid writing a TCP-based service discovery and load balancing system for handling traffic between pods and non-Kubernetes services.
+
+
+
+This reduces our configuration load—services only need to know how to talk to the message queues, rather than to every other service they need to interact with. We have additional flexibility for things like managing the run-state of pods; messages buffer in the queue while nodes are restarting, and we avoid the overhead of re-configuring TCP endpoints each time a pod is added or removed from the cluster. Furthermore, the MQ model allows us to manage load balancing with a more accurate ‘pull’ based approach, in which recipients determine when they are ready to process a new message, instead of using heuristics like ‘least connections’ that simply count the number of open sockets to estimate load.
+
+
+
+Migrating MQ-enabled services to Kubernetes is relatively straightforward compared to migrating services that use the complex TCP-based direct or load balanced connections. Additionally, the isolation provided by the message broker means that the switchover from a classic service to a container-based service is essentially transparent to any other MQ-enabled service.
+
+
+
+**Infrastructure Integration**
+
+
+
+As an infrastructure provider, we face some unique challenges in configuring Kubernetes for use with our platform. [AWS](https://aws.amazon.com/) & [GCP](https://cloud.google.com/) provide out-of-box solutions that simplify Kubernetes provisioning, but they make assumptions about the underlying infrastructure that do not match our reality; other solutions assume purpose-built data centers. Either option would have required us to abandon our existing load balancing infrastructure, our Puppet-based provisioning system, and the expertise we’d built up around these tools. We weren’t interested in abandoning the tools or our vested experience, so we needed a way to manage Kubernetes that could integrate with our world instead of rebuilding it.
+
+
+
+So, we use Puppet to provision and configure VMs that, in turn, run the Skytap Platform. We wrote custom deployment scripts to install Kubernetes on these, and we coordinate with our operations team to do capacity planning for Kube-master and Kube-node hosts.
+
+
+
+In the previous section, we mentioned point-to-point TCP-based communication. For customer-facing services, the pods need a way to interface with Skytap’s layer 3 network infrastructure. Examples at Skytap include our web applications and API over HTTPS, Remote Desktop over Web Sockets, FTP, TCP/UDP port forwarding services, full public IPs, etc. We need careful management of network ingress and egress for this external traffic, and have historically used [F5](https://f5.com/) load balancers. The MQ infrastructure for internal services is inadequate for handling this workload because the protocols used by various clients (like web browsers) are very specific and TCP is the lowest common denominator.
+
+
+
+To get our load balancers communicating with our Kubernetes pods, we run the kube-proxy on each node. Load balancers route to the node, and kube-proxy handles the final handoff to the appropriate pod.
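+
+For illustration only, a Service of type NodePort is one way to give an external load balancer a stable per-node port to target; the names and ports below are hypothetical, not Skytap's actual configuration:
+
+```
+# hypothetical NodePort service an external load balancer pool could target
+apiVersion: v1
+kind: Service
+metadata:
+  name: web-frontend
+spec:
+  type: NodePort
+  selector:
+    app: web-frontend
+  ports:
+  - port: 443
+    targetPort: 8443   # container port
+    nodePort: 30443    # per-node port the F5 pool members point at
+```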
+
+
+
+We mustn’t forget that Kubernetes needs to route traffic between pods (for both TCP-based and MQ-based messaging). We use the [Calico](https://www.projectcalico.org/calico-networking-for-kubernetes/) plugin for Kubernetes networking, with a specialized service to reconfigure the F5 when Kubernetes launches or reaps pods. Calico handles route advertisement with [BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol), which eases integration with the F5.
+
+
+
+F5s also need to have their [load balancing pool](https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/ltm-concepts-11-2-0/ltm_pools.html) reconfigured when pods enter or leave the cluster. The F5 appliance maintains a pool of load-balanced back-ends; ingress to a containerized service is directed through this pool to one of the nodes hosting a service pod. This is straightforward for static network configurations – but since we're using Kubernetes to manage pod replication and availability, our networking situation becomes dynamic. To handle changes, we have a 'load balancer' pod that monitors the Kubernetes svc object for changes; if a pod is removed or added, the ‘load balancer’ pod will detect this change through the svc object, and then update the F5 configuration through the appliance's web API. This way, Kubernetes transparently handles replication and failover/recovery, and the dynamic load balancer configuration lets this process remain invisible to the service or user who originated the request. Similarly, the combination of the Calico virtual network plus the F5 load balancer means that TCP connections should behave consistently for services, whether they are still running on the traditional VM infrastructure or have been migrated to containers.
+
+
+
+
+
+
+With dynamic reconfiguration of the network, the replication mechanics of Kubernetes make horizontal scaling and (most) failover/recovery very straightforward. We haven’t yet reached the reactive scaling milestone, but we've laid the groundwork with the Kubernetes and Calico infrastructure, making one avenue to implement it straightforward:
+
+- Configure upper and lower bounds for service replication
+- Build a load analysis and scaling service (easy, right?)
+- If load patterns match the configured triggers in the scaling service (for example, request rate or volume above certain bounds), issue: kubectl scale --replicas=COUNT rc NAME
+
+This would allow us fine-grained control of autoscaling at the platform level, instead of from the applications themselves - but we’ll also evaluate [**Horizontal Pod Autoscaling**](http://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/) in Kubernetes, which may suit our needs without a custom service.
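+
+For reference, the built-in mechanism can be driven with a single command; a hedged example (the resource name and bounds are hypothetical):
+
+```
+# Kubernetes' built-in horizontal autoscaling for a replication controller
+kubectl autoscale rc my-service --min=2 --max=10 --cpu-percent=80
+```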
+
+
+
+Keep an eye on [our GitHub account](https://github.com/skytap) and the [Skytap blog](https://www.skytap.com/blog/); as our solutions to problems like these mature, we hope to share what we’ve built with the open source community.
+
+
+
+**Engineering Support**
+
+
+
+A transition like our containerization project requires the engineers involved in maintaining and contributing to the platform to change their workflows and learn new methods for creating and troubleshooting services.
+
+
+
+Because a variety of learning styles require a multi-faceted approach, we handle this in three ways: with documentation, with direct outreach to engineers (that is, brownbag sessions or coaching teams), and by offering easy-to-access, ad-hoc support.
+
+
+
+We continue to curate a collection of documents that provide guidance on transitioning classic services to Kubernetes, creating new services, and operating containerized services. Documentation isn’t for everyone, and sometimes it’s missing or incomplete despite our best efforts, so we also run an internal #kube-help Slack channel, where anyone can stop in for assistance or arrange a more in-depth face-to-face discussion.
+
+
+
+We have one more powerful support tool: we automatically construct and test prod-like environments that include this Kubernetes infrastructure, which allows engineers a lot of freedom to experiment and work with Kubernetes hands-on. We explore automated environment delivery in more detail in [this post](https://www.skytap.com/blog/continuous-delivery-fully-functional-environments-skytap-part-1/).
+
+
+
+**Final Thoughts**
+
+
+
+We’ve had great success with Kubernetes and containerization in general, but we’ve certainly found that integrating with an existing full-stack environment has presented many challenges. While not exactly plug-and-play from an enterprise lifecycle standpoint, the flexibility and configurability of Kubernetes make it a very powerful tool for building our modularized service ecosystem.
+
+
+
+We love application modernization challenges. The Skytap platform is well suited for these sorts of migration efforts – we run Skytap in Skytap, of course, which helped us tremendously in our Kubernetes integration project. If you’re planning modernization efforts of your own, [connect with us](https://www.skytap.com/), we’re happy to help.
+
+
+
+_--Shawn Falkner-Horine and Joe Burchett, Tools and Infrastructure Engineering, Skytap_
+
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-11-00-Visualize-Kubelet-Performance-With-Node-Dashboard.md b/blog/_posts/2016-11-00-Visualize-Kubelet-Performance-With-Node-Dashboard.md
new file mode 100644
index 00000000000..34b81daad77
--- /dev/null
+++ b/blog/_posts/2016-11-00-Visualize-Kubelet-Performance-With-Node-Dashboard.md
@@ -0,0 +1,124 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Visualize Kubelet Performance with Node Dashboard "
+date: Friday, November 17, 2016
+pagination:
+ enabled: true
+---
+
+In Kubernetes 1.4, we introduced a new node performance analysis tool, called the _node performance dashboard_, to visualize and explore the behavior of the Kubelet in much richer detail. This new feature makes it easy for Kubelet developers to understand and improve code performance, and lets cluster maintainers set configurations according to the provided Service Level Objectives (SLOs).
+
+**Background**
+
+A Kubernetes cluster is made up of both master and worker nodes. The master node manages the cluster’s state, and the worker nodes do the actual work of running and managing pods. To do so, on each worker node, a binary, called [Kubelet](http://kubernetes.io/docs/admin/kubelet/), watches for any changes in pod configuration, and takes corresponding actions to make sure that containers run successfully. High performance of the Kubelet, such as low latency to converge with new pod configuration and efficient housekeeping with low resource usage, is essential for the entire Kubernetes cluster. To measure this performance, Kubernetes uses [end-to-end (e2e) tests](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/e2e-tests.md#overview) to continuously monitor benchmark changes in the latest builds as new features land.
+
+**Kubernetes SLOs are defined by the following benchmarks**:
+
+- **API responsiveness**: 99% of all API calls return in less than 1s.
+- **Pod startup time**: 99% of pods and their containers (with pre-pulled images) start within 5s.
+
+Prior to the 1.4 release, we only measured and defined these at the cluster level, opening up the risk that other factors could influence the results. Beyond these, we also want more performance-related SLOs, such as the maximum number of pods a specific machine type can host while still allowing maximum utilization of your cluster. In order to do the measurement correctly, we want to introduce a set of tests isolated to just a node’s performance. In addition, we aim to collect more fine-grained resource usage and operation tracing data of the Kubelet from the new tests.
+
+**Data Collection**
+
+The node-specific density and resource usage tests were added to the e2e-node test set in 1.4. Resource usage is measured by a standalone cAdvisor pod, which allows a flexible monitoring interval (compared with the Kubelet-integrated cAdvisor). The performance data, such as latency and resource usage percentiles, are recorded in persistent test result logs. The tests also record time series data such as creation time and running time of pods, as well as real-time resource usage. Tracing data of Kubelet operations is recorded in its log and stored together with the test results.
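+
+As a rough sketch (the exact make target and focus regex depend on the Kubernetes version), these tests can be run from a checkout of the Kubernetes repository:
+
+```
+# run the node e2e tests from the Kubernetes source tree; the focus regex is illustrative
+make test-e2e-node FOCUS="\[Benchmark\]"
+```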
+
+**Node Performance Dashboard**
+
+Since Kubernetes 1.4, we are continuously building the newest Kubelet code and running node performance tests. The data is collected by our new performance dashboard available at [node-perf-dash.k8s.io](http://node-perf-dash.k8s.io/). Figure 1 gives a preview of the dashboard. You can start to explore it by selecting a test, either using the drop-down list of short test names (region (a)) or by choosing test options one by one (region (b)). The test details show up in region (c) containing the full test name from Ginkgo (the Go test framework used by Kubernetes). Then select a node type (image and machine) in region (d).
+
+_Figure 1. Select a test to display in node performance dashboard._
+
+
+The "BUILDS" page exhibits the performance data across different builds (Figure 2). The plots include pod startup latency, pod creation throughput, and CPU/memory usage of Kubelet and runtime (currently Docker). In this way it’s easy to monitor the performance change over time as new features are checked in.
+
+
+
+_Figure 2. Performance data across different builds._
+
+
+**Compare Different Node Configurations**
+
+It’s always interesting to compare the performance of different configurations, such as comparing startup latency across machine types or numbers of pods, or comparing the resource usage of hosting different numbers of pods. The dashboard provides a convenient way to do this: just click the "Compare it" button at the upper right corner of the test selection menu (region (e) in Figure 1). The selected tests will be added to a comparison list in the "COMPARISON" page, as shown in Figure 3. Data across a series of builds are aggregated into a single value to facilitate comparison and are displayed in bar charts.
+
+
+
+_Figure 3. Compare different test configurations._
+
+
+
+**Time Series and Tracing: Diving Into Performance Data**
+
+
+
+Pod startup latency is an important metric for the Kubelet, especially when creating a large number of pods per node. Using the dashboard you can see the change in latency, for example, when creating 105 pods, as shown in Figure 4. When you see the highly variable lines, you might expect the variance to be due to different builds. However, as these tests were run against the same Kubernetes code, we can conclude the variance is due to performance fluctuation. The variance is close to 40s when we compare the 99% latency of builds #162 and #173, which is very large. To drill into the source of the fluctuation, let’s check out the "TIME SERIES" page.
+
+
+
+_Figure 4. Pod startup latency when creating 105 pods._
+
+
+Looking specifically at build #162, we are able to see the tracing data plotted in the pod creation latency chart (Figure 5). Each curve is an accumulated histogram of the number of pod operations that have already arrived at a certain tracing probe. The timestamps of the tracing probes are either collected from the performance tests or obtained by parsing the Kubelet log. Currently we collect the following tracing data:
+
+- "create" (in test): the test creates pods through API client;
+- "running" (in test): the test watches that pods are running from API server;
+- "pod\_config\_change": pod config change detected by Kubelet SyncLoop;
+- "runtime\_manager": runtime manager starts to create containers;
+- "infra\_container\_start": the infra container of a pod starts;
+- "container\_start': the container of a pod starts;
+- "pod\_running": a pod is running;
+- "pod\_status\_running": status manager updates status for a running pod;
+
+The time series chart illustrates that it is taking a long time for the status manager to update pod status (the data of "running" is not shown since it overlaps with "pod\_status\_running"). We figured out that this latency is introduced by the queries-per-second (QPS) limit the Kubelet applies to API server requests (the default is 5). After becoming aware of this, we found in additional tests that by increasing the QPS limit, the curve "running" gradually converges with "pod\_running", resulting in much lower latency. Therefore the previous e2e test pod startup results reflect the combined latency of both the Kubelet and the time to upload status; the performance of the Kubelet was thus under-estimated.
+
+
+_Figure 5. Time series page using data from build #162._
+
+
+Further, by comparing the time series data of build #162 (Figure 5) and build #173 (Figure 6), we find that the pod startup latency fluctuation actually happens while updating pod statuses. Build #162 has several straggler "pod\_status\_running" events with long latency tails. This provides useful ideas for future optimization.
+
+
+
+_Figure 6. Pod startup latency of build #173._
+
+
+
+In the future, we plan to use Kubernetes events, which have a fixed log format, to collect tracing data more conveniently. Instead of extracting existing log entries, you will then be able to insert your own tracing probes inside the Kubelet and obtain the break-down latency of each segment.
+
+
+
+You can check the latency between any two probes across different builds in the "TRACING" page, as shown in Figure 7. For example, selecting "pod\_config\_change" as the start probe and "pod\_status\_running" as the end probe gives the latency variance of the Kubelet over continuous builds without the status updating overhead. With this feature, developers are able to monitor the performance change of a specific part of the code inside the Kubelet.
+
+
+_Figure 7. Plotting latency between any two probes._
+
+
+
+**Future Work**
+
+
+
+The [node performance dashboard](http://node-perf-dash.k8s.io/) is a brand new feature, still in alpha and under active development. We will keep optimizing the data collection and visualization, providing more tests, metrics, and tools to developers and cluster maintainers.
+
+
+
+Please join our community and help us build the future of Kubernetes! If you’re particularly interested in nodes or performance testing, participate by chatting with us in our [Slack channel](https://kubernetes.slack.com/messages/sig-scale/) or join our SIG-Node meeting, held every Tuesday at 10 AM PT on this [SIG-Node Hangout](https://github.com/kubernetes/community/tree/master/sig-node).
+
+
+
+_--Zhou Fang, Software Engineering Intern, Google_
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-12-00-Cluster-Federation-In-Kubernetes-1.5.md b/blog/_posts/2016-12-00-Cluster-Federation-In-Kubernetes-1.5.md
new file mode 100644
index 00000000000..f7e9e91e332
--- /dev/null
+++ b/blog/_posts/2016-12-00-Cluster-Federation-In-Kubernetes-1.5.md
@@ -0,0 +1,322 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Cluster Federation in Kubernetes 1.5 "
+date: Friday, December 22, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: this post is part of a [series of in-depth articles](http://blog.kubernetes.io/2016/12/five-days-of-kubernetes-1.5.html) on what's new in Kubernetes 1.5_
+
+In the latest [Kubernetes 1.5 release](http://blog.kubernetes.io/2016/12/kubernetes-1.5-supporting-production-workloads.html), you’ll notice that support for Cluster Federation is maturing. That functionality was introduced in Kubernetes 1.3, and the 1.5 release includes a number of new features, including an easier setup experience and a step closer to supporting all Kubernetes API objects.
+
+A new command line tool called ‘**[kubefed](http://kubernetes.io/docs/admin/federation/kubefed/)**’ was introduced to make getting started with Cluster Federation much simpler. Also, alpha level support was added for Federated DaemonSets, Deployments and ConfigMaps. In summary:
+
+- **DaemonSets** are Kubernetes deployment rules that guarantee that a given pod is always present at every node, as new nodes are added to the cluster (more [info](http://kubernetes.io/docs/admin/daemons/)).
+- **Deployments** describe the desired state of Replica Sets (more [info](http://kubernetes.io/docs/user-guide/deployments/)).
+- **ConfigMaps** are variables applied to Replica Sets (which greatly improves image reusability as their parameters can be externalized - more [info](http://kubernetes.io/docs/user-guide/configmap/)).
+
+**Federated DaemonSets**, **Federated Deployments**, and **Federated ConfigMaps** take the qualities of the base concepts to the next level. For instance, Federated DaemonSets guarantee that a pod is deployed on every node of a newly added cluster.
+
+But what actually is “federation”? Let’s explain it by what needs it satisfies. Imagine a service that operates globally. Naturally, all its users expect to get the same quality of service, whether they are located in Asia, Europe, or the US. What this means is that the service must respond equally fast to requests at each location. This sounds simple, but there’s lots of logic involved behind the scenes. This is what Kubernetes Cluster Federation aims to do.
+
+How does it work? One of the Kubernetes clusters must become a master by running a **Federation Control Plane**. In practice, this is a controller that monitors the health of other clusters, and provides a single entry point for administration. The entry point behaves like a typical Kubernetes cluster. It allows creating [Replica Sets](http://kubernetes.io/docs/user-guide/replicasets/), [Deployments](http://kubernetes.io/docs/user-guide/deployments/), [Services](http://kubernetes.io/docs/user-guide/services/), but the federated control plane passes the resources to underlying clusters. This means that if we request the federation control plane to create a Replica Set with 1,000 replicas, it will spread the request across all underlying clusters. If we have 5 clusters, then by default each will get its share of 200 replicas.
+
+This on its own is a powerful mechanism. But there’s more. It’s also possible to create a Federated Ingress. Effectively, this is a global application-layer load balancer. Thanks to an understanding of the application layer, it allows load balancing to be “smarter” -- for instance, by taking into account the geographical location of clients and servers, and routing the traffic between them in an optimal way.
+
+In summary, with Kubernetes Cluster Federation, we can facilitate administration of all the clusters (single access point), but also optimize global content delivery around the globe. In the following sections, we will show how it works.
+
+**Creating a Federation Plane**
+
+In this exercise, we will federate a few clusters. For convenience, all commands have been grouped into 6 scripts available [here](https://github.com/ContainerSolutions/k8shserver/tree/master/scripts):
+
+- 0-settings.sh
+- 1-create.sh
+- 2-getcredentials.sh
+- 3-initfed.sh
+- 4-joinfed.sh
+- 5-destroy.sh
+
+First we need to define several variables (0-settings.sh):
+
+```
+$ cat 0-settings.sh && . 0-settings.sh
+
+# this project creates 3 clusters in 3 zones. FED_HOST_CLUSTER points to the one
+# which will be used to deploy the federation control plane
+export FED_HOST_CLUSTER=us-east1-b
+
+# Google Cloud project name
+export FED_PROJECT=<your project id>
+
+# DNS suffix for this federation. Federated Service DNS names are published with this
+# suffix. This must be a real domain name that you control and is programmable by one
+# of the DNS providers (Google Cloud DNS or AWS Route53)
+export FED_DNS_ZONE=<your dns zone>
+ ```
+
+
+Next, get the kubectl and kubefed binaries (for installation instructions, refer to the guides [here](http://kubernetes.io/docs/user-guide/prereqs/) and [here](http://kubernetes.io/docs/admin/federation/kubefed/#getting-kubefed)).
+
+Now the setup is ready to create a few Google Container Engine (GKE) clusters with gcloud container clusters create (1-create.sh). In this case, one is in the US, one in Europe, and one in Asia.
+
+```
+$ cat 1-create.sh && . 1-create.sh
+
+gcloud container clusters create gce-us-east1-b --project=${FED_PROJECT} --zone=us-east1-b --scopes cloud-platform,storage-ro,logging-write,monitoring-write,service-control,service-management,https://www.googleapis.com/auth/ndev.clouddns.readwrite
+
+
+gcloud container clusters create gce-europe-west1-b --project=${FED_PROJECT} --zone=europe-west1-b --scopes cloud-platform,storage-ro,logging-write,monitoring-write,service-control,service-management,https://www.googleapis.com/auth/ndev.clouddns.readwrite
+
+
+gcloud container clusters create gce-asia-east1-a --project=${FED_PROJECT} --zone=asia-east1-a --scopes cloud-platform,storage-ro,logging-write,monitoring-write,service-control,service-management,https://www.googleapis.com/auth/ndev.clouddns.readwrite
+ ```
+
+
+The next step is fetching kubectl configuration with gcloud -q container clusters get-credentials (2-getcredentials.sh). The configurations will be used to indicate the current context for kubectl commands.
+
+```
+$ cat 2-getcredentials.sh && . 2-getcredentials.sh
+
+gcloud -q container clusters get-credentials gce-us-east1-b --zone=us-east1-b --project=${FED_PROJECT}
+
+
+gcloud -q container clusters get-credentials gce-europe-west1-b --zone=europe-west1-b --project=${FED_PROJECT}
+
+
+gcloud -q container clusters get-credentials gce-asia-east1-a --zone=asia-east1-a --project=${FED_PROJECT}
+ ```
+
+
+Let’s verify the setup:
+
+```
+$ kubectl config get-contexts
+
+CURRENT   NAME                                                         CLUSTER                                                      AUTHINFO
+*         gke_container-solutions_europe-west1-b_gce-europe-west1-b   gke_container-solutions_europe-west1-b_gce-europe-west1-b   gke_container-solutions_europe-west1-b_gce-europe-west1-b
+          gke_container-solutions_us-east1-b_gce-us-east1-b           gke_container-solutions_us-east1-b_gce-us-east1-b           gke_container-solutions_us-east1-b_gce-us-east1-b
+          gke_container-solutions_asia-east1-a_gce-asia-east1-a       gke_container-solutions_asia-east1-a_gce-asia-east1-a       gke_container-solutions_asia-east1-a_gce-asia-east1-a
+ ```
+
+
+
+We have 3 clusters. One, indicated by the FED\_HOST\_CLUSTER environment variable, will be used to run the federation plane. For this, we will use the kubefed init federation command (3-initfed.sh).
+
+```
+$ cat 3-initfed.sh && . 3-initfed.sh
+
+kubefed init federation --host-cluster-context=gke_${FED_PROJECT}_${FED_HOST_CLUSTER}_gce-${FED_HOST_CLUSTER} --dns-zone-name=${FED_DNS_ZONE}
+ ```
+
+
+
+You will notice that after executing the above command, a new kubectl context has appeared:
+
+```
+$ kubectl config get-contexts
+
+CURRENT   NAME         CLUSTER      AUTHINFO     NAMESPACE
+...
+          federation   federation   federation
+ ```
+
+
+
+The federation context will become our administration entry point. Now it’s time to join clusters (4-joinfed.sh):
+
+```
+$ cat 4-joinfed.sh && . 4-joinfed.sh
+
+kubefed --context=federation join cluster-europe-west1-b --cluster-context=gke_${FED_PROJECT}_europe-west1-b_gce-europe-west1-b --host-cluster-context=gke_${FED_PROJECT}_${FED_HOST_CLUSTER}_gce-${FED_HOST_CLUSTER}
+
+
+kubefed --context=federation join cluster-asia-east1-a --cluster-context=gke_${FED_PROJECT}_asia-east1-a_gce-asia-east1-a --host-cluster-context=gke_${FED_PROJECT}_${FED_HOST_CLUSTER}_gce-${FED_HOST_CLUSTER}
+
+
+kubefed --context=federation join cluster-us-east1-b --cluster-context=gke_${FED_PROJECT}_us-east1-b_gce-us-east1-b --host-cluster-context=gke_${FED_PROJECT}_${FED_HOST_CLUSTER}_gce-${FED_HOST_CLUSTER}
+ ```
+
+
+Note that cluster gce-us-east1-b is used here both to run the federation control plane and to work as a worker cluster. This circular dependency helps to use resources more efficiently, and it can be verified with the kubectl --context=federation get clusters command:
+
+```
+$ kubectl --context=federation get clusters
+
+NAME STATUS AGE
+
+cluster-asia-east1-a Ready 7s
+
+cluster-europe-west1-b Ready 10s
+
+cluster-us-east1-b Ready 10s
+ ```
+
+
+
+We are good to go.
+
+
+
+**Using Federation To Run An Application**
+
+In our [repository](https://github.com/ContainerSolutions/k8shserver) you will find instructions on how to build a Docker image with a web service that displays the container’s hostname and the Google Cloud Platform (GCP) zone.
+
+
+An example output might look like this:
+
+
+```
+{"hostname":"k8shserver-6we2u","zone":"europe-west1-b"}
+ ```
+
+
+Now we will deploy the Replica Set ([k8shserver.yaml](https://github.com/ContainerSolutions/k8shserver/blob/master/rs/k8shserver.yaml)):
+
+```
+$ kubectl --context=federation create -f rs/k8shserver
+ ```
+
+
+
+And a Federated Service ([k8shserver.yaml](https://github.com/ContainerSolutions/k8shserver/blob/master/services/k8shserver.yaml)):
+
+```
+$ kubectl --context=federation create -f service/k8shserver
+ ```
+
+
+
+As you can see, the two commands refer to the “federation” context, i.e. to the federation control plane. After a few minutes, you will see that the underlying clusters are running the Replica Set and the Service.
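+
+You can verify this by pointing kubectl at one of the underlying clusters directly; a quick sketch (the context name follows the pattern used earlier):
+
+```
+# check that one of the underlying clusters is running the Replica Set
+kubectl --context=gke_${FED_PROJECT}_us-east1-b_gce-us-east1-b get rs
+```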
+
+
+**Creating The Ingress**
+
+After the Service is ready, we can create the [Ingress](http://kubernetes.io/docs/user-guide/ingress/) - the global load balancer. The command looks like this:
+
+
+
+```
+kubectl --context=federation create -f ingress/k8shserver.yaml
+ ```
+
+
+
+The contents of the file point to the service we created in the previous step:
+
+
+
+```
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+  name: k8shserver
+spec:
+  backend:
+    serviceName: k8shserver
+    servicePort: 80
+ ```
+
+
+
+
+After a few minutes, we should get a global IP address:
+
+```
+$ kubectl --context=federation get ingress
+
+NAME         HOSTS     ADDRESS          PORTS     AGE
+k8shserver   *         130.211.40.125   80        20m
+ ```
+
+Effectively, the response of:
+
+
+
+```
+$ curl 130.211.40.125
+ ```
+
+
+depends on the location of client. Something like this would be expected in the US:
+
+
+
+```
+{"hostname":"k8shserver-w56n4","zone":"us-east1-b"}
+ ```
+
+
+Whereas in Europe, we might have:
+
+
+
+```
+{"hostname":"k8shserver-z31p1","zone":"eu-west1-b"}
+ ```
+
+
+
+
+Please refer to this [issue](https://github.com/kubernetes/kubernetes/issues/39087) for additional details on how everything we've described works.
+
+
+
+
+**Summary**
+
+
+
+Cluster Federation is being actively worked on and is still not fully generally available. Some APIs are in beta and others are in alpha. Some features are missing; for instance, cross-cloud load balancing is not supported (Federated Ingress currently only works on Google Cloud Platform, as it depends on GCP [HTTP(S) Load Balancing](https://cloud.google.com/compute/docs/load-balancing/http/)).
+
+
+
+Nevertheless, as the functionality matures, it will become an enabler for all companies that aim at global markets but currently cannot afford the sophisticated administration techniques used by the likes of Netflix or Amazon. That’s why we closely watch the technology, hoping that it soon fulfills its promise.
+
+
+
+PS. When done, remember to destroy your clusters:
+
+
+
+
+```
+$ . 5-destroy.sh
+ ```
+
+
+
+
+_--Lukasz Guminski, Software Engineer at Container Solutions. Allan Naim, Product Manager, Google_
diff --git a/blog/_posts/2016-12-00-Container-Runtime-Interface-Cri-In-Kubernetes.md b/blog/_posts/2016-12-00-Container-Runtime-Interface-Cri-In-Kubernetes.md
new file mode 100644
index 00000000000..6e90fe5e288
--- /dev/null
+++ b/blog/_posts/2016-12-00-Container-Runtime-Interface-Cri-In-Kubernetes.md
@@ -0,0 +1,233 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Introducing Container Runtime Interface (CRI) in Kubernetes "
+date: Tuesday, December 19, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: this post is part of a [series of in-depth articles](http://blog.kubernetes.io/2016/12/five-days-of-kubernetes-1.5.html) on what's new in Kubernetes 1.5_
+
+
+At the lowest layers of a Kubernetes node is the software that, among other things, starts and stops containers. We call this the “Container Runtime”. The most widely known container runtime is Docker, but it is not alone in this space. In fact, the container runtime space has been rapidly evolving. As part of the effort to make Kubernetes more extensible, we've been working on a new plugin API for container runtimes in Kubernetes, called "CRI".
+
+**What is the CRI and why does Kubernetes need it?**
+
+Each container runtime has its own strengths, and many users have asked for Kubernetes to support more runtimes. In the Kubernetes 1.5 release, we are proud to introduce the [Container Runtime Interface](https://github.com/kubernetes/kubernetes/blob/242a97307b34076d5d8f5bbeb154fa4d97c9ef1d/docs/devel/container-runtime-interface.md) (CRI) -- a plugin interface which enables kubelet to use a wide variety of container runtimes, without the need to recompile. CRI consists of a [protocol buffers](https://developers.google.com/protocol-buffers/)-based [gRPC API](http://www.grpc.io/) and [libraries](https://github.com/kubernetes/kubernetes/tree/release-1.5/pkg/kubelet/server/streaming), with additional specifications and tools under active development. CRI is being released as Alpha in [Kubernetes 1.5](http://blog.kubernetes.io/2016/12/kubernetes-1.5-supporting-production-workloads.html).
+
+Supporting interchangeable container runtimes is not a new concept in Kubernetes. In the 1.3 release, we announced the [rktnetes](http://blog.kubernetes.io/2016/07/rktnetes-brings-rkt-container-engine-to-Kubernetes.html) project to enable the [rkt container engine](https://github.com/coreos/rkt) as an alternative to the Docker container runtime. However, both Docker and rkt were integrated directly and deeply into the kubelet source code through an internal and volatile interface. Such an integration process requires a deep understanding of Kubelet internals and imposes significant maintenance overhead on the Kubernetes community. These factors form high barriers to entry for nascent container runtimes. By providing a clearly-defined abstraction layer, we eliminate the barriers and allow developers to focus on building their container runtimes. This is a small, yet important step towards truly enabling pluggable container runtimes and building a healthier ecosystem.
+
+**Overview of CRI**
+Kubelet communicates with the container runtime (or a CRI shim for the runtime) over Unix sockets using the gRPC framework, where kubelet acts as a client and the CRI shim as the server.
+
+
+[](https://cl.ly/3I2p0D1V0T26/Image%202016-12-19%20at%2017.13.16.png)
+
+The protocol buffers [API](https://github.com/kubernetes/kubernetes/blob/release-1.5/pkg/kubelet/api/v1alpha1/runtime/api.proto) includes two gRPC services: ImageService and RuntimeService. The ImageService provides RPCs to pull an image from a repository, inspect an image, and remove an image. The RuntimeService contains RPCs to manage the lifecycle of pods and containers, as well as calls to interact with containers (exec/attach/port-forward). A monolithic container runtime that manages both images and containers (e.g., Docker and rkt) can provide both services simultaneously over a single socket. The sockets can be set in the Kubelet by the --container-runtime-endpoint and --image-service-endpoint flags.
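+
+For instance, a kubelet can be pointed at a runtime shim's socket like this (the socket path is a hypothetical example):
+
+```
+# hypothetical example: point the kubelet at a CRI shim's unix socket
+kubelet --container-runtime-endpoint=/var/run/dockershim.sock \
+        --image-service-endpoint=/var/run/dockershim.sock
+```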
+**Pod and container lifecycle management**
+
+
+```
+service RuntimeService {
+
+ // Sandbox operations.
+
+ rpc RunPodSandbox(RunPodSandboxRequest) returns (RunPodSandboxResponse) {}
+ rpc StopPodSandbox(StopPodSandboxRequest) returns (StopPodSandboxResponse) {}
+ rpc RemovePodSandbox(RemovePodSandboxRequest) returns (RemovePodSandboxResponse) {}
+ rpc PodSandboxStatus(PodSandboxStatusRequest) returns (PodSandboxStatusResponse) {}
+ rpc ListPodSandbox(ListPodSandboxRequest) returns (ListPodSandboxResponse) {}
+
+ // Container operations.
+ rpc CreateContainer(CreateContainerRequest) returns (CreateContainerResponse) {}
+ rpc StartContainer(StartContainerRequest) returns (StartContainerResponse) {}
+ rpc StopContainer(StopContainerRequest) returns (StopContainerResponse) {}
+ rpc RemoveContainer(RemoveContainerRequest) returns (RemoveContainerResponse) {}
+ rpc ListContainers(ListContainersRequest) returns (ListContainersResponse) {}
+ rpc ContainerStatus(ContainerStatusRequest) returns (ContainerStatusResponse) {}
+
+ ...
+}
+ ```
+
+
+A Pod is composed of a group of application containers in an isolated environment with resource constraints. In CRI, this environment is called PodSandbox. We intentionally leave some room for the container runtimes to interpret the PodSandbox differently based on how they operate internally. For hypervisor-based runtimes, PodSandbox might represent a virtual machine. For others, such as Docker, it might be Linux namespaces. The PodSandbox must respect the pod resources specifications. In the v1alpha1 API, this is achieved by launching all the processes within the pod-level cgroup that kubelet creates and passes to the runtime.
+
+Before starting a pod, kubelet calls RuntimeService.RunPodSandbox to create the environment. This includes setting up networking for a pod (e.g., allocating an IP). Once the PodSandbox is active, individual containers can be created/started/stopped/removed independently. To delete the pod, kubelet will stop and remove containers before stopping and removing the PodSandbox.
+
+Kubelet is responsible for managing the lifecycles of the containers through the RPCs, exercising the container lifecycle hooks and liveness/readiness checks, while adhering to the restart policy of the pod.
+
+**Why an imperative container-centric interface?**
+
+Kubernetes has a declarative API with a _Pod_ resource. One possible design we considered was for CRI to reuse the declarative _Pod_ object in its abstraction, giving the container runtime freedom to implement and exercise its own control logic to achieve the desired state. This would have greatly simplified the API and allowed CRI to work with a wider spectrum of runtimes. We discussed this approach early in the design phase and decided against it for several reasons. First, there are many Pod-level features and specific mechanisms (e.g., the crash-loop backoff logic) in kubelet that would be a significant burden for all runtimes to reimplement. Second, and more importantly, the Pod specification was (and is) still evolving rapidly. Many of the new features (e.g., init containers) would not require any changes to the underlying container runtimes, as long as the kubelet manages containers directly. CRI adopts an imperative container-level interface so that runtimes can share these common features for better development velocity. This doesn't mean we're deviating from the "level triggered" philosophy - kubelet is responsible for ensuring that the actual state is driven towards the declared state.
+
+**Exec/attach/port-forward requests**
+
+
+```
+service RuntimeService {
+
+ ...
+
+ // ExecSync runs a command in a container synchronously.
+ rpc ExecSync(ExecSyncRequest) returns (ExecSyncResponse) {}
+ // Exec prepares a streaming endpoint to execute a command in the container.
+ rpc Exec(ExecRequest) returns (ExecResponse) {}
+ // Attach prepares a streaming endpoint to attach to a running container.
+ rpc Attach(AttachRequest) returns (AttachResponse) {}
+ // PortForward prepares a streaming endpoint to forward ports from a PodSandbox.
+ rpc PortForward(PortForwardRequest) returns (PortForwardResponse) {}
+
+ ...
+}
+ ```
+
+
+
+Kubernetes provides features (e.g. kubectl exec/attach/port-forward) for users to interact with a pod and the containers in it. Kubelet today supports these features either by invoking the container runtime’s native method calls or by using the tools available on the node (e.g., nsenter and socat). Using tools on the node is not a portable solution because most tools assume the pod is isolated using Linux namespaces. In CRI, we explicitly define these calls in the API to allow runtime-specific implementations.
+
+
+
+Another potential issue with the kubelet implementation today is that kubelet handles the connection of all streaming requests, so it can become a bottleneck for the network traffic on the node. When designing CRI, we incorporated this feedback to allow runtimes to eliminate the middleman. The container runtime can start a separate streaming server upon request (and can potentially account the resource usage to the pod!), and return the location of the server to kubelet. Kubelet then returns this information to the Kubernetes API server, which opens a streaming connection directly to the runtime-provided server and connects it to the client.
+
+
+
+There are many other aspects of CRI that are not covered in this blog post. Please see the list of [design docs and proposals](https://github.com/kubernetes/community/blob/master/contributors/devel/container-runtime-interface.md#design-docs-and-proposals) for all the details.
+
+
+
+**Current status**
+
+Although CRI is still in its early stages, there are already several projects under development to integrate container runtimes using CRI. Below are a few examples:
+
+
+
+- [cri-o](https://github.com/kubernetes-incubator/cri-o): OCI conformant runtimes.
+- [rktlet](https://github.com/kubernetes-incubator/rktlet): the rkt container runtime.
+- [frakti](https://github.com/kubernetes/frakti): hypervisor-based container runtimes.
+- [docker CRI shim](https://github.com/kubernetes/kubernetes/tree/release-1.5/pkg/kubelet/dockershim).
+
+
+
+If you are interested in trying these alternative runtimes, you can follow the individual repositories for the latest progress and instructions.
+
+
+
+
+For developers interested in integrating a new container runtime, please see the [developer guide](https://github.com/kubernetes/community/blob/master/contributors/devel/container-runtime-interface.md) for the known limitations and issues of the API. We are actively incorporating feedback from early developers to improve the API. Developers should expect occasional API breaking changes (it is Alpha, after all).
+
+
+
+**Try the new CRI-Docker integration**
+
+
+
+Kubelet does not yet use CRI by default, but we are actively working on making this happen. The first step is to re-integrate Docker with kubelet using CRI. In the 1.5 release, we extended kubelet to support CRI, and also added a built-in CRI shim for Docker. This allows kubelet to start the gRPC server on Docker’s behalf. To try out the new kubelet-CRI-Docker integration, start the Kubernetes API server with `--feature-gates=StreamingProxyRedirects=true` to enable the new streaming redirect feature, and then start the kubelet with `--experimental-cri=true`.
+
+
+
+Besides a few [missing features](https://github.com/kubernetes/community/blob/master/contributors/devel/container-runtime-interface.md#docker-cri-integration-known-issues), the new integration has consistently passed the main end-to-end tests. We plan to expand the test coverage soon and would like to encourage the community to report any issues to help with the transition.
+
+
+
+**CRI with Minikube**
+
+
+
+If you want to try out the new integration but don’t have the time to spin up a new test cluster in the cloud yet, [minikube](https://github.com/kubernetes/minikube) is a great tool for quickly standing up a local cluster. Before you start, follow the [instructions](https://github.com/kubernetes/minikube) to download and install minikube.
+
+
+
+1. Check the available Kubernetes versions and pick the latest 1.5.x version. We will use v1.5.0-beta.1 as an example.
+
+
+
+
+```
+$ minikube get-k8s-versions
+ ```
+
+
+
+2. Start a minikube cluster with the built-in docker CRI integration.
+
+
+
+
+```
+$ minikube start --kubernetes-version=v1.5.0-beta.1 --extra-config=kubelet.EnableCRI=true --network-plugin=kubenet --extra-config=kubelet.PodCIDR=10.180.1.0/24 --iso-url=http://storage.googleapis.com/minikube/iso/buildroot/minikube-v0.0.6.iso
+ ```
+
+
+
+`--extra-config=kubelet.EnableCRI=true` turns on the CRI implementation in kubelet. `--network-plugin=kubenet` and `--extra-config=kubelet.PodCIDR=10.180.1.0/24` set the network plugin to kubenet and ensure a PodCIDR is assigned to the node. Alternatively, you can use the cni plugin, which does not rely on the PodCIDR. `--iso-url` sets the ISO image minikube uses to launch the node, as shown in the example above.
+
+
+
+3. Check the minikube logs to verify that CRI is enabled.
+
+
+
+
+
+```
+$ minikube logs | grep EnableCRI
+
+I1209 01:48:51.150789 3226 localkube.go:116] Setting EnableCRI to true on kubelet.
+ ```
+
+
+
+4. Create a pod and check its status. You should see a “SandboxReceived” event as proof that Kubelet is using CRI!
+
+
+
+
+```
+$ kubectl run foo --image=gcr.io/google_containers/pause-amd64:3.0
+
+deployment "foo" created
+
+$ kubectl describe pod foo
+
+...
+
+... From                  Type    Reason           Message
+... ----                  ----    ------           -------
+... {default-scheduler }  Normal  Scheduled        Successfully assigned foo-141968229-v1op9 to minikube
+... {kubelet minikube}    Normal  SandboxReceived  Pod sandbox received, it will be created.
+
+...
+ ```
+
+
+
+_Note that kubectl attach/exec/port-forward does not work with CRI enabled in minikube yet, but this [will be addressed in a newer version of minikube](https://github.com/kubernetes/minikube/issues/896)._
+
+
+
+
+
+**Community**
+
+
+CRI is being actively developed and maintained by the Kubernetes [SIG-Node](https://github.com/kubernetes/community/blob/master/README.md#special-interest-groups-sig) community. We’d love to hear feedback from you. To join the community:
+
+
+
+
+- Post issues or feature requests on [GitHub](https://github.com/kubernetes/kubernetes)
+- Join the #sig-node channel on [Slack](https://kubernetes.slack.com/)
+- Subscribe to the [SIG-Node mailing list](mailto:kubernetes-sig-node@googlegroups.com)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for the latest updates
+
+
+
+
+
+_--Yu-Ju Hong, Software Engineer, Google_
diff --git a/blog/_posts/2016-12-00-Five-Days-Of-Kubernetes-1.5.md b/blog/_posts/2016-12-00-Five-Days-Of-Kubernetes-1.5.md
new file mode 100644
index 00000000000..78541e58ee8
--- /dev/null
+++ b/blog/_posts/2016-12-00-Five-Days-Of-Kubernetes-1.5.md
@@ -0,0 +1,35 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Five Days of Kubernetes 1.5 "
+date: Tuesday, December 19, 2016
+pagination:
+ enabled: true
+---
+With the help of our growing community of 1,000 contributors, we pushed some 5,000 commits to extend support for production workloads and deliver [Kubernetes 1.5](http://blog.kubernetes.io/2016/12/kubernetes-1.5-supporting-production-workloads.html). While many improvements and new features have been added, we selected a few to highlight in a series of in-depth posts listed below.
+
+This progress reflects our commitment to continuing to make Kubernetes the best way to manage your production workloads at scale.
+
+| | Five Days of Kubernetes 1.5 |
+|--|--|
+| Day 1 | [Introducing Container Runtime Interface (CRI) in Kubernetes](http://blog.kubernetes.io/2016/12/container-runtime-interface-cri-in-kubernetes.html) |
+| Day 2 | [StatefulSet: Run and Scale Stateful Applications Easily in Kubernetes](http://blog.kubernetes.io/2016/12/statefulset-run-scale-stateful-applications-in-kubernetes.html) |
+| Day 3 | [Windows Server Support Comes to Kubernetes](http://blog.kubernetes.io/2016/12/windows-server-support-kubernetes.html) |
+| Day 4 | [Cluster Federation in Kubernetes 1.5](http://blog.kubernetes.io/2016/12/cluster-federation-in-kubernetes-1.5.html) |
+| Day 5 | [Kubernetes supports OpenAPI](http://blog.kubernetes.io/2016/12/kubernetes-supports-openapi.html) |
+{: .post-table }
+
+
+**Connect**
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for the latest updates
diff --git a/blog/_posts/2016-12-00-From-Network-Policies-To-Security-Policies.md b/blog/_posts/2016-12-00-From-Network-Policies-To-Security-Policies.md
new file mode 100644
index 00000000000..2709783d451
--- /dev/null
+++ b/blog/_posts/2016-12-00-From-Network-Policies-To-Security-Policies.md
@@ -0,0 +1,49 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " From Network Policies to Security Policies "
+date: Friday, December 08, 2016
+pagination:
+ enabled: true
+---
+_Editor's note: Today’s post is by Bernard Van De Walle, Kubernetes Lead Engineer, at Aporeto, showing how they took a new approach to the Kubernetes network policy enforcement._
+
+
+**Kubernetes Network Policies**
+
+Kubernetes supports a [new API for network policies](http://kubernetes.io/docs/user-guide/networkpolicies/) that provides a sophisticated model for isolating applications and reducing their attack surface. This feature, which came out of the [SIG-Network group](https://github.com/kubernetes/community/wiki/SIG-Network), makes it very easy and elegant to define network policies by using Kubernetes’ built-in labels and selectors.
+
+Kubernetes has left it up to third parties to implement these network policies and does not provide a default implementation.
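+
+For reference, a minimal policy object in the API of this era (`extensions/v1beta1`) looks roughly like the sketch below, shown as a Python dict mirroring the YAML manifest; the names and labels are illustrative:
+
+```
+# Allow traffic to "db" pods only from pods labeled app=api.
+policy = {
+    "apiVersion": "extensions/v1beta1",
+    "kind": "NetworkPolicy",
+    "metadata": {"name": "db-allow-api"},
+    "spec": {
+        "podSelector": {"matchLabels": {"app": "db"}},
+        "ingress": [
+            {"from": [{"podSelector": {"matchLabels": {"app": "api"}}}]}
+        ],
+    },
+}
+ ```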
+
+We want to introduce a new way to think about “Security” and “Network Policies”. We want to show that security and reachability are two different problems, and that security policies defined using endpoints (pods labels for example) do not specifically need to be implemented using network primitives.
+
+Most of us at [Aporeto](https://www.aporeto.com/) come from a Network/SDN background, and we knew how to implement those policies by using traditional networking and firewalling: Translating the pods identity and policy definitions to network constraints, such as IP addresses, subnets, and so forth.
+
+However, we also knew from past experiences that using an external control plane also introduces a whole new set of challenges: This distribution of ACLs requires very tight synchronization between Kubernetes workers; and every time a new pod is instantiated, ACLs need to be updated on all other pods that have some policy related to the new pod. Very tight synchronization is fundamentally a quadratic state problem and, while shared state mechanisms can work at a smaller scale, they often have convergence, security, and eventual consistency issues in large scale clusters.
+
+**From Network Policies to Security Policies**
+
+At Aporeto, we took a different approach to the network policy enforcement, by actually decoupling the network from the policy. We open sourced our solution as [Trireme](https://github.com/aporeto-inc/trireme), which translates the network policy to an authorization policy, and it implements a transparent authentication and authorization function for any communication between pods. Instead of using IP addresses to identify pods, it defines a cryptographically signed identity for each pod as the set of its associated labels. Instead of using ACLs or packet filters to enforce policy, it uses an authorization function where a container can only receive traffic from containers with an identity that matches the policy requirements.
+
+The authentication and authorization function in Trireme is overlaid on the TCP negotiation sequence. Identity (i.e. set of labels) is captured as a JSON Web Token (JWT), signed by local keys, and exchanged during the Syn/SynAck negotiation. The receiving worker validates that the JWTs are signed by a trusted authority (authentication step) and validates against a cached copy of the policy that the connection can be accepted. Once the connection is accepted, the rest of traffic flows through the Linux kernel and all of the protections that it can potentially offer (including conntrack capabilities if needed). The current implementation uses a simple user space process that captures the initial negotiation packets and attaches the authorization information as payload. The JWTs include nonces that are validated during the Ack packet and can defend against man-in-the-middle or replay attacks.
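+
+To make the identity-as-JWT idea concrete, here is a toy Python sketch using the PyJWT library. It illustrates the concept only; it is not Trireme's actual wire format, key handling, or policy model:
+
+```
+import os
+import jwt  # pip install PyJWT
+
+key = "local-signing-key"  # toy shared secret for the sketch
+
+# Sender: identity is the pod's label set plus a nonce, signed as a JWT
+# and exchanged during the Syn/SynAck negotiation.
+identity = {"labels": ["app=frontend", "env=prod"], "nonce": os.urandom(8).hex()}
+token = jwt.encode(identity, key, algorithm="HS256")
+
+# Receiver: validate the signature (authentication), then check the
+# labels against a cached copy of the policy (authorization).
+claims = jwt.decode(token, key, algorithms=["HS256"])
+accept = "app=frontend" in claims["labels"]
+print("accept connection:", accept)
+ ```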
+
+
+ 
+
+The Trireme implementation talks directly to the Kubernetes master without an external controller and receives notifications on policy updates and pod instantiations so that it can maintain a local cache of the policy and update the authorization rules as needed. There is no requirement for any shared state between Trireme components that needs to be synchronized. Trireme can be deployed either as a standalone process in every worker or by using [Daemon Sets](http://kubernetes.io/docs/admin/daemons/). In the latter case, Kubernetes takes ownership of the lifecycle of the Trireme pods.
+
+Trireme's simplicity is derived from the separation of security policy from network transport. Policy enforcement is linked directly to the labels present on the connection, irrespective of the networking scheme used to make the pods communicate. This identity linkage enables tremendous flexibility to operators to use any networking scheme they like without tying security policy enforcement to network implementation details. Also, the implementation of security policy across the federated clusters becomes simple and viable.
+
+**Kubernetes and Trireme deployment**
+
+Kubernetes is unique in its ability to scale and provide an extensible security support for the deployment of containers and microservices. Trireme provides a simple, secure, and scalable mechanism for enforcing these policies.
+
+You can deploy and try Trireme on top of Kubernetes by using a provided Daemon Set. You'll need to modify some of the YAML parameters based on your cluster architecture. All the steps are described in detail in the [deployment GitHub folder](https://github.com/aporeto-inc/trireme-kubernetes/tree/master/deployment). The same folder contains an example 3-tier policy that you can use to test the traffic pattern.
+
+To learn more, download the code, and engage with the project, visit:
+
+- Trireme on [GitHub](https://github.com/aporeto-inc/trireme)
+- Trireme for Kubernetes by Aporeto on [GitHub](https://github.com/aporeto-inc/trireme-kubernetes)
+
+
+_--Bernard Van De Walle, Kubernetes lead engineer, [Aporeto](https://www.aporeto.com/)_
diff --git a/blog/_posts/2016-12-00-Kubernetes-1.5-Supporting-Production-Workloads.md b/blog/_posts/2016-12-00-Kubernetes-1.5-Supporting-Production-Workloads.md
new file mode 100644
index 00000000000..bf67cd48c62
--- /dev/null
+++ b/blog/_posts/2016-12-00-Kubernetes-1.5-Supporting-Production-Workloads.md
@@ -0,0 +1,73 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes 1.5: Supporting Production Workloads "
+date: Wednesday, December 13, 2016
+pagination:
+ enabled: true
+---
+
+Today we’re announcing the release of Kubernetes 1.5. This release follows close on the heels of KubeCon/CloudNativeCon, where users gathered to share how they’re running their applications on Kubernetes. Many of you expressed interest in running stateful applications in containers with the eventual goal of running all applications on Kubernetes. If you have been waiting to try running a distributed database on Kubernetes, or for ways to guarantee application disruption SLOs for stateful and stateless apps, this release has solutions for you.
+
+StatefulSet and PodDisruptionBudget are moving to beta. Together these features provide an easier way to deploy and scale stateful applications, and make it possible to perform cluster operations like node upgrade without violating application disruption SLOs.
+
+You will also find usability improvements throughout the release, starting with the kubectl command line interface you use so often. For those who have found it hard to set up a multi-cluster federation, a new command line tool called ‘kubefed’ is here to help. And a much requested multi-zone Highly Available (HA) master setup script has been added to kube-up.
+
+Did you know the Kubernetes community is working to support Windows containers? If you have .NET developers, take a look at the work on Windows containers in this release. This work is in early stage alpha and we would love your feedback.
+
+Lastly, for those interested in the internals of Kubernetes, 1.5 introduces Container Runtime Interface or CRI, which provides an internal API abstracting the container runtime from kubelet. This decoupling of the runtime gives users choice in selecting a runtime that best suits their needs. This release also introduces containerized node conformance tests that verify that the node software meets the minimum requirements to join a Kubernetes cluster.
+
+**What’s New**
+
+[**StatefulSet**](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) beta (formerly known as PetSet) allows workloads that require persistent identity or per-instance storage to be [created](http://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#creating-a-statefulset), [scaled](http://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#scaling-a-statefulset), [deleted](http://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#deleting-statefulsets) and [repaired](http://kubernetes.io/docs/tasks/manage-stateful-set/debugging-a-statefulset/) on Kubernetes. You can use StatefulSets to ease the deployment of any stateful service, and tutorial examples are available in the repository. In order to ensure that there are never two pods with the same identity, the Kubernetes node controller no longer force deletes pods on unresponsive nodes. Instead, it waits until the old pod is confirmed dead in one of several ways: automatically when the kubelet reports back and confirms the old pod is terminated; automatically when a cluster-admin deletes the node; or when a database admin confirms it is safe to proceed by force deleting the old pod. Users are now warned if they try to force delete pods via the CLI. For users who will be migrating from PetSets to StatefulSets, please follow the upgrade [guide](http://kubernetes.io/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set).
+
+**[PodDisruptionBudget](http://kubernetes.io/docs/admin/disruptions/)** beta is an API object that specifies the minimum number or minimum percentage of replicas of a collection of pods that must be up at any time. With PodDisruptionBudget, an application deployer can ensure that cluster operations that voluntarily evict pods will never take down so many simultaneously as to cause data loss, an outage, or an unacceptable service degradation. In Kubernetes 1.5 the “kubectl drain” command supports PodDisruptionBudget, allowing safe draining of nodes for maintenance activities, and it will soon also be used by node upgrade and cluster autoscaler (when removing nodes). This can be useful for a quorum based application to ensure the number of replicas running is never below the number needed for quorum, or for a web front end to ensure the number of replicas serving load never falls below a certain percentage.
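+
+For example, a budget that keeps at least two replicas of a quorum-based app up at all times might be expressed as follows (a sketch shown as a Python dict mirroring the YAML manifest; the name and labels are illustrative):
+
+```
+# PodDisruptionBudget (policy/v1beta1, beta in Kubernetes 1.5): voluntary
+# evictions are refused once fewer than minAvailable matching pods would
+# remain running.
+pdb = {
+    "apiVersion": "policy/v1beta1",
+    "kind": "PodDisruptionBudget",
+    "metadata": {"name": "zk-budget"},
+    "spec": {
+        "minAvailable": 2,
+        "selector": {"matchLabels": {"app": "zk"}},
+    },
+}
+ ```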
+
+**[Kubefed](http://kubernetes.io/docs/admin/federation/kubefed.md)** alpha is a new command line tool to help you manage federated clusters, making it easy to deploy new federation control planes and add or remove clusters from existing federations. Also new in cluster federation is the addition of [ConfigMaps](http://kubernetes.io/docs/user-guide/federation/configmap.md) alpha and [DaemonSets](http://kubernetes.io/docs/user-guide/federation/daemonsets.md) alpha and [deployments](http://kubernetes.io/docs/user-guide/federation/deployment.md) alpha to the [federation API](http://kubernetes.io/docs/user-guide/federation/index.md) allowing you to create, update and delete these objects across multiple clusters from a single endpoint.
+
+**[HA Masters](http://kubernetes.io/docs/admin/ha-master-gce.md)** alpha provides the ability to create and delete clusters with highly available (replicated) masters on GCE using the kube-up/kube-down scripts. It allows setup of zone-distributed HA masters, with at least one etcd replica per zone, at least one API server per zone, and master-elected components like the scheduler and controller-manager distributed across zones.
+
+**[Windows server containers](http://kubernetes.io/docs/getting-started-guides/windows/)** alpha provides initial support for Windows Server 2016 nodes and scheduling Windows Server Containers.
+
+**[Container Runtime Interface](https://github.com/kubernetes/kubernetes/blob/release-1.5/docs/devel/container-runtime-interface.md)** (CRI) alpha introduces the v1 CRI API to allow pluggable container runtimes; an experimental docker-CRI integration is ready for testing and feedback.
+
+[**Node conformance test**](http://kubernetes.io/docs/admin/node-conformance.md) beta is a containerized test framework that provides a system verification and functionality test for nodes. The test validates whether the node meets the minimum requirements for Kubernetes; a node that passes the tests is qualified to join a Kubernetes cluster. The node conformance test is available at `gcr.io/google_containers/node-test:0.2` for users to verify node setup.
+
+These are just some of the highlights in our last release for the year. For a complete list please visit the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#v151).
+
+**Availability**
+Kubernetes 1.5 is available for download [here](https://github.com/kubernetes/kubernetes/releases/tag/v1.5.1) on GitHub and via [get.k8s.io](http://get.k8s.io/). To get started with Kubernetes, try one of the [new interactive tutorials](http://kubernetes.io/docs/tutorials/kubernetes-basics/). Don’t forget to take 1.5 for a spin before the holidays!
+
+**User Adoption**
+It’s been a year-and-a-half since GA, and the rate of [Kubernetes user adoption](http://kubernetes.io/case-studies/) continues to surpass estimates. Organizations running production workloads on Kubernetes include the world's largest companies, young startups, and everything in between. Since Kubernetes is open and runs anywhere, we’ve seen adoption on a diverse set of platforms; Pokémon Go (Google Cloud), Ticketmaster (AWS), SAP (OpenStack), Box (bare-metal), and hybrid environments that mix-and-match the above. Here are a few user highlights:
+
+
+- **[Yahoo! JAPAN](http://blog.kubernetes.io/2016/10/kubernetes-and-openstack-at-yahoo-japan.html)** -- built an automated tool chain making it easy to go from code push to deployment, all while running OpenStack on Kubernetes.
+- **[Walmart](http://www.techbetter.com/walmart-will-manage-200-distribution-centers-oneops-jenkins-nexus-kubernetes/)** -- will use Kubernetes with OneOps to manage its incredible distribution centers, helping its team with speed of delivery, systems uptime and asset utilization.
+- **[Monzo](https://www.youtube.com/watch?v=YkOY7DgXKyw)** -- a European startup building a mobile first bank, is using Kubernetes to power its core platform that can handle extreme performance and consistency requirements.
+
+**Kubernetes Ecosystem**
+The Kubernetes ecosystem is growing rapidly, including Microsoft's support for Kubernetes in Azure Container Service, VMware's integration of Kubernetes in its Photon Platform, and Canonical’s commercial support for Kubernetes. This is in addition to the thirty plus [Technology & Service Partners](http://blog.kubernetes.io/2016/10/kubernetes-service-technology-partners-program.html) that already provide commercial services for Kubernetes users.
+
+The CNCF recently announced the [Kubernetes Managed Service Provider](http://blog.kubernetes.io/2016/11/kubernetes-certification-training-and-managed-service-provider-program.html) (KMSP) program, a pre-qualified tier of service providers with experience helping enterprises successfully adopt Kubernetes. Furthering the knowledge and awareness of Kubernetes, The Linux Foundation, in partnership with CNCF, will develop and operate the Kubernetes training and certification program -- the first course designed is [Kubernetes Fundamentals](https://training.linuxfoundation.org/linux-courses/system-administration-training/kubernetes-fundamentals).
+
+
+
+**Community Velocity**
+In the past three months we’ve seen more than a hundred new contributors join the project with some 5,000 commits pushed, reaching new milestones by bringing the total for the core project to 1,000+ contributors and 40,000+ commits. This incredible momentum is only possible by having an open design, being open to new ideas, and empowering an open community to be welcoming to new and senior contributors alike. A big thanks goes out to the release team for 1.5 -- Saad Ali of Google, Davanum Srinivas of Mirantis, and Caleb Miles of CoreOS for their work bringing the 1.5 release to light.
+
+Offline, the community can be found at one of the many Kubernetes related [meetups](https://www.meetup.com/topics/kubernetes/) around the world. The strength and scale of the community was visible in the crowded halls of CloudNativeCon/KubeCon Seattle (the recorded user talks are [here](https://www.youtube.com/playlist?list=PLj6h78yzYM2PqgIGU1Qmi8nY7dqn9PCr4)). The next [CloudNativeCon + KubeCon is in Berlin](http://events.linuxfoundation.org/events/cloudnativecon-and-kubecon-europe) March 29-30, 2017; be sure to get your ticket and [submit your talk](https://docs.google.com/a/google.com/forms/d/e/1FAIpQLSc0lPQhSuDusPXLKJDTcWrH3DbOuoQlTD0lB4IGUz6NAmcf2g/viewform) before the CFP deadline of Dec 16th.
+
+
+
+Ready to start contributing? Share your voice at our weekly [community meeting](https://github.com/kubernetes/community/blob/master/community/README.md).
+
+
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+- Connect with the community on [Slack](http://slack.k8s.io/)
+
+Thank you for your contributions and support!
+
+
+_-- Aparna Sinha, Senior Product Manager, Google_
diff --git a/blog/_posts/2016-12-00-Kubernetes-Supports-Openapi.md b/blog/_posts/2016-12-00-Kubernetes-Supports-Openapi.md
new file mode 100644
index 00000000000..c062236a27a
--- /dev/null
+++ b/blog/_posts/2016-12-00-Kubernetes-Supports-Openapi.md
@@ -0,0 +1,198 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes supports OpenAPI "
+date: Saturday, December 23, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: this post is part of a [series of in-depth articles](http://blog.kubernetes.io/2016/12/five-days-of-kubernetes-1.5.html) on what's new in Kubernetes 1.5_
+
+[OpenAPI](https://www.openapis.org/) allows API providers to define their operations and models, and enables developers to automate their tools and generate their favorite language’s client to talk to that API server. Kubernetes has supported swagger 1.2 (an older version of the OpenAPI spec) for a while, but the spec was incomplete and invalid, making it hard to generate tools/clients based on it.
+
+In Kubernetes 1.4, we introduced alpha support for the OpenAPI spec (formerly known as swagger 2.0 before it was donated to the [Open API Initiative](https://www.openapis.org/about)) by upgrading the current models and operations. Beginning in [Kubernetes 1.5](http://blog.kubernetes.io/2016/12/kubernetes-1.5-supporting-production-workloads.html), the support for the OpenAPI spec has been completed by auto-generating the spec directly from Kubernetes source, which will keep the spec--and documentation--completely in sync with future changes in operations/models.
+
+The new spec enables us to have better API documentation and we have even introduced a supported [python client](https://github.com/kubernetes-incubator/client-python).
+
+The spec is modular, divided by GroupVersion: this is future-proof, since we intend to allow separate GroupVersions to be served out of separate API servers.
+
+The structure of spec is explained in detail in [OpenAPI spec definition](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md). We used [operation’s tags](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#tag-object) to separate each GroupVersion and filled as much information as we can about paths/operations and models. For a specific operation, all parameters, method of call, and responses are documented.
+
+For example, OpenAPI spec for reading a pod information is:
+
+
+
+```
+{
+  ...
+  "paths": {
+    "/api/v1/namespaces/{namespace}/pods/{name}": {
+      "get": {
+        "description": "read the specified Pod",
+        "consumes": [
+          "*/*"
+        ],
+        "produces": [
+          "application/json",
+          "application/yaml",
+          "application/vnd.kubernetes.protobuf"
+        ],
+        "schemes": [
+          "https"
+        ],
+        "tags": [
+          "core_v1"
+        ],
+        "operationId": "readCoreV1NamespacedPod",
+        "parameters": [
+          {
+            "uniqueItems": true,
+            "type": "boolean",
+            "description": "Should the export be exact. Exact export maintains cluster-specific fields like 'Namespace'.",
+            "name": "exact",
+            "in": "query"
+          },
+          {
+            "uniqueItems": true,
+            "type": "boolean",
+            "description": "Should this value be exported. Export strips fields that a user can not specify.",
+            "name": "export",
+            "in": "query"
+          }
+        ],
+        "responses": {
+          "200": {
+            "description": "OK",
+            "schema": {
+              "$ref": "#/definitions/v1.Pod"
+            }
+          },
+          "401": {
+            "description": "Unauthorized"
+          }
+        }
+      },
+      …
+    },
+    …
+  },
+  …
+}
+ ```
+
+
+
+Using this information and the URL of `kube-apiserver`, one should be able to make the call to the given URL (/api/v1/namespaces/{namespace}/pods/{name}) with parameters such as `name`, `exact`, and `export` to get the pod’s information. Client library generators would also use this information to create an API function call for reading a pod’s information. For example, the [python client](https://github.com/kubernetes-incubator/client-python) makes it easy to call this operation like this:
+
+
+
+```
+from kubernetes import client
+
+ret = client.CoreV1Api().read_namespaced_pod(name="pods_name", namespace="default")
+ ```
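+
+The snippet assumes a configured client; in practice you would typically first load credentials, for example from your kubeconfig:
+
+```
+from kubernetes import client, config
+
+config.load_kube_config()  # reads credentials from ~/.kube/config
+pod = client.CoreV1Api().read_namespaced_pod(name="pods_name", namespace="default")
+print(pod.metadata.name)
+ ```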
+
+
+
+A simplified version of the generated `read_namespaced_pod` can be found [here](https://gist.github.com/mbohlool/d5ec1dace27ef90cf742555c05480146).
+
+
+
+The swagger-codegen document generator can also create documentation using the same information:
+
+
+
+```
+GET /api/v1/namespaces/{namespace}/pods/{name}
+
+(readCoreV1NamespacedPod)
+
+read the specified Pod
+
+Path parameters
+
+name (required)
+
+Path Parameter — name of the Pod
+
+namespace (required)
+
+Path Parameter — object name and auth scope, such as for teams and projects
+
+Consumes
+
+This API call consumes the following media types via the Content-Type request header:
+
+- */*
+
+
+Query parameters
+
+pretty (optional)
+
+Query Parameter — If 'true', then the output is pretty printed.
+
+exact (optional)
+
+Query Parameter — Should the export be exact. Exact export maintains cluster-specific fields like 'Namespace'.
+
+export (optional)
+
+Query Parameter — Should this value be exported. Export strips fields that a user can not specify.
+
+Return type
+
+v1.Pod
+
+
+Produces
+
+This API call produces the following media types according to the Accept request header; the media type will be conveyed by the Content-Type response header.
+
+- application/json
+- application/yaml
+- application/vnd.kubernetes.protobuf
+
+Responses
+
+200
+
+OK v1.Pod
+
+401
+
+Unauthorized
+ ```
+
+
+
+
+
+There are two ways to access OpenAPI spec:
+
+- From `kube-apiserver`/swagger.json. This file has all enabled GroupVersions’ routes and models, and is the most up-to-date spec for a specific `kube-apiserver`.
+- From the Kubernetes GitHub repository, with all core GroupVersions enabled. You can access it on [master](https://github.com/kubernetes/kubernetes/blob/master/api/openapi-spec/swagger.json) or a specific release (for example, the [1.5 release](https://github.com/kubernetes/kubernetes/blob/release-1.5/api/openapi-spec/swagger.json)).
+
+There are numerous [tools](http://swagger.io/tools/) that work with this spec. For example, you can use the [swagger editor](http://swagger.io/swagger-editor/) to open the spec file and render documentation, as well as generate clients; or you can directly use [swagger codegen](http://swagger.io/swagger-codegen/) to generate documentation and clients. The clients it generates will mostly work out of the box--but you will need some support for authorization and some Kubernetes-specific utilities. Use the [python client](https://github.com/kubernetes-incubator/client-python) as a template to create your own client.
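+
+As a quick sketch, you can also explore the spec programmatically. The example below assumes an unauthenticated local `kube-apiserver` port and simply lists the operations the spec documents:
+
+```
+import json
+import urllib.request
+
+# Fetch the OpenAPI (swagger 2.0) spec served at /swagger.json.
+spec = json.load(urllib.request.urlopen("http://localhost:8080/swagger.json"))
+
+for path, ops in sorted(spec["paths"].items()):
+    for method, op in ops.items():
+        if isinstance(op, dict) and "operationId" in op:
+            print(method.upper(), path, "->", op["operationId"])
+ ```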
+
+
+
+If you want to get involved in development of OpenAPI support, client libraries, or report a bug, you can get in touch with developers at [SIG-API-Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery).
+
+
+
+_--Mehdy Bohlool, Software Engineer, Google_
+
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-12-00-Statefulset-Run-Scale-Stateful-Applications-In-Kubernetes.md b/blog/_posts/2016-12-00-Statefulset-Run-Scale-Stateful-Applications-In-Kubernetes.md
new file mode 100644
index 00000000000..549dc6aa0c2
--- /dev/null
+++ b/blog/_posts/2016-12-00-Statefulset-Run-Scale-Stateful-Applications-In-Kubernetes.md
@@ -0,0 +1,450 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " StatefulSet: Run and Scale Stateful Applications Easily in Kubernetes "
+date: Wednesday, December 20, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: this post is part of a [series of in-depth articles](http://blog.kubernetes.io/2016/12/five-days-of-kubernetes-1.5.html) on what's new in Kubernetes 1.5_
+
+In the latest release, [Kubernetes 1.5](http://blog.kubernetes.io/2016/12/kubernetes-1.5-supporting-production-workloads.html), we’ve moved the feature formerly known as PetSet into beta as [StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/). There were no major changes to the API Object, other than the community-selected name, but we added the semantics of “at most one pod per index” for deployment of the Pods in the set. Along with ordered deployment, ordered termination, unique network names, and persistent stable storage, we think we have the right primitives to support many containerized stateful workloads. We don’t claim that the feature is 100% complete (it is software after all), but we believe that it is useful in its current form, and that we can extend the API in a backwards-compatible way as we progress toward an eventual GA release.
+
+**When is StatefulSet the Right Choice for my Storage Application?**
+
+[Deployments](http://kubernetes.io/docs/user-guide/deployments/) and [ReplicaSets](http://kubernetes.io/docs/user-guide/replicasets/) are a great way to run stateless replicas of an application on Kubernetes, but their semantics aren’t really right for deploying stateful applications. The purpose of StatefulSet is to provide a controller with the correct semantics for deploying a wide range of stateful workloads. However, moving your storage application onto Kubernetes isn’t always the correct choice. Before you go all in on converging your storage tier and your orchestration framework, you should ask yourself a few questions.
+
+**Can your application run using remote storage or does it require local storage media?**
+
+Currently, we recommend using StatefulSets with remote storage. Therefore, you must be ready to tolerate the performance implications of network attached storage. Even with storage optimized instances, you won’t likely realize the same performance as locally attached, solid state storage media. Does the performance of network attached storage, on your cloud, allow your storage application to meet its SLAs? If so, running your application in a StatefulSet provides compelling benefits from the perspective of automation. If the node on which your storage application is running fails, the Pod containing the application can be rescheduled onto another node, and, as it’s using network attached storage media, its data are still available after it’s rescheduled.
+
+**Do you need to scale your storage application?**
+
+What is the benefit you hope to gain by running your application in a StatefulSet? Do you have a single instance of your storage application for your entire organization? Is scaling your storage application a problem that you actually have? If you have a few instances of your storage application, and they are successfully meeting the demands of your organization, and those demands are not rapidly increasing, you’re already at a local optimum.
+
+If, however, you have an ecosystem of microservices, or if you frequently stamp out new service footprints that include storage applications, then you might benefit from automation and consolidation. If you’re already using Kubernetes to manage the stateless tiers of your ecosystem, you should consider using the same infrastructure to manage your storage applications.
+
+**How important is predictable performance?**
+
+Kubernetes doesn’t yet support isolation for network or storage I/O across containers. Colocating your storage application with a noisy neighbor can reduce the QPS that your application can handle. You can mitigate this by scheduling the Pod containing your storage application as the only tenant on a node (thus providing it a dedicated machine) or by using Pod anti-affinity rules to segregate Pods that contend for network or disk, but this means that you have to actively identify and mitigate hot spots.
+
+If squeezing the absolute maximum QPS out of your storage application isn’t your primary concern, if you’re willing and able to mitigate hotspots to ensure your storage applications meet their SLAs, and if the ease of turning up new "footprints" (services or collections of services), scaling them, and flexibly re-allocating resources is your primary concern, Kubernetes and StatefulSet might be the right solution to address it.
+
+**Does your application require specialized hardware or instance types?**
+
+If you run your storage application on high-end hardware or extra-large instance sizes, and your other workloads on commodity hardware or smaller, less expensive images, you may not want to deploy a heterogeneous cluster. If you can standardize on a single instance size for all types of apps, then you may benefit from the flexible resource reallocation and consolidation that you get from Kubernetes.
+
+**A Practical Example - ZooKeeper**
+
+[ZooKeeper](https://zookeeper.apache.org/doc/current/) is an interesting use case for StatefulSet for two reasons. First, it demonstrates that StatefulSet can be used to run a distributed, strongly consistent storage application on Kubernetes. Second, it's a prerequisite for running workloads like [Apache Hadoop](http://hadoop.apache.org/) and [Apache Kafka](https://kafka.apache.org/) on Kubernetes. An [in-depth tutorial](http://kubernetes.io/docs/tutorials/stateful-application/zookeeper/) on deploying a ZooKeeper ensemble on Kubernetes is available in the Kubernetes documentation, and we’ll outline a few of the key features below.
+
+**Creating a ZooKeeper Ensemble**
+Creating an ensemble is as simple as using [kubectl create](http://kubernetes.io/docs/user-guide/kubectl/kubectl_create/) to generate the objects stored in the manifest.
+
+
+```
+$ kubectl create -f http://k8s.io/docs/tutorials/stateful-application/zookeeper.yaml
+
+service "zk-headless" created
+
+configmap "zk-config" created
+
+poddisruptionbudget "zk-budget" created
+
+statefulset "zk" created
+ ```
+
+
+When you create the manifest, the StatefulSet controller creates each Pod, with respect to its ordinal, and waits for each to be Running and Ready prior to creating its successor.
+
+
+
+```
+$ kubectl get pods -w -l app=zk
+
+NAME READY STATUS RESTARTS AGE
+
+zk-0 0/1 Pending 0 0s
+
+zk-0 0/1 Pending 0 0s
+
+zk-0 0/1 Pending 0 7s
+
+zk-0 0/1 ContainerCreating 0 7s
+
+zk-0 0/1 Running 0 38s
+
+zk-0 1/1 Running 0 58s
+
+zk-1 0/1 Pending 0 1s
+
+zk-1 0/1 Pending 0 1s
+
+zk-1 0/1 ContainerCreating 0 1s
+
+zk-1 0/1 Running 0 33s
+
+zk-1 1/1 Running 0 51s
+
+zk-2 0/1 Pending 0 0s
+
+zk-2 0/1 Pending 0 0s
+
+zk-2 0/1 ContainerCreating 0 0s
+
+zk-2 0/1 Running 0 25s
+
+zk-2 1/1 Running 0 40s
+ ```
+
+
+
+Examining the hostnames of each Pod in the StatefulSet, you can see that the Pods’ hostnames also contain the Pods’ ordinals.
+
+
+
+
+```
+$ for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
+
+zk-0
+
+zk-1
+
+zk-2
+ ```
+
+
+
+ZooKeeper stores the unique identifier of each server in a file called “myid”. The identifiers used for ZooKeeper servers are just natural numbers. For the servers in the ensemble, the “myid” files are populated by adding one to the ordinal extracted from the Pods’ hostnames.
+
+
+```
+$ for i in 0 1 2; do echo "myid zk-$i";kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
+
+myid zk-0
+
+1
+
+myid zk-1
+
+2
+
+myid zk-2
+
+3
+ ```
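+
+That hostname-to-“myid” mapping is simple arithmetic, as a short sketch shows:
+
+```
+# myid = ordinal parsed from the pod's hostname, plus one
+def myid(hostname):
+    return int(hostname.rsplit("-", 1)[1]) + 1
+
+assert [myid("zk-%d" % i) for i in range(3)] == [1, 2, 3]
+ ```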
+
+
+
+Each Pod has a unique network address based on its hostname and the network domain controlled by the zk-headless Headless Service.
+
+
+
+
+
+```
+$ for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
+
+zk-0.zk-headless.default.svc.cluster.local
+
+zk-1.zk-headless.default.svc.cluster.local
+
+zk-2.zk-headless.default.svc.cluster.local
+ ```
+
+
+
+The combination of a unique Pod ordinal and a unique network address allows you to populate the ZooKeeper servers’ configuration files with a consistent ensemble membership.
+
+
+
+```
+$ kubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg
+
+clientPort=2181
+
+dataDir=/var/lib/zookeeper/data
+
+dataLogDir=/var/lib/zookeeper/log
+
+tickTime=2000
+
+initLimit=10
+
+syncLimit=2000
+
+maxClientCnxns=60
+
+minSessionTimeout= 4000
+
+maxSessionTimeout= 40000
+
+autopurge.snapRetainCount=3
+
+autopurge.purgeInterval=1
+
+server.1=zk-0.zk-headless.default.svc.cluster.local:2888:3888
+
+server.2=zk-1.zk-headless.default.svc.cluster.local:2888:3888
+
+server.3=zk-2.zk-headless.default.svc.cluster.local:2888:3888
+ ```
+
+
+
+StatefulSet lets you deploy ZooKeeper in a consistent and reproducible way. You won’t create more than one server with the same id, the servers can find each other via stable network addresses, and they can perform leader election and replicate writes because the ensemble has consistent membership.
+
+
+The simplest way to verify that the ensemble works is to write a value to one server and to read it from another. You can use the “zkCli.sh” script that ships with the ZooKeeper distribution to create a ZNode containing some data.
+
+
+
+
+```
+$ kubectl exec zk-0 zkCli.sh create /hello world
+
+...
+
+
+WATCHER::
+
+
+WatchedEvent state:SyncConnected type:None path:null
+
+Created /hello
+ ```
+
+
+
+You can use the same script to read the data from another server in the ensemble.
+
+
+
+
+```
+$ kubectl exec zk-1 zkCli.sh get /hello
+
+...
+
+
+WATCHER::
+
+
+WatchedEvent state:SyncConnected type:None path:null
+
+world
+
+...
+ ```
+
+
+You can take the ensemble down by deleting the zk StatefulSet.
+
+
+
+
+```
+$ kubectl delete statefulset zk
+
+statefulset "zk" deleted
+ ```
+
+
+
+The cascading delete destroys each Pod in the StatefulSet, with respect to the reverse order of the Pods’ ordinals, and it waits for each to terminate completely before terminating its predecessor.
+
+
+
+
+```
+$ kubectl get pods -w -l app=zk
+
+NAME READY STATUS RESTARTS AGE
+
+zk-0 1/1 Running 0 14m
+
+zk-1 1/1 Running 0 13m
+
+zk-2 1/1 Running 0 12m
+
+NAME READY STATUS RESTARTS AGE
+
+zk-2 1/1 Terminating 0 12m
+
+zk-1 1/1 Terminating 0 13m
+
+zk-0 1/1 Terminating 0 14m
+
+zk-2 0/1 Terminating 0 13m
+
+zk-2 0/1 Terminating 0 13m
+
+zk-2 0/1 Terminating 0 13m
+
+zk-1 0/1 Terminating 0 14m
+
+zk-1 0/1 Terminating 0 14m
+
+zk-1 0/1 Terminating 0 14m
+
+zk-0 0/1 Terminating 0 15m
+
+zk-0 0/1 Terminating 0 15m
+
+zk-0 0/1 Terminating 0 15m
+ ```
+
+
+
+
+
+You can use [kubectl apply](http://kubernetes.io/docs/user-guide/kubectl/kubectl_apply/) to recreate the zk StatefulSet and redeploy the ensemble.
+
+
+
+
+
+
+```
+$ kubectl apply -f http://k8s.io/docs/tutorials/stateful-application/zookeeper.yaml
+
+service "zk-headless" configured
+
+configmap "zk-config" configured
+
+statefulset "zk" created
+ ```
+
+
+
+If you use the “zkCli.sh” script to get the value entered prior to deleting the StatefulSet, you will find that the ensemble still serves the data.
+
+
+
+
+```
+$ kubectl exec zk-2 zkCli.sh get /hello
+
+...
+
+
+WATCHER::
+
+
+WatchedEvent state:SyncConnected type:None path:null
+
+world
+
+...
+ ```
+
+
+
+StatefulSet ensures that, even if all Pods in the StatefulSet are destroyed, when they are rescheduled, the ZooKeeper ensemble can elect a new leader and continue to serve requests.
+
+
+
+**Tolerating Node Failures**
+
+
+
+ZooKeeper replicates its state machine to different servers in the ensemble for the explicit purpose of tolerating node failure. By default, the Kubernetes Scheduler could deploy more than one Pod in the zk StatefulSet to the same node. If the zk-0 and zk-1 Pods were deployed on the same node, and that node failed, the ZooKeeper ensemble couldn’t form a quorum to commit writes, and the ZooKeeper service would experience an outage until one of the Pods could be rescheduled.
+
+
+
+You should always provision headroom capacity for critical processes in your cluster, and if you do, in this instance, the Kubernetes Scheduler will reschedule the Pods on another node and the outage will be brief.
+
+
+
+If the SLAs for your service preclude even brief outages due to a single node failure, you should use a [PodAntiAffinity](http://kubernetes.io/docs/user-guide/node-selection/) annotation. The manifest used to create the ensemble contains such an annotation, and it tells the Kubernetes Scheduler to not place more than one Pod from the zk StatefulSet on the same node.
+
+
+
+
+
+**Tolerating Planned Maintenance**
+
+
+
+
+The manifest used to create the ZooKeeper ensemble also creates a [PodDisruptionBudget](http://kubernetes.io/docs/admin/disruptions/), zk-budget. The zk-budget informs Kubernetes about the upper limit of disruptions (unhealthy Pods) that the service can tolerate.
+
+
+
+
+```
+{
+  "podAntiAffinity": {
+    "requiredDuringSchedulingRequiredDuringExecution": [{
+      "labelSelector": {
+        "matchExpressions": [{
+          "key": "app",
+          "operator": "In",
+          "values": ["zk-headless"]
+        }]
+      },
+      "topologyKey": "kubernetes.io/hostname"
+    }]
+  }
+}
+ ```
+
+
+
+
+```
+$ kubectl get poddisruptionbudget zk-budget
+
+NAME MIN-AVAILABLE ALLOWED-DISRUPTIONS AGE
+
+zk-budget 2 1 2h
+ ```
+
+
+
+
+
+zk-budget indicates that at least two members of the ensemble must be available at all times for the ensemble to be healthy. If you attempt to drain a node prior to taking it offline, and if draining it would terminate a Pod that violates the budget, the drain operation will fail. If you use [kubectl drain](http://kubernetes.io/docs/user-guide/kubectl/kubectl_drain/), in conjunction with PodDisruptionBudgets, to cordon your nodes and to evict all Pods prior to maintenance or decommissioning, you can ensure that the procedure won’t be disruptive to your stateful applications.
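+
+The ALLOWED-DISRUPTIONS column above is just the headroom the budget leaves; with three replicas and a minimum of two available:
+
+```
+# Voluntary-disruption headroom for the zk ensemble above.
+healthy_pods = 3
+min_available = 2
+allowed_disruptions = healthy_pods - min_available
+assert allowed_disruptions == 1  # matches the kubectl output above
+ ```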
+
+
+
+
+**Looking Forward**
+
+
+
+As StatefulSet development looks towards GA, we are reviewing a long list of suggestions from users. If you want to dive into our backlog, check out the [GitHub issues with the stateful label](https://github.com/kubernetes/kubernetes/labels/area%2Fstateful-apps). We don't expect to implement all of these feature requests, as the resulting API would be hard to comprehend. Some feature requests, like support for rolling updates, better integration with node upgrades, and using fast local storage, would benefit most types of stateful applications, and we expect to prioritize these. The intention of StatefulSet is to be able to run a large number of applications well, and not to be able to run all applications perfectly. With this in mind, we avoided implementing StatefulSets in a way that relied on hidden mechanisms or inaccessible features. Anyone can write a controller that works similarly to StatefulSets. We call this "making it forkable."
+
+Over the next year, we expect many popular storage applications to each have their own community-supported, dedicated controllers or "[operators](https://coreos.com/blog/introducing-operators.html)". We've already heard of work on custom controllers for etcd, Redis, and ZooKeeper. We expect to write some more ourselves and to support the community in developing others.
+
+
+
+The Operators for [etcd](https://coreos.com/blog/introducing-the-etcd-operator.html) and [Prometheus](https://coreos.com/blog/the-prometheus-operator.html) from CoreOS demonstrate an approach to running stateful applications on Kubernetes that provides a level of automation and integration beyond that which is possible with StatefulSet alone. On the other hand, using a generic controller like StatefulSet or Deployment means that a wide range of applications can be managed by understanding a single config object. We think Kubernetes users will appreciate having the choice of these two approaches.
+
+
+
+_--Kenneth Owens & Eric Tune, Software Engineers, Google_
+
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2016-12-00-Windows-Server-Support-Kubernetes.md b/blog/_posts/2016-12-00-Windows-Server-Support-Kubernetes.md
new file mode 100644
index 00000000000..5f35f744842
--- /dev/null
+++ b/blog/_posts/2016-12-00-Windows-Server-Support-Kubernetes.md
@@ -0,0 +1,79 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Windows Server Support Comes to Kubernetes "
+date: Thursday, December 21, 2016
+pagination:
+ enabled: true
+---
+_Editor’s note: this post is part of a [series of in-depth articles](http://blog.kubernetes.io/2016/12/five-days-of-kubernetes-1.5.html) on what's new in Kubernetes 1.5_
+
+Extending on the theme of giving users choice, the [Kubernetes 1.5 release](http://blog.kubernetes.io/2016/12/kubernetes-1.5-supporting-production-workloads.html) includes support for Windows Server. With more than [80%](http://www.gartner.com/document/3446217) of enterprise apps running Java on Linux or .Net on Windows, Kubernetes is previewing capabilities that extend its reach to the vast majority of enterprise workloads.
+
+The new Kubernetes Windows Server 2016 and Windows Container support includes public preview with the following features:
+
+- **Containerized Multiplatform Applications** - Applications developed in operating system neutral languages like Go and .NET Core were previously impossible to orchestrate between Linux and Windows. Now, with support for Windows Server 2016 in Kubernetes, such applications can be deployed on both Windows Server as well as Linux, giving the developer choice of the operating system runtime. This capability has been desired by customers for almost two decades.
+
+- **Support for Both Windows Server Containers and Hyper-V Containers** - There are two types of containers in Windows Server 2016. The first, Windows Server Containers, is similar to Docker containers on Linux and uses kernel sharing. The second, Hyper-V Containers, is more lightweight than a virtual machine while at the same time offering greater isolation, its own copy of the kernel, and direct memory assignment. Kubernetes can orchestrate both types of containers.
+
+- **Expanded Ecosystem of Applications** - One of the key drivers of introducing Windows Server support in Kubernetes is to expand the ecosystem of applications supported by Kubernetes: IIS, .NET, Windows Services, ASP.NET, .NET Core, are some of the application types that can now be orchestrated by Kubernetes, running inside a container on Windows Server.
+
+- **Coverage for Heterogeneous Data Centers** - Organizations already use Kubernetes to host tens of thousands of application instances across Global 2000 and Fortune 500 companies. This support will allow them to expand Kubernetes to the large footprint of Windows Server.
+
+The process to bring Windows Server to Kubernetes has been a truly multi-vendor effort and championed by the [Windows Special Interest Group (SIG)](https://github.com/kubernetes/community/blob/master/sig-windows/README.md) - Apprenda, Google, Red Hat and Microsoft were all involved in bringing Kubernetes to Windows Server. On the community effort to bring Kubernetes to Windows Server, Taylor Brown, Principal Program Manager at Microsoft stated that “This new Kubernetes community work furthers Windows Server container support options for popular orchestrators, reinforcing Microsoft’s commitment to choice and flexibility for both Windows and Linux ecosystems.”
+
+**Guidance for Current Usage**
+
+
+| Question | Guidance |
+|--|--|
+| Where to use Windows Server support? | Right now organizations should start testing Kubernetes on Windows Server and provide feedback. Most organizations take months to set up hardened production environments, and general availability should arrive in the next few releases of Kubernetes. |
+| What works? | Most of the Kubernetes constructs, such as Pods, Services, Labels, etc. work with Windows Containers. |
+| What doesn’t work yet? | The Pod abstraction is not the same due to networking namespaces: Windows containers in a single Pod cannot communicate over localhost, whereas Linux containers can share a networking stack by being placed in the same network namespace. DNS capabilities are not fully implemented. UDP is not supported inside a container. |
+| When will it be ready for all production workloads (general availability)? | The goal is to refine the networking and other areas that need work to get Kubernetes users a production version of Windows Server 2016 support - including the Windows Nano Server and Windows Server Core installation options - in the next couple releases. |
+{: .post-table }
+
+
+**Technical Demo**
+
+
+
+
+**Roadmap**
+
+Support for Windows Server-based containers is in alpha release mode for Kubernetes 1.5, but the community is not stopping there. Customers want enterprise hardened container scheduling and management for their entire tech portfolio. That has to include full parity of features among Linux and Windows Server in production. The [Windows Server SIG](https://github.com/kubernetes/community/blob/master/sig-windows/README.md) will deliver that parity within the next one or two releases of Kubernetes through a few key areas of investment:
+
+- **Networking** - the SIG will continue working side by side with Microsoft to enhance the networking backbone of Windows Server Containers, specifically around lighting up container mode networking and native network overlay support for container endpoints.
+- **OOBE** - Improving the setup, deployment, and diagnostics for a Windows Server node, including the ability to deploy to any cloud (Azure, AWS, GCP)
+- **Runtime Operations** - the SIG will play a key part in defining the monitoring interface of the Container Runtime Interface (CRI), leveraging it to provide deep insight and monitoring for Windows Server-based containers
+
+**Get Started**
+
+To get started with Kubernetes on Windows Server 2016, please visit the [GitHub guide](http://kubernetes.io/docs/getting-started-guides/windows/) for more details.
+
+If you want to help with Windows Server support, then please connect with the [Windows Server SIG](https://github.com/kubernetes/community/blob/master/sig-windows/README.md) or connect directly with Michael Michael, the SIG lead, on [GitHub](https://github.com/michmike).
+
+_--[Michael Michael](https://twitter.com/michmike77), Senior Director of Product Management, Apprenda_
+
+
+
+
+
+_Kubernetes on Windows Server 2016 Architecture_
diff --git a/blog/_posts/2017-01-00-Fission-Serverless-Functions-As-Service-For-Kubernetes.md b/blog/_posts/2017-01-00-Fission-Serverless-Functions-As-Service-For-Kubernetes.md
new file mode 100644
index 00000000000..e25221e0381
--- /dev/null
+++ b/blog/_posts/2017-01-00-Fission-Serverless-Functions-As-Service-For-Kubernetes.md
@@ -0,0 +1,134 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Fission: Serverless Functions as a Service for Kubernetes "
+date: Tuesday, January 30, 2017
+pagination:
+ enabled: true
+---
+_Editor's note: Today’s post is by Soam Vasani, Software Engineer at Platform9 Systems, talking about a new open source Serverless Function (FaaS) framework for Kubernetes._
+
+[Fission](https://github.com/fission/fission) is a Functions as a Service (FaaS) / Serverless function framework built on Kubernetes.
+
+Fission allows you to easily create HTTP services on Kubernetes from functions. It works at the source level and abstracts away container images (in most cases). It also simplifies the Kubernetes learning curve, by enabling you to make useful services without knowing much about Kubernetes.
+
+To use Fission, you simply create functions and add them with a CLI. You can associate functions with HTTP routes, Kubernetes events, or other triggers. Fission supports NodeJS and Python today.
+
+Functions are invoked when their trigger fires, and they only consume CPU and memory when they're running. Idle functions don't consume any resources except storage.
+
+**Why make a FaaS framework on Kubernetes?**
+
+We think there's a need for a FaaS framework that can run on diverse infrastructure, both in public clouds and on-premises. Next, we had to decide whether to build it from scratch, or on top of an existing orchestration system. It was quickly clear that we shouldn't build it from scratch -- we would just end up having to re-invent cluster management, scheduling, network management, and lots more.
+
+Kubernetes offered a powerful and flexible orchestration system with a comprehensive API backed by a strong and growing community. Building on it meant Fission could leave container orchestration functionality to Kubernetes, and focus on FaaS features.
+
+The other reason we don't want a separate FaaS cluster is that FaaS works best in combination with other infrastructure. For example, it may be the right fit for a small REST API, but it needs to work with other services to store state. FaaS also works great as a mechanism for event handlers to handle notifications from storage, databases, and from Kubernetes itself. Kubernetes is a great platform for all these services to interoperate on top of.
+
+**Deploying and Using Fission**
+
+Fission can be installed with a `kubectl create` command: see the [project README](https://github.com/fission/fission#get-and-run-fission-minikube-or-local-cluster) for instructions.
+
+Here's how you’d write a "hello world" HTTP service:
+
+
+
+```
+$ cat > hello.py
+def main(context):
+    print "Hello, world!"
+
+$ fission function create --name hello --env python --code hello.py --route /hello
+
+$ curl http://<fission router>/hello
+Hello, world!
+ ```
+
+
+Fission takes care of loading the function into a container, routing the request to it, and so on. We go into more details in the next section.
+
+**How Fission Is Implemented on Kubernetes**
+
+At its core, a FaaS framework must (1) turn functions into services and (2) manage the lifecycle of these services.
+
+There are a number of ways to achieve these goals, and each comes with different tradeoffs. Should the framework operate at the source-level, or at the level of Docker images (or something in-between, like "buildpacks")? What's an acceptable amount of overhead the first time a function runs? Choices made here affect platform flexibility, ease of use, resource usage and costs, and of course, performance.
+
+**Packaging, source code, and images**
+
+One of our goals is to make Fission very easy to use for new users. We chose to operate
+at the source level, so that users can avoid having to deal with container image building, pushing images to registries, managing registry credentials, image versioning, and so on.
+
+However, container images are the most flexible way to package apps. A purely source-level interface wouldn't allow users to package binary dependencies, for example.
+
+So, Fission goes with a hybrid approach -- container images that contain a dynamic loader for functions. This approach allows most users to use Fission purely at the source level, but enables them to customize the container image when needed.
+
+These images, called "environment images" in Fission, contain the runtime for the language (such as NodeJS or Python), a set of commonly used dependencies and a dynamic loader for functions. If these dependencies are sufficient for the function the user is writing, no image rebuilding is needed. Otherwise, the list of dependencies can be modified, and the image rebuilt.
+
+These environment images are the only language-specific parts of Fission. They present a uniform interface to the rest of the framework. This design allows Fission to be easily extended to more languages.
+
+**Cold start performance**
+
+One of the goals of serverless functions is that they consume CPU/memory resources only when running. This optimizes the resource cost of functions, but it comes at the cost of some performance overhead when starting from idle (the "cold start" overhead).
+
+Cold start overhead is important in many use cases. In particular, with functions used in an interactive use case -- like a web or mobile app, where a user is waiting for the action to complete -- several-second cold start latencies would be unacceptable.
+
+To optimize cold start overheads, Fission keeps a running pool of containers for each environment. When a request for a function comes in, Fission doesn't have to deploy a new container -- it just chooses one that's already running, copies the function into the container, loads it dynamically, and routes the request to that instance. This process adds overhead on the order of 100 msec for NodeJS and Python functions.
+
+**How Fission works on Kubernetes**
+
+
+
+ 
+Fission is designed as a set of microservices. A Controller keeps track of functions, HTTP
+routes, event triggers, and environment images. A Pool Manager manages pools of idle environment containers, the loading of functions into these containers, and the killing of function instances when they're idle. A Router receives HTTP requests and routes them to function instances, requesting an instance from the Pool Manager if necessary.
+
+The controller serves the fission API. All the other components watch the controller for updates. The router is exposed as a Kubernetes Service of the LoadBalancer or NodePort type, depending on where the Kubernetes cluster is hosted.
+
+When the router gets a request, it looks up a cache to see if this request already has a service it should be routed to. If not, it looks up the function to map the request to, and requests the poolmgr for an instance. The poolmgr has a pool of idle pods; it picks one, loads the function into it (by sending a request into a sidecar container in the pod), and returns the address of the pod to the router. The router proxies over the request to this pod. The pod is cached for subsequent requests, and if it's been idle for a few minutes, it is killed.
+
+(For now, Fission maps one function to one container; autoscaling to multiple instances is planned for a later release. Re-using function pods to host multiple functions is also planned, for cases where isolation isn't a requirement.)
+
+**Use Cases for Fission**
+
+**Bots, Webhooks, REST APIs**
+Fission is a good framework to make small REST APIs, implement webhooks, and write chatbots for Slack or other services.
+
+As an example of a simple REST API, we've made a small guestbook app that uses functions for reading and writing to a guestbook, backed by a redis instance to keep track of state. You can find the app [in the Fission GitHub repo](https://github.com/fission/fission/tree/master/examples/python/guestbook).
+
+The app contains two endpoints -- the GET endpoint lists guestbook entries from redis and renders them into HTML, and the POST endpoint adds a new entry to the guestbook list in redis. That’s all there is -- there’s no Dockerfile to manage, and updating the app is as simple as calling _fission function update_.
+
+**Handling Kubernetes Events**
+Fission also supports triggering functions based on Kubernetes watches. For example, you can set up a function to watch for all pods in a certain namespace matching a certain label. The function gets the serialized object and the watch event type (added/removed/updated) in its context.
+
+These event handler functions could be used for simple monitoring -- for example, you could send a Slack message whenever a new service is added to the cluster. There are also more complex use cases, such as writing a custom controller by watching Kubernetes' Third Party Resources.
+
+**Status and Roadmap**
+
+Fission is in early alpha for now (Jan 2017). It's not ready for production use just yet. We're looking for early adopters and feedback.
+
+What's ahead for Fission? We're working on making FaaS on Kubernetes more convenient, easy to use and easier to integrate with. In the coming months we're working on adding support for unit testing, integration with Git, function monitoring and log aggregation. We're also working on integration with other sources of events.
+
+Creating more language environments is also in the works. NodeJS and Python are supported today. Preliminary support for C# .NET has been contributed by Klavs Madsen.
+
+You can find our current roadmap on our GitHub [issues](https://github.com/fission/fission/issues) and [projects](https://github.com/fission/fission/projects).
+
+**Get Involved**
+
+Fission is open source and developed in the open by [Platform9 Systems](http://platform9.com/fission). Check us out on [GitHub](https://github.com/fission/fission), and join our Slack channel if you’d like to chat with us. We're also on Twitter at [@fissionio](https://twitter.com/fissionio).
+
+
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+
+
+
+_--Soam Vasani, Software Engineer, Platform9 Systems_
diff --git a/blog/_posts/2017-01-00-How-We-Run-Kubernetes-In-Kubernetes-Kubeception.md b/blog/_posts/2017-01-00-How-We-Run-Kubernetes-In-Kubernetes-Kubeception.md
new file mode 100644
index 00000000000..fd2d1c9195f
--- /dev/null
+++ b/blog/_posts/2017-01-00-How-We-Run-Kubernetes-In-Kubernetes-Kubeception.md
@@ -0,0 +1,126 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " How we run Kubernetes in Kubernetes aka Kubeception "
+date: Saturday, January 20, 2017
+pagination:
+ enabled: true
+---
+_Editor's note: Today’s post is by the team at Giant Swarm, showing how they run Kubernetes in Kubernetes._
+
+[Giant Swarm](https://giantswarm.io/)’s container infrastructure started out with the goal to be an easy way for developers to deploy containerized microservices. Our first generation was extensively using [fleet](https://github.com/coreos/fleet) as a base layer for our infrastructure components as well as for scheduling user containers.
+
+In order to give our users a more powerful way to manage their containers, we introduced Kubernetes into our stack in early 2016. However, as we needed a quick way to flexibly and resiliently spin up and manage different users’ Kubernetes clusters, we kept the underlying fleet layer.
+
+As we insist on running all our underlying infrastructure components in containers, fleet gave us the flexibility of using systemd unit files to define our infrastructure components declaratively. Our self-developed deployment tooling allowed us to deploy and manage the infrastructure without the need for imperative configuration management tools.
+
+However, fleet is just a distributed init and not a complete scheduling and orchestration system. Beyond a lot of work on our tooling, it required significant improvements in peer communication, in its reconciliation loop, and in stability, all of which we had to work on ourselves. The growing uptake of Kubernetes, by contrast, ensures that issues are found and fixed faster.
+
+As we had had good experiences introducing Kubernetes on the user side, and with recent developments like [rktnetes](http://blog.kubernetes.io/2016/07/rktnetes-brings-rkt-container-engine-to-Kubernetes.html) and [stackanetes](https://github.com/stackanetes/stackanetes), it felt like time for us to also move our base layer to Kubernetes.
+
+
+
+**Why Kubernetes in Kubernetes**
+
+
+
+Now, you could ask, why would anyone want to run multiple Kubernetes clusters inside of a Kubernetes cluster? Are we crazy? The answer is advanced multi-tenancy use cases as well as operability and automation thereof.
+
+
+
+Kubernetes comes with its own growing feature set for multi-tenancy use cases. However, we had the goal of offering our users a fully-managed Kubernetes without any limitations to the functionality they would get in any vanilla Kubernetes environment, including privileged access to the nodes. Further, in bigger enterprise scenarios a single Kubernetes cluster with its inbuilt isolation mechanisms is often not sufficient to satisfy compliance and security requirements. More advanced (firewalled) zoning or layered security concepts are tough to reproduce with a single installation. With namespace isolation alone, neither privileged access nor firewalled zones can be implemented without sidestepping security measures.
+
+
+
+Now you could go and set up multiple completely separate (and federated) installations of Kubernetes. However, automating the deployment and management of these clusters would need additional tooling and complex monitoring setups. Further, we wanted to be able to spin clusters up and down on demand, scale them, update them, keep track of which clusters are available, and be able to assign them to organizations and teams flexibly. In fact this setup can be combined with a federation control plane to federate deployments to the clusters over one API endpoint.
+
+
+
+And wouldn’t it be nice to have an API and frontend for that?
+
+
+
+**Enter Giantnetes**
+
+
+
+Based on the above requirements we set out to build what we call Giantnetes - or if you’re into movies, Kubeception. At the most basic abstraction it is an outer Kubernetes cluster (the actual Giantnetes), which is used to run and manage multiple completely isolated user Kubernetes clusters.
+
+
+
+
+The physical machines are bootstrapped by using our CoreOS Container Linux bootstrapping tool, [Mayu](https://github.com/giantswarm/mayu). The Giantnetes components themselves are self-hosted, i.e. a kubelet is in charge of automatically bootstrapping the components that reside in a manifests folder. You could call this the first level of Kubeception.
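+
+To illustrate that first level, a static pod manifest placed in the kubelet's manifests folder might look roughly like the sketch below. The path, image, and flags are hypothetical stand-ins rather than our exact production configuration; the point is that the kubelet watches the folder and runs whatever pod manifests it finds there.
+
+```
+# Hypothetical example: /etc/kubernetes/manifests/kube-apiserver.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: kube-apiserver
+  namespace: kube-system
+spec:
+  hostNetwork: true
+  containers:
+  - name: kube-apiserver
+    image: example.com/hyperkube:v1.5.1   # illustrative image reference
+    command:
+    - /hyperkube
+    - apiserver
+    - --etcd-servers=http://127.0.0.1:2379
+    - --service-cluster-ip-range=10.3.0.0/24
+```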
+
+
+
+Once the Giantnetes cluster is running we use it to schedule the user Kubernetes clusters as well as our tooling for managing and securing them.
+
+
+
+We chose Calico as the Giantnetes network plugin to ensure security, isolation, and the right performance for all the applications running on top of Giantnetes.
+
+
+
+Then, to create the inner Kubernetes clusters, we initiate a few pods, which configure the network bridge, create certificates and tokens, and launch virtual machines for the future cluster. To do so, we use lightweight technologies such as KVM and qemu to provision CoreOS Container Linux VMs that become the nodes of an inner Kubernetes cluster. You could call this the second level of Kubeception.
+
+
+
+Currently this means we are starting Pods with Docker containers that in turn start VMs with KVM and qemu. However, we are looking into doing this with [rkt qemu-kvm](https://github.com/coreos/rkt/blob/master/Documentation/running-kvm-stage1.md), which would result in using a rktnetes setup for our Giantnetes.
+
+
+
+
+
+The networking solution for the inner Kubernetes clusters has two levels. It relies on a combination of flannel’s server/client architecture model and Calico BGP. While a flannel client is used to create the network bridge between the VMs of each virtualized inner Kubernetes cluster, Calico runs inside the virtual machines to connect the different Kubernetes nodes and create a single network for the inner Kubernetes cluster. By using Calico, we mimic the Giantnetes networking solution inside each Kubernetes cluster and provide the primitives to secure and isolate workloads through the Kubernetes network policy API.
+
+
+
+Regarding security, we aim to separate privileges as much as possible and make things auditable. Currently this means we use certificates to secure access to the clusters and encrypt communication between all the components that form a cluster (i.e. VM to VM, Kubernetes components to each other, etcd master to Calico workers, etc.). For this we create a PKI backend per cluster and then issue certificates per service in Vault on demand. Every component uses a different certificate, which avoids exposing the whole cluster if any of the components or nodes gets compromised. We further rotate the certificates on a regular basis.
+
+
+
+To ensure access to the API and to the services of each inner Kubernetes cluster from the outside, we run a multi-level HAProxy ingress controller setup in the Giantnetes that connects the Kubernetes VMs to hardware load balancers.
+
+
+
+**Looking into Giantnetes with kubectl**
+
+
+
+Let’s have a look at a minimal sample deployment of Giantnetes.
+
+ 
+
+
+
+In the above example you see a user Kubernetes cluster `customera` running in VM-containers on top of Giantnetes. We currently use Jobs for the network and certificate setups.
+
+Peeking inside the user cluster, you see the DNS pods and a helloworld running.
+
+ 
+
+
+
+Each one of these user clusters can be scheduled and used independently. They can be spun up and down on-demand.
+
+
+
+**Conclusion**
+
+
+
+To sum up, we have shown how Kubernetes is able not only to self-host easily but also to flexibly schedule a multitude of inner Kubernetes clusters while ensuring strong isolation and security. A highlight of this setup is the composability and automation of the installation and the robust coordination between the Kubernetes components. This allows us to easily create, destroy, and reschedule clusters on-demand without affecting users or compromising the security of the infrastructure. It further allows us to spin up clusters with varying sizes, configurations, or even versions by just changing some arguments at cluster creation.
+
+
+
+This setup is still in its early days and our roadmap calls for improvements in many areas such as transparent upgrades, dynamic reconfiguration and scaling of clusters, performance improvements, and (even more) security. Furthermore, we are looking forward to improving our setup by making use of the ever-advancing state of Kubernetes operations tooling and upcoming features, such as Init Containers, Scheduled Jobs, Pod and Node affinity and anti-affinity, etc.
+
+
+
+Most importantly, we are working on making the inner Kubernetes clusters a third party resource that can then be managed by a custom controller. The result would be much like the [Operator concept by CoreOS](https://coreos.com/blog/introducing-operators.html). And to ensure that the community at large can benefit from this project we will be open sourcing this in the near future.
+
+
+
+_-- Hector Fernandez, Software Engineer & Puja Abbassi, Developer Advocate, Giant Swarm_
diff --git a/blog/_posts/2017-01-00-Kubernetes-Ux-Survey-Infographic.md b/blog/_posts/2017-01-00-Kubernetes-Ux-Survey-Infographic.md
new file mode 100644
index 00000000000..4b75d374b5b
--- /dev/null
+++ b/blog/_posts/2017-01-00-Kubernetes-Ux-Survey-Infographic.md
@@ -0,0 +1,60 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes UX Survey Infographic "
+date: Tuesday, January 09, 2017
+pagination:
+ enabled: true
+---
+_Editor's note: Today’s post is by Dan Romlein, UX Designer at Apprenda and member of the SIG-UI, sharing UX survey results from the Kubernetes community._
+
+The following infographic summarizes the findings of a survey that the team behind [Dashboard](https://github.com/kubernetes/dashboard), the official web UI for Kubernetes, sent during KubeCon in November 2016. Following the KubeCon launch of the survey, it was promoted on Twitter and various Slack channels over a two-week period and generated over 100 responses. We’re delighted with the data it provides, which lets us make feature and roadmap decisions more in line with the needs of you, our users.
+
+**Satisfaction with Dashboard**
+
+
+[](https://1.bp.blogspot.com/-aSAimiXhbkw/WHPgEveTIzI/AAAAAAAAA5s/BMa-6jVzW4Ir-JExg-njJJge2tQg6QSOwCLcB/s1600/satisfaction-with-dashboard.png)
+
+Less than a year old, Dashboard is still very early in its development and we realize it has a long way to go, but it was encouraging to hear it’s tracking on the axis of MVP and even with its basic feature set is adding value for people. Respondents indicated that they like how quickly the Dashboard project is moving forward and the activity level of its contributors. Specific appreciation was given for the value Dashboard brings to first-time Kubernetes users and encouraging exploration. Frustration voiced around Dashboard centered on its limited capabilities: notably, the lack of RBAC and limited visualization of cluster objects and their relationships.
+
+**Respondent Demographics**
+
+
+[](https://2.bp.blogspot.com/-f4lRiYxQ6Pg/WHPggSKpt7I/AAAAAAAAA5w/uThW4NAPiokHJ_Av721SRN4FThd2THAIQCLcB/s1600/respondent-demographics.png)
+
+
+**Kubernetes Usage**
+
+[](https://4.bp.blogspot.com/-iQD8MEPL7nA/WHPgEensPbI/AAAAAAAAA5o/nRAVMQpcxmM9llFJyC-pVD16emtagnxgwCEw/s1600/kubernetes-usage.png)
+
+People are using Dashboard in production, which is fantastic; it’s that setting that the team is focused on optimizing for.
+
+
+
+**Feature Priority**
+
+[](https://1.bp.blogspot.com/-gGKQKRwgOto/WHPgEdVMqQI/AAAAAAAAA5k/MiTVQtKLuHkAMmSjpvAsmiBezAdQV4zCwCEw/s1600/feature-priority.png)
+
+
+
+In building Dashboard, we want to continually make alignments between the needs of Kubernetes users and our product. Feature areas have intentionally been kept as high-level as possible, so that UX designers on the Dashboard team can creatively transform those use cases into specific features. While there’s nothing wrong with “[faster horses](http://www.goodreads.com/quotes/15297-if-i-had-asked-people-what-they-wanted-they-would)”, we want to make sure we’re creating an environment for the best possible innovation to flourish.
+
+
+
+Troubleshooting & Debugging as a strong frontrunner in requested feature area is consistent with the [previous KubeCon survey](http://static.lwy.io/img/kubernetes_dashboard_infographic.png), and this is now our top area of investment. Currently in-progress is the ability to be able to exec into a Pod, and next up will be providing aggregated logs views across objects. One of a UI’s strengths over a CLI is its ability to show things, and the troubleshooting and debugging feature area is a prime application of this capability.
+
+
+
+In addition to a continued ongoing investment in troubleshooting and debugging functionality, the other current focus of the Dashboard team’s efforts is RBAC / IAM within Dashboard. Though #4 in the ranking of feature areas, in various conversations at KubeCon and in the days following, this emerged as a top-requested feature of Dashboard, and the one people were most passionate about. This is a deal-breaker for many companies, and we’re confident its enablement will open many doors for Dashboard’s use in production.
+
+
+
+**Conclusion**
+
+
+
+It’s invaluable to have data from Kubernetes users on how they’re putting Dashboard to use and how well it’s serving their needs. If you missed the survey response window but still have something you’d like to share, we’d love to connect with you and hear feedback or answer questions:
+
+- Email us at the [SIG-UI mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-ui)
+- Chat with us on the Kubernetes Slack [#SIG-UI channel](https://kubernetes.slack.com/messages/sig-ui/)
+- Join our weekly meetings at 4PM CEST. See the [SIG-UI calendar](https://calendar.google.com/calendar/embed?src=google.com_52lm43hc2kur57dgkibltqc6kc%40group.calendar.google.com&ctz=Europe/Warsaw) for details.
diff --git a/blog/_posts/2017-01-00-Running-Mongodb-On-Kubernetes-With-Statefulsets.md b/blog/_posts/2017-01-00-Running-Mongodb-On-Kubernetes-With-Statefulsets.md
new file mode 100644
index 00000000000..89658a731b0
--- /dev/null
+++ b/blog/_posts/2017-01-00-Running-Mongodb-On-Kubernetes-With-Statefulsets.md
@@ -0,0 +1,428 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Running MongoDB on Kubernetes with StatefulSets "
+date: Tuesday, January 30, 2017
+pagination:
+ enabled: true
+---
+_Editor's note: Today’s post is by Sandeep Dinesh, Developer Advocate, Google Cloud Platform, showing how to run a database in a container._
+
+
+Conventional wisdom says you can’t run a database in a container. “Containers are stateless!” they say, and “databases are pointless without state!”
+
+Of course, this is not true at all. At Google, everything runs in a container, including databases. You just need the right tools. [Kubernetes 1.5](http://blog.kubernetes.io/2016/12/kubernetes-1.5-supporting-production-workloads.html) includes the new [StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) API object (in previous versions, StatefulSet was known as PetSet). With StatefulSets, Kubernetes makes it much easier to run stateful workloads such as databases.
+
+If you’ve followed my previous posts, you know how to create a [MEAN Stack app with Docker](http://blog.sandeepdinesh.com/2015/07/running-mean-web-application-in-docker.html), then [migrate it to Kubernetes](https://medium.com/google-cloud/running-a-mean-stack-on-google-cloud-platform-with-kubernetes-149ca81c2b5d) to provide easier management and reliability, and [create a MongoDB replica set](https://medium.com/google-cloud/mongodb-replica-sets-with-kubernetes-d96606bd9474) to provide redundancy and high availability.
+
+While the replica set in my previous blog post worked, there were some annoying steps that you needed to follow. You had to manually create a disk, a ReplicationController, and a service for each replica. Scaling the set up and down meant managing all of these resources manually, which is an opportunity for error, and would put your stateful application at risk. In the previous example, we created a Makefile to ease the management of these resources, but it would have been great if Kubernetes could just take care of all of this for us.
+
+With StatefulSets, these headaches finally go away. You can create and manage your MongoDB replica set natively in Kubernetes, without the need for scripts and Makefiles. Let’s take a look how.
+
+_Note: StatefulSets are currently a beta resource. The [sidecar container](https://github.com/cvallance/mongo-k8s-sidecar) used for auto-configuration is also unsupported._
+
+
+
+**Prerequisites and Setup**
+
+
+
+Before we get started, you’ll need Kubernetes 1.5+ and the [Kubernetes command line tool](http://kubernetes.io/docs/user-guide/prereqs/). If you want to follow along with this tutorial and use Google Cloud Platform, you also need the [Google Cloud SDK](http://cloud.google.com/sdk).
+
+
+
+Once you have a [Google Cloud project created](https://console.cloud.google.com/projectcreate) and have your Google Cloud SDK set up (hint: `gcloud init`), we can create our cluster.
+
+
+
+To create a Kubernetes 1.5 cluster, run the following command:
+
+
+```
+gcloud container clusters create "test-cluster"
+ ```
+
+
+
+This will make a three-node Kubernetes cluster. Feel free to [customize the command](https://cloud.google.com/sdk/gcloud/reference/container/clusters/create) as you see fit.
+
+Then, authenticate into the cluster:
+
+
+
+```
+gcloud container clusters get-credentials test-cluster
+ ```
+
+
+
+
+
+
+
+**Setting up the MongoDB replica set**
+
+
+
+To set up the MongoDB replica set, you need three things: A [StorageClass](http://kubernetes.io/docs/user-guide/persistent-volumes/#storageclasses), a [Headless Service](http://kubernetes.io/docs/user-guide/services/#headless-services), and a [StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/).
+
+
+
+I’ve created the configuration files for these already, and you can clone the example from GitHub:
+
+
+```
+git clone https://github.com/thesandlord/mongo-k8s-sidecar.git
+cd mongo-k8s-sidecar/example/StatefulSet/
+ ```
+
+
+
+To create the MongoDB replica set, run these two commands:
+
+
+```
+kubectl apply -f googlecloud_ssd.yaml
+kubectl apply -f mongo-statefulset.yaml
+ ```
+
+
+
+That's it! With these two commands, you have launched all the components required to run a highly available and redundant MongoDB replica set.
+
+
+
+At a high level, it looks something like this:
+
+
+
+
+
+
+Let’s examine each piece in more detail.
+
+
+
+**StorageClass**
+
+
+
+The storage class tells Kubernetes what kind of storage to use for the database nodes. You can set up many different types of StorageClasses in a ton of different environments. For example, if you run Kubernetes in your own datacenter, you can use [GlusterFS](https://www.gluster.org/). On GCP, your [storage choices](https://cloud.google.com/compute/docs/disks/) are SSDs and hard disks. There are currently drivers for [AWS](http://kubernetes.io/docs/user-guide/persistent-volumes/#aws), [Azure](http://kubernetes.io/docs/user-guide/persistent-volumes/#azure-disk), [Google Cloud](http://kubernetes.io/docs/user-guide/persistent-volumes/#gce), [GlusterFS](http://kubernetes.io/docs/user-guide/persistent-volumes/#glusterfs), [OpenStack Cinder](http://kubernetes.io/docs/user-guide/persistent-volumes/#openstack-cinder), [vSphere](http://kubernetes.io/docs/user-guide/persistent-volumes/#vsphere), [Ceph RBD](http://kubernetes.io/docs/user-guide/persistent-volumes/#ceph-rbd), and [Quobyte](http://kubernetes.io/docs/user-guide/persistent-volumes/#quobyte).
+
+
+
+The configuration for the StorageClass looks like this:
+
+
+```
+kind: StorageClass
+apiVersion: storage.k8s.io/v1beta1
+metadata:
+ name: fast
+provisioner: kubernetes.io/gce-pd
+parameters:
+ type: pd-ssd
+ ```
+
+
+
+This configuration creates a new StorageClass called “fast” that is backed by SSD volumes. The StatefulSet can now request a volume, and the StorageClass will automatically create it!
+
+
+
+Deploy this StorageClass:
+
+
+```
+kubectl apply -f googlecloud_ssd.yaml
+ ```
+
+
+
+**Headless Service**
+
+
+
+Now that you have created the StorageClass, you need to make a Headless Service. These are just like normal Kubernetes Services, except they don’t do any load balancing for you. When combined with StatefulSets, they can give you unique DNS addresses that let you directly access the pods! This is perfect for creating MongoDB replica sets, because our app needs to connect to all of the MongoDB nodes individually.
+
+
+
+The configuration for the Headless Service looks like this:
+
+
+```
+apiVersion: v1
+kind: Service
+metadata:
+  name: mongo
+  labels:
+    name: mongo
+spec:
+  ports:
+  - port: 27017
+    targetPort: 27017
+  clusterIP: None
+  selector:
+    role: mongo
+ ```
+
+
+
+You can tell this is a Headless Service because the clusterIP is set to “None.” Other than that, it looks exactly the same as any normal Kubernetes Service.
+
+
+
+**StatefulSet**
+
+
+
+The pièce de résistance. The StatefulSet actually runs MongoDB and orchestrates everything together. StatefulSets differ from Kubernetes [ReplicaSets](http://kubernetes.io/docs/user-guide/replicasets/) (not to be confused with MongoDB replica sets!) in certain ways that make them more suited for stateful applications. Unlike Kubernetes ReplicaSets, pods created under a StatefulSet have a few unique attributes. The name of the pod is not random; instead, each pod gets an ordinal name. Combined with the Headless Service, this allows pods to have stable identities. In addition, pods are created one at a time instead of all at once, which can help when bootstrapping a stateful system. You can read more about StatefulSets in the [documentation](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/).
+
+
+
+Just like before, [this “sidecar” container](https://github.com/cvallance/mongo-k8s-sidecar) will configure the MongoDB replica set automatically. A “sidecar” is a helper container which helps the main container do its work.
+
+
+
+The configuration for the StatefulSet looks like this:
+
+
+```
+apiVersion: apps/v1beta1
+kind: StatefulSet
+metadata:
+  name: mongo
+spec:
+  serviceName: "mongo"
+  replicas: 3
+  template:
+    metadata:
+      labels:
+        role: mongo
+        environment: test
+    spec:
+      terminationGracePeriodSeconds: 10
+      containers:
+      - name: mongo
+        image: mongo
+        command:
+        - mongod
+        - "--replSet"
+        - rs0
+        - "--smallfiles"
+        - "--noprealloc"
+        ports:
+        - containerPort: 27017
+        volumeMounts:
+        - name: mongo-persistent-storage
+          mountPath: /data/db
+      - name: mongo-sidecar
+        image: cvallance/mongo-k8s-sidecar
+        env:
+        - name: MONGO_SIDECAR_POD_LABELS
+          value: "role=mongo,environment=test"
+  volumeClaimTemplates:
+  - metadata:
+      name: mongo-persistent-storage
+      annotations:
+        volume.beta.kubernetes.io/storage-class: "fast"
+    spec:
+      accessModes: ["ReadWriteOnce"]
+      resources:
+        requests:
+          storage: 100Gi
+ ```
+
+
+
+It’s a little long, but fairly straightforward.
+
+
+
+The first section describes the StatefulSet object. Then we move into the metadata section, where you can specify labels and the number of replicas.
+
+
+
+Next comes the pod spec. The terminationGracePeriodSeconds is used to gracefully shut down the pod when you scale down the number of replicas, which is important for databases! Then the configurations for the two containers are shown. The first one runs MongoDB with command line flags that configure the replica set name. It also mounts the persistent storage volume to /data/db, the location where MongoDB saves its data. The second container runs the sidecar.
+
+
+
+Finally, there is the volumeClaimTemplates. This is what talks to the StorageClass we created before to provision the volume. It will provision a 100 GB disk for each MongoDB replica.
+
+
+
+**Using the MongoDB replica set**
+
+
+
+At this point, you should have three pods created in your cluster. These correspond to the three nodes in your MongoDB replica set. You can see them with this command:
+
+
+```
+kubectl get pods
+
+NAME      READY     STATUS    RESTARTS   AGE
+mongo-0   2/2       Running   0          3m
+mongo-1   2/2       Running   0          3m
+mongo-2   2/2       Running   0          3m
+ ```
+
+
+
+Each pod in a StatefulSet backed by a Headless Service will have a stable DNS name. The template follows this format: `<pod-name>.<service-name>`
+
+This means the DNS names for the MongoDB replica set are:
+
+
+
+```
+mongo-0.mongo
+mongo-1.mongo
+mongo-2.mongo
+ ```
+
+
+
+You can use these names directly in the [connection string URI](http://docs.mongodb.com/manual/reference/connection-string) of your app.
+
+In this case, the connection string URI would be:
+
+
+```
+mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/dbname_?
+ ```
+
+
+That’s it!
+
+**Scaling the MongoDB replica set**
+
+A huge advantage of StatefulSets is that you can scale them just like Kubernetes ReplicaSets. If you want 5 MongoDB Nodes instead of 3, just run the scale command:
+
+
+
+```
+kubectl scale --replicas=5 statefulset mongo
+ ```
+
+
+The sidecar container will automatically configure the new MongoDB nodes to join the replica set.
+
+Include the two new nodes (mongo-3.mongo & mongo-4.mongo) in your connection string URI and you are good to go. Too easy!
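+
+For example, with five nodes the connection string URI from before (same placeholder database name) would become:
+
+```
+mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo,mongo-3.mongo,mongo-4.mongo:27017/dbname_?
+```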
+
+**Cleaning Up**
+
+To clean up the deployed resources, delete the StatefulSet, Headless Service, and the provisioned volumes.
+
+Delete the StatefulSet:
+
+
+```
+kubectl delete statefulset mongo
+ ```
+
+
+
+Delete the Service:
+
+
+```
+kubectl delete svc mongo
+ ```
+
+
+
+Delete the Volumes:
+
+
+
+
+```
+kubectl delete pvc -l role=mongo
+ ```
+
+
+
+
+Finally, you can delete the test cluster:
+
+
+
+
+```
+gcloud container clusters delete "test-cluster"
+ ```
+
+
+
+Happy Hacking!
+
+
+
+For more cool Kubernetes and Container blog posts, follow me on [Twitter](https://twitter.com/sandeepdinesh) and [Medium](https://medium.com/@SandeepDinesh).
+
+
+
+_--Sandeep Dinesh, Developer Advocate, Google Cloud Platform._
diff --git a/blog/_posts/2017-01-00-Scaling-Kubernetes-Deployments-With-Policy-Base-Networking.md b/blog/_posts/2017-01-00-Scaling-Kubernetes-Deployments-With-Policy-Base-Networking.md
new file mode 100644
index 00000000000..6261e32ce68
--- /dev/null
+++ b/blog/_posts/2017-01-00-Scaling-Kubernetes-Deployments-With-Policy-Base-Networking.md
@@ -0,0 +1,60 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Scaling Kubernetes deployments with Policy-Based Networking "
+date: Friday, January 19, 2017
+pagination:
+ enabled: true
+---
+_Editor's note: Today’s post is by Harmeet Sahni, Director of Product Management at Nuage Networks, writing about their contributions to Kubernetes and insights on policy-based networking._
+
+Although it’s been just eighteen months since Kubernetes 1.0 was released, we’ve seen Kubernetes emerge as the leading container orchestration platform for deploying distributed applications. One of the biggest reasons for this is the vibrant open source community that has developed around it. That the large number of Kubernetes contributors come from diverse backgrounds means we, and the community of users, are assured that we are investing in an open platform. Companies like Google (Container Engine), Red Hat (OpenShift), and CoreOS (Tectonic) are developing their own commercial offerings based on Kubernetes. This is a good thing since it will lead to more standardization and offer choice to users.
+
+**Networking requirements for Kubernetes applications**
+
+For companies deploying applications on Kubernetes, one of the biggest questions is how to deploy and orchestrate containers at scale. They’re aware that the underlying infrastructure, including networking and storage, needs to support distributed applications. Software-defined networking (SDN) is a great fit for such applications because the flexibility and agility of the networking infrastructure can match that of the applications themselves. The networking requirements of such applications include:
+
+
+- Network automation
+- Distributed load balancing and service discovery
+- Distributed security with fine-grained policies
+- QoS Policies
+- Scalable Real-time Monitoring
+- Hybrid application environments with Services spread across Containers, VMs and Bare Metal Servers
+- Service Insertion (e.g. firewalls)
+- Support for Private and Public Cloud deployments
+
+**Kubernetes Networking**
+
+Kubernetes provides a core set of platform services exposed through [APIs](http://kubernetes.io/docs/api/). The platform can be extended in several ways through the extensions API, plugins, and labels. This has allowed a wide variety of integrations and tools to be developed for Kubernetes. Kubernetes recognizes that the network in each deployment is going to be unique. Instead of making the core system handle all those use cases, Kubernetes chose to make the network pluggable.
+
+With [Nuage Networks](http://www.nuagenetworks.net/) we provide a scalable policy-based SDN platform. The platform is managed by a Network Policy Engine that abstracts away the complexity associated with configuring the system. There is a separate SDN Controller that comes with a very rich routing feature set and is designed to scale horizontally. Nuage uses the open source [Open vSwitch (OVS)](http://www.openvswitch.org/) for the data plane with some enhancements in the OVS user space. Just like Kubernetes, Nuage has embraced openness as a core tenet for its platform. Nuage provides open APIs that allow users to orchestrate their networks and integrate network services such as firewalls, load balancers, IPAM tools etc. Nuage is supported in a wide variety of cloud platforms like OpenStack and VMware as well as container platforms like Kubernetes and others.
+
+The Nuage platform implements a Kubernetes [network plugin](http://kubernetes.io/docs/admin/network-plugins/) that creates VXLAN overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Each Pod is given an IP address from a network that belongs to a [Namespace](https://kubernetes.io/docs/user-guide/namespaces/) and is not tied to the Kubernetes node.
+
+As cloud applications are built using microservices, the ability to control traffic among these microservices is a fundamental requirement. It is important to point out that these network policies also need to control traffic going to and coming from external networks and services. Nuage’s policy abstraction model makes it easy to declare fine-grained ingress/egress policies for applications. Kubernetes has a beta [Network Policy API](http://kubernetes.io/docs/user-guide/networkpolicies/) implemented using the Kubernetes Extensions API. Nuage implements this Network Policy API to address a wide variety of policy use cases, such as the following (a minimal sketch appears after the list):
+
+
+- Kubernetes Namespace isolation
+- Inter-Namespace policies
+- Policies between groups of Pods (Policy Groups) for Pods in same or different Namespaces
+- Policies between Kubernetes Pods/Namespaces and external Networks/Services
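+
+As a rough sketch of the pod-group use case, a beta-era Network Policy admitting traffic from one group of pods to another could look like this; the namespace, labels, and port are invented for the example:
+
+```
+apiVersion: extensions/v1beta1
+kind: NetworkPolicy
+metadata:
+  name: frontend-to-backend
+  namespace: production
+spec:
+  # The pods this policy protects.
+  podSelector:
+    matchLabels:
+      role: backend
+  ingress:
+  # Only pods labeled role=frontend may connect, and only on TCP 8080.
+  - from:
+    - podSelector:
+        matchLabels:
+          role: frontend
+    ports:
+    - protocol: TCP
+      port: 8080
+```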
+
+
+
+[](https://3.bp.blogspot.com/-jJK65zh2wE8/WIE5o3HkXFI/AAAAAAAAA7U/QkoCoYnTWAEz60H0nyP4_wN0tVG3WVWAwCEw/s1600/k8spolicy.png)
+
+A key question for users to consider is the scalability of the policy implementation. Some networking setups require creating access control list (ACL) entries telling Pods how they can interact with one another. In most cases, this eventually leads to an n-squared pileup of ACL entries. The Nuage platform avoids this problem and can quickly assign a policy that applies to a whole group of Pods. The Nuage platform implements these policies using a fully distributed stateful firewall based on OVS.
+
+Being able to monitor the traffic flowing between Kubernetes Pods is very useful to both development and operations teams. The Nuage platform’s real-time analytics engine enables visibility and security monitoring for Kubernetes applications. Users can get a visual representation of the traffic flows between groups of Pods, making it easy to see how the network policies are taking effect. Users can also get a rich set of traffic and policy statistics. Further, users can set alerts to be triggered based on policy event thresholds.
+
+
+[](https://4.bp.blogspot.com/-5VjajIIvq-A/WIE5qN2nsNI/AAAAAAAAA7U/mMfMQpeFvH85MHNbohJifEnW658l3w1agCEw/s1600/k8spolicy2.png)
+
+
+**Conclusion**
+
+Even though we started working on our integration with Kubernetes over a year ago, it feels we are just getting started. We have always felt that this is a truly open community and we want to be an integral part of it. You can find out more about our Kubernetes integration on our [GitHub page](https://github.com/nuagenetworks/nuage-kubernetes).
+
+
+_--Harmeet Sahni, Director of Product Management, Nuage Networks_
diff --git a/blog/_posts/2017-01-00-Stronger-Foundation-For-Creating-And-Managing-Kubernetes-Clusters.md b/blog/_posts/2017-01-00-Stronger-Foundation-For-Creating-And-Managing-Kubernetes-Clusters.md
new file mode 100644
index 00000000000..228367b56a7
--- /dev/null
+++ b/blog/_posts/2017-01-00-Stronger-Foundation-For-Creating-And-Managing-Kubernetes-Clusters.md
@@ -0,0 +1,107 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " A Stronger Foundation for Creating and Managing Kubernetes Clusters "
+date: Friday, January 12, 2017
+pagination:
+ enabled: true
+---
+_Editor's note: Today’s post is by Lucas Käldström, an independent Kubernetes maintainer and SIG-Cluster-Lifecycle member, sharing what the group has been building and what’s upcoming._
+
+Last time you heard from us was in September, when we announced [kubeadm](http://blog.kubernetes.io/2016/09/how-we-made-kubernetes-easy-to-install.html). The work on making kubeadm a first-class citizen in the Kubernetes ecosystem has continued and evolved. Some of us also met before KubeCon and had a very productive meeting where we talked about what the scopes for our SIG, kubeadm, and kops are.
+
+**Continuing to Define SIG-Cluster-Lifecycle**
+
+**What is the scope for kubeadm?**
+We want kubeadm to be a common set of building blocks for all Kubernetes deployments; the piece that provides secure and recommended ways to bootstrap Kubernetes. Since there is no one true way to set up Kubernetes, kubeadm will support more than one method for each phase. We want to identify the phases every deployment of Kubernetes has in common and make configurable and easy-to-use kubeadm commands for those phases. If your organization, for example, requires that you distribute the certificates in the cluster manually or in a custom way, skip using kubeadm just for that phase. We aim to keep kubeadm usable for all other phases in that case. We want you to be able to pick which things you want kubeadm to do and let you do the rest yourself.
+
+Therefore, the scope for kubeadm is to be easily extendable, modular and very easy to use. Right now, with this v1.5 release we have, kubeadm can only do the “full meal deal” for you. In future versions that will change as kubeadm becomes more componentized, while still leaving the opportunity to do everything for you. But kubeadm will still only handle the bootstrapping of Kubernetes; it won’t ever handle provisioning of machines for you since that can be done in many more ways. In addition, we want kubeadm to work everywhere, even on multiple architectures, therefore we built in [multi-architecture support](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multi-platform.md) from the beginning.
+
+**What is the scope for kops?**
+The scope for [kops](https://github.com/kubernetes/kops) is to automate full cluster operations: installation, reconfiguration of your cluster, upgrading kubernetes, and eventual cluster deletion. kops has a rich configuration model based on the Kubernetes API Machinery, so you can easily customize some parameters to your needs. kops (unlike kubeadm) handles provisioning of resources for you. kops aims to be the ultimate out-of-the-box experience on AWS (and perhaps other providers in the future). In the future kops will be adopting more and more of kubeadm for the bootstrapping phases that exist. This will move some of the complexity inside kops to a central place in the form of kubeadm.
+
+**What is the scope for SIG-Cluster-Lifecycle?**
+The [SIG-Cluster-Lifecycle](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle) actively tries to simplify the Kubernetes installation and management story. This is accomplished by modifying Kubernetes itself in many cases, and factoring out common tasks. We are also trying to address common problems in the cluster lifecycle (like the name says!). We maintain and are responsible for kubeadm and kops. We discuss problems with the current way to bootstrap clusters on AWS (and beyond) and try to make it easier. We hang out on Slack in the [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/) and #kubeadm channels. [We meet and discuss](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle) current topics once a week on Zoom. Feel free to come and say hi! Also, don’t be shy to [contribute](https://github.com/kubernetes/kubeadm/issues); we’d love your comments and insight!
+
+**Looking forward to v1.6**
+
+Our goals for v1.6 are centered around refactoring, stabilization and security.
+
+First and foremost, we want to get kubeadm and its composable configuration experience to beta. We will refactor kubeadm so each phase in the bootstrap process is invokable separately. We want to bring the TLS Bootstrap API, the Certificates API and the ComponentConfig API to beta, and to get kops (and other tools) using them.
+
+We will also graduate the token discovery we’re using now (aka the gcr.io/google_containers/kube-discovery:1.0 image) to beta by adding a new controller to the controller manager: the [BootstrapSigner](https://github.com/kubernetes/kubernetes/pull/36101). Using tokens managed as Secrets, that controller will sign the contents (a kubeconfig file) of a well-known ConfigMap in a new kube-public namespace. This object will be available to unauthenticated users in order to enable a secure bootstrap with a simple and short shared token. You can read the full proposal [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/bootstrap-discovery.md).
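+
+Based on that proposal, a bootstrap token stored as a Secret might look roughly like the sketch below; the token values are placeholders and the exact field names may still evolve before the feature lands:
+
+```
+apiVersion: v1
+kind: Secret
+metadata:
+  # The Secret name embeds the public token ID.
+  name: bootstrap-token-abcdef
+  namespace: kube-system
+type: bootstrap.kubernetes.io/token
+stringData:
+  token-id: abcdef
+  token-secret: 0123456789abcdef   # placeholder secret half of the token
+  usage-bootstrap-signing: "true"  # allow the BootstrapSigner to use this token
+```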
+
+In addition to making it possible to invoke phases separately, we will also add a new phase for bringing up the control plane in a self-hosted mode (as opposed to the current static pod technique). The self-hosted technique was developed by CoreOS in the form of [bootkube](https://github.com/kubernetes-incubator/bootkube), and will now be incorporated as an alternative into an official Kubernetes product. Thanks to CoreOS for pushing that paradigm forward! This will be done by first setting up a temporary control plane with static pods, injecting the Deployments, ConfigMaps and DaemonSets as necessary, and lastly turning down the temporary control plane. For now, etcd will still be in a static pod by default.
+
+We are supporting self hosting, initially, because we want to support doing patch release upgrades with kubeadm. It should be easy to upgrade from v1.6.2 to v1.6.4 for instance. We consider the built-in upgrade support a critical capability for a real cluster lifecycle tool. It will still be possible to upgrade without self-hosting but it will require more manual work.
+
+On the stabilization front, we want to start running kubeadm e2e tests. In this v1.5 timeframe, we added unit tests and we will continue to increase that coverage. We also want to expand this to per-PR e2e tests that spin up a cluster with _kubeadm init_ and _kubeadm join_, run some kubeadm-specific tests, and optionally the Conformance test suite.
+
+Finally, on the security front, we want kubeadm to be as secure as possible by default. We look to enable RBAC for v1.6, lock down what the kubelet and built-in services like kube-dns and kube-proxy can do, and maybe create specific user accounts that have different permissions.
+
+Regarding releasing, we want to have the official kubeadm v1.6 binary in the kubernetes v1.6 tarball. This means syncing our release with the official one. More details on what we’ve done so far can be found [here](https://groups.google.com/d/msg/kubernetes-sig-cluster-lifecycle/P2oh5iHWBsA/ePeoil78BAAJ). As it becomes possible, we aim to move the kubeadm code out to the kubernetes/kubeadm repo (This is blocked on some Kubernetes code-specific infrastructure issues that may take some time to resolve.)
+
+Nice-to-haves for v1.6 would include an official CoreOS Container Linux installer container that does what the debs/rpms are doing for Ubuntu/CentOS. In general, it would be nice to extend the distro support. We also want to adopt [Kubelet Dynamic Settings](https://github.com/kubernetes/kubernetes/pull/29459) so configuration passed to kubeadm init flows down to nodes automatically (it requires manual configuration currently). We want it to be possible to test Kubernetes from HEAD by using kubeadm.
+
+**Through 2017 and beyond**
+
+Apart from everything mentioned above, we want kubeadm to simply be a production grade (GA) tool you can use for bootstrapping a Kubernetes cluster. We want HA/multi-master to be much easier to achieve generally than it is now across platforms (though kops makes this easy on AWS today!). We want cloud providers to be out-of-tree and installable separately. _kubectl apply -f my-cloud-provider-here.yaml_ should just work. The documentation should be more robust and should go deeper. Container Runtime Interface (CRI) and Federation should work well with kubeadm. Outdated getting started guides should be removed so new users aren’t misled.
+
+**Refactoring the cloud provider integration plugins**
+Right now, the cloud provider integrations are built into the controller-manager, the kubelet and the API Server. This, combined with the ever-growing interest in Kubernetes, makes it unmaintainable to have the cloud provider integrations compiled into the core. Features that are clearly vendor-specific should not be a part of the core Kubernetes project; rather, they should be available as an addon from third-party vendors. Everything cloud-specific should be moved into one controller, or a few if there’s a need. This controller will be maintained by a third party (usually the company behind the integration) and will implement cloud-specific features. This migration from in-core to out-of-core is disruptive, yes, but it has very good side effects: a leaner core, making it possible for more than the seven existing clouds to be integrated with Kubernetes, and much easier installation. For example, you could run the cloud controller binary in a Deployment and install it with _kubectl apply_ easily.
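+
+To sketch what that could look like, a vendor's out-of-core cloud controller might ship as an ordinary Deployment along these lines; the image, binary name, and flag are hypothetical:
+
+```
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: example-cloud-controller
+  namespace: kube-system
+spec:
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        app: example-cloud-controller
+    spec:
+      containers:
+      - name: cloud-controller
+        image: example.com/cloud-controller:v0.1.0   # hypothetical vendor image
+        command:
+        - /cloud-controller
+        - --kubeconfig=/etc/kubernetes/kubeconfig    # hypothetical flag
+```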
+
+The plan for v1.6 is to make it possible to:
+
+
+- Create and run out-of-core cloud provider integration controllers
+- Ship a new and temporary binary in the Kubernetes release: the cloud-controller-manager. This binary will include the seven existing cloud providers and will serve as a way of validating, testing and migrating to the new flow.
+
+In a future release (v1.9 is proposed), the `--cloud-provider` flag will stop working, and the temporary cloud-controller-manager binary won’t be shipped anymore. Instead, a repository called something like kubernetes/cloud-providers will serve as a place for officially-validated cloud providers to evolve and exist, but all providers there will be independent of each other. (issue [#2770](https://github.com/kubernetes/kubernetes/issues/2770); proposal [#128](https://github.com/kubernetes/community/pull/128); code [#34273](https://github.com/kubernetes/kubernetes/pull/34273).)
+
+
+
+**Changelogs from v1.4 to v1.5**
+
+
+
+**kubeadm**
+
+v1.5 is a stabilization release for kubeadm. We’ve worked on making kubeadm more user-friendly, transparent and stable. Some new features have been added making it more configurable.
+
+
+
+Here’s a very short extract of what’s changed:
+
+- Made the _console output_ of kubeadm cleaner and more _user-friendly_ [#37568](https://github.com/kubernetes/kubernetes/pull/37568)
+- Implemented _kubeadm reset_ to drain and clean up a node [#34807](https://github.com/kubernetes/kubernetes/pull/34807) and [#37831](https://github.com/kubernetes/kubernetes/pull/37831)
+- _Preflight checks_ implementation that fails fast if the environment is invalid [#34341](https://github.com/kubernetes/kubernetes/pull/34341) and [#36334](https://github.com/kubernetes/kubernetes/pull/36334)
+- _kubectl logs_ and _kubectl exec_ can now be used with kubeadm [#37568](https://github.com/kubernetes/kubernetes/pull/37568)
+- and a lot of other improvements, please read the full [changelog](https://github.com/kubernetes/kubeadm/blob/master/CHANGELOG.md).
+
+
+
+**kops**
+
+Here’s a short extract of what’s changed:
+
+- Support for CNI network plugins (Weave, Calico, Kope.io)
+- Fully private deployments, where nodes and masters do not have public IPs
+- Improved rolling update of clusters, in particular of HA clusters
+- OS support for CentOS / RHEL / Ubuntu along with Debian, and support for sysdig & perf tools
+
+Go and check out the [kops releases page](https://github.com/kubernetes/kops/releases) in order to get information about the latest and greatest kops release.
+
+
+
+**Summary**
+
+
+
+In short, we're excited about the roadmap ahead and about bringing these improvements to you in the coming releases, which we hope will make the getting-started experience much easier and lead to increased adoption of Kubernetes.
+
+
+
+Thank you for all the feedback and contributions. I hope this has given you some insight into what we’re doing and encouraged you to join us at our meetings to say hi!
+
+
+
+_-- [Lucas Käldström](https://twitter.com/kubernetesonarm), Independent Kubernetes maintainer and SIG-Cluster-Lifecycle member_
diff --git a/blog/_posts/2017-02-00-Caas-The-Foundation-For-Next-Gen-Paas.md b/blog/_posts/2017-02-00-Caas-The-Foundation-For-Next-Gen-Paas.md
new file mode 100644
index 00000000000..9067293be89
--- /dev/null
+++ b/blog/_posts/2017-02-00-Caas-The-Foundation-For-Next-Gen-Paas.md
@@ -0,0 +1,52 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Containers as a Service, the foundation for next generation PaaS "
+date: Wednesday, February 21, 2017
+pagination:
+ enabled: true
+---
+
+_Today’s post is by Brendan Burns, Partner Architect, at Microsoft & Kubernetes co-founder._
+
+
+
+Containers are revolutionizing the way that people build, package and deploy software. But what is often overlooked is how they are revolutionizing the way that people build the software that builds, packages and deploys software. (it’s ok if you have to read that sentence twice…) Today, and in a talk at [Container World](https://tmt.knect365.com/container-world/) tomorrow, I’m taking a look at how container orchestrators like Kubernetes form the foundation for next generation platform as a service (PaaS). In particular, I’m interested in how cloud container as a service (CaaS) platforms like [Azure Container Service](https://azure.microsoft.com/en-us/services/container-service/), [Google Container Engine](https://cloud.google.com/container-engine/) and [others](https://kubernetes.io/docs/getting-started-guides/#hosted-solutions) are becoming the new infrastructure layer that PaaS is built upon.
+
+To see this, it’s important to consider the set of services that have traditionally been provided by PaaS platforms:
+
+
+- Source code and executable packaging and distribution
+- Reliable, zero-downtime rollout of software versions
+- Healing, auto-scaling, load balancing
+
+When you look at this list, it’s clear that most of these traditional “PaaS” roles have now been taken over by containers. The container image and container image build tooling has become the way to package up your application. [Container registries](https://kubernetes.io/docs/user-guide/images/#using-a-private-registry) have become the way to distribute your application across the world. Reliable software rollout is achieved using orchestrator concepts like [Deployment](https://kubernetes.io/docs/user-guide/deployments/#what-is-a-deployment) in Kubernetes, and service healing, auto-scaling and load-balancing are all properties of an application deployed in Kubernetes using [ReplicaSets](https://kubernetes.io/docs/user-guide/replicasets/#what-is-a-replicaset) and [Services](https://kubernetes.io/docs/user-guide/services/).
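+
+A sketch makes the point: the handful of lines below (names and image are placeholders, and the API group shown is the one current as of Kubernetes 1.5) buy you packaging, zero-downtime rollout, healing, scaling and load balancing:
+
+```
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: web                # hypothetical application name
+spec:
+  replicas: 3              # healing and scaling: 3 replicas are kept running
+  template:
+    metadata:
+      labels:
+        app: web
+    spec:
+      containers:
+      - name: web
+        image: nginx:1.10  # the container image is the packaging format
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: web
+spec:
+  selector:
+    app: web               # load balancing across the replicas
+  ports:
+  - port: 80
+```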
+
+What then is left for PaaS? Is PaaS going to be replaced by container as a service? I think the answer is “no.” The piece that is left for PaaS is the part that was always the most important part of PaaS in the first place, and that’s the opinionated developer experience. In addition to all of the generic parts of PaaS that I listed above, the most important part of a PaaS has always been the way in which the developer experience and application framework made developers more productive within the boundaries of the platform. PaaS enables developers to go from source code on their laptop to a world-wide scalable service in less than an hour. That’s hugely powerful.
+
+However, in the world of traditional PaaS, building the PaaS infrastructure itself, the software on which the user’s software ran, required very strong skills and experience with distributed systems. Consequently, PaaS tended to be built by distributed system engineers rather than experts in a particular vertical developer experience. This meant that PaaS platforms tended towards general-purpose infrastructure rather than targeting specific verticals. Recently, we have seen this start to change, first with PaaS targeted at mobile API backends, and later with PaaS targeting “function as a service”. However, these products were still built from the ground up on top of raw infrastructure.
+
+More recently, we are starting to see these platforms built on top of container infrastructure. Taking “function as a service” as an example, there are at least two (and likely more) open source implementations of functions as a service that run on top of Kubernetes ([fission](https://github.com/fission/fission) and [funktion](https://github.com/funktionio/funktion/)). This trend will only continue. Building a platform as a service on top of container as a service is easy enough that you could imagine giving it out as an undergraduate computer science assignment. This ease of development means that individual developers with specific expertise in a vertical (say, software for running three-dimensional simulations) can and will build PaaS platforms targeted at that specific vertical experience. In turn, by targeting such a narrow experience, they will build an experience that fits that narrow vertical perfectly, making their solution a compelling one in that target market.
+
+This then points to the other benefit of next generation PaaS being built on top of container as a service: it frees the developer from having to make an “all-in” choice on a particular PaaS platform. When layered on top of container as a service, the basic functionality (naming, discovery, packaging, etc.) is provided by the CaaS, and is thus common across the multiple PaaS that happen to be deployed on top of that CaaS. This means that developers can mix and match, deploying multiple PaaS to the same container infrastructure and choosing for each application the PaaS platform that best suits it. Also, importantly, they can choose to “drop down” to raw CaaS infrastructure if that is a better fit for their application. Freeing PaaS from providing the infrastructure layer enables PaaS to diversify and target specific experiences without fear of being too narrow. The experiences become more targeted, more powerful, and yet, by building on top of container as a service, more flexible as well.
+
+Kubernetes is infrastructure for next generation applications, PaaS and more. Given this, I’m really excited by our [announcement](https://azure.microsoft.com/en-us/blog/kubernetes-now-generally-available-on-azure-container-service/) today that Kubernetes on Azure Container Service has reached general availability. When you deploy your next generation application to Azure, whether on a PaaS or deployed directly onto Kubernetes itself (or both), you can deploy it onto a managed, supported Kubernetes cluster.
+
+Furthermore, because we know that the world of PaaS and software development in general is a hybrid one, we’re excited to announce the preview availability of [Windows clusters in Azure Container Service](https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough). We’re also working on [hybrid clusters](https://github.com/Azure/acs-engine/blob/master/docs/kubernetes.windows.md) in [ACS-Engine](https://github.com/Azure/acs-engine) and expect to roll those out to general availability in the coming months.
+
+I’m thrilled to see how containers and container as a service are changing the world of compute, and I’m confident that we’re only scratching the surface of the transformation we’ll see in the coming months and years.
+
+
+
+
+
+_--[Brendan Burns](https://twitter.com/brendandburns), Partner Architect, at Microsoft and co-founder of Kubernetes_
+
+
+
+
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- [Download](http://get.k8s.io/) Kubernetes
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2017-02-00-Highly-Available-Kubernetes-Clusters.md b/blog/_posts/2017-02-00-Highly-Available-Kubernetes-Clusters.md
new file mode 100644
index 00000000000..b30c1231b13
--- /dev/null
+++ b/blog/_posts/2017-02-00-Highly-Available-Kubernetes-Clusters.md
@@ -0,0 +1,332 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Highly Available Kubernetes Clusters "
+date: Friday, February 02, 2017
+pagination:
+ enabled: true
+---
+
+Today’s post shows how to set up a reliable, highly available distributed Kubernetes cluster. The support for running such clusters on Google Compute Engine (GCE) was added as an alpha feature in the [Kubernetes 1.5 release](http://blog.kubernetes.io/2016/12/kubernetes-1.5-supporting-production-workloads.html).
+
+**Motivation**
+
+We will create a Highly Available Kubernetes cluster, with master replicas and worker nodes distributed among three zones of a region. Such a setup ensures that the cluster will continue operating during a zone failure.
+
+**Setting Up HA cluster**
+
+The following instructions apply to GCE. First, we will set up a cluster that spans one zone (europe-west1-b), contains one master and three worker nodes, and is HA-compatible (allowing more master replicas and more worker nodes to be added in multiple zones in the future). To implement this, we’ll export the following environment variables:
+
+
+ ```
+$ export KUBERNETES_PROVIDER=gce
+$ export NUM_NODES=3
+$ export MULTIZONE=true
+$ export ENABLE_ETCD_QUORUM_READ=true
+ ```
+
+
+
+
+and run the kube-up script (note that the entire cluster will initially be placed in zone europe-west1-b):
+
+
+ ```
+$ KUBE_GCE_ZONE=europe-west1-b ./cluster/kube-up.sh
+ ```
+
+
+
+Now, we will add two additional pools of worker nodes, each with three nodes, in zones europe-west1-c and europe-west1-d (more details on adding pools of worker nodes can be found [here](http://kubernetes.io/docs/admin/multiple-zones/)):
+
+
+ ```
+$ KUBE_USE_EXISTING_MASTER=true KUBE_GCE_ZONE=europe-west1-c ./cluster/kube-up.sh
+$ KUBE_USE_EXISTING_MASTER=true KUBE_GCE_ZONE=europe-west1-d ./cluster/kube-up.sh
+ ```
+
+
+
+To complete the setup of the HA cluster, we will add two master replicas, one in zone europe-west1-c and the other in europe-west1-d:
+
+
+
+
+ ```
+$ KUBE_GCE_ZONE=europe-west1-c KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up.sh
+$ KUBE_GCE_ZONE=europe-west1-d KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up.sh
+ ```
+
+
+
+Note that adding the first replica will take longer (~15 minutes), as we need to reassign the IP of the master to the load balancer in front of replicas and wait for it to propagate (see [design doc](https://github.com/kubernetes/kubernetes/blob/master/docs/design/ha_master.md) for more details).
+
+
+
+**Verifying that the HA cluster works as intended**
+
+
+
+We may now list all nodes present in the cluster:
+
+
+ ```
+$ kubectl get nodes
+NAME                           STATUS                     AGE
+kubernetes-master              Ready,SchedulingDisabled   48m
+kubernetes-master-2d4          Ready,SchedulingDisabled   5m
+kubernetes-master-85f          Ready,SchedulingDisabled   32s
+kubernetes-minion-group-6s52   Ready                      39m
+kubernetes-minion-group-cw8e   Ready                      48m
+kubernetes-minion-group-fw91   Ready                      48m
+kubernetes-minion-group-h2kn   Ready                      31m
+kubernetes-minion-group-ietm   Ready                      39m
+kubernetes-minion-group-j6lf   Ready                      31m
+kubernetes-minion-group-soj7   Ready                      31m
+kubernetes-minion-group-tj82   Ready                      39m
+kubernetes-minion-group-vd96   Ready                      48m
+ ```
+
+
+
+As we can see, we have 3 master replicas (with disabled scheduling) and 9 worker nodes.
+
+
+
+We will deploy a sample application (nginx server) to verify that our cluster is working correctly:
+
+
+ ```
+$ kubectl run nginx --image=nginx --expose --port=80
+ ```
+
+
+
+After waiting for a while, we can verify that both the deployment and the service were correctly created and are running:
+
+
+ ```
+$ kubectl get pods
+NAME                     READY     STATUS    RESTARTS   AGE
+...
+nginx-3449338310-m7fjm   1/1       Running   0          4s
+...
+
+$ kubectl run -i --tty test-a --image=busybox /bin/sh
+If you don't see a command prompt, try pressing enter.
+# wget -q -O- http://nginx.default.svc.cluster.local
+...
+Welcome to nginx!
+...
+ ```
+
+
+
+Now, let’s simulate failure of one of the master replicas by executing the halt command on it (kubernetes-master-2d4, zone europe-west1-c):
+
+
+ ```
+$ gcloud compute ssh kubernetes-master-2d4 --zone=europe-west1-c
+...
+$ sudo halt
+ ```
+
+
+
+After a while the master replica will be marked as NotReady:
+
+
+ ```
+$ kubectl get nodes
+NAME                    STATUS                        AGE
+kubernetes-master       Ready,SchedulingDisabled      51m
+kubernetes-master-2d4   NotReady,SchedulingDisabled   8m
+kubernetes-master-85f   Ready,SchedulingDisabled      4m
+...
+ ```
+
+
+
+However, the cluster is still operational. We may verify it by checking if our nginx server works correctly:
+
+
+
+
+ ```
+$ kubectl run -i --tty test-b --image=busybox /bin/sh
+If you don't see a command prompt, try pressing enter.
+# wget -q -O- http://nginx.default.svc.cluster.local
+...
+Welcome to nginx!
+...
+ ```
+
+
+
+We may also run another nginx server:
+
+
+
+
+ ```
+$ kubectl run nginx-next --image=nginx --expose --port=80
+ ```
+
+
+
+The new server should also be working correctly:
+
+
+
+
+ ```
+$ kubectl run -i --tty test-c --image=busybox /bin/sh
+If you don't see a command prompt, try pressing enter.
+# wget -q -O- http://nginx-next.default.svc.cluster.local
+...
+Welcome to nginx!
+...
+ ```
+
+
+
+Let’s now restart the broken replica:
+
+
+
+
+ ```
+$ gcloud compute instances start kubernetes-master-2d4 --zone=europe-west1-c
+ ```
+
+
+
+After a while, the replica should be re-attached to the cluster:
+
+
+
+
+ ```
+$ kubectl get nodes
+NAME                    STATUS                     AGE
+kubernetes-master       Ready,SchedulingDisabled   57m
+kubernetes-master-2d4   Ready,SchedulingDisabled   13m
+kubernetes-master-85f   Ready,SchedulingDisabled   9m
+...
+ ```
+
+
+
+**Shutting down the HA cluster**
+
+
+
+To shut down the cluster, we will first shut down the master replicas in zones europe-west1-c and europe-west1-d:
+
+
+
+
+ ```
+$ KUBE_DELETE_NODES=false KUBE_GCE_ZONE=europe-west1-c ./cluster/kube-down.sh
+$ KUBE_DELETE_NODES=false KUBE_GCE_ZONE=europe-west1-d ./cluster/kube-down.sh
+ ```
+
+
+
+Note that removing the second replica will take longer (~15 minutes), as we need to reassign the IP of the load balancer in front of the replicas to the remaining master and wait for it to propagate (see the [design doc](https://github.com/kubernetes/kubernetes/blob/master/docs/design/ha_master.md) for more details).
+
+
+
+Then, we will remove the additional worker nodes from zones europe-west1-c and europe-west1-d:
+
+
+ ```
+$ KUBE\_USE\_EXISTING\_MASTER=true KUBE\_GCE\_ZONE=europe-west1-c ./cluster/kube-down.sh
+
+
+$ KUBE\_USE\_EXISTING\_MASTER=true KUBE\_GCE\_ZONE=europe-west1-d ./cluster/kube-down.sh
+ ```
+
+
+
+And finally, we will shut down the remaining master with the last group of nodes (zone europe-west1-b):
+
+
+
+
+ ```
+$ KUBE\_GCE\_ZONE=europe-west1-b ./cluster/kube-down.sh
+ ```
+
+
+
+**Conclusions**
+
+
+
+We have shown how a Highly Available Kubernetes cluster can be created by adding worker node pools and master replicas. As of Kubernetes version 1.5.2, it is supported in the kube-up/kube-down scripts for GCE (as alpha). Additionally, there is support for HA clusters on AWS in the kops scripts (see [this article](http://kubecloud.io/setup-ha-k8s-kops/) for more details).
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+
+
+
+_--Jerzy Szczepkowski, Software Engineer, Google_
diff --git a/blog/_posts/2017-02-00-Inside-Jd-Com-Shift-To-Kubernetes-From-Openstack.md b/blog/_posts/2017-02-00-Inside-Jd-Com-Shift-To-Kubernetes-From-Openstack.md
new file mode 100644
index 00000000000..a928b771279
--- /dev/null
+++ b/blog/_posts/2017-02-00-Inside-Jd-Com-Shift-To-Kubernetes-From-Openstack.md
@@ -0,0 +1,135 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Inside JD.com's Shift to Kubernetes from OpenStack "
+date: Saturday, February 10, 2017
+pagination:
+ enabled: true
+---
+_Editor's note: Today’s post is by the Infrastructure Platform Department team at JD.com about their transition from OpenStack to Kubernetes. JD.com is one of China’s largest companies and the first Chinese Internet company to make the Global Fortune 500 list._
+
+
+
+
+**History of cluster building**
+
+**The era of physical machines (2004-2014)**
+
+Before 2014, our company's applications were all deployed on physical machines. In the age of physical machines, we needed to wait an average of one week for resources to be allocated before an application could come online. Due to the lack of isolation, applications would affect each other, resulting in a lot of potential risks. At that time, the average number of Tomcat instances on each physical machine was no more than nine. Physical machine resources were seriously wasted and scheduling was inflexible. Application migration took hours due to physical machine breakdowns, and auto-scaling could not be achieved. To enhance the efficiency of application deployment, we developed compilation-packaging, automatic deployment, log collection, resource monitoring and some other systems.
+
+**Containerized era (2014-2016)**
+
+The Infrastructure Platform Department ([IPD](https://github.com/ipdcode)), led by Liu Haifeng, Chief Architect of JD.COM, sought a new solution in the fall of 2014. Docker came onto our radar. At that time, Docker was on the rise but still somewhat immature, and it lacked production experience. We tested Docker repeatedly. In addition, we customized Docker to fix a couple of issues, such as a system crash caused by device mapper and some Linux kernel bugs. We also added plenty of new features to Docker, including a disk speed limit, capacity management, and layer merging in image building.
+
+To manage the container cluster properly, we chose the OpenStack + Novadocker driver architecture. Containers are managed as virtual machines. This is known as the first generation of the JD container engine platform--JDOS 1.0 (JD Datacenter Operating System). The main purpose of JDOS 1.0 is to containerize the infrastructure; since then, all applications have run in containers rather than on physical machines. As for the operation and maintenance of applications, we took full advantage of existing tools. The time for developers to request computing resources in the production environment was reduced to several minutes rather than a week. After the pooling of computing resources, even scaling to 1,000 containers could be finished in seconds. Application instances are isolated from each other, and both the average deployment density of applications and the physical machine utilization increased by three times, which brought great economic benefits.
+
+We deployed clusters in each IDC and provided unified global APIs to support deployment across IDCs. A single OpenStack distributed container cluster in our production environment has at most 10,000 and at least 4,000 compute nodes. The first generation of the container engine platform (JDOS 1.0) successfully supported the “6.18” and “11.11” promotional activities in both 2015 and 2016. There were already 150,000 containers running online by November 2016.
+
+_“6.18” and “11.11” are the two most popular online promotions of JD.COM, similar to Black Friday promotions. Fulfilled orders on November 11, 2016 reached 30 million._
+
+In the practice of developing and promoting JDOS 1.0, applications were migrated directly from physical machines to containers. Essentially, JDOS 1.0 was an implementation of IaaS, so the deployment of applications was still heavily dependent on the compilation-packaging and automatic deployment tools. However, the practice of JDOS 1.0 was very meaningful. First, we successfully moved business into containers. Second, we gained a deep understanding of container networking and storage, and learned how to polish them to the best. Finally, all these experiences laid a solid foundation for us to develop a brand new application container platform.
+
+**New container engine platform (JDOS 2.0)**
+
+**Platform architecture**
+
+When JDOS 1.0 grew from 2,000 containers to 100,000, we launched a new container engine platform (JDOS 2.0). The goal of JDOS 2.0 is not just an infrastructure management platform, but an application-oriented container engine platform. On the basis of JDOS 1.0 and Kubernetes, JDOS 2.0 integrates the storage and networking of JDOS 1.0 and gets through the CI/CD process from source code to image and, finally, to deployment. JDOS 2.0 also provides one-stop services such as logging, monitoring, troubleshooting, terminal access and orchestration. The platform architecture of JDOS 2.0 is shown below.
+
+
+
+
+
+|Function |Product |
+|--|--|
+|Source Code Management |Gitlab |
+|Container Tool |Docker |
+|Container Networking |Cane |
+|Container Engine |Kubernetes |
+|Image Registry |Harbor |
+|CI Tool |Jenkins |
+|Log Management |Logstash + Elastic Search |
+|Monitor |Prometheus |
+{:.post-table}
+
+In JDOS 2.0, we define two levels: system and application. A system consists of several applications, and an application consists of several Pods which provide the same service. In general, a department can apply for one or more systems, and each system directly corresponds to a Kubernetes namespace. This means that the Pods of the same system will be in the same namespace.
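+
+As a hypothetical example, a department granted an "order" system would work inside the matching namespace:
+
+```
+# hypothetical system named "order"
+$ kubectl create namespace order
+# Pods of every application belonging to the "order" system land there
+$ kubectl get pods --namespace=order
+```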
+
+Most of the JDOS 2.0 components (GitLab / Jenkins / Harbor / Logstash / Elastic Search / Prometheus) are also containerized and deployed on the Kubernetes platform.
+
+**One Stop Solution**
+
+
+
+
+
+
+
+1. JDOS 2.0 takes the docker image as the core to implement continuous integration and continuous deployment.
+2. The developer pushes code to git.
+3. Git triggers the jenkins master to generate a build job.
+4. The jenkins master invokes Kubernetes to create a jenkins slave Pod.
+5. The jenkins slave pulls the source code, compiles and packs it.
+6. The jenkins slave sends the package and the Dockerfile to the image build node with docker.
+7. The image build node builds the image.
+8. The image build node pushes the image to the image registry Harbor.
+9. The user creates or updates app Pods in different zones.
+
+The docker image in JDOS 1.0 consisted primarily of the operating system and the runtime software stack of the application, so the deployment of applications was still dependent on auto-deployment and some other tools. In JDOS 2.0, by contrast, the deployment of the application is done during image building, and the image contains the complete software stack, including the application. With the image, we can achieve the goal of running applications as designed in any environment.
+
+ 
+
+
+
+**Networking and External Service Load Balancing**
+
+
+
+JDOS 2.0 adopts the network solution of JDOS 1.0, which is implemented with the VLAN model of OpenStack Neutron. This solution enables highly efficient communication between containers, making it ideal for a cluster environment within a company. Each Pod occupies a port in Neutron, with a separate IP. Based on the Container Network Interface ([CNI](https://github.com/containernetworking/cni)) standard, we have developed a new project, Cane, for integrating the kubelet and Neutron.
+
+
+
+ 
+
+
+
+
+
+At the same time, Cane is also responsible for managing the LoadBalancers of Kubernetes Services. When a LoadBalancer is created/deleted/modified, Cane calls the corresponding create/remove/modify interface of the LBaaS service in Neutron. In addition, the Hades component in the Cane project provides an internal DNS resolution service for Pods.
+
+_The source code of the Cane project is currently being finished and will be released on GitHub soon._
+
+
+
+**Flexible Scheduling**
+
+
+
+
+
+JDOS 2.0 hosts many types of applications, including big data, web applications and deep learning, and takes more diverse and flexible scheduling approaches. In some IDCs, we experimented with the mixed deployment of online and offline tasks. Compared to JDOS 1.0, overall resource utilization increased by about 30%.
+
+
+
+**Summary**
+
+
+
+The rich functionality of Kubernetes allows us to pay more attention to the entire ecosystem of the platform, such as network performance, rather than the platform itself. In particular, the SREs highly appreciated the replication controller functionality: with it, the scaling of applications is achieved in several seconds. JDOS 2.0 now hosts about 20% of our applications, with 2 clusters deployed and about 20,000 Pods running daily. We plan to migrate more of our company's applications onto the platform, replacing the current JDOS 1.0, and we are glad to share our experience in this process with the community.
+
+
+
+Thank you to all the contributors of Kubernetes and the other open source projects.
+
+
+
+
+_--Infrastructure Platform Department team at JD.com_
+
+
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- [Download](http://get.k8s.io/) Kubernetes
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2017-02-00-Postgresql-Clusters-Kubernetes-Statefulsets.md b/blog/_posts/2017-02-00-Postgresql-Clusters-Kubernetes-Statefulsets.md
new file mode 100644
index 00000000000..e4386ee7c6a
--- /dev/null
+++ b/blog/_posts/2017-02-00-Postgresql-Clusters-Kubernetes-Statefulsets.md
@@ -0,0 +1,314 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Deploying PostgreSQL Clusters using StatefulSets "
+date: Saturday, February 24, 2017
+pagination:
+ enabled: true
+---
+_Editor’s note: Today’s guest post is by Jeff McCormick, a developer at Crunchy Data, showing how to build a PostgreSQL cluster using the new Kubernetes StatefulSet feature._
+
+In an earlier [post](http://blog.kubernetes.io/2016/09/creating-postgresql-cluster-using-helm.html), I described how to deploy a PostgreSQL cluster using [Helm](https://github.com/kubernetes/helm), a Kubernetes package manager. The following example provides the steps for building a PostgreSQL cluster using the new Kubernetes [StatefulSets](https://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) feature.
+
+**StatefulSets Example**
+
+**Step 1** - Create Kubernetes Environment
+
+StatefulSets is a new feature implemented in [Kubernetes 1.5](http://blog.kubernetes.io/2016/12/kubernetes-1.5-supporting-production-workloads.html) (in prior versions it was known as PetSets). As a result, running this example requires an environment based on Kubernetes 1.5.0 or above.
+
+The example in this blog deploys on CentOS 7 using [kubeadm](https://kubernetes.io/docs/admin/kubeadm/). Some instructions on what kubeadm provides and how to deploy a Kubernetes cluster are located [here](http://linoxide.com/containers/setup-kubernetes-kubeadm-centos).
+
+**Step 2** - Install NFS
+
+The example in this blog uses NFS for the Persistent Volumes, but any shared file system would also work (e.g., Ceph, Gluster).
+
+The example script assumes your NFS server is running locally and your hostname resolves to a known IP address.
+
+In summary, the steps used to get NFS working on a CentOS 7 host are as follows:
+
+
+
+```
+sudo setsebool -P virt_use_nfs 1
+sudo yum -y install nfs-utils libnfsidmap
+sudo systemctl enable rpcbind nfs-server
+sudo systemctl start rpcbind nfs-server rpc-statd nfs-idmapd
+sudo mkdir /nfsfileshare
+sudo chmod 777 /nfsfileshare/
+sudo vi /etc/exports
+sudo exportfs -r
+```
+
+
+
+The /etc/exports file should contain a line similar to this one except with the applicable IP address specified:
+
+
+
+```
+/nfsfileshare 192.168.122.9(rw,sync)
+ ```
+
+
+
+After these steps NFS should be running in the test environment.
+
+
+
+**Step 3** - Clone the Crunchy PostgreSQL Container Suite
+
+
+
+The example used in this blog is found in the Crunchy Containers GitHub repo [here](https://github.com/CrunchyData/crunchy-containers.git). Clone the Crunchy Containers repository to your test Kubernetes host and go to the example:
+
+
+
+```
+cd $HOME
+git clone https://github.com/CrunchyData/crunchy-containers.git
+cd crunchy-containers/examples/kube/statefulset
+```
+
+
+
+Next, pull down the Crunchy PostgreSQL container image:
+
+
+
+```
+docker pull crunchydata/crunchy-postgres:centos7-9.5-1.2.6
+ ```
+
+
+
+**Step 4** - Run the Example
+
+
+
+To begin, it is necessary to set a few of the environment variables used in the example:
+
+
+
+```
+export BUILDBASE=$HOME/crunchy-containers
+export CCP_IMAGE_TAG=centos7-9.5-1.2.6
+```
+
+
+
+BUILDBASE is where you cloned the repository and CCP_IMAGE_TAG is the container image version we want to use.
+
+
+
+Next, run the example:
+
+
+
+```
+./run.sh
+ ```
+
+
+
+That script will create several Kubernetes objects including:
+
+- Persistent Volumes (pv1, pv2, pv3)
+- Persistent Volume Claim (pgset-pvc)
+- Service Account (pgset-sa)
+- Services (pgset, pgset-master, pgset-replica)
+- StatefulSet (pgset)
+- Pods (pgset-0, pgset-1)
+
+At this point, two pods will be running in the Kubernetes environment:
+
+
+
+```
+$ kubectl get pod
+NAME      READY     STATUS    RESTARTS   AGE
+pgset-0   1/1       Running   0          2m
+pgset-1   1/1       Running   1          2m
+```
+
+
+
+Immediately after the pods are created, the deployment will be as depicted below:
+
+
+
+
+**Step 5** - What Just Happened?
+
+
+
+This example will deploy a StatefulSet, which in turn creates two pods.
+
+
+
+The containers in those two pods run the PostgreSQL database. For a PostgreSQL cluster, we need one of the containers to assume the master role and the other containers to assume the replica role.
+
+
+
+So, how do the containers determine who will be the master, and who will be the replica?
+
+
+
+This is where the new StatefulSet mechanics come into play. The StatefulSet mechanics assign a unique ordinal value to each pod in the set.
+
+
+
+The unique ordinal values provided by the StatefulSet always start with 0. During the initialization of the container, each container examines its assigned ordinal value. An ordinal value of 0 causes the container to assume the master role within the PostgreSQL cluster. For all other ordinal values, the container assumes a replica role. This is a very simple form of discovery made possible by the StatefulSet mechanics.
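+
+A minimal sketch of that discovery (not the actual Crunchy container logic): since StatefulSet pod hostnames end in the ordinal, the role can be derived from the hostname alone:
+
+```
+# hostname follows the StatefulSet convention <setname>-<ordinal>, e.g. pgset-0
+ORDINAL=${HOSTNAME##*-}
+if [ "$ORDINAL" -eq 0 ]; then
+    echo "ordinal 0: assuming the master role"
+else
+    echo "ordinal $ORDINAL: assuming a replica role"
+fi
+```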
+
+
+
+PostgreSQL replicas are configured to connect to the master database via a Service dedicated to the master database. In order to support this replication, the example creates a separate Service for each of the master role and the replica role. Once the replica has connected, the replica will begin replicating state from the master.
+
+
+
+During container initialization, a master container will use a [Service Account](https://kubernetes.io/docs/user-guide/service-accounts/) (pgset-sa) to change its container label value to match the master Service selector. Changing the label is important to enable traffic destined for the master database to reach the correct container within the StatefulSet. All other pods in the set assume the replica Service label by default.
+
+
+
+**Step 6** - Deployment Diagram
+
+
+
+The example results in a deployment depicted below:
+
+ 
+
+In this deployment, there is a Service for the master and a separate Service for the replica. The replica is connected to the master and replication of state has started.
+
+
+
+The Crunchy PostgreSQL container supports other forms of cluster deployment; the style of deployment is dictated by setting the PG_MODE environment variable for the container. In the case of a StatefulSet deployment, that value is set to PG_MODE=set.
+
+
+
+This environment variable is a hint to the container initialization logic as to the style of deployment we intend.
+
+
+
+**Step 7** - Testing the Example
+
+
+
+The tests below assume that the psql client has been installed on the test system. If not, it can be installed as follows:
+
+
+
+```
+sudo yum -y install postgresql
+ ```
+
+
+
+In addition, the tests below assume that the test environment's DNS resolves to the Kube DNS and that the DNS search path is specified to match the applicable Kube namespace and domain. The master service is named pgset-master and the replica service is named pgset-replica.
+
+
+
+Test the master as follows (the password is password):
+
+
+
+```
+psql -h pgset-master -U postgres postgres -c 'table pg_stat_replication'
+ ```
+
+
+
+If things are working, the command above will return output indicating that a single replica is connecting to the master.
+
+
+
+Next, test the replica as follows:
+
+
+
+```
+psql -h pgset-replica -U postgres postgres -c 'create table foo (id int)'
+ ```
+
+
+
+The command above should fail as the replica is **read-only** within a PostgreSQL cluster.
+
+
+
+Next, scale up the set as follows:
+
+
+
+```
+kubectl scale statefulset pgset --replicas=3
+ ```
+
+
+
+The command above should successfully create a new replica pod called **pgset-2** as depicted below:
+
+ 
+
+
+
+
+
+**Step 8** - Persistence Explained
+
+
+
+Take a look at the persisted PostgreSQL data files on the resulting NFS mount path:
+
+
+
+```
+$ ls -l /nfsfileshare/
+total 12
+drwx------ 20 26 26 4096 Jan 17 16:35 pgset-0
+drwx------ 20 26 26 4096 Jan 17 16:35 pgset-1
+drwx------ 20 26 26 4096 Jan 17 16:48 pgset-2
+```
+
+
+
+Each container in the stateful set binds to the single NFS Persistent Volume Claim (pgset-pvc) created in the example script.
+
+
+
+Since NFS and the PVC can be shared, each pod can write to this NFS path.
+
+
+
+The container is designed to create a subdirectory on that path using the pod host name for uniqueness.
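+
+For reference, a sketch of what one of the NFS-backed Persistent Volumes might look like, assuming the NFS export configured in Step 2 (the capacity is illustrative; the example script defines the real objects):
+
+```
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: pv1
+spec:
+  capacity:
+    storage: 1Gi             # illustrative size
+  accessModes:
+  - ReadWriteMany            # shared access lets every pod mount the same volume
+  nfs:
+    server: 192.168.122.9    # the NFS server address from /etc/exports
+    path: /nfsfileshare
+```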
+
+
+
+**Conclusion**
+
+
+
+StatefulSets is an exciting feature added to Kubernetes for container builders who are implementing clustering. The ordinal values assigned to the set provide a very simple mechanism to make clustering decisions when deploying a PostgreSQL cluster.
+
+
+
+
+
+_--Jeff McCormick, Developer, [Crunchy Data](http://crunchydata.com/)_
diff --git a/blog/_posts/2017-02-00-Run-Deep-Learning-With-Paddlepaddle-On-Kubernetes.md b/blog/_posts/2017-02-00-Run-Deep-Learning-With-Paddlepaddle-On-Kubernetes.md
new file mode 100644
index 00000000000..4fe9a6898ff
--- /dev/null
+++ b/blog/_posts/2017-02-00-Run-Deep-Learning-With-Paddlepaddle-On-Kubernetes.md
@@ -0,0 +1,172 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Run Deep Learning with PaddlePaddle on Kubernetes "
+date: Thursday, February 08, 2017
+pagination:
+ enabled: true
+---
+
+_Editor's note: Today's post is a joint post from the deep learning team at Baidu and the etcd team at CoreOS._
+
+
+
+
+
+
+
+**What is PaddlePaddle**
+
+PaddlePaddle is an easy-to-use, efficient, flexible and scalable deep learning platform originally developed at Baidu, where it has been applied to Baidu products since 2014.
+
+There have been more than 50 innovations created using PaddlePaddle supporting 15 Baidu products ranging from the search engine, online advertising, to Q&A and system security.
+
+In September 2016, Baidu open sourced [PaddlePaddle](https://github.com/PaddlePaddle/Paddle), and it soon attracted many contributors from outside of Baidu.
+
+**Why Run PaddlePaddle on Kubernetes**
+
+PaddlePaddle is designed to be slim and independent of computing infrastructure. Users can run it on top of Hadoop, Spark, Mesos, Kubernetes and others. We have a strong interest in Kubernetes because of its flexibility, efficiency and rich features.
+
+While applying PaddlePaddle in various Baidu products, we noticed two main kinds of PaddlePaddle usage -- research and product. Research data does not change often, and the focus is on fast experiments to reach the expected scientific measurement. Product data changes often; it usually comes from log messages generated by Web services.
+
+A successful deep learning project includes both the research and the data processing pipeline. There are many parameters to be tuned. A lot of engineers work on the different parts of the project simultaneously.
+
+To ensure the project is easy to manage and utilize hardware resource efficiently, we want to run all parts of the project on the same infrastructure platform.
+
+The platform should provide:
+
+
+- fault-tolerance. It should abstract each stage of the pipeline as a service, which consists of many processes that provide high throughput and robustness through redundancy.
+
+- auto-scaling. In the daytime, there are usually many active users, the platform should scale out online services. While during nights, the platform should free some resources for deep learning experiments.
+
+- job packing and isolation. It should be able to assign a PaddlePaddle trainer process requiring the GPU, a web backend service requiring large memory, and a CephFS process requiring disk IOs to the same node to fully utilize its hardware.
+
+What we want is a platform which runs the deep learning system, the Web server (e.g., Nginx), the log collector (e.g., fluentd), the distributed queue service (e.g., Kafka), the log joiner and other data processors written using Storm, Spark, and Hadoop MapReduce on the same cluster. We want to run all jobs -- online and offline, production and experiments -- on the same cluster, so we can make full utilization of the cluster, as different kinds of jobs require different hardware resources.
+
+We chose a container-based solution, since the overhead introduced by VMs contradicts our goal of efficiency and utilization.
+
+Based on our research of different container-based solutions, Kubernetes fits our requirements the best.
+
+**Distributed Training on Kubernetes**
+
+PaddlePaddle supports distributed training natively. There are two roles in a PaddlePaddle cluster: **parameter server** and **trainer**. Each parameter server process maintains a shard of the global model. Each trainer has its local copy of the model, and uses its local data to update the model. During the training process, trainers send model updates to parameter servers, parameter servers are responsible for aggregating these updates, so that trainers can synchronize their local copy with the global model.
+
+
+
+_Figure 1: The model is partitioned into two shards, managed by two parameter servers respectively._
+
+
+
+Some other approaches use a set of parameter servers to collectively hold a very large model in the CPU memory space of multiple hosts. But in practice, it is not often that we have such big models, because handling a very large model would be very inefficient due to the limitation of GPU memory. In our configuration, multiple parameter servers exist mostly for fast communication. Suppose there were only one parameter server process working with all trainers; it would have to aggregate the gradients from all trainers and would become a bottleneck. In our experience, an experimentally efficient configuration includes the same number of trainers and parameter servers, and we usually run a trainer and a parameter server on the same node. In the following Kubernetes job configuration, we start a job that runs N Pods, and each Pod runs a parameter server process and a trainer process.
+
+
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: paddle-cluster-job
+spec:
+  parallelism: 3
+  completions: 3
+  template:
+    metadata:
+      name: paddle-cluster-job
+    spec:
+      volumes:
+      - name: jobpath
+        hostPath:
+          path: /home/admin/efs
+      containers:
+      - name: trainer
+        image: your_repo/paddle:mypaddle
+        command: ["/bin/bash", "-c", "/root/start.sh"]
+        env:
+        - name: JOB_NAME
+          value: paddle-cluster-job
+        - name: JOB_PATH
+          value: /home/jobpath
+        - name: JOB_NAMESPACE
+          value: default
+        volumeMounts:
+        - name: jobpath
+          mountPath: /home/jobpath
+      restartPolicy: Never
+```
+
+
+We can see from the config that parallelism and completions are both set to 3, so this job will simultaneously start up 3 PaddlePaddle pods, and the job will finish when all 3 pods finish.
+
+
+_Figure 2: Job A of three pods and Job B of one pod running on two nodes._
+
+
+The entrypoint of each pod is [start.sh](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/usage/k8s/src/k8s_train/start.sh). It downloads data from a storage service, so that trainers can read quickly from the pod-local disk space. After downloading completes, it runs a Python script, [start\_paddle.py](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/usage/k8s/src/k8s_train/start_paddle.py), which starts a parameter server, waits until parameter servers of all pods are ready to serve, and then starts the trainer process in the pod.
+
+This waiting is necessary because each trainer needs to talk to all parameter servers, as shown in Figure. 1. Kubernetes [API](http://kubernetes.io/docs/api-reference/v1/operations/#_list_or_watch_objects_of_kind_pod) enables trainers to check the status of pods, so the Python script could wait until all parameter servers’ status change to "running" before it triggers the training process.
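+
+A rough sketch of that wait, expressed with kubectl instead of the Python client (assuming the job-name=paddle-cluster-job label that the Job controller puts on its pods):
+
+```
+EXPECTED=3   # the parallelism of the job
+# block until every pod of the job reports phase "Running"
+until [ "$(kubectl get pods -l job-name=paddle-cluster-job \
+      -o jsonpath='{.items[*].status.phase}' | tr ' ' '\n' | grep -c Running)" -eq "$EXPECTED" ]; do
+  sleep 2
+done
+echo "all parameter servers are up; starting the trainer"
+```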
+
+Currently, the mapping from data shards to pods/trainers is static. If we are going to run N trainers, we need to partition the data into N shards, and statically assign each shard to a trainer. Again we rely on the Kubernetes API to enlist pods in a job, so we can index the pods/trainers from 1 to N. The i-th trainer will read the i-th data shard.
+
+Training data is usually served on a distributed filesystem. In practice we use CephFS on our on-premise clusters and Amazon Elastic File System on AWS. If you are interested in building a Kubernetes cluster to run distributed PaddlePaddle training jobs, please follow [this tutorial](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/usage/k8s/k8s_aws_en.md).
+
+**What’s Next**
+
+We are working on running PaddlePaddle with Kubernetes more smoothly.
+
+As you might notice the current trainer scheduling fully relies on Kubernetes based on a static partition map. This approach is simple to start, but might cause a few efficiency problems.
+
+First, slow or dead trainers block the entire job. There is no controlled preemption or rescheduling after the initial deployment. Second, the resource allocation is static. So if Kubernetes has more available resources than we anticipated, we have to manually change the resource requirements. This is tedious work, and is not aligned with our efficiency and utilization goal.
+
+To solve the problems mentioned above, we will add a PaddlePaddle master that understands Kubernetes API, can dynamically add/remove resource capacity, and dispatches shards to trainers in a more dynamic manner. The PaddlePaddle master uses etcd as a fault-tolerant storage of the dynamic mapping from shards to trainers. Thus, even if the master crashes, the mapping is not lost. Kubernetes can restart the master and the job will keep running.
+
+Another potential improvement is better PaddlePaddle job configuration. Our experience of having the same number of trainers and parameter servers was mostly collected from using special-purpose clusters. That strategy was observed to perform well on our clients’ clusters that run only PaddlePaddle jobs. However, it might not be optimal on general-purpose clusters that run many kinds of jobs.
+
+PaddlePaddle trainers can utilize multiple GPUs to accelerate computations. GPUs are not a first-class resource in Kubernetes yet, so we have to manage them semi-manually. We would love to work with the Kubernetes community to improve GPU support and ensure PaddlePaddle runs at its best on Kubernetes.
+
+_--Yi Wang, [Baidu Research](http://research.baidu.com/) and Xiang Li, [CoreOS](https://coreos.com/)_
+
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2017-03-00-Advanced-Scheduling-In-Kubernetes.md b/blog/_posts/2017-03-00-Advanced-Scheduling-In-Kubernetes.md
new file mode 100644
index 00000000000..cda2cb3ab2f
--- /dev/null
+++ b/blog/_posts/2017-03-00-Advanced-Scheduling-In-Kubernetes.md
@@ -0,0 +1,277 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Advanced Scheduling in Kubernetes "
+date: Saturday, March 31, 2017
+pagination:
+ enabled: true
+---
+_Editor’s note: this post is part of a [series of in-depth articles](http://blog.kubernetes.io/2017/03/five-days-of-kubernetes-1.6.html) on what's new in Kubernetes 1.6_
+
+The Kubernetes scheduler’s default behavior works well for most cases -- for example, it ensures that pods are only placed on nodes that have sufficient free resources, it tries to spread pods from the same set ([ReplicaSet](https://kubernetes.io/docs/user-guide/replicasets/), [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/), etc.) across nodes, it tries to balance out the resource utilization of nodes, etc.
+
+But sometimes you want to control how your pods are scheduled. For example, perhaps you want to ensure that certain pods only schedule on nodes with specialized hardware, or you want to co-locate services that communicate frequently, or you want to dedicate a set of nodes to a particular set of users. Ultimately, you know much more about how your applications should be scheduled and deployed than Kubernetes ever will. So **[Kubernetes 1.6](http://blog.kubernetes.io/2017/03/kubernetes-1.6-multi-user-multi-workloads-at-scale.html) offers four advanced scheduling features: node affinity/anti-affinity, taints and tolerations, pod affinity/anti-affinity, and custom schedulers**. Each of these features is now in _beta_ in Kubernetes 1.6.
+
+**Node Affinity/Anti-Affinity**
+
+[Node Affinity/Anti-Affinity](https://kubernetes.io/docs/user-guide/node-selection/#node-affinity-beta-feature) is one way to set rules on which nodes are selected by the scheduler. This feature is a generalization of the [nodeSelector](https://kubernetes.io/docs/user-guide/node-selection/#nodeselector) feature which has been in Kubernetes since version 1.0. The rules are defined using the familiar concepts of custom labels on nodes and selectors specified in pods, and they can be either required or preferred, depending on how strictly you want the scheduler to enforce them.
+
+Required rules must be met for a pod to schedule on a particular node. If no node matches the criteria (plus all of the other normal criteria, such as having enough free resources for the pod’s resource request), then the pod won’t be scheduled. Required rules are specified in the requiredDuringSchedulingIgnoredDuringExecution field of nodeAffinity.
+
+For example, if we want to require scheduling on a node that is in the us-central1-a GCE zone of a multi-zone Kubernetes cluster, we can specify the following affinity rule as part of the Pod spec:
+
+
+```
+affinity:
+  nodeAffinity:
+    requiredDuringSchedulingIgnoredDuringExecution:
+      nodeSelectorTerms:
+      - matchExpressions:
+        - key: "failure-domain.beta.kubernetes.io/zone"
+          operator: In
+          values: ["us-central1-a"]
+```
+
+
+“IgnoredDuringExecution” means that the pod will still run if labels on a node change and affinity rules are no longer met. There are future plans to offer requiredDuringSchedulingRequiredDuringExecution which will evict pods from nodes as soon as they don’t satisfy the node affinity rule(s).
+
+Preferred rules mean that if nodes match the rules, they will be chosen first, and only if no preferred nodes are available will non-preferred nodes be chosen. You can prefer instead of require that pods are deployed to us-central1-a by slightly changing the pod spec to use preferredDuringSchedulingIgnoredDuringExecution, which takes a list of weighted preference terms:
+
+
+```
+affinity:
+  nodeAffinity:
+    preferredDuringSchedulingIgnoredDuringExecution:
+    - weight: 1
+      preference:
+        matchExpressions:
+        - key: "failure-domain.beta.kubernetes.io/zone"
+          operator: In
+          values: ["us-central1-a"]
+```
+
+
+Node anti-affinity can be achieved by using negative operators. So for instance if we want our pods to avoid us-central1-a we can do this:
+
+
+
+```
+affinity:
+  nodeAffinity:
+    requiredDuringSchedulingIgnoredDuringExecution:
+      nodeSelectorTerms:
+      - matchExpressions:
+        - key: "failure-domain.beta.kubernetes.io/zone"
+          operator: NotIn
+          values: ["us-central1-a"]
+```
+
+
+Valid operators you can use are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
+
+Additional use cases for this feature are to restrict scheduling based on nodes’ hardware architecture, operating system version, or specialized hardware. Node affinity/anti-affinity is _beta_ in Kubernetes 1.6.
+
+**Taints and Tolerations**
+
+A related feature is “[taints and tolerations](https://kubernetes.io/docs/user-guide/node-selection/#taints-and-toleations-beta-feature),” which allows you to mark (“taint”) a node so that no pods can schedule onto it unless a pod explicitly “tolerates” the taint. Marking nodes instead of pods (as in node affinity/anti-affinity) is particularly useful for situations where most pods in the cluster should avoid scheduling onto the node. For example, you might want to mark your master node as schedulable only by Kubernetes system components, or dedicate a set of nodes to a particular group of users, or keep regular pods away from nodes that have special hardware so as to leave room for pods that need the special hardware.
+
+The kubectl command allows you to set taints on nodes, for example:
+
+
+
+```
+kubectl taint nodes node1 key=value:NoSchedule
+ ```
+
+
+creates a taint that marks the node as unschedulable by any pods that do not have a toleration for a taint with key key, value value, and effect NoSchedule. (The other taint effects are PreferNoSchedule, which is the preferred version of NoSchedule, and NoExecute, which means any pods that are running on the node when the taint is applied will be evicted unless they tolerate the taint.) The toleration you would add to a PodSpec to have the corresponding pod tolerate this taint would look like this:
+
+
+
+```
+tolerations:
+- key: "key"
+  operator: "Equal"
+  value: "value"
+  effect: "NoSchedule"
+```
+
+
+
+In addition to moving taints and tolerations to _beta_ in Kubernetes 1.6, we have introduced an _alpha_ feature that uses taints and tolerations to allow you to customize how long a pod stays bound to a node when the node experiences a problem like a network partition instead of using the default five minutes. See [this section](https://kubernetes.io/docs/user-guide/node-selection/#per-pod-configurable-eviction-behavior-when-there-are-node-problems-alpha-feature) of the documentation for more details.
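+
+A sketch of what such a toleration could look like (the taint key below is the one used by the alpha implementation in 1.6; check the linked documentation before relying on it):
+
+```
+tolerations:
+- key: "node.alpha.kubernetes.io/notReady"
+  operator: "Exists"
+  effect: "NoExecute"
+  tolerationSeconds: 600   # stay bound for 10 minutes after the node goes not-ready
+```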
+
+
+
+**Pod Affinity/Anti-Affinity**
+
+
+
+Node affinity/anti-affinity allows you to constrain which nodes a pod can run on based on the nodes’ labels. But what if you want to specify rules about how pods should be placed relative to one another, for example to spread or pack pods within a service or relative to pods in other services? For that you can use [pod affinity/anti-affinity](https://kubernetes.io/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature), which is also _beta_ in Kubernetes 1.6.
+
+
+
+Let’s look at an example. Say you have front-ends in service S1, and they communicate frequently with back-ends that are in service S2 (a “north-south” communication pattern). So you want these two services to be co-located in the same cloud provider zone, but you don’t want to have to choose the zone manually--if the zone fails, you want the pods to be rescheduled to another (single) zone. You can specify this with a pod affinity rule that looks like this (assuming you give the pods of this service a label “service=S2” and the pods of the other service a label “service=S1”):
+
+
+
+```
+affinity:
+  podAffinity:
+    requiredDuringSchedulingIgnoredDuringExecution:
+    - labelSelector:
+        matchExpressions:
+        - key: service
+          operator: In
+          values: ["S1"]
+      topologyKey: failure-domain.beta.kubernetes.io/zone
+```
+
+
+As with node affinity/anti-affinity, there is also a preferredDuringSchedulingIgnoredDuringExecution variant.
+
+Pod affinity/anti-affinity is very flexible. Imagine you have profiled the performance of your services and found that containers from service S1 interfere with containers from service S2 when they share the same node, perhaps due to cache interference effects or saturating the network link. Or maybe due to security concerns you never want containers of S1 and S2 to share a node. To implement these rules, just make two changes to the snippet above -- change podAffinity to podAntiAffinity and change topologyKey to kubernetes.io/hostname.
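+
+Applying those two changes to the earlier snippet gives:
+
+```
+affinity:
+  podAntiAffinity:
+    requiredDuringSchedulingIgnoredDuringExecution:
+    - labelSelector:
+        matchExpressions:
+        - key: service
+          operator: In
+          values: ["S1"]
+      topologyKey: kubernetes.io/hostname
+```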
+
+**Custom Schedulers**
+
+If the Kubernetes scheduler’s various features don’t give you enough control over the scheduling of your workloads, you can delegate responsibility for scheduling arbitrary subsets of pods to your own custom scheduler(s) that run(s) alongside, or instead of, the default Kubernetes scheduler. [Multiple schedulers](https://kubernetes.io/docs/admin/multiple-schedulers/) is _beta_ in Kubernetes 1.6.
+
+Each new pod is normally scheduled by the default scheduler. But if you provide the name of your own custom scheduler, the default scheduler will ignore that Pod and allow your scheduler to schedule the Pod to a node. Let’s look at an example.
+
+Here we have a Pod where we specify the schedulerName field:
+
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx
+  labels:
+    app: nginx
+spec:
+  schedulerName: my-scheduler
+  containers:
+  - name: nginx
+    image: nginx:1.10
+```
+
+
+
+If we create this Pod without deploying a custom scheduler, the default scheduler will ignore it and it will remain in a Pending state. So we need a custom scheduler that looks for, and schedules, pods whose schedulerName field is my-scheduler.
+
+
+
+A custom scheduler can be written in any language and can be as simple or complex as you need. Here is a very simple example of a custom scheduler written in Bash that assigns a node randomly. Note that you need to run this along with kubectl proxy for it to work.
+
+
+
+```
+#!/bin/bash
+SERVER='localhost:8001'
+while true;
+do
+    # Find pods that request my-scheduler and have not been assigned a node yet
+    for PODNAME in $(kubectl --server $SERVER get pods -o json | jq '.items[] | select(.spec.schedulerName == "my-scheduler") | select(.spec.nodeName == null) | .metadata.name' | tr -d '"');
+    do
+        # Pick a random node from the cluster
+        NODES=($(kubectl --server $SERVER get nodes -o json | jq '.items[].metadata.name' | tr -d '"'))
+        NUMNODES=${#NODES[@]}
+        CHOSEN=${NODES[$[$RANDOM % $NUMNODES]]}
+        # Bind the pod to the chosen node by POSTing a Binding object
+        curl --header "Content-Type:application/json" --request POST --data '{"apiVersion":"v1", "kind": "Binding", "metadata": {"name": "'$PODNAME'"}, "target": {"apiVersion": "v1", "kind": "Node", "name": "'$CHOSEN'"}}' http://$SERVER/api/v1/namespaces/default/pods/$PODNAME/binding/
+        echo "Assigned $PODNAME to $CHOSEN"
+    done
+    sleep 1
+done
+```
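+
+To try it out, the proxy must be running first; a minimal session (the script and node names are hypothetical) might look like:
+
+```
+$ kubectl proxy --port=8001 &
+$ bash random-scheduler.sh
+Assigned nginx to node-1
+```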
+
+
+
+**Learn more**
+
+
+
+The Kubernetes 1.6 [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#v160) have more information about these features, including details about how to change your configurations if you are already using the alpha version of one or more of these features (this is required, as the move from alpha to beta is a breaking change for these features).
+
+
+
+**Acknowledgements**
+
+
+
+The features described here, both in their alpha and beta forms, were a true community effort, involving engineers from Google, Huawei, IBM, Red Hat and more.
+
+
+
+**Get Involved**
+
+
+
+Share your voice at our weekly [community meeting](https://github.com/kubernetes/community/blob/master/communication.md#weekly-meeting):
+
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+- Connect with the community on [Slack](http://slack.k8s.io/) (room #sig-scheduling)
+
+
+Many thanks for your contributions.
+
+
+
+_--Ian Lewis, Developer Advocate, and David Oppenheimer, Software Engineer, Google_
diff --git a/blog/_posts/2017-03-00-Dynamic-Provisioning-And-Storage-Classes-Kubernetes.md b/blog/_posts/2017-03-00-Dynamic-Provisioning-And-Storage-Classes-Kubernetes.md
new file mode 100644
index 00000000000..42032f934be
--- /dev/null
+++ b/blog/_posts/2017-03-00-Dynamic-Provisioning-And-Storage-Classes-Kubernetes.md
@@ -0,0 +1,216 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Dynamic Provisioning and Storage Classes in Kubernetes "
+date: Thursday, March 29, 2017
+pagination:
+ enabled: true
+---
+
+_Editor’s note: this post is part of a [series of in-depth articles](http://blog.kubernetes.io/2017/03/five-days-of-kubernetes-1.6.html) on what's new in Kubernetes 1.6_
+
+
+
+Storage is a critical part of running stateful containers, and Kubernetes offers powerful primitives for managing it. Dynamic volume provisioning, a feature unique to Kubernetes, allows storage volumes to be created on-demand. Before dynamic provisioning, cluster administrators had to manually make calls to their cloud or storage provider to provision new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes. With dynamic provisioning, these two steps are automated, eliminating the need for cluster administrators to pre-provision storage. Instead, the storage resources can be dynamically provisioned using the provisioner specified by the StorageClass object (see [user-guide](https://kubernetes.io/docs/user-guide/persistent-volumes/index#storageclasses)). StorageClasses are essentially blueprints that abstract away the underlying storage provider, as well as other parameters, like disk-type (e.g., solid-state vs. standard disks).
+
+StorageClasses use provisioners that are specific to the storage platform or cloud provider to give Kubernetes access to the physical media being used. Several storage provisioners are provided in-tree (see [user-guide](https://kubernetes.io/docs/user-guide/persistent-volumes/index#provisioner)), but additionally out-of-tree provisioners are now supported (see [kubernetes-incubator](https://github.com/kubernetes-incubator/external-storage)).
+
+In the [Kubernetes 1.6 release](http://blog.kubernetes.io/2017/03/kubernetes-1.6-multi-user-multi-workloads-at-scale.html), **dynamic provisioning has been promoted to stable** (having entered beta in 1.4). This is a big step forward in completing the Kubernetes storage automation vision, allowing cluster administrators to control how resources are provisioned and giving users the ability to focus more on their application. With all of these benefits, **there are a few important user-facing changes (discussed below) that are important to understand before using Kubernetes 1.6**.
+
+**Storage Classes and How to Use them**
+
+StorageClasses are the foundation of dynamic provisioning, allowing cluster administrators to define abstractions for the underlying storage platform. Users simply refer to a StorageClass by name in the PersistentVolumeClaim (PVC) using the “storageClassName” parameter.
+
+In the following example, a PVC refers to a specific storage class named “gold”.
+
+```
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: mypvc
+  namespace: testns
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 100Gi
+  storageClassName: gold
+```
+
+
+To promote the use of dynamic provisioning, this feature permits the cluster administrator to specify a **default** StorageClass. When one is present, the user can create a PVC without specifying a storageClassName, further reducing the user’s responsibility to be aware of the underlying storage provider. When using default StorageClasses, there are some operational subtleties to be aware of when creating PersistentVolumeClaims (PVCs). This is particularly important if you already have existing PersistentVolumes (PVs) that you want to re-use:
+
+- PVs that are already “Bound” to PVCs will remain bound with the move to 1.6
+
+  - They will not have a StorageClass associated with them unless the user manually adds it
+  - If PVs become “Available” (i.e., if you delete a PVC and the corresponding PV is recycled), then they are subject to the following
+- If storageClassName **is not specified** in the PVC, the **default storage class will be used** for provisioning.
+
+  - Existing, “Available”, PVs that do not have the default storage class label **will not be considered** for binding to the PVC
+- If storageClassName **is set to an empty string** **(‘’)** in the PVC, no storage class will be used (i.e., dynamic provisioning is disabled for this PVC)
+
+  - Existing, “Available”, PVs (that do not have a specified storageClassName) **will be considered** for binding to the PVC
+- If storageClassName is set to a specific value, then the matching storage class will be used
+
+  - Existing, “Available”, PVs that have a matching storageClassName **will be considered** for binding to the PVC
+  - If no corresponding storage class exists, the PVC will fail.
+
+To reduce the burden of setting up default StorageClasses in a cluster, beginning with 1.6, Kubernetes installs (via the add-on manager) default storage classes for several cloud providers. To use these default StorageClasses, users **do not** need to refer to them by name – that is, storageClassName need not be specified in the PVC.
+
+The following table provides more detail on default storage classes pre-installed by cloud provider as well as the specific parameters used by these defaults.
+
+
+| Cloud Provider | Default StorageClass Name | Default Provisioner |
+|--|--|--|
+| Amazon Web Services | gp2 | aws-ebs |
+| Microsoft Azure | standard | azure-disk |
+| Google Cloud Platform | standard | gce-pd |
+| OpenStack | standard | cinder |
+| VMware vSphere | thin | vsphere-volume |
+{:.post-table}
+
+While these pre-installed default storage classes are chosen to be “reasonable” for most storage users, [this guide](https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class) provides instructions on how to specify your own default.
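+
+As a sketch of what that guide describes, the default can be switched with kubectl patch via the is-default-class annotation (the class names here are illustrative):
+
+```
+$ kubectl patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.beta.kubernetes.io/is-default-class": "false"}}}'
+$ kubectl patch storageclass gold -p '{"metadata": {"annotations": {"storageclass.beta.kubernetes.io/is-default-class": "true"}}}'
+```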
+
+
+
+**Dynamically Provisioned Volumes and the Reclaim Policy**
+
+
+
+All PVs have a reclaim policy associated with them that dictates what happens to a PV once it becomes released from a claim (see [user-guide](https://kubernetes.io/docs/user-guide/persistent-volumes/#reclaiming)). Since the goal of dynamic provisioning is to completely automate the lifecycle of storage resources, the default reclaim policy for dynamically provisioned volumes is “delete”. This means that when a PersistentVolumeClaim (PVC) is released, the dynamically provisioned volume is de-provisioned (deleted) on the storage provider and the data is likely irretrievable. If this is not the desired behavior, the user must change the reclaim policy on the corresponding PersistentVolume (PV) object after the volume is provisioned.
+
+
+
+**How do I change the reclaim policy on a dynamically provisioned volume?**
+
+You can change the reclaim policy by editing the PV object and changing the “persistentVolumeReclaimPolicy” field to the desired value. For more information on various reclaim policies see [user-guide](https://kubernetes.io/docs/user-guide/persistent-volumes/#reclaim-policy).
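+
+For example, a one-line way to make this edit (the PV name is a placeholder) is:
+
+```
+$ kubectl patch pv <your-pv-name> -p '{"spec": {"persistentVolumeReclaimPolicy": "Retain"}}'
+```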
+
+
+
+**FAQs**
+
+
+
+**How do I use a default StorageClass?**
+
+If your cluster has a default StorageClass that meets your needs, then all you need to do is create a PersistentVolumeClaim (PVC) and the default provisioner will take care of the rest – there is no need to specify the storageClassName:
+
+```
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: mypvc
+  namespace: testns
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 100Gi
+```
+
+
+
+**Can I add my own storage classes?**
+Yes. To add your own storage class, first determine which provisioners will work in your cluster. Then, create a StorageClass object with parameters customized to meet your needs (see user-guide for more detail). For many users, the easiest way to create the object is to write a yaml file and apply it with “kubectl create -f”. The following is an example of a StorageClass for Google Cloud Platform named “gold” that creates a “pd-ssd”. Since multiple classes can exist within a cluster, the administrator may leave the default enabled for most workloads (since it uses a “pd-standard”), with the “gold” class reserved for workloads that need extra performance.
+
+
+```
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: gold
+provisioner: kubernetes.io/gce-pd
+parameters:
+  type: pd-ssd
+```
+
+
+
+**How do I check if I have a default StorageClass Installed?**
+
+You can use kubectl to check for StorageClass objects. In the example below there are two storage classes: “gold” and “standard”. The “gold” class is user-defined, and the “standard” class is installed by Kubernetes and is the default.
+
+
+
+```
+$ kubectl get sc
+NAME                 TYPE
+gold                 kubernetes.io/gce-pd
+standard (default)   kubernetes.io/gce-pd
+```
+
+
+
+
+
+```
+$ kubectl describe storageclass standard
+Name:           standard
+IsDefaultClass: Yes
+Annotations:    storageclass.beta.kubernetes.io/is-default-class=true
+Provisioner:    kubernetes.io/gce-pd
+Parameters:     type=pd-standard
+Events:
+```
+
+
+
+**Can I delete/turn off the default StorageClasses?**
+You cannot delete the default storage class objects provided. Since they are installed as cluster addons, they will be recreated if they are deleted.
+
+You can, however, disable the defaulting behavior by removing (or setting to false) the following annotation: storageclass.beta.kubernetes.io/is-default-class.
+
+If there are no StorageClass objects marked with the default annotation, then PersistentVolumeClaim objects (without a StorageClass specified) will not trigger dynamic provisioning. They will, instead, fall back to the legacy behavior of binding to an available PersistentVolume object.
+
+**Can I assign my existing PVs to a particular StorageClass?**
+Yes, you can assign a StorageClass to an existing PV by editing the appropriate PV object and adding (or setting) the desired storageClassName field to it.
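+
+For example (the PV name is a placeholder):
+
+```
+$ kubectl patch pv <your-pv-name> -p '{"spec": {"storageClassName": "gold"}}'
+```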
+
+**What happens if I delete a PersistentVolumeClaim (PVC)?**
+If the volume was dynamically provisioned, then the default reclaim policy is set to “delete”. This means that, by default, when the PVC is deleted, the underlying PV and storage asset will also be deleted. If you want to retain the data stored on the volume, then you must change the reclaim policy from “delete” to “retain” after the PV is provisioned.
+
+_--Saad Ali & Michelle Au, Software Engineers, and Matthew De Lio, Product Manager, Google_
+
+
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Join the community portal for advocates on [K8sPort](http://k8sport.org/)
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- [Download](http://get.k8s.io/) Kubernetes
diff --git a/blog/_posts/2017-03-00-Five-Days-Of-Kubernetes-1.6.md b/blog/_posts/2017-03-00-Five-Days-Of-Kubernetes-1.6.md
new file mode 100644
index 00000000000..82b226df47c
--- /dev/null
+++ b/blog/_posts/2017-03-00-Five-Days-Of-Kubernetes-1.6.md
@@ -0,0 +1,31 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Five Days of Kubernetes 1.6 "
+date: Thursday, March 29, 2017
+pagination:
+ enabled: true
+---
+
+With the help of our growing community of 1,110-plus contributors, we pushed around 5,000 commits to deliver [Kubernetes 1.6](http://blog.kubernetes.io/2017/03/kubernetes-1.6-multi-user-multi-workloads-at-scale.html), bringing focus to multi-user, multi-workloads at scale. While many improvements have been contributed, we selected a few features to highlight in the series of in-depth posts listed below.
+
+Follow along and read what’s new:
+
+|| Five Days of Kubernetes|
+|-|-|
+| Day 1 | [Dynamic Provisioning and Storage Classes in Kubernetes Stable in 1.6](http://blog.kubernetes.io/2017/03/dynamic-provisioning-and-storage-classes-kubernetes.html) |
+| Day 2 | [Scalability updates in Kubernetes 1.6](http://blog.kubernetes.io/2017/03/scalability-updates-in-kubernetes-1.6.html) |
+| Day 3 | [Advanced Scheduling in Kubernetes 1.6](http://blog.kubernetes.io/2017/03/advanced-scheduling-in-kubernetes.html) |
+| Day 4 | [Configuring Private DNS Zones and Upstream Nameservers in Kubernetes](http://blog.kubernetes.io/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes.html) |
+| Day 5 | [RBAC support in Kubernetes](http://blog.kubernetes.io/2017/04/rbac-support-in-kubernetes.html) |
+{:.post-table}
+
+
+**Connect**
+
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Join the community portal for advocates on [K8sPort](http://k8sport.org/)
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- [Download](http://get.k8s.io/) Kubernetes
diff --git a/blog/_posts/2017-03-00-K8Sport-Engaging-The-Kubernetes-Community.md b/blog/_posts/2017-03-00-K8Sport-Engaging-The-Kubernetes-Community.md
new file mode 100644
index 00000000000..8971ce3c761
--- /dev/null
+++ b/blog/_posts/2017-03-00-K8Sport-Engaging-The-Kubernetes-Community.md
@@ -0,0 +1,53 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " The K8sPort: Engaging Kubernetes Community One Activity at a Time "
+date: Saturday, March 24, 2017
+pagination:
+ enabled: true
+---
+_Editor's note: Today’s post is by Ryan Quackenbush, Advocacy Programs Manager at Apprenda, showing a new community portal for Kubernetes advocates: the K8sPort._
+
+The [**K8sPort**](http://k8sport.org/) is a hub designed to help you, the Kubernetes community, earn credit for the hard work you’re putting forth in making this one of the most successful open source projects ever. Back at KubeCon Seattle in November, I [presented](https://youtu.be/LwViH5eLoOI) a lightning talk of a preview of K8sPort.
+
+This hub, and our intentions in helping to drive this initiative in the community, grew out of a desire to help cultivate an engaged community of Kubernetes advocates. This is done through gamification in a community hub full of activities called “challenges,” meant to direct members of the community to attend various events and meetings, share and provide feedback on important content, answer questions posed on sites like Stack Overflow, and more.
+
+By completing these challenges, you collect points and can redeem them for different types of rewards and experiences, examples of which include charitable donations, gift certificates, conference tickets and more. As advocates complete challenges and gain points, they’ll earn performance-related badges, move up in community tiers and participate in a fun community leaderboard.
+
+My presentation at KubeCon, simply put, was a call for early signups. Those who’ve been piloting the program have, for the most part, had positive things to say about their experiences.
+
+> I know I'm the only one playing with [@K8sPort](https://twitter.com/K8sPort) but it may be the most important thing the [#Kubernetes](https://twitter.com/hashtag/Kubernetes?src=hash) community has.
+>
+> — Justin Garrison (@rothgar) [November 22, 2016](https://twitter.com/rothgar/status/800941707558670336)
+
+- _“Great way of improving the community and documentation. The gamification of Kubernetes gave me more insight into the stack as well.”_
+ - Jonas Kint, Devops Engineer at Showpad
+- _“A great way to engage with the kubernetes project and also help the community. Fun stuff.”_
+ - Kevin Duane, Systems Engineer at The Walt Disney Company
+- _“K8sPort seems like an awesome idea for incentivising giving back to the community in a way that will hopefully cause more valuable help from more people than might usually be helping.”_
+ - William Stewart, Site Reliability Engineer at Superbalist
+
+Today I am pleased to announce that the [Cloud Native Computing Foundation](https://www.cncf.io/) (CNCF) is making the K8sPort generally available to the entire contributing community! We’ve simplified the signup process by allowing would-be advocates to authenticate and register through the use of their existing GitHub accounts.
+
+
+
+
+
+
+If you’re a contributing member of the Kubernetes community and you have an active GitHub account tied to the Kubernetes repository at GitHub, you can authenticate using your GitHub credentials and gain access to the K8sPort.
+
+Beyond the challenges that get posted regularly, community members will be recognized and will accumulate points for things they’re already doing today. This will be accomplished through the K8sPort’s full integration with GitHub and the core Kubernetes repository. Once you authenticate, you’ll automatically begin earning points and recognition for various contributions -- including logging issues, making pull requests, code commits and more.
+
+
+If you’re interested in joining the advocacy hub, please join us at [k8sport.org](http://k8sport.org/)! We hope you’re as excited about what you see as we are to continue to build it and present it to you.
+
+For a quick walkthrough on K8sPort authentication and the hub itself, see this quick demo, below.
+
+
+
+
+
+
+_--Ryan Quackenbush, Advocacy Programs Manager, Apprenda_
diff --git a/blog/_posts/2017-03-00-Kubernetes-1.6-Multi-User-Multi-Workloads-At-Scale.md b/blog/_posts/2017-03-00-Kubernetes-1.6-Multi-User-Multi-Workloads-At-Scale.md
new file mode 100644
index 00000000000..3cac1058ba2
--- /dev/null
+++ b/blog/_posts/2017-03-00-Kubernetes-1.6-Multi-User-Multi-Workloads-At-Scale.md
@@ -0,0 +1,114 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes 1.6: Multi-user, Multi-workloads at Scale "
+date: Wednesday, March 28, 2017
+pagination:
+ enabled: true
+---
+Today we’re announcing the release of Kubernetes 1.6.
+
+In this release the community’s focus is on scale and automation, to help you deploy multiple workloads to multiple users on a cluster. We are announcing that 5,000 node clusters are supported. We moved dynamic storage provisioning to _stable_. Role-based access control ([RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)), [kubefed](https://kubernetes.io/docs/tutorials/federation/set-up-cluster-federation-kubefed/), [kubeadm](https://kubernetes.io/docs/getting-started-guides/kubeadm/), and several scheduling features are moving to _beta_. We have also added intelligent defaults throughout to enable greater automation out of the box.
+
+**What’s New**
+
+**Scale and Federation** : Large enterprise users looking for proof of at-scale performance will be pleased to know that Kubernetes’ stringent scalability [SLO](http://blog.kubernetes.io/2016/03/1000-nodes-and-beyond-updates-to-Kubernetes-performance-and-scalability-in-12.html) now supports 5,000 node (150,000 pod) clusters. This 150% increase in total cluster size, powered by a new version of [etcd v3](https://coreos.com/blog/etcd3-a-new-etcd.html) by CoreOS, is great news if you are deploying applications such as search or games which can grow to consume larger clusters.
+
+For users who want to scale beyond 5,000 nodes or spread across multiple regions or clouds, [federation](https://kubernetes.io/docs/concepts/cluster-administration/federation/) lets you combine multiple Kubernetes clusters and address them through a single API endpoint. In this release, the [kubefed](https://kubernetes.io//docs/tutorials/federation/set-up-cluster-federation-kubefed) command line utility graduated to _beta_ - with improved support for on-premise clusters. kubefed now [automatically configures](https://kubernetes.io//docs/tutorials/federation/set-up-cluster-federation-kubefed.md#kube-dns-configuration) kube-dns on joining clusters and can pass arguments to federated components.
+
+**Security and Setup** : Users concerned with security will find that [RBAC](https://kubernetes.io//docs/admin/authorization/rbac), now _beta_, adds a significant security benefit through more tightly scoped default roles for system components. The default RBAC policies in 1.6 grant scoped permissions to control-plane components, nodes, and controllers. RBAC allows cluster administrators to selectively grant particular users or service accounts fine-grained access to specific resources on a per-namespace basis. RBAC users upgrading from 1.5 to 1.6 should view the guidance [here](https://kubernetes.io//docs/admin/authorization/rbac.md#upgrading-from-15).
+
+Users looking for an easy way to provision a secure cluster on physical or cloud servers can use [kubeadm](https://kubernetes.io/docs/getting-started-guides/kubeadm/), which is now _beta_. kubeadm has been enhanced with a set of command line flags and a base feature set that includes RBAC setup, use of the [Bootstrap Token system](http://kubernetes.io/docs/admin/bootstrap-tokens/) and an enhanced [Certificates API](https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/).
+
+**Advanced Scheduling** : This release adds a set of [powerful and versatile scheduling constructs](https://kubernetes.io/docs/user-guide/node-selection/) to give you greater control over how pods are scheduled, including rules to restrict pods to particular nodes in heterogeneous clusters, and rules to spread or pack pods across failure domains such as nodes, racks, and zones.
+
+[Node affinity/anti-affinity](https://kubernetes.io/docs/user-guide/node-selection/#node-affinity-beta-feature), now in _beta_, allows you to restrict pods to schedule only on certain nodes based on node labels. Use built-in or custom node labels to select specific zones, hostnames, hardware architecture, operating system version, specialized hardware, etc. The scheduling rules can be required or preferred, depending on how strictly you want the scheduler to enforce them.
+
+A related feature, called [taints and tolerations](https://kubernetes.io/docs/user-guide/node-selection/#taints-and-tolerations-beta-feature), makes it possible to compactly represent rules for excluding pods from particular nodes. The feature, also now in _beta_, makes it easy, for example, to dedicate sets of nodes to particular sets of users, or to keep nodes that have special hardware available for pods that need the special hardware by excluding pods that don’t need it.
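+
+As a brief sketch of the idea (the names here are illustrative, not from this post): a node can be tainted with kubectl, and only pods carrying a matching toleration may then schedule onto it:
+
+```
+$ kubectl taint nodes node1 dedicated=groupName:NoSchedule
+```
+
+```
+tolerations:
+- key: "dedicated"
+  operator: "Equal"
+  value: "groupName"
+  effect: "NoSchedule"
+```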
+
+Sometimes you want to co-schedule services, or pods within a service, near each other topologically, for example to optimize North-South or East-West communication. Or you want to spread pods of a service for failure tolerance, or keep antagonistic pods separated, or ensure sole tenancy of nodes. [Pod affinity and anti-affinity](https://kubernetes.io/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature), now in _beta_, enables such use cases by letting you set hard or soft requirements for spreading and packing pods relative to one another within arbitrary topologies (node, zone, etc.).
+
+Lastly, for the ultimate in scheduling flexibility, you can run your own custom scheduler(s) alongside, or instead of, the default Kubernetes scheduler. Each scheduler is responsible for different sets of pods. [Multiple schedulers](https://kubernetes.io/docs/admin/multiple-schedulers/) is _beta_ in this release.
+
+**Dynamic Storage Provisioning** : Users deploying stateful applications will benefit from the extensive storage automation capabilities in this release of Kubernetes.
+
+Since its early days, Kubernetes has been able to automatically attach and detach storage, format disk, mount and unmount volumes per the pod spec, and do so seamlessly as pods move between nodes. In addition, the PersistentVolumeClaim (PVC) and PersistentVolume (PV) objects decouple the request for storage from the specific storage implementation, making the pod spec portable across a range of cloud and on-premise environments. In this release [StorageClass](https://kubernetes.io/docs/user-guide/persistent-volumes/#storageclasses) and [dynamic volume provisioning](https://kubernetes.io/docs/user-guide/persistent-volumes/#dynamic) are promoted to _stable_, completing the automation story by creating and deleting storage on demand, eliminating the need to pre-provision.
+
+The design allows cluster administrators to define and expose multiple flavors of storage within a cluster, each with a custom set of parameters. End users can stop worrying about the complexity and nuances of how storage is provisioned, while still selecting from multiple storage options.
+
+In 1.6 Kubernetes comes with a set of built-in defaults to completely automate the storage provisioning lifecycle, freeing you to work on your applications. Specifically, Kubernetes now pre-installs system-defined StorageClass objects for AWS, Azure, GCP, OpenStack and VMware vSphere [by default](https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class). This gives Kubernetes users on these providers the benefits of dynamic storage provisioning without having to manually setup StorageClass objects. This is a [change in the default behavior](https://kubernetes.io/docs/user-guide/persistent-volumes/index#class-1) of PVC objects on these clouds. Note that default behavior is that dynamically provisioned volumes are created with the “delete” [reclaim policy](https://kubernetes.io/docs/user-guide/persistent-volumes#reclaim-policy). That means once the PVC is deleted, the dynamically provisioned volume is automatically deleted so users do not have the extra step of ‘cleaning up’.
+
+In addition, we have expanded the range of storage supported overall including:
+
+- ScaleIO Kubernetes [Volume Plugin](https://kubernetes.io/docs/user-guide/persistent-volumes/index#scaleio) enabling pods to seamlessly access and use data stored on ScaleIO volumes.
+- Portworx Kubernetes [Volume Plugin](https://kubernetes.io/docs/user-guide/persistent-volumes/index#portworx-volume) adding the capability to use Portworx as a storage provider for Kubernetes clusters. Portworx pools your server capacity and turns your servers or cloud instances into converged, highly available compute and storage nodes.
+- Support for NFSv3, NFSv4, and GlusterFS on clusters using the [COS node image](https://cloud.google.com/container-engine/docs/node-image-migration)
+- Support for user-written/run dynamic PV provisioners. A golang library and examples can be found [here](http://github.com/kubernetes-incubator/external-storage).
+- _Beta_ support for [mount options](https://kubernetes.io/docs/user-guide/persistent-volumes/index.md#mount-options) in persistent volumes
+
+**Container Runtime Interface, etcd v3 and Daemon set updates** : while users may not directly interact with the container runtime or the API server datastore, they are foundational components for user-facing functionality in Kubernetes. As such, the community invests in expanding the capabilities of these and other system components.
+
+- The Docker-CRI implementation is _beta_ and is enabled by default in kubelet. _Alpha_ support for other runtimes, [cri-o](https://github.com/kubernetes-incubator/cri-o/releases/tag/v0.1), [frakti](https://github.com/kubernetes/frakti/releases/tag/v0.1), [rkt](https://github.com/coreos/rkt/issues?q=is%3Aopen+is%3Aissue+label%3Aarea%2Fcri), has also been implemented.
+- The default backend storage for the API server has been [upgraded](https://kubernetes.io/docs/admin/upgrade-1-6/) to use [etcd v3](https://coreos.com/blog/etcd3-a-new-etcd.html) by default for new clusters. If you are upgrading from a 1.5 cluster, care should be taken to ensure continuity by planning a data migration window.
+- Node reliability is improved as Kubelet exposes an admin configurable [Node Allocatable](https://kubernetes.io//docs/admin/node-allocatable.md#node-allocatable) feature to reserve compute resources for system daemons.
+- [Daemon set updates](https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set) lets you perform rolling updates on a daemon set
+
+
+
+**Alpha features** : this release was mostly focused on maturing functionality; however, a few alpha features were added to support the roadmap:
+
+
+- [Out-of-tree cloud provider](https://kubernetes.io/docs/concepts/overview/components#cloud-controller-manager) support adds a new cloud-controller-manager binary that may be used for testing the new out-of-core cloud provider flow
+- [Per-pod-eviction](https://kubernetes.io/docs/user-guide/node-selection/#per-pod-configurable-eviction-behavior-when-there-are-node-problems-alpha-feature) in case of node problems, combined with tolerationSeconds, lets users tune the duration a pod stays bound to a node that is experiencing problems (see the sketch after this list)
+- [Pod Injection Policy](https://kubernetes.io/docs/user-guide/pod-preset/) adds a new API resource PodPreset to inject information such as secrets, volumes, volume mounts, and environment variables into pods at creation time.
+- [Custom metrics](https://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/#support-for-custom-metrics) support in the Horizontal Pod Autoscaler changed to use
+- Multiple Nvidia [GPU support](https://vishh.github.io/docs/user-guide/gpus/) is introduced with the Docker runtime only
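+
+As a rough sketch of the tolerationSeconds idea mentioned above (the taint key shown is the alpha-era not-ready taint; treat it as an assumption), a pod can declare how long it tolerates a problematic node before being evicted:
+
+```
+tolerations:
+- key: "node.alpha.kubernetes.io/notReady"
+  operator: "Exists"
+  effect: "NoExecute"
+  tolerationSeconds: 300
+```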
+
+
+
+These are just some of the highlights in our first release for the year. For a complete list please visit the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#v160).
+
+**Community**
+This release is possible thanks to our vast and open community. Together, we’ve pushed nearly 5,000 commits by some 275 authors. To bring our many advocates together, the community has launched a new program called [K8sPort](http://k8sport.org/), an online hub where the community can participate in gamified challenges and get credit for their contributions. Read more about the program [here](http://blog.kubernetes.io/2017/03/k8sport-engaging-the-kubernetes-community.html).
+
+
+**Release Process**
+
+A big thanks goes out to the [release team](https://github.com/kubernetes/features/blob/master/release-1.6/release_team.md) for 1.6 (led by Dan Gillespie of CoreOS) for their work bringing the 1.6 release to light. This release team is an exemplar of the Kubernetes community’s commitment to community governance. Dan is the first non-Google release manager and he, along with the rest of the team, worked throughout the release (building on the great work of Saad Ali, the 1.5 release manager) to uncover and document tribal knowledge, shine light on tools and processes that still require special permissions, and prioritize work to improve the Kubernetes release process. Many thanks to the team.
+
+
+
+**User Adoption**
+
+We’re continuing to see rapid adoption of Kubernetes in all sectors and sizes of businesses. Furthermore, adoption is coming from across the globe, from a startup in Tennessee, USA to a Fortune 500 company in China.
+
+
+- JD.com, one of China's largest internet companies, uses Kubernetes in conjunction with their OpenStack deployment. They’ve moved 20% of their applications onto Kubernetes thus far and are already running 20,000 pods daily. Read more about their setup [here](http://blog.kubernetes.io/2017/02/inside-jd-com-shift-to-kubernetes-from-openstack.html).
+- Spire, a startup based in Tennessee, witnessed their public cloud provider experience an outage, but suffered zero downtime because Kubernetes was able to move their workloads to different zones. Read their full experience [here](https://medium.com/spire-labs/mitigating-an-aws-instance-failure-with-the-magic-of-kubernetes-128a44d44c14).
+
+> _“With Kubernetes, there was never a moment of panic, just a sense of awe watching the automatic mitigation as it happened.”_
+
+- Share your Kubernetes use case story with the community [here](https://docs.google.com/a/google.com/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform).
+
+**Availability**
+Kubernetes 1.6 is available for download [here](https://github.com/kubernetes/kubernetes/releases/tag/v1.6.0) on GitHub and via [get.k8s.io](http://get.k8s.io/). To get started with Kubernetes, try one of these [interactive tutorials](http://kubernetes.io/docs/tutorials/kubernetes-basics/).
+
+
+**Get Involved**
+[CloudNativeCon + KubeCon in Berlin](http://events.linuxfoundation.org/events/cloudnativecon-and-kubecon-europe) is this week March 29-30, 2017. We hope to get together with much of the community and share more there!
+
+
+
+Share your voice at our weekly [community meeting](https://github.com/kubernetes/community/blob/master/communication.md#weekly-meeting):
+
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+- Connect with the community on [Slack](http://slack.k8s.io/)
+
+Many thanks for your contributions and advocacy!
+
+
+
+_-- Aparna Sinha, Senior Product Manager, Kubernetes, Google_
+
+_**PS: read this [series of in-depth articles](http://blog.kubernetes.io/2017/03/five-days-of-kubernetes-1.6.html) on what's new in Kubernetes 1.6**_
diff --git a/blog/_posts/2017-03-00-Scalability-Updates-In-Kubernetes-1.6.md b/blog/_posts/2017-03-00-Scalability-Updates-In-Kubernetes-1.6.md
new file mode 100644
index 00000000000..090ecf88436
--- /dev/null
+++ b/blog/_posts/2017-03-00-Scalability-Updates-In-Kubernetes-1.6.md
@@ -0,0 +1,90 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Scalability updates in Kubernetes 1.6: 5,000 node and 150,000 pod clusters "
+date: Friday, March 30, 2017
+pagination:
+ enabled: true
+---
+_Editor’s note: this post is part of a [series of in-depth articles](http://blog.kubernetes.io/2017/03/five-days-of-kubernetes-1.6.html) on what's new in Kubernetes 1.6_
+
+Last summer we [shared](http://blog.kubernetes.io/2016/07/kubernetes-updates-to-performance-and-scalability-in-1.3.html) updates on Kubernetes scalability; since then we’ve been working hard and are proud to announce that [Kubernetes 1.6](http://blog.kubernetes.io/2017/03/kubernetes-1.6-multi-user-multi-workloads-at-scale.html) can handle 5,000-node clusters with up to 150,000 pods. Moreover, those clusters have even better end-to-end pod startup time than the previous 2,000-node clusters in the 1.3 release, and the latency of API calls is within the one-second SLO.
+
+In this blog post we review what metrics we monitor in our tests and describe our performance results from Kubernetes 1.6. We also discuss what changes we made to achieve the improvements, and our plans for upcoming releases in the area of system scalability.
+
+**X-node clusters - what does it mean?**
+
+Now that Kubernetes 1.6 is released, it is a good time to review what it means when we say we “support” X-node clusters. As described in detail in a [previous blog post](http://blog.kubernetes.io/2016/03/1000-nodes-and-beyond-updates-to-Kubernetes-performance-and-scalability-in-12.html), we currently have two performance-related [Service Level Objectives (SLO)](https://en.wikipedia.org/wiki/Service_level_objective):
+
+- **API-responsiveness** : 99% of all API calls return in less than 1s
+- **Pod startup time** : 99% of pods and their containers (with pre-pulled images) start within 5s.
+
+As before, it is possible to run larger deployments than the stated supported 5,000-node cluster (and users have), but performance may be degraded and it may not meet our strict SLOs defined above.
+
+We are aware of the limited scope of these SLOs. There are many aspects of the system that they do not exercise. For example, we do not measure how soon a new pod that is part of a service will be reachable through the service IP address after the pod is started. If you are considering using large Kubernetes clusters and have performance requirements not covered by our SLOs, please contact the Kubernetes [Scalability SIG](https://github.com/kubernetes/community/blob/master/sig-scalability/README.md) so we can help you understand whether Kubernetes is ready to handle your workload now.
+
+The top scalability-related priority for upcoming Kubernetes releases is to enhance our definition of what it means to support X-node clusters by:
+
+- refining currently existing SLOs
+- adding more SLOs (that will cover various areas of Kubernetes, including networking)
+
+**Kubernetes 1.6 performance metrics at scale**
+
+So how does performance in large clusters look in Kubernetes 1.6? The following graph shows the end-to-end pod startup latency with 2000- and 5000-node clusters. For comparison, we also show the same metric from Kubernetes 1.3, which we published in our previous scalability blog post that described support for 2000-node clusters. As you can see, Kubernetes 1.6 has better pod startup latency with both 2000 and 5000 nodes compared to Kubernetes 1.3 with 2000 nodes [1].
+
+ 
+
+The next graph shows API response latency for a 5000-node Kubernetes 1.6 cluster. The latencies at all percentiles are less than 500ms, and the 90th percentile is around 100ms.
+
+
+ 
+
+**How did we get here?**
+
+Over the past nine months (since the last scalability blog post), there have been a huge number of performance and scalability related changes in Kubernetes. In this post we will focus on the two biggest ones and will briefly enumerate a few others.
+
+**etcd v3**
+In Kubernetes 1.6 we switched the default storage backend (the key-value store where the whole cluster state is stored) from etcd v2 to [etcd v3](https://coreos.com/etcd/docs/3.0.17/index.html). The initial work toward this transition started during the 1.3 release cycle. You might wonder why it took us so long, given that:
+
+- the first stable version of etcd supporting the v3 API [was announced](https://coreos.com/blog/etcd3-a-new-etcd.html) on June 30, 2016
+- the new API was designed together with the Kubernetes team to support our needs (from both a feature and scalability perspective)
+- the integration of etcd v3 with Kubernetes had already mostly been finished when etcd v3 was announced (indeed CoreOS used Kubernetes as a proof-of-concept for the new etcd v3 API)
+
+As it turns out, there were a lot of reasons. We will describe the most important ones below.
+
+- Changing storage in a backward incompatible way, as is in the case for the etcd v2 to v3 migration, is a big change, and thus one for which we needed a strong justification. We found this justification in September when we determined that we would not be able to scale to 5000-node clusters if we continued to use etcd v2 ([kubernetes/32361](https://github.com/kubernetes/kubernetes/issues/32361) contains some discussion about it). In particular, what didn’t scale was the watch implementation in etcd v2. In a 5000-node cluster, we need to be able to send at least 500 watch events per second to a single watcher, which wasn’t possible in etcd v2.
+- Once we had the strong incentive to actually update to etcd v3, we started thoroughly testing it. As you might expect, we found some issues. There were some minor bugs in Kubernetes, and in addition we requested a performance improvement in etcd v3’s watch implementation (watch was the main bottleneck in etcd v2 for us). This led to the 3.0.10 etcd patch release.
+- Once those changes had been made, we were convinced that _new_ Kubernetes clusters would work with etcd v3. But the large challenge of migrating _existing_ clusters remained. For this we needed to automate the migration process, thoroughly test the underlying CoreOS etcd upgrade tool, and figure out a contingency plan for rolling back from v3 to v2.
+
+But finally, we are confident that it should work.
+
+**Switching storage data format to protobuf**
+In the Kubernetes 1.3 release, we enabled [protobufs](https://developers.google.com/protocol-buffers/) as the data format for Kubernetes components to communicate with the API server (in addition to maintaining support for JSON). This gave us a huge performance improvement.
+
+However, we were still using JSON as the format in which data was stored in etcd, even though technically we were ready to change that. The reason for delaying this migration was our planned move to etcd v3. You might now wonder why the change depended on that migration: with etcd v2 we couldn’t really store data in binary format (to work around this we additionally base64-encoded the data), whereas with etcd v3 it just worked. So, to simplify the transition to etcd v3 and to avoid a non-trivial transformation of the data stored in etcd during it, we decided to wait to switch the storage data format to protobuf until the migration to the etcd v3 storage backend was done.
+
+**Other optimizations**
+We made tens of optimizations throughout the Kubernetes codebase during the last three releases, including:
+
+- optimizing the scheduler (which resulted in 5-10x higher scheduling throughput)
+- switching all controllers to a new recommended design using shared informers, which reduced resource consumption of controller-manager - for reference see [this document](https://github.com/kubernetes/community/blob/master/contributors/devel/controllers.md)
+- optimizing individual operations in the API server (conversions, deep-copies, patch)
+- reducing memory allocation in the API server (which significantly impacts the latency of API calls)
+
+We want to emphasize that the optimization work we have done during the last few releases, and indeed throughout the history of the project, is a joint effort by many different companies and individuals from the whole Kubernetes community.
+
+**What’s next?**
+
+People frequently ask how far we are going to go in improving Kubernetes scalability. Currently we do not have plans to increase scalability beyond 5000-node clusters (within our SLOs) in the next few releases. If you need clusters larger than 5000 nodes, we recommend using [federation](https://kubernetes.io/docs/concepts/cluster-administration/federation/) to aggregate multiple Kubernetes clusters.
+
+However, that doesn’t mean we are going to stop working on scalability and performance. As we mentioned at the beginning of this post, our top priority is to refine our two existing SLOs and introduce new ones that will cover more parts of the system, e.g. networking. This effort has already started within the Scalability SIG. We have made significant progress on how we would like to define performance SLOs, and this work should be finished in the coming month.
+
+**Join the effort**
+If you are interested in scalability and performance, please join our community and help us shape Kubernetes. There are many ways to participate, including:
+
+- Chat with us in the Kubernetes Slack [scalability channel](https://kubernetes.slack.com/messages/sig-scale/)
+- Join our Special Interest Group, [SIG-Scalability](https://github.com/kubernetes/community/blob/master/sig-scalability/README.md), which meets every Thursday at 9:00 AM PST
+
+Thanks for the support and contributions! Read more in-depth posts on what's new in Kubernetes 1.6 [here](http://blog.kubernetes.io/2017/03/five-days-of-kubernetes-1.6.html).
+
+_-- Wojciech Tyczynski, Software Engineer, Google_
+
+
+
+
+[1] We are investigating why 5000-node clusters have better startup time than 2000-node clusters. The current theory is that it is related to running 5000-node experiments using 64-core master and 2000-node experiments using 32-core master.
diff --git a/blog/_posts/2017-04-00-Configuring-Private-Dns-Zones-Upstream-Nameservers-Kubernetes.md b/blog/_posts/2017-04-00-Configuring-Private-Dns-Zones-Upstream-Nameservers-Kubernetes.md
new file mode 100644
index 00000000000..1821a3e0329
--- /dev/null
+++ b/blog/_posts/2017-04-00-Configuring-Private-Dns-Zones-Upstream-Nameservers-Kubernetes.md
@@ -0,0 +1,155 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Configuring Private DNS Zones and Upstream Nameservers in Kubernetes "
+date: Wednesday, April 04, 2017
+pagination:
+ enabled: true
+---
+_Editor’s note: this post is part of a [series of in-depth articles](http://blog.kubernetes.io/2017/03/five-days-of-kubernetes-1.6.html) on what's new in Kubernetes 1.6_
+
+Many users have existing domain name zones that they would like to integrate into their Kubernetes DNS namespace. For example, hybrid-cloud users may want to resolve their internal “.corp” domain addresses within the cluster. Other users may have a zone populated by a non-Kubernetes service discovery system (like Consul). We’re pleased to announce that, in [Kubernetes 1.6](http://blog.kubernetes.io/2017/03/kubernetes-1.6-multi-user-multi-workloads-at-scale.html), [kube-dns](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) adds support for configurable private DNS zones (often called “stub domains”) and external upstream DNS nameservers. In this blog post, we describe how to configure and use this feature.
+
+**Default lookup flow**
+
+
+[](https://2.bp.blogspot.com/-Jj4r6bGt1f8/WORRugYMobI/AAAAAAAABBE/HXH-wBGqweQcJbyQA3bqnUtYeN5aOtE9ACEw/s1600/dns2.png)
+
+Kubernetes currently supports two DNS policies specified on a per-pod basis using the dnsPolicy flag: “Default” and “ClusterFirst”. If dnsPolicy is not explicitly specified, then “ClusterFirst” is used:
+
+- If dnsPolicy is set to “Default”, then the name resolution configuration is inherited from the node the pods run on. Note: this feature cannot be used in conjunction with dnsPolicy: “Default”.
+- If dnsPolicy is set to “ClusterFirst”, then DNS queries will be sent to the kube-dns service. Queries for domains rooted in the configured cluster domain suffix (any address ending in “.cluster.local” in the example above) will be answered by the kube-dns service. All other queries (for example, www.kubernetes.io) will be forwarded to the upstream nameserver inherited from the node.
+
+Before this feature, it was common to introduce stub domains by replacing the upstream DNS with a custom resolver. However, this caused the custom resolver itself to become a critical path for DNS resolution, where issues with scalability and availability may cause the cluster to lose DNS functionality. This feature allows the user to introduce custom resolution without taking over the entire resolution path.
+
+**Customizing the DNS Flow**
+
+Beginning in Kubernetes 1.6, cluster administrators can specify custom stub domains and upstream nameservers by providing a ConfigMap for kube-dns. For example, the configuration below inserts a single stub domain and two upstream nameservers. As specified, DNS requests with the “.acme.local” suffix will be forwarded to a DNS server listening at 1.2.3.4. Additionally, Google Public DNS will serve upstream queries. See ConfigMap Configuration Notes at the end of this section for a few notes about the data format.
+
+
+```
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: kube-dns
+  namespace: kube-system
+data:
+  stubDomains: |
+    {"acme.local": ["1.2.3.4"]}
+  upstreamNameservers: |
+    ["8.8.8.8", "8.8.4.4"]
+```
+
+
+The diagram below shows the flow of DNS queries specified in the configuration above. With the dnsPolicy set to “ClusterFirst” a DNS query is first sent to the DNS caching layer in kube-dns. From here, the suffix of the request is examined and then forwarded to the appropriate DNS. In this case, names with the cluster suffix (e.g., “.cluster.local”) are sent to kube-dns. Names with the stub domain suffix (e.g., “.acme.local”) will be sent to the configured custom resolver. Finally, requests that do not match any of those suffixes will be forwarded to the upstream DNS.
+
+[](https://1.bp.blogspot.com/-IeFx2Uuq_i0/WORRuQpxG_I/AAAAAAAABBA/g1P3ljd7YGYMShoHJnPRK1IfX5h3o9GvACEw/s1600/dns.png)
+
+
+Below is a table of example domain names and the destination of the queries for those domain names:
+
+| Domain name | Server answering the query |
+|--|--|
+| kubernetes.default.svc.cluster.local | kube-dns |
+| foo.acme.local | custom DNS (1.2.3.4) |
+| widget.com | upstream DNS (one of 8.8.8.8, 8.8.4.4) |
+{:.post-table}
+
+
+**ConfigMap Configuration Notes**
+
+- stubDomains (optional)
+
+  - Format: a JSON map using a DNS suffix key (e.g., “acme.local”) and a value consisting of a JSON array of DNS IPs.
+ - Note: The target nameserver may itself be a Kubernetes service. For instance, you can run your own copy of dnsmasq to export custom DNS names into the ClusterDNS namespace.
+- upstreamNameservers (optional)
+
+ - Format: a JSON array of DNS IPs.
+  - Note: If specified, the values replace the nameservers taken by default from the node’s /etc/resolv.conf
+ - Limits: a maximum of three upstream nameservers can be specified
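+
+One way to apply such a configuration (the file name is hypothetical) is with kubectl:
+
+```
+$ kubectl create -f kube-dns-configmap.yaml
+```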
+
+**Example #1: Adding a Consul DNS Stub Domain**
+
+
+
+In this example, the user has a Consul DNS service discovery system they wish to integrate with kube-dns. The Consul domain server is located at 10.150.0.1, and all Consul names have the suffix “.consul.local”. To configure Kubernetes, the cluster administrator simply creates a ConfigMap object as shown below. Note: in this example, the cluster administrator did not wish to override the node’s upstream nameservers, so they didn’t need to specify the optional upstreamNameservers field.
+
+
+```
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: kube-dns
+  namespace: kube-system
+data:
+  stubDomains: |
+    {"consul.local": ["10.150.0.1"]}
+```
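+
+To spot-check the stub domain afterwards (assuming a pod with DNS tools, e.g. busybox, is running in the cluster):
+
+```
+$ kubectl exec -ti busybox -- nslookup foo.consul.local
+```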
+
+
+
+**Example #2: Replacing the Upstream Nameservers**
+
+
+
+In this example the cluster administrator wants to explicitly force all non-cluster DNS lookups to go through their own nameserver at 172.16.0.1. Again, this is easy to accomplish; they just need to create a ConfigMap with the upstreamNameservers field specifying the desired nameserver.
+
+
+```
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: kube-dns
+  namespace: kube-system
+data:
+  upstreamNameservers: |
+    ["172.16.0.1"]
+```
+
+
+
+
+**Get involved**
+
+If you’d like to contribute or simply help provide feedback and drive the roadmap, [join our community](https://github.com/kubernetes/community#kubernetes-community). Specifically for network related conversations, participate through one of these channels:
+
+- Chat with us on the Kubernetes [Slack network channel](https://kubernetes.slack.com/messages/sig-network/)
+- Join our Special Interest Group, [SIG-Network](https://github.com/kubernetes/community/wiki/SIG-Network), which meets on Tuesdays at 14:00 PT
+
+Thanks for your support and contributions. Read more in-depth posts on what's new in Kubernetes 1.6 [here](http://blog.kubernetes.io/2017/03/five-days-of-kubernetes-1.6.html).
+
+
+
+
+
+_--Bowei Du, Software Engineer and Matthew DeLio, Product Manager, Google_
+
+
+
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Join the community portal for advocates on [K8sPort](http://k8sport.org/)
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- [Download](http://get.k8s.io/) Kubernetes
diff --git a/blog/_posts/2017-04-00-Multi-Stage-Canary-Deployments-With-Kubernetes-In-The-Cloud-Onprem.md b/blog/_posts/2017-04-00-Multi-Stage-Canary-Deployments-With-Kubernetes-In-The-Cloud-Onprem.md
new file mode 100644
index 00000000000..5f23324654e
--- /dev/null
+++ b/blog/_posts/2017-04-00-Multi-Stage-Canary-Deployments-With-Kubernetes-In-The-Cloud-Onprem.md
@@ -0,0 +1,221 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " How Bitmovin is Doing Multi-Stage Canary Deployments with Kubernetes in the Cloud and On-Prem "
+date: Saturday, April 21, 2017
+pagination:
+ enabled: true
+---
+_Editor's Note: Today’s post is by Daniel Hoelbling-Inzko, Infrastructure Architect at Bitmovin, a company that provides services that transcode digital video and audio to streaming formats, sharing insights about their use of Kubernetes._
+
+Running a large scale video encoding infrastructure on multiple public clouds is tough. At [Bitmovin](http://bitmovin.com/), we have been doing it successfully for the last few years, but from an engineering perspective, it’s neither been enjoyable nor particularly fun.
+
+So obviously, one of the main things that really sold us on using Kubernetes was its common abstraction across the different supported cloud providers and the well-thought-out programming interface it provides. More importantly, the Kubernetes project did not settle for the lowest-common-denominator approach. Instead, they added the necessary abstract concepts that are required and useful to run containerized workloads in a cloud and then did all the hard work to map these concepts to the different cloud providers and their offerings.
+
+The great stability, speed and operational reliability we saw in our early tests in mid-2016 made the migration to Kubernetes a no-brainer.
+
+And it didn’t hurt that the vision for scale the Kubernetes project has been pursuing is closely aligned with our own goals as a company. Aiming for >1,000-node clusters might be a lofty goal, but for a fast-growing video company like ours, having your infrastructure aim to support future growth is essential. Also, after initial brainstorming for our new infrastructure, we immediately knew that we would be running a huge number of containers, and having a system with the expressed goal of working at global scale was the perfect fit for us. Now with the recent [Kubernetes 1.6](http://blog.kubernetes.io/2017/03/kubernetes-1.6-multi-user-multi-workloads-at-scale.html) release and its [support for 5,000 node clusters](http://blog.kubernetes.io/2017/03/scalability-updates-in-kubernetes-1.6.html), we feel even more validated in our choice of a container orchestration system.
+
+During the testing and migration phase of getting our infrastructure running on Kubernetes, we got quite familiar with the Kubernetes API and the whole ecosystem around it. So when we were looking at expanding our cloud video encoding offering for customers to use in their own datacenters or cloud environments, we quickly decided to leverage Kubernetes as our ubiquitous cloud operating system to base the solution on.
+
+Just a few months later this effort has become our newest service offering: [Bitmovin Managed On-Premise encoding](https://bitmovin.com/managed-on-premise-encoding/). Since all Kubernetes clusters share the same API, adapting our cloud encoding service to also run on Kubernetes enabled us to deploy into our customers’ datacenters, regardless of the hardware infrastructure running underneath. With great tools from the community, like kube-up, and turnkey solutions, like Google Container Engine, anyone can easily provision a new Kubernetes cluster, either within their own infrastructure or in their own cloud accounts.
+
+To give us maximum flexibility for customers that deploy to bare metal and might not have any custom cloud integrations for Kubernetes yet, we decided to base our solution solely on facilities that are available in any Kubernetes installation and that don’t require any integration into the surrounding infrastructure (it will even run inside [Minikube](https://github.com/kubernetes/minikube)!). We don’t rely on Services of type LoadBalancer, primarily because enterprise IT is usually reluctant to open up ports to the public internet - and not every bare metal Kubernetes installation supports externally provisioned load balancers out of the box. To avoid these issues, we deploy a BitmovinAgent that runs inside the cluster and polls our API for new encoding jobs without requiring any network setup. This agent then uses the locally available Kubernetes credentials to start up, through the Kubernetes API, new deployments that run the encoders on the available hardware.
+
+Even without having a full cloud integration available, the consistent scheduling, health checking and monitoring we get from using the Kubernetes API really enabled us to focus on making the encoder work inside a container rather than spending precious engineering resources on integrating a bunch of different hypervisors, machine provisioners and monitoring systems.
+
+
+**Multi-Stage Canary Deployments**
+
+Our first encounters with the Kubernetes API were not for the On-Premise encoding product. Building our containerized encoding workflow on Kubernetes was a decision we made after seeing how easy and powerful the Kubernetes platform proved to be during the development and rollout of our Bitmovin API infrastructure. We migrated to Kubernetes around four months ago, and it has enabled us to provide rapid development iterations to our service while meeting our requirements of downtime-free deployments and a stable development-to-production pipeline. To achieve this, we came up with an architecture that runs almost a thousand containers and meets the following requirements we had laid out on day one:
+
+
+1. Zero downtime deployments for our customers
+2. Continuous deployment to production on each git mainline push
+3. High stability of deployed services for customers
+
+Obviously, #2 and #3 are at odds with each other: if each merged feature gets deployed to production right away, how can we ensure these releases are bug-free and don’t have adverse side effects for our customers?
+
+To resolve this apparent contradiction, we came up with a four-stage canary pipeline for each microservice, where we simultaneously deploy to production and keep changes away from customers until the new build has proven to work reliably and correctly in the production environment.
+
+Once a new build is pushed, we deploy it to an internal stage that’s only accessible to our internal tests and the integration test suite. Once the internal test suite passes, QA reports no issues, and we don’t detect any abnormal behavior, we push the new build to our free stage. This means that 5% of our free users get randomly assigned to the new build. After some time in this stage, the build gets promoted to the next stage, where 5% of our paid users are routed to it. Only once the build has successfully passed all three of these hurdles does it get deployed to the production tier, where it receives all traffic from our remaining users as well as our enterprise customers, who are not part of the paid bucket and never have their traffic routed to a canary track.
+
+
+
+This setup makes us a pretty big Kubernetes installation by default, since all of our canary tiers run at a minimum replication of 2. Since we are currently deploying around 30 microservices (and growing) to our clusters, that adds up to a minimum of 10 pods per service (8 application pods plus a minimum of 2 HAProxy pods that do the canary routing). In reality, our preferred standard configuration runs 2 internal, 4 free, 4 others, and 10 production pods alongside 4 HAProxy pods per service - around 700 pods in total. This also means that we are running at least 150 services that provide a static ClusterIP to their underlying microservice canary tier.
+
+A typical deployment looks like this:
+
+
+| Services (ClusterIP) | Deployments | #Pods |
+| --- | --- | --- |
+| account-service | account-service-haproxy | 4 |
+| account-service-internal | account-service-internal-v1.18.0 | 2 |
+| account-service-canary | account-service-canary-v1.17.0 | 4 |
+| account-service-paid | account-service-paid-v1.15.0 | 4 |
+| account-service-production | account-service-production-v1.15.0 | 10 |
+{: .post-table}
+
+An example service definition for the production track has the following label selectors:
+
+
+```
+apiVersion: v1
+kind: Service
+metadata:
+  name: account-service-production
+  labels:
+    app: account-service-production
+    tier: service
+    lb: private
+spec:
+  ports:
+  - port: 8080
+    name: http
+    targetPort: 8080
+    protocol: TCP
+  selector:
+    app: account-service
+    tier: service
+    track: production
+```
+
+
+
+In front of the Kubernetes services, load balancing the different canary versions of the service, lives a small cluster of HAProxy pods that get their haproxy.conf from a Kubernetes [ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configmap/) that looks something like this:
+
+
+
+```
+frontend http-in
+    bind *:80
+    log 127.0.0.1 local2 debug
+
+    acl traffic_internal hdr(X-Traffic-Group) -m str -i INTERNAL
+    acl traffic_free hdr(X-Traffic-Group) -m str -i FREE
+    acl traffic_enterprise hdr(X-Traffic-Group) -m str -i ENTERPRISE
+
+    use_backend internal if traffic_internal
+    use_backend canary if traffic_free
+    use_backend enterprise if traffic_enterprise
+
+    default_backend paid
+
+backend internal
+    balance roundrobin
+    server internal-lb user-resource-service-internal:8080 resolvers dns check inter 2000
+
+backend canary
+    balance roundrobin
+    server canary-lb user-resource-service-canary:8080 resolvers dns check inter 2000 weight 5
+    server production-lb user-resource-service-production:8080 resolvers dns check inter 2000 weight 95
+
+backend paid
+    balance roundrobin
+    server canary-paid-lb user-resource-service-paid:8080 resolvers dns check inter 2000 weight 5
+    server production-lb user-resource-service-production:8080 resolvers dns check inter 2000 weight 95
+
+backend enterprise
+    balance roundrobin
+    server production-lb user-resource-service-production:8080 resolvers dns check inter 2000 weight 100
+```
+
+
+
+Each HAProxy inspects the X-Traffic-Group header, which our API-Gateway assigns to every request to indicate which bucket of customers the request belongs to. Based on that header, the request is routed to either a canary deployment or the production deployment.
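+
+As a rough illustration (using the account-service names from the table above, and assuming the HAProxy service listens on its ClusterIP at port 80), a request can be steered onto a specific track from inside the cluster by setting the header manually:
+
+```
+# Simulate an internal test user; normally the API-Gateway sets this header.
+curl -H "X-Traffic-Group: INTERNAL" http://account-service/
+
+# Requests without the header fall through to default_backend paid.
+curl http://account-service/
+```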
+
+
+
+Obviously, at this scale, kubectl (while still our main day-to-day tool to work on the cluster) doesn’t really give us a good overview of whether everything is actually running as it’s supposed to, or what might be over- or under-replicated.
+
+
+
+Since we do blue/green deployments, we sometimes forget to shut down the old version after the new one comes up, so some services might run over-replicated, and finding these issues in a soup of 25 deployments listed in kubectl is not trivial, to say the least.
+
+So having a container orchestrator like Kubernetes that’s very API-driven was really a godsend for us, as it allowed us to write tools that take care of that.
+
+
+
+We built tools that either run directly off kubectl (e.g., bash scripts) or talk directly to the API and understand our special architecture to give us a quick overview of the system. These tools were mostly built in Go using the [client-go](https://github.com/kubernetes/client-go) library.
+
+
+
+One of these tools is worth highlighting, as it’s basically our only way to really see service health at a glance. It goes through all of our Kubernetes services that have the tier: service selector and checks that the accompanying HAProxy deployment is available and that all pods are running with 4 replicas. It also checks that the 4 services behind the HAProxies (internal, free, others, and production) have at least 2 endpoints running. If any of these conditions is not met, we immediately get notifications in Slack and by email.
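+
+Our actual tool is written in Go against client-go, but the core of the check can be sketched with kubectl and bash (assuming the labels and replica counts described above):
+
+```
+#!/bin/bash
+# For every service labeled tier=service, warn if fewer than 2 endpoints are ready.
+for svc in $(kubectl get svc -l tier=service -o jsonpath='{.items[*].metadata.name}'); do
+  ready=$(kubectl get endpoints "$svc" -o jsonpath='{.subsets[*].addresses[*].ip}' | wc -w)
+  if [ "$ready" -lt 2 ]; then
+    echo "WARNING: $svc has only $ready ready endpoints"
+  fi
+done
+```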
+
+
+
+Managing this many pods with our previous orchestrator proved very unreliable and the overlay network frequently caused issues. Not so with Kubernetes - even doubling our current workload for test purposes worked flawlessly and in general, the cluster has been working like clockwork ever since we installed it.
+
+
+
+Another advantage of switching over to Kubernetes was the availability of the Kubernetes resource specifications, in addition to the API (which we used to write some internal tools for deployment). This enabled us to keep a Git repo with all of our Kubernetes specifications, where each track is generated off a common template and only contains placeholders for variable things like the canary track and the names.
+
+
+
+All changes to the cluster have to go through tools that modify these resource specifications and get checked into git automatically, so whenever we see issues, we can debug what changes the infrastructure went through over time!
+
+
+
+To summarize this post: by migrating our infrastructure to Kubernetes, Bitmovin is able to have:
+
+- Zero downtime deployments, allowing our customers to encode 24/7 without interruption
+- Fast development to production cycles, enabling us to ship new features faster
+- Multiple levels of quality assurance and high confidence in production deployments
+- Ubiquitous abstractions across cloud architectures and on-premise deployments
+- Stable and reliable health-checking and scheduling of services
+- Custom tooling around our infrastructure to check and validate the system
+- History of deployments (resource specifications in git + custom tooling)
+
+
+We want to thank the Kubernetes community for the incredible job they have done with the project. The velocity at which the project moves is just breathtaking! Maintaining such a high level of quality and robustness in such a diverse environment is really astonishing.
+
+
+
+_--Daniel Hoelbling-Inzko, Infrastructure Architect, Bitmovin_
+
+
+
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Join the community portal for advocates on [K8sPort](http://k8sport.org/)
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- [Download](http://get.k8s.io/) Kubernetes
diff --git a/blog/_posts/2017-04-00-Rbac-Support-In-Kubernetes.md b/blog/_posts/2017-04-00-Rbac-Support-In-Kubernetes.md
new file mode 100644
index 00000000000..a0dbbc958af
--- /dev/null
+++ b/blog/_posts/2017-04-00-Rbac-Support-In-Kubernetes.md
@@ -0,0 +1,134 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " RBAC Support in Kubernetes "
+date: Friday, April 06, 2017
+pagination:
+ enabled: true
+---
+_Editor’s note: this post is part of a [series of in-depth articles](http://blog.kubernetes.io/2017/03/five-days-of-kubernetes-1.6.html) on what's new in Kubernetes 1.6_
+
+
+One of the highlights of the [Kubernetes 1.6](http://blog.kubernetes.io/2017/03/kubernetes-1.6-multi-user-multi-workloads-at-scale.html) release is the RBAC authorizer feature moving to _beta_. RBAC, Role-Based Access Control, is an authorization mechanism for managing permissions around Kubernetes resources. RBAC allows configuration of flexible authorization policies that can be updated without cluster restarts.
+
+The focus of this post is to highlight some of the interesting new capabilities and best practices.
+
+**RBAC vs ABAC**
+
+Currently there are several [authorization mechanisms](https://kubernetes.io/docs/admin/authorization/) available for use with Kubernetes. Authorizers are the mechanisms that decide who is permitted to make what changes to the cluster using the Kubernetes API. This affects things like kubectl, system components, and certain applications that run in the cluster and manipulate its state, like Jenkins with the Kubernetes plugin, or [Helm](https://github.com/kubernetes/helm), which runs in the cluster and uses the Kubernetes API to install applications. Out of the available authorization mechanisms, ABAC and RBAC are the ones local to a Kubernetes cluster that allow configurable permissions policies.
+
+ABAC, Attribute-Based Access Control, is a powerful concept. However, as implemented in Kubernetes, ABAC is difficult to manage and understand. It requires ssh and root filesystem access on the master VM of the cluster to make authorization policy changes. For permission changes to take effect, the cluster API server must be restarted.
+
+RBAC permission policies are configured using kubectl or the Kubernetes API directly. Users can be authorized to make authorization policy changes using RBAC itself, making it possible to delegate resource management without giving away ssh access to the cluster master. RBAC policies map easily to the resources and operations used in the Kubernetes API.
+
+Based on where the Kubernetes community is focusing its development efforts, RBAC should be preferred over ABAC going forward.
+
+**Basic Concepts**
+
+There are a few basic ideas behind RBAC that are foundational to understanding it. At its core, RBAC is a way of granting users granular access to [Kubernetes API resources](https://kubernetes.io/docs/api-reference/v1.6/).
+
+
+![RBAC overview](https://1.bp.blogspot.com/-v6KLs1tT_xI/WOa0anGP4sI/AAAAAAAABBo/KIgYfp8PjusuykUVTfgu9-2uKj_wXo4lwCLcB/s1600/rbac1.png)
+
+
+
+The connection between users and resources is defined in RBAC using two objects.
+
+**Roles**
+A Role is a collection of permissions. For example, a role could be defined to include read permission on pods and list permission for pods. A ClusterRole is just like a Role, but can be used anywhere in the cluster.
+
+**Role Bindings**
+A RoleBinding maps a Role to a user or set of users, granting that Role's permissions to those users for resources in that namespace. A ClusterRoleBinding allows users to be granted a ClusterRole for authorization across the entire cluster.
+
+
+![Roles and RoleBindings](https://1.bp.blogspot.com/-ixDe91-cnqw/WOa0auxC0mI/AAAAAAAABBs/4LxVsr6shEgTYqUapt5QPISUeuTuztVwwCEw/s1600/rbac2.png)
+
+
+Additionally there are cluster roles and cluster role bindings to consider. Cluster roles and cluster role bindings function like roles and role bindings except they have wider scope. The exact differences and how cluster roles and cluster role bindings interact with roles and role bindings are covered in the [Kubernetes documentation](https://kubernetes.io/docs/admin/authorization/rbac/#rolebinding-and-clusterrolebinding).
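+
+To make the two objects concrete, here is a minimal, hedged example using the 1.6-era v1beta1 API: a Role granting read access to pods in the default namespace, and a RoleBinding granting it to a user (the user name is illustrative):
+
+```
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1beta1
+metadata:
+  namespace: default
+  name: pod-reader
+rules:
+- apiGroups: [""]
+  resources: ["pods"]
+  verbs: ["get", "list"]
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1beta1
+metadata:
+  name: read-pods
+  namespace: default
+subjects:
+- kind: User
+  name: jane # illustrative user
+  apiGroup: rbac.authorization.k8s.io
+roleRef:
+  kind: Role
+  name: pod-reader
+  apiGroup: rbac.authorization.k8s.io
+```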
+
+**RBAC in Kubernetes**
+
+RBAC is now deeply integrated into Kubernetes and used by the system components to grant the permissions necessary for them to function. [System roles](https://kubernetes.io/docs/admin/authorization/rbac/#default-roles-and-role-bindings) are typically prefixed with system: so they can be easily recognized.
+
+
+```
+➜ kubectl get clusterroles --namespace=kube-system
+NAME                                        KIND
+admin                                       ClusterRole.v1beta1.rbac.authorization.k8s.io
+cluster-admin                               ClusterRole.v1beta1.rbac.authorization.k8s.io
+edit                                        ClusterRole.v1beta1.rbac.authorization.k8s.io
+kubelet-api-admin                           ClusterRole.v1beta1.rbac.authorization.k8s.io
+system:auth-delegator                       ClusterRole.v1beta1.rbac.authorization.k8s.io
+system:basic-user                           ClusterRole.v1beta1.rbac.authorization.k8s.io
+system:controller:attachdetach-controller   ClusterRole.v1beta1.rbac.authorization.k8s.io
+system:controller:certificate-controller    ClusterRole.v1beta1.rbac.authorization.k8s.io
+...
+```
+
+
+The RBAC system roles have been expanded to cover the necessary permissions for running a Kubernetes cluster with RBAC only.
+
+During the permission translation from ABAC to RBAC, some of the permissions that were enabled by default in many deployments of ABAC-authorized clusters were identified as unnecessarily broad and were [scoped down](https://kubernetes.io/docs/admin/authorization/rbac/#upgrading-from-15) in RBAC. The area most likely to impact workloads on a cluster is the permissions available to service accounts. With the permissive ABAC configuration, requests from a pod using the pod-mounted token to authenticate to the API server have broad authorization. As a concrete example, the curl command at the end of this sequence will return a JSON-formatted result when ABAC is enabled and an error when only RBAC is enabled.
+
+
+```
+➜ kubectl run nginx --image=nginx:latest
+➜ kubectl exec -it $(kubectl get pods -o jsonpath='{.items[0].metadata.name}') bash
+➜ apt-get update && apt-get install -y curl
+➜ curl -ik \
+    -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
+    https://kubernetes/api/v1/namespaces/default/pods
+```
+
+
+Any applications you run in your Kubernetes cluster that interact with the Kubernetes API have the potential to be affected by the permissions changes when transitioning from ABAC to RBAC.
+
+To smooth the transition from ABAC to RBAC, you can create Kubernetes 1.6 clusters with both [ABAC and RBAC authorizers](https://kubernetes.io/docs/admin/authorization/rbac/#parallel-authorizers) enabled. When both ABAC and RBAC are enabled, authorization for a resource is granted if either authorization policy grants access. However, under that configuration the most permissive authorizer is used and it will not be possible to use RBAC to fully control permissions.
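+
+Enabling the parallel authorizers is an API server configuration change; as a sketch, the relevant 1.6-era kube-apiserver flags look like this (the ABAC policy file path is an assumption):
+
+```
+# Illustrative 1.6-era flags; the ABAC policy file path is hypothetical.
+kube-apiserver \
+  --authorization-mode=RBAC,ABAC \
+  --authorization-policy-file=/etc/kubernetes/abac-policy.jsonl
+```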
+
+At this point, RBAC is complete enough that ABAC support should be considered deprecated going forward. It will still remain in Kubernetes for the foreseeable future but development attention is focused on RBAC.
+
+
+
+Two different talks at the Google Cloud Next conference touched on RBAC-related changes in Kubernetes 1.6; jump to the relevant parts [here](https://www.youtube.com/watch?v=Cd4JU7qzYbE#t=8m01s) and [here](https://www.youtube.com/watch?v=18P7cFc6nTU#t=41m06s). For more detailed information about using RBAC in Kubernetes 1.6, read the full [RBAC documentation](https://kubernetes.io/docs/admin/authorization/rbac/).
+
+
+**Get Involved**
+
+If you’d like to contribute, or simply want to provide feedback and help drive the roadmap, [join our community](https://github.com/kubernetes/community#kubernetes-community). If you're specifically interested in security- and RBAC-related conversations, participate through one of these channels:
+
+- Chat with us on the Kubernetes [Slack sig-auth channel](https://kubernetes.slack.com/messages/sig-auth/)
+- Join the biweekly [SIG-Auth meetings](https://github.com/kubernetes/community/blob/master/sig-auth/README.md) on Wednesday at 11:00 AM PT
+
+Thanks for your support and contributions. Read more in-depth posts on what's new in Kubernetes 1.6 [here](http://blog.kubernetes.io/2017/03/five-days-of-kubernetes-1.6.html).
+
+
+
+
+
+_-- Jacob Simpson, Greg Castle & CJ Cullen, Software Engineers at Google_
+
+
+
+
+
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Join the community portal for advocates on [K8sPort](http://k8sport.org/)
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- [Download](http://get.k8s.io/) Kubernetes
diff --git a/blog/_posts/2017-05-00-Draft-Kubernetes-Container-Development.md b/blog/_posts/2017-05-00-Draft-Kubernetes-Container-Development.md
new file mode 100644
index 00000000000..c2564ef32dd
--- /dev/null
+++ b/blog/_posts/2017-05-00-Draft-Kubernetes-Container-Development.md
@@ -0,0 +1,200 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Draft: Kubernetes container development made easy "
+date: Thursday, May 31, 2017
+pagination:
+ enabled: true
+---
+_Today's post is by Brendan Burns, Director of Engineering at Microsoft Azure and Kubernetes co-founder._
+
+About a month ago Microsoft announced the acquisition of Deis to expand our expertise in containers and Kubernetes. Today, I’m excited to announce a new open source project derived from this newly expanded Azure team: Draft.
+
+While the strengths of Kubernetes for deploying and managing applications at scale are by now well understood, the process of developing a new application for Kubernetes is still too hard. It’s harder still if you are new to containers, Kubernetes, or developing cloud applications.
+
+Draft fills this role. As its name implies, it is a tool that helps you begin that first draft of a containerized application running in Kubernetes. When you first run the draft tool, it automatically discovers the code that you are working on and builds out the scaffolding to support containerizing your application. Using heuristics and a variety of pre-defined project templates, draft will create an initial Dockerfile to containerize your application, as well as a Helm Chart to enable your application to be deployed and maintained in a Kubernetes cluster. Teams can even bring their own draft project templates to customize the scaffolding that the tool builds.
+
+But the value of draft extends beyond simply scaffolding in some files to help you create your application. Draft also deploys a server into your existing Kubernetes cluster that is automatically kept in sync with the code on your laptop. Whenever you make changes to your application, the draft daemon on your laptop synchronizes that code with the draft server in Kubernetes and a new container is built and deployed automatically without any user action required. Draft enables the “inner loop” development experience for the cloud.
+
+Of course, as is the expectation with all infrastructure software today, Draft is available as an open source project, and it is itself in “draft” form :) We eagerly invite the community to come play around with draft today - we think it’s pretty awesome, even in this early form. And we’re especially excited to see how we can develop a community around draft to make it even more powerful for all developers of containerized applications on Kubernetes.
+
+To give you a sense for what Draft can do, here is an example drawn from the [Getting Started](https://github.com/Azure/draft/blob/master/docs/getting-started.md) page in the [GitHub repository](https://github.com/Azure/draft).
+
+There are multiple example applications included within the [examples directory](https://github.com/Azure/draft/blob/master/examples). For this walkthrough, we'll be using the [python example application](https://github.com/Azure/draft/blob/master/examples/python) which uses [Flask](http://flask.pocoo.org/) to provide a very simple Hello World webserver.
+
+
+ ```
+$ cd examples/python
+ ```
+
+
+**Draft Create**
+
+We need some "scaffolding" to deploy our app into a [Kubernetes](https://kubernetes.io/) cluster. Draft can create a [Helm](https://github.com/kubernetes/helm) chart, a Dockerfile and a draft.toml with draft create:
+
+
+```
+$ draft create
+--> Python app detected
+--> Ready to sail
+$ ls
+Dockerfile  app.py  chart/  draft.toml  requirements.txt
+```
+
+
+The chart/ and Dockerfile assets created by Draft default to a basic Python configuration. This Dockerfile harnesses the [python:onbuild image](https://hub.docker.com/_/python/), which will install the dependencies in requirements.txt and copy the current directory into /usr/src/app. And to align with the service values in chart/values.yaml, this Dockerfile exposes port 80 from the container.
+
+The draft.toml file contains basic configuration about the application like the name, which namespace it will be deployed to, and whether to deploy the app automatically when local files change.
+
+
+```
+$ cat draft.toml
+[environments]
+  [environments.development]
+  name = "tufted-lamb"
+  namespace = "default"
+  watch = true
+  watch_delay = 2
+```
+
+
+
+**Draft Up**
+
+
+
+Now we're ready to deploy app.py to a Kubernetes cluster.
+
+Draft handles these tasks with one draft up command:
+
+- reads configuration from draft.toml
+- compresses the chart/ directory and the application directory as two separate tarballs
+- uploads the tarballs to draftd, the server-side component
+- draftd then builds the docker image and pushes the image to a registry
+- draftd instructs helm to install the Helm chart, referencing the Docker registry image just built
+
+With the watch option set to true, we can let this run in the background while we make changes later on…
+
+
+
+```
+$ draft up
+--> Building Dockerfile
+Step 1 : FROM python:onbuild
+onbuild: Pulling from library/python
+...
+Successfully built 38f35b50162c
+--> Pushing docker.io/microsoft/tufted-lamb:5a3c633ae76c9bdb81b55f5d4a783398bf00658e
+The push refers to a repository [docker.io/microsoft/tufted-lamb]
+...
+5a3c633ae76c9bdb81b55f5d4a783398bf00658e: digest: sha256:9d9e9fdb8ee3139dd77a110fa2d2b87573c3ff5ec9c045db6009009d1c9ebf5b size: 16384
+--> Deploying to Kubernetes
+  Release "tufted-lamb" does not exist. Installing it now.
+--> Status: DEPLOYED
+--> Notes:
+  1. Get the application URL by running these commands:
+    NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+    You can watch the status of by running 'kubectl get svc -w tufted-lamb-tufted-lamb'
+    export SERVICE_IP=$(kubectl get svc --namespace default tufted-lamb-tufted-lamb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+    echo http://$SERVICE_IP:80
+
+Watching local files for changes...
+```
+
+
+
+**Interact with the Deployed App**
+
+
+
+Using the handy output that follows successful deployment, we can now contact our app. Note that it may take a few minutes before the load balancer is provisioned by Kubernetes. Be patient!
+
+
+
+```
+$ export SERVICE_IP=$(kubectl get svc --namespace default tufted-lamb-tufted-lamb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+$ curl http://$SERVICE_IP
+```
+
+
+
+When we curl our app, we see our app in action! A beautiful "Hello World!" greets us.
+
+
+
+**Update the App**
+
+
+
+Now, let's change the "Hello, World!" output in app.py to output "Hello, Draft!" instead:
+
+
+
+```
+$ cat <<EOF > app.py
+from flask import Flask
+
+app = Flask(__name__)
+
+@app.route("/")
+def hello():
+    return "Hello, Draft!\n"
+
+if __name__ == "__main__":
+    app.run(host='0.0.0.0', port=8080)
+EOF
+```
+
+
+
+**Draft Up(grade)**
+
+
+
+Now if we watch the terminal that we initially called draft up with, Draft will notice that there were changes made locally and call draft up again. Draft then determines that the Helm release already exists and will perform a helm upgrade rather than attempting another helm install:
+
+
+
+```
+--> Building Dockerfile
+Step 1 : FROM python:onbuild
+...
+Successfully built 9c90b0445146
+--> Pushing docker.io/microsoft/tufted-lamb:f031eb675112e2c942369a10815850a0b8bf190e
+The push refers to a repository [docker.io/microsoft/tufted-lamb]
+...
+--> Deploying to Kubernetes
+--> Status: DEPLOYED
+--> Notes:
+  1. Get the application URL by running these commands:
+    NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+    You can watch the status of by running 'kubectl get svc -w tufted-lamb-tufted-lamb'
+    export SERVICE_IP=$(kubectl get svc --namespace default tufted-lamb-tufted-lamb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+    echo http://$SERVICE_IP:80
+```
+
+
+
+Now when we run curl http://$SERVICE\_IP, our first app has been deployed and updated to our Kubernetes cluster via Draft!
+
+We hope this gives you a sense for everything that Draft can do to streamline development for Kubernetes. Happy drafting!
+
+
+
+_--Brendan Burns, Director of Engineering, Microsoft Azure_
+
+
+
+
+
+
+
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Join the community portal for advocates on [K8sPort](http://k8sport.org/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
diff --git a/blog/_posts/2017-05-00-Kubernetes-Monitoring-Guide.md b/blog/_posts/2017-05-00-Kubernetes-Monitoring-Guide.md
new file mode 100644
index 00000000000..8aefc809ead
--- /dev/null
+++ b/blog/_posts/2017-05-00-Kubernetes-Monitoring-Guide.md
@@ -0,0 +1,90 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes: a monitoring guide "
+date: Saturday, May 19, 2017
+pagination:
+ enabled: true
+---
+_Today’s post is by Jean-Mathieu Saponaro, Research & Analytics Engineer at Datadog, discussing what Kubernetes changes for monitoring, and how you can prepare to properly monitor a containerized infrastructure orchestrated by Kubernetes._
+
+
+Container technologies are taking the infrastructure world by storm. While containers solve or simplify infrastructure management processes, they also introduce significant complexity in terms of orchestration. That’s where Kubernetes comes to our rescue. Just like a conductor directs an orchestra, [Kubernetes](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/) oversees our ensemble of containers—starting, stopping, creating, and destroying them automatically to keep our applications humming along.
+
+Kubernetes makes managing a containerized infrastructure much easier by creating levels of abstractions such as [pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/) and [services](https://kubernetes.io/docs/concepts/services-networking/service/). We no longer have to worry about where applications are running or if they have enough resources to work properly. But that doesn’t change the fact that, in order to ensure good performance, we need to monitor our applications, the containers running them, and Kubernetes itself.
+
+**Rethinking monitoring for the Kubernetes era**
+
+Just as containers have completely transformed how we think about running services on virtual machines, Kubernetes has changed the way we interact with containers. The good news is that with proper monitoring, the abstraction levels inherent to Kubernetes provide a comprehensive view of your infrastructure, even if the containers and applications are constantly moving. But Kubernetes monitoring requires us to rethink and reorient our strategies, since it differs from monitoring traditional hosts such as VMs or physical machines in several ways.
+
+**Tags and labels become essential**
+With containers and their orchestration completely managed by Kubernetes, labels are now the only way we have to interact with pods and containers. That’s why they are absolutely crucial for monitoring since all metrics and events will be sliced and diced using [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) across the different layers of your infrastructure. Defining your labels with a logical and easy-to-understand schema is essential so your metrics will be as useful as possible.
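+
+As a hedged example, a pod might carry a small, consistent label schema that every metric can then be sliced by (all names below are hypothetical):
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: checkout-worker
+  labels:
+    app: checkout       # what is running
+    tier: backend       # layer of the stack
+    environment: prod   # which environment
+    team: payments      # who owns it
+spec:
+  containers:
+  - name: checkout
+    image: example/checkout:1.4.2
+```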
+
+**There are now more components to monitor**
+
+![The four layers to monitor](https://lh5.googleusercontent.com/tN8tzKcXWAFWF0TD9u9UkTFJakHsrdjtRx56WiF75UYwMKu8teFyr6LpLGjpuOWSr52M-l3do5r3a6VWi6VwhRWuaquCpGty8ksI585D9YuCL3t7DAcItJUwW6mlrM2jUw_jVq6A)
+In traditional, host-centric infrastructure, we were used to monitoring only two layers: applications and the hosts running them. Now with containers in the middle and Kubernetes itself needing to be monitored, there are four different components to monitor and collect metrics from.
+
+**Applications are constantly moving**
+Kubernetes schedules applications dynamically based on scheduling policy, so you don’t always know where applications are running. But they still need to be monitored. That’s why using a monitoring system or tool with service discovery is a must. It will automatically adapt metric collection to moving containers so applications can be continuously monitored without interruption.
+
+
+**Be prepared for distributed clusters**
+
+Kubernetes has the [ability](https://kubernetes.io/docs/tasks/federation/federation-service-discovery/#hybrid-cloud-capabilities) to distribute containerized applications across multiple data centers and potentially different cloud providers. That means metrics must be collected and aggregated among all these different sources.
+
+
+
+For more details about all these new monitoring challenges inherent to Kubernetes and how to overcome them, we recently published an [in-depth Kubernetes monitoring guide](https://www.datadoghq.com/blog/monitoring-kubernetes-era/). Part 1 of the series covers how to adapt your monitoring strategies to the Kubernetes era.
+
+
+
+**Metrics to monitor**
+
+
+
+Whether you use [Heapster](https://github.com/kubernetes/heapster) data or a monitoring tool integrating with Kubernetes and its different APIs, there are several key types of metrics that need to be closely tracked:
+
+- **Running pods** and their **deployments**
+- Usual **resource metrics** such as CPU, memory usage, and disk I/O
+- **Container-native [metrics](https://www.datadoghq.com/blog/monitoring-kubernetes-performance-metrics/)**
+- Application metrics for which a service discovery feature in your monitoring tool is essential
+
+All these metrics should be aggregated using Kubernetes labels and correlated with events from Kubernetes and container technologies.
+
+
+
+[Part 2](https://www.datadoghq.com/blog/monitoring-kubernetes-performance-metrics/) of our series on Kubernetes monitoring guides you through all the data that needs to be collected and tracked.
+
+
+
+**Collecting these metrics**
+
+
+
+Whether you want to track these key performance metrics by combining Heapster, a storage backend, and a graphing tool, or by integrating a monitoring tool with the different components of your infrastructure, [Part 3](https://www.datadoghq.com/blog/monitoring-kubernetes-with-datadog/), about Kubernetes metric collection, has you covered.
+
+
+
+**Anchors aweigh!**
+
+
+
+Using Kubernetes drastically simplifies container management. But it requires us to rethink our monitoring strategies on several fronts, and to make sure all the key metrics from the different components are properly collected, aggregated, and tracked. We hope our monitoring guide will help you to effectively monitor your Kubernetes clusters. [Feedback and suggestions](https://github.com/DataDog/the-monitor) are more than welcome.
+
+
+
+
+
+_--Jean-Mathieu Saponaro, Research & Analytics Engineer, Datadog_
+
+
+
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2017-05-00-Kubernetes-Security-Process-Explained.md b/blog/_posts/2017-05-00-Kubernetes-Security-Process-Explained.md
new file mode 100644
index 00000000000..b2bc7bc8937
--- /dev/null
+++ b/blog/_posts/2017-05-00-Kubernetes-Security-Process-Explained.md
@@ -0,0 +1,39 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Dancing at the Lip of a Volcano: The Kubernetes Security Process - Explained "
+date: Friday, May 18, 2017
+pagination:
+ enabled: true
+---
+_Editor's note: Today’s post is by Jess Frazelle of Google and Brandon Philips of CoreOS about the Kubernetes security disclosures and response policy._
+
+
+Software running on servers underpins ever-growing amounts of the world's commerce, communications, and physical infrastructure. And nearly all of these systems are connected to the internet, which means vital security updates must be applied rapidly. As software developers and IT professionals, we often find ourselves dancing on the edge of a volcano: we may either fall into magma-induced oblivion from a security vulnerability exploited before we can fix it, or we may slide off the side of the mountain because of an inadequate process to address security vulnerabilities.
+
+The Kubernetes community believes that we can help teams restore their footing on this volcano with a foundation built on Kubernetes. And the bedrock of this foundation requires a process for quickly acknowledging, patching, and releasing security updates to an ever growing community of Kubernetes users.
+
+With over 1,200 contributors and [over a million lines of code](https://www.openhub.net/p/kubernetes), each release of Kubernetes is a massive undertaking staffed by brave volunteer [release managers](https://github.com/kubernetes/community/wiki). These normal releases are fully transparent and the process happens in public. However, security releases must be handled differently to keep potential attackers in the dark until a fix is made available to users.
+
+We drew inspiration from other open source projects in order to create the [**Kubernetes security release process**](https://github.com/kubernetes/community/blob/master/contributors/devel/security-release-process.md). Unlike a regularly scheduled release, a security release must be delivered on an accelerated schedule, and we created the [Product Security Team](https://github.com/kubernetes/community/blob/master/contributors/devel/security-release-process.md#product-security-team-pst) to handle this process.
+
+This team quickly selects a lead to coordinate work and manage communication with the person or team that disclosed the vulnerability and with the Kubernetes community. The security release process also documents ways to measure vulnerability severity using the [Common Vulnerability Scoring System (CVSS) Version 3.0 Calculator](https://www.first.org/cvss/calculator/3.0). This calculation helps inform decisions on release cadence in the face of holidays or limited developer bandwidth. By making severity criteria transparent, we are able to better set expectations and hit critical timelines during an incident, where we strive to:
+
+- Respond to the person or team who reported the vulnerability and staff a development team responsible for a fix within 24 hours
+- Disclose a forthcoming fix to users within 7 days of disclosure
+- Provide advance notice to vendors within 14 days of disclosure
+- Release a fix within 21 days of disclosure
+
+As we [continue to harden Kubernetes](https://lwn.net/Articles/720215/), the security release process will help ensure that Kubernetes remains a secure platform for internet scale computing. If you are interested in learning more about the security release process please watch the presentation from KubeCon Europe 2017 [on YouTube](https://www.youtube.com/watch?v=sNjylW8FV9A) and follow along with the [slides](https://speakerdeck.com/philips/kubecon-eu-2017-dancing-on-the-edge-of-a-volcano). If you are interested in learning more about authentication and authorization in Kubernetes, along with the Kubernetes cluster security model, consider joining [Kubernetes SIG Auth](https://github.com/kubernetes/community/blob/master/sig-auth/README.md). We also hope to see you at security related presentations and panels at the next Kubernetes community event: [CoreOS Fest 2017 in San Francisco on May 31 and June 1](https://coreos.com/fest/).
+
+As a thank-you to the Kubernetes community, a special 25 percent discount to CoreOS Fest is available using the code k8s25code, or register for CoreOS Fest 2017 today via this special [25 percent off link](https://coreosfest17.eventbrite.com/?discount=k8s25code).
+
+_--Brandon Philips of CoreOS and Jess Frazelle of Google_
+
+
+
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Join the community portal for advocates on [K8sPort](http://k8sport.org/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
diff --git a/blog/_posts/2017-05-00-Kubespray-Ansible-Collaborative-Kubernetes-Ops.md b/blog/_posts/2017-05-00-Kubespray-Ansible-Collaborative-Kubernetes-Ops.md
new file mode 100644
index 00000000000..679daa692b5
--- /dev/null
+++ b/blog/_posts/2017-05-00-Kubespray-Ansible-Collaborative-Kubernetes-Ops.md
@@ -0,0 +1,125 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubespray Ansible Playbooks foster Collaborative Kubernetes Ops "
+date: Saturday, May 19, 2017
+pagination:
+ enabled: true
+---
+_Today’s guest post is by Rob Hirschfeld, co-founder of the open infrastructure automation project Digital Rebar and co-chair of SIG Cluster Ops._
+
+**Why Kubespray?**
+
+Making Kubernetes operationally strong is a widely held priority and I track many deployment efforts around the project. The [incubated Kubespray project](https://github.com/kubernetes-incubator/kubespray) is of particular interest for me because it uses the popular Ansible toolset to build robust, upgradable clusters on both cloud and physical targets. I believe using tools familiar to operators grows our community.
+
+We’re excited to see the breadth of platforms enabled by Kubespray and how well it handles a wide range of options like integrating Ceph for [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) persistence and Helm for easier application uploads. Those additions have allowed us to fully integrate the [OpenStack Helm charts](https://github.com/att-comdev/openstack-helm) ([demo video](https://www.youtube.com/watch?v=wZ0vMrdx4a4&list=PLXPBeIrpXjfjabMbwYyDULOX3kZmlxEXK&index=2)).
+
+By working with the upstream source instead of creating different install scripts, we get the benefits of a larger community. This requires some extra development effort; however, we believe helping share operational practices makes the whole community stronger. That was also the motivation behind the [SIG-Cluster Ops](https://github.com/kubernetes/community/tree/master/sig-cluster-ops).
+
+**With Kubespray delivering robust installs, we can focus on broader operational concerns.**
+
+For example, we can now drive parallel deployments, so it’s possible to fully exercise the options enabled by Kubespray simultaneously for development and testing.
+
+That’s helpful for running build-test-destroy cycles of coordinated Kubernetes installs on CentOS, Red Hat, and Ubuntu as part of an automation pipeline. We can also set up a full classroom environment from a single command using [Digital Rebar’s](https://github.com/digitalrebar/digitalrebar) providers, tenants, and cluster definition JSON.
+
+**Let’s explore the classroom example:**
+
+First, we define a [student cluster in JSON](https://github.com/digitalrebar/digitalrebar/blob/master/deploy/workloads/cluster/deploy-001.json) like the snippet below:
+
+
+```
+{
+  "attribs": {
+    "k8s-version": "v1.6.0",
+    "k8s-kube_network_plugin": "calico",
+    "k8s-docker_version": "1.12"
+  },
+  "name": "cluster01",
+  "tenant": "cluster01",
+  "public_keys": {
+    "cluster01": "ssh-rsa AAAAB..... user@example.com"
+  },
+  "provider": {
+    "name": "google-provider"
+  },
+  "nodes": [
+    {
+      "roles": ["etcd", "k8s-addons", "k8s-master"],
+      "count": 1
+    },
+    {
+      "roles": ["k8s-worker"],
+      "count": 3
+    }
+  ]
+}
+```
+
+
+
+Then we run the [Digital Rebar workloads multideploy.sh](https://github.com/digitalrebar/digitalrebar/blob/master/deploy/workloads/multideploy.sh) reference script, which inspects the deployment files to pull out key information. Basically, it automates the following steps:
+
+
+
+
+```
+rebar provider create {"name": "google-provider", [secret stuff]}
+rebar tenants create {"name": "cluster01"}
+rebar deployments create [contents from cluster01 file]
+```
+
+
+
+The deployments create command will automatically request nodes from the provider. Since we’re using tenants and SSH key additions, each student only gets access to their own cluster. When we’re done, adding the --destroy flag will reverse the process for the nodes and deployments but leave the providers and tenants.
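+
+A hypothetical end-to-end run might look like this (the positional file argument is an assumption; only the --destroy flag is described above):
+
+```
+# Stand up the student clusters defined in the JSON file (assumed invocation):
+./multideploy.sh deploy-001.json
+
+# Later, tear down nodes and deployments while keeping providers and tenants:
+./multideploy.sh --destroy deploy-001.json
+```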
+
+**We are invested in operational scripts like this example using Kubespray and Digital Rebar because, if we cannot manage variation in a consistent way, then we’re doomed to operational fragmentation.**
+
+I am excited to see, and to be part of, the community’s progress toward enterprise-ready Kubernetes operations on both cloud and on-premises infrastructure. We are seeing reasonable patterns emerge, with sharable/reusable automation. I strongly recommend watching (or better, collaborating in) these efforts if you are deploying Kubernetes, even at experimental scale. Being part of the community requires more upfront effort, but it returns dividends as you get the benefits of shared experience and improvement.
+
+**When deploying at scale, how do you set up a system to be both repeatable and multi-platform without compromising scale or security?**
+
+With Kubespray and Digital Rebar as a repeatable base, extensions get much faster and easier. Even better, using upstream directly allows improvements to be quickly cycled back into upstream. That means we’re closer to building a community focused on the operational side of Kubernetes with an [SRE mindset](https://rackn.com/sre).
+
+If this is interesting, please engage with us in the [Cluster Ops SIG](https://github.com/kubernetes/community/tree/master/sig-cluster-ops), [Kubespray](https://github.com/kubernetes-incubator/kubespray) or [Digital Rebar](http://rebar.digital/) communities.
+
+
+_-- Rob Hirschfeld, co-founder of RackN and co-chair of the Cluster Ops SIG_
+
+
+
+
+
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2017-05-00-Managing-Microservices-With-Istio-Service-Mesh.md b/blog/_posts/2017-05-00-Managing-Microservices-With-Istio-Service-Mesh.md
new file mode 100644
index 00000000000..3defa658857
--- /dev/null
+++ b/blog/_posts/2017-05-00-Managing-Microservices-With-Istio-Service-Mesh.md
@@ -0,0 +1,304 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Managing microservices with the Istio service mesh "
+date: Thursday, May 31, 2017
+pagination:
+ enabled: true
+---
+
+_Today’s post is by the Istio team showing how you can get visibility, resiliency, security and control for your microservices in Kubernetes._
+
+Services are at the core of modern software architecture. Deploying a series of modular, small (micro-)services rather than big monoliths gives developers the flexibility to work in different languages and technologies, and to release at different cadences across the system, resulting in higher productivity and velocity, especially for larger teams.
+
+With the adoption of microservices, however, new problems emerge due to the sheer number of services that exist in a larger system. Problems that had to be solved once for a monolith, like security, load balancing, monitoring, and rate limiting, now need to be handled for each service.
+
+**Kubernetes and Services**
+
+Kubernetes supports a microservices architecture through the [Service](https://kubernetes.io/docs/concepts/services-networking/service/) construct. It allows developers to abstract away the functionality of a set of [Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/) and expose it to other developers through a well-defined API. It allows adding a name to this level of abstraction and performing rudimentary L4 load balancing. But it doesn’t help with higher-level problems, such as L7 metrics, traffic splitting, rate limiting, circuit breaking, etc.
+
+[Istio](https://istio.io/), announced last week at GlueCon 2017, addresses these problems in a fundamental way through a service mesh framework. With Istio, developers can implement the core logic for the microservices, and let the framework take care of the rest – traffic management, discovery, service identity and security, and policy enforcement. Better yet, this can be also done for existing microservices without rewriting or recompiling any of their parts. Istio uses [Envoy](https://lyft.github.io/envoy/) as its runtime proxy component and provides an [extensible intermediation layer](https://istio.io/docs/concepts/policy-and-control/mixer.html) which allows global cross-cutting policy enforcement and telemetry collection.
+
+The current release of Istio is targeted to Kubernetes users and is packaged in a way that you can install in a few lines and get visibility, resiliency, security and control for your microservices in Kubernetes out of the box.
+
+In a series of blog posts, we'll look at a simple application that is composed of 4 separate microservices. We'll start by looking at how the application can be deployed using plain Kubernetes. We'll then deploy the exact same services into an Istio-enabled cluster without changing any of the application code -- and see how we can observe metrics.
+
+In subsequent posts, we’ll focus on more advanced capabilities such as HTTP request routing, policy, identity and security management.
+
+**Example Application: BookInfo**
+
+We will use a simple application called BookInfo, which displays information, reviews, and ratings for books in a store. The application is composed of four microservices written in different languages: **productpage** (Python), **details** (Ruby), **reviews** (Java), and **ratings** (Node.js).
+
+Since the container images for these microservices can all be found in Docker Hub, all we need to deploy this application in Kubernetes are the yaml configurations.
+
+It’s worth noting that these services have no dependencies on Kubernetes or Istio, but they make an interesting case study. In particular, the multitude of services, languages, and versions of the reviews service makes it an interesting service mesh example. More information about this example can be found [here](https://istio.io/docs/samples/bookinfo.html).
+
+
+
+**Running the Bookinfo Application in Kubernetes**
+
+In this post we’ll focus on the v1 version of the app:
+
+
+
+
+Deploying it with Kubernetes is straightforward, no different from deploying any other services. The Service and Deployment resources for the **productpage** microservice look like this:
+
+
+
+```
+apiVersion: v1
+kind: Service
+metadata:
+  name: productpage
+  labels:
+    app: productpage
+spec:
+  type: NodePort
+  ports:
+  - port: 9080
+    name: http
+  selector:
+    app: productpage
+---
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: productpage-v1
+spec:
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        app: productpage
+        track: stable
+    spec:
+      containers:
+      - name: productpage
+        image: istio/examples-bookinfo-productpage-v1
+        imagePullPolicy: IfNotPresent
+        ports:
+        - containerPort: 9080
+```
+
+
+
+
+The other two services that we will need to deploy if we want to run the app are **details** and **reviews-v1**. We don’t need to deploy the **ratings** service at this time because v1 of the reviews service doesn’t use it. The remaining services follow essentially the same pattern as **productpage**. The yaml files for all services can be found [here](https://raw.githubusercontent.com/istio/istio/master/samples/apps/bookinfo/bookinfo-v1.yaml).
+
+
+
+To run the services as an ordinary Kubernetes app:
+
+ ```
+kubectl apply -f bookinfo-v1.yaml
+ ```
+
+
+
+To access the application from outside the cluster we’ll need the NodePort address of the **productpage** service:
+
+
+
+ ```
+export BOOKINFO_URL=$(kubectl get po -l app=productpage -o jsonpath={.items[0].status.hostIP}):$(kubectl get svc productpage -o jsonpath={.spec.ports[0].nodePort})
+ ```
+
+
+
+We can now point the browser to http://$BOOKINFO_URL/productpage, and see:
+
+
+
+
+
+
+
+**Running the Bookinfo Application with Istio**
+
+Now that we’ve seen the app, we’ll adjust our deployment slightly to make it work with Istio. We first need to [install Istio](https://istio.io/docs/tasks/installing-istio.html) in our cluster. To see all of the metrics and tracing features in action, we also install the optional Prometheus, Grafana, and Zipkin addons. We can now delete the previous app and start the Bookinfo app again using the exact same yaml file, this time with Istio:
+
+
+
+ ```
+kubectl delete -f bookinfo-v1.yaml
+kubectl apply -f <(istioctl kube-inject -f bookinfo-v1.yaml)
+ ```
+
+
+
+Notice that this time we use the istioctl kube-inject command to modify bookinfo-v1.yaml before creating the deployments. It injects the Envoy sidecar into the Kubernetes pods, as documented [here](https://istio.io/docs/reference/commands/istioctl.html#istioctl-kube-inject). Consequently, all of the microservices are packaged with an Envoy sidecar that manages incoming and outgoing traffic for the service.
+
+
+In the Istio service mesh, we do not want to access the application **productpage** directly, as we did in plain Kubernetes. Instead, we want an Envoy sidecar in the request path, so that we can use Istio’s management features (version routing, circuit breakers, policies, etc.) to control external calls to **productpage**, just as we can for internal requests. Istio’s Ingress controller is used for this purpose.
+
+
+To use the Istio Ingress controller, we need to create a Kubernetes [Ingress resource](https://raw.githubusercontent.com/istio/istio/master/samples/apps/bookinfo/bookinfo-ingress.yaml) for the app, annotated with kubernetes.io/ingress.class: "istio", like this:
+
+
+
+worker nodes. Create Kubernetes controllers for deploying the containers in worker nodes; the Armada infrastructure pulls the Docker images from the IBM Bluemix Docker registry to create the containers. We tried deploying an application container and running a logmet agent (see "Reading and displaying logs using the logmet container," below) inside the containers, which forwards the application logs to an IBM cloud logging service. As part of the process, YAML files are used to create a controller resource for UrbanCode Deploy (UCD). The UCD agent is deployed as a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) controller, which is used to connect to the UCD server. The whole application deployment process happens in UCD. To support public access to the application, we created a service resource to interact between pods and access container services. For storage support, we created persistent volume claims and mounted the volume for the containers.
+
+
+
+- UCD: IBM UrbanCode Deploy is a tool for automating application deployments through your environments.
+- Armada: IBM’s Kubernetes implementation.
+- WH Docker Registry: Docker private image registry.
+- Common agent containers: we expect to configure our services to use the WHC mandatory agents, which we deployed in all containers.
+
+
+
+**Reading and displaying logs using the logmet container**
+
+
+
+Logmet is a cloud logging service that helps collect, store, and analyze an application’s log data. It also aggregates application and environment logs for consolidated application or environment insights, and forwards them. Metrics are transmitted with collectd. We chose a model that runs a logmet agent process inside the container; the agent takes care of forwarding the logs to the cloud logging service configured for the containers.
+
+
+
+The application pod mounts the application logging directory onto storage space created through a persistent volume claim, so the logs are not lost even when the pod dies. Kibana is an open source data visualization plugin for Elasticsearch; it provides visualization capabilities on top of the content indexed in an Elasticsearch cluster.
+
+
+
+
+**Exposing services with Ingress**
+
+
+
+[Ingress controllers](https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers) are reverse proxies that expose services outside the cluster through URLs. They act as an external HTTP load balancer that uses a unique public entry point to route requests to the application.
+
+
+
+To expose our services outside the cluster, we used Ingress. In Armada, if we create a paid cluster, an Ingress controller is automatically installed for us to use. We were able to access services through Ingress by creating a YAML resource file that specifies the service path.
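+
+A minimal sketch of such a resource (name, path, and backend are hypothetical):
+
+```
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+  name: app-ingress                # hypothetical name
+spec:
+  rules:
+  - http:
+      paths:
+      - path: /app                 # the service path routed by the Ingress controller
+        backend:
+          serviceName: app-service # hypothetical Service for the pods
+          servicePort: 80
+```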
+
+
+
+–Sandhya Kapoor, Senior Technologist, Watson Platform for Health, IBM
+
+
+
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Join the community portal for advocates on [K8sPort](http://k8sport.org/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
diff --git a/blog/_posts/2017-08-00-High-Performance-Networking-With-Ec2.md b/blog/_posts/2017-08-00-High-Performance-Networking-With-Ec2.md
new file mode 100644
index 00000000000..35a4a3e9431
--- /dev/null
+++ b/blog/_posts/2017-08-00-High-Performance-Networking-With-Ec2.md
@@ -0,0 +1,82 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " High Performance Networking with EC2 Virtual Private Clouds "
+date: Saturday, August 11, 2017
+pagination:
+ enabled: true
+---
+
+
+One of the most popular platforms for running Kubernetes is Amazon Web Services’ Elastic Compute Cloud (AWS EC2). With more than a decade of experience delivering IaaS, and expanding over time to include a rich set of services with easy-to-consume APIs, EC2 has captured developer mindshare and loyalty worldwide.
+
+
+When it comes to networking, however, EC2 has some limits that hinder performance and make deploying Kubernetes clusters to production unnecessarily complex. The preview release of [Romana v2.0](http://romana.io/), a network and security automation solution for Cloud Native applications, includes features that address some well known network issues when running Kubernetes in EC2.
+
+
+## Traditional VPC Networking Performance Roadblocks
+
+
+A Kubernetes pod network is separate from an Amazon Virtual Private Cloud (VPC) instance network; consequently, off-instance pod traffic needs a route to the destination pods. Fortunately, VPCs support setting these routes. When building a cluster network with the [kubenet](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#kubenet) plugin, whenever new nodes are added, the AWS cloud provider will automatically add a VPC route to the pods running on that node.
+
+
+Using kubenet to set routes provides native VPC network performance and visibility. However, since kubenet does not support more advanced network functions like network policy for pod traffic isolation, many users choose to run a Container Network Interface (CNI) provider on the back end.
+
+
+Before Romana v2.0, all CNI network providers required an overlay when used across Availability Zones (AZs), leaving CNI users who want to deploy HA clusters unable to get the performance of native VPC networking.
+
+
+Even users who don’t need advanced networking encounter restrictions, since the VPC route tables support a maximum of 50 entries, which limits the size of a cluster to 50 nodes (or fewer, if some VPC routes are needed for other purposes). Until Romana v2.0, users also needed to run an overlay network to get around this limit.
+
+
+Whether you were interested in advanced networking for traffic isolation or running large production HA clusters (or both), you were unable to get the performance and visibility of native VPC networking.
+
+
+
+
+
+## Kubernetes on Multi-Segment Networks
+
+
+
+The way to avoid running out of VPC routes is to use them sparingly by making them forward pod traffic for multiple instances. From a networking perspective, what that means is that the VPC route needs to forward to a router, which can then forward traffic on to the final destination instance.
+
+
+[Romana](http://romana.io/) is a CNI network provider that configures routes on the host to forward pod network traffic without an overlay. Since inter-node routes are installed on hosts, no VPC routes are necessary at all. However, when the VPC is split into subnets for an HA deployment across zones, VPC routes are necessary.
+
+
+Fortunately, the inter-node routes on hosts allow each host to act as a network router, forwarding traffic inbound from another zone just as it would traffic from local pods. This makes any Kubernetes node configured by Romana able to accept inbound pod traffic from other zones and forward it to the proper destination node on the subnet.
+
+
+Because of this local routing function, top-level routes to pods on other instances on the subnet can be aggregated, collapsing the total number of routes necessary to as few as one per subnet. To avoid using a single instance to forward all traffic, more routes can be used to spread traffic across multiple instances, up to the maximum number of available routes (i.e. equivalent to kubenet).
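+
+To make this concrete, here is an illustrative comparison of VPC route tables (CIDRs and instance IDs are hypothetical):
+
+```
+# One VPC route per node (kubenet style); limited to 50 entries
+10.0.1.0/24 -> i-node1
+10.0.2.0/24 -> i-node2
+10.0.3.0/24 -> i-node3
+
+# One aggregated VPC route per subnet (Romana v2.0 style); the target
+# node forwards traffic on to the destination node on its subnet
+10.0.0.0/16 -> i-node1
+```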
+
+
+The net result is that you can now build clusters of any size across AZs without an overlay. Romana clusters also support network policies for better security through network isolation.
+
+
+## Making it All Work
+
+
+While the combination of aggregated routes and node forwarding on a subnet eliminates overlays and avoids the VPC 50 route limitation, it imposes certain requirements on the CNI provider. For example, hosts should be configured with inter-node routes only to other nodes in the same zone on the local subnet. Traffic to all other hosts must use the default route off host, then use the (aggregated) VPC route to forward traffic out of the zone. Also: when adding a new host, in order to maintain aggregated VPC routes, the CNI plugin needs to use IP addresses for pods that are reachable on the new host.
+
+
+The latest release of Romana also addresses questions about how VPC routes are installed; what happens when a node that is forwarding traffic fails; how forwarding node failures are detected; and how routes get updated and the cluster recovers.
+
+
+Romana v2.0 includes a new AWS route configuration function to set VPC routes. This is part of a new set of network advertising features that automate route configuration in L3 networks. Romana v2.0 includes topology-aware IP address management (IPAM) that enables VPC route aggregation to stay within the 50 route limit as described above, as well as new health checks to update VPC routes when a routing instance fails. For smaller clusters, Romana configures VPC routes as kubenet does, with a route to each instance, taking advantage of every available VPC route.
+
+
+## Native VPC Networking Everywhere
+
+
+When using Romana v2.0, native VPC networking is now available for clusters of any size, with or without network policies, and for HA production deployments split across multiple zones.
+
+
+
+
+
+The preview release of Romana v2.0 is available [here](http://romana.io/preview). We welcome comments and feedback so we can make EC2 deployments of Kubernetes as fast and reliable as possible.
+
+
+
+-- _Juergen Brendel and Chris Marino, co-founders of Pani Networks, sponsor of the Romana project_
diff --git a/blog/_posts/2017-08-00-Kompose-Helps-Developers-Move-Docker.md b/blog/_posts/2017-08-00-Kompose-Helps-Developers-Move-Docker.md
new file mode 100644
index 00000000000..13a086f0b64
--- /dev/null
+++ b/blog/_posts/2017-08-00-Kompose-Helps-Developers-Move-Docker.md
@@ -0,0 +1,174 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kompose Helps Developers Move Docker Compose Files to Kubernetes "
+date: Friday, August 10, 2017
+pagination:
+ enabled: true
+---
+_Editor's note: today's post is by Charlie Drage, Software Engineer at Red Hat giving an update about the Kubernetes project Kompose._
+
+I'm pleased to announce that [Kompose](https://github.com/kubernetes/kompose), a conversion tool for developers to transition Docker Compose applications to Kubernetes, has graduated from the [Kubernetes Incubator](https://github.com/kubernetes/community/blob/master/incubator.md) to become an official part of the project.
+
+Since our first commit on June 27, 2016, Kompose has achieved 13 releases over 851 commits, gaining 21 contributors since the inception of the project. Our work started at Skippbox (now part of [Bitnami](https://bitnami.com/)) and grew through contributions from Google and Red Hat.
+
+The Kubernetes Incubator allowed contributors to get to know each other across companies, as well as collaborate effectively under guidance from Kubernetes contributors and maintainers. Our incubation led to the development and release of a new and useful tool for the Kubernetes ecosystem.
+
+We’ve created a reliable, scalable Kubernetes environment from an initial Docker Compose file. We worked hard to convert as many keys as possible to their Kubernetes equivalent. Running a single command gets you up and running on Kubernetes: kompose up.
+
+We couldn’t have done it without feedback and contributions from the community!
+
+If you haven’t yet tried [Kompose on GitHub](https://github.com/kubernetes/kompose), check it out!
+
+
+
+## Kubernetes guestbook
+
+The go-to example for Kubernetes is the famous [guestbook](https://github.com/kubernetes/examples/blob/master/guestbook), which we use as a base for conversion.
+
+
+Here is an example from the official [kompose.io](https://kompose.io/) site, starting with a simple Docker Compose [file](https://raw.githubusercontent.com/kubernetes/kompose/master/examples/docker-compose.yaml).
+
+First, we’ll retrieve the file:
+
+
+```
+$ wget https://raw.githubusercontent.com/kubernetes/kompose/master/examples/docker-compose.yaml
+ ```
+
+You can test it out by first deploying to Docker Compose:
+
+
+
+```
+$ docker-compose up -d
+
+Creating network "examples_default" with the default driver
+
+Creating examples_redis-slave_1
+
+Creating examples_frontend_1
+
+Creating examples_redis-master_1
+ ```
+
+And when you’re ready to deploy to Kubernetes:
+
+
+
+```
+$ kompose up
+
+
+We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application.
+
+
+If you need different kind of resources, use the kompose convert and kubectl create -f commands instead.
+
+
+INFO Successfully created Service: redis
+
+INFO Successfully created Service: web
+
+INFO Successfully created Deployment: redis
+
+INFO Successfully created Deployment: web
+
+
+Your application has been deployed to Kubernetes. You can run kubectl get deployment,svc,pods,pvc for details
+ ```
+
+Check out [other examples](https://github.com/kubernetes/kompose/tree/master/examples) of what Kompose can do.
+
+## Converting to alternative Kubernetes controllers
+
+Kompose can also convert to specific Kubernetes controllers with the use of flags:
+
+```
+$ kompose convert --help
+
+Usage:
+
+ kompose convert [file] [flags]
+
+
+Kubernetes Flags:
+
+ --daemon-set Generate a Kubernetes daemonset object
+
+ -d, --deployment Generate a Kubernetes deployment object
+
+ -c, --chart Create a Helm chart for converted objects
+
+ --replication-controller Generate a Kubernetes replication controller object
+
+…
+ ```
+
+For example, let’s convert our [guestbook](https://github.com/kubernetes/examples/blob/master/guestbook) example to a DaemonSet:
+
+
+
+```
+$ kompose convert --daemon-set
+
+INFO Kubernetes file "frontend-service.yaml" created
+
+INFO Kubernetes file "redis-master-service.yaml" created
+
+INFO Kubernetes file "redis-slave-service.yaml" created
+
+INFO Kubernetes file "frontend-daemonset.yaml" created
+
+INFO Kubernetes file "redis-master-daemonset.yaml" created
+
+INFO Kubernetes file "redis-slave-daemonset.yaml" created
+ ```
+
+## Key Kompose 1.0 features
+
+With our graduation comes the release of Kompose 1.0.0. Here’s what’s new:
+
+
+
+- Docker Compose Version 3: Kompose now supports Docker Compose Version 3. New keys such as ‘deploy’ now convert to their Kubernetes equivalent.
+- Docker Push and Build Support: When you supply a ‘build’ key within your `docker-compose.yaml` file, Kompose will automatically build the image and push it to the respective Docker repository for Kubernetes to consume (see the sketch after this list).
+- New Keys: With the addition of version 3 support, new keys such as pid and deploy are supported. For full details on what Kompose supports, view our [conversion document](http://kompose.io/conversion/).
+- Bug Fixes: In every release we fix any bugs related to edge cases when converting. This release fixes issues relating to converting volumes with ‘./’ in the target name.
+
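+As a sketch of how those keys interact (service and image names are hypothetical), a version 3 file might look like:
+
+```
+version: "3"
+services:
+  web:
+    build: ./web                        # 'build' key: kompose builds this image
+    image: docker.io/example/web:latest # ...and pushes it here for Kubernetes to pull
+    ports:
+      - "8080:8080"
+    deploy:
+      replicas: 3                       # v3 'deploy' key, converted to its Kubernetes equivalent
+```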
+
+
+## What’s ahead?
+
+As we continue development, we will strive to convert as many Docker Compose keys as possible for all future and current Docker Compose releases, converting each one to its Kubernetes equivalent. All future releases will be backwards-compatible.
+
+
+- [Install Kompose](https://github.com/kubernetes/kompose/blob/master/docs/installation.md)
+- [Kompose Quick Start Guide](https://github.com/kubernetes/kompose/blob/master/docs/installation.md)
+- [Kompose Web Site](http://kompose.io/)
+- [Kompose Documentation](https://github.com/kubernetes/kompose/tree/master/docs)
+
+
+
+--Charlie Drage, Software Engineer, Red Hat
+
+
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Join the community portal for advocates on [K8sPort](http://k8sport.org/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
diff --git a/blog/_posts/2017-08-00-Kubernetes-Meets-High-Performance.md b/blog/_posts/2017-08-00-Kubernetes-Meets-High-Performance.md
new file mode 100644
index 00000000000..db12fb63b42
--- /dev/null
+++ b/blog/_posts/2017-08-00-Kubernetes-Meets-High-Performance.md
@@ -0,0 +1,83 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes Meets High-Performance Computing "
+date: Wednesday, August 22, 2017
+pagination:
+ enabled: true
+---
+Editor's note: today's post is by Robert Lalonde, general manager at Univa, on supporting mixed HPC and containerized applications
+
+Anyone who has worked with Docker can appreciate the enormous gains in efficiency achievable with containers. While Kubernetes excels at orchestrating containers, high-performance computing (HPC) applications can be tricky to deploy on Kubernetes.
+
+In this post, I discuss some of the challenges of running HPC workloads with Kubernetes, explain how organizations approach these challenges today, and suggest an approach for supporting mixed workloads on a shared Kubernetes cluster. I also provide information and links to a case study on a customer, IHME, showing how Kubernetes is extended to service their HPC workloads seamlessly while retaining the scalability and interfaces familiar to HPC users.
+
+## HPC workloads’ unique challenges
+
+In Kubernetes, the base unit of scheduling is a Pod: one or more Docker containers scheduled to a cluster host. Kubernetes assumes that workloads are containers. While Kubernetes has the notion of [Cron Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/) and [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/) that run to completion, applications deployed on Kubernetes are typically long-running services like web servers, load balancers, or data stores. While these are highly dynamic, with pods coming and going, they differ greatly from HPC application patterns.
+
+Traditional HPC applications often exhibit different characteristics:
+
+- In financial or engineering simulations, a job may be comprised of tens of thousands of short-running tasks, demanding low-latency and high-throughput scheduling to complete a simulation in an acceptable amount of time.
+- A computational fluid dynamics (CFD) problem may execute in parallel across many hundreds or even thousands of nodes using a message passing library to synchronize state. This requires specialized scheduling and job management features to allocate and launch such jobs and then to checkpoint, suspend/resume, or backfill them.
+- Other HPC workloads may require specialized resources like GPUs or access to limited software licenses. Organizations may enforce policies around what types of resources can be used by whom to ensure projects are adequately resourced and deadlines are met.
+
+HPC workload schedulers have evolved to support exactly these kinds of workloads. Examples include [Univa Grid Engine](http://www.univa.com/products/), [IBM Spectrum LSF](https://www-03.ibm.com/systems/spectrum-computing/products/lsf/) and Altair’s [PBS Professional](http://www.pbsworks.com/PBSProduct.aspx?n=PBS-Professional&c=Overview-and-Capabilities). Sites managing HPC workloads have come to rely on capabilities like array jobs, configurable pre-emption, user, group or project based quotas and a variety of other features.
+
+## Blurring the lines between containers and HPC
+
+HPC users believe containers are valuable for the same reasons as other organizations. Packaging logic in a container to make it portable, insulated from environmental dependencies, and easily exchanged with other containers clearly has value. However, making the switch to containers can be difficult.
+
+HPC workloads are often integrated at the command line level. Rather than requiring coding, jobs are submitted to queues via the command line as binaries or simple shell scripts that act as wrappers. There are literally hundreds of engineering, scientific and analytic applications used by HPC sites that take this approach and have mature and certified integrations with popular workload schedulers.
+
+While the notion of packaging a workload into a Docker container, publishing it to a registry, and submitting a YAML description of the workload is second nature to users of Kubernetes, this is foreign to most HPC users. An analyst running models in R, MATLAB or Stata simply wants to submit their simulation quickly, monitor its execution, and get a result as quickly as possible.
+
+## Existing approaches
+
+To deal with the challenges of migrating to containers, organizations running container and HPC workloads have several options:
+
+- Maintain separate infrastructures
+
+For sites with sunk investments in HPC, this may be a preferred approach. Rather than disrupt existing environments, it may be easier to deploy new containerized applications on a separate cluster and leave the HPC environment alone. The challenge is that this comes at the cost of siloed clusters, increasing infrastructure and management cost.
+
+- Run containerized workloads under an existing HPC workload manager
+
+For sites running traditional HPC workloads, another approach is to use existing job submission mechanisms to launch jobs that in turn instantiate Docker containers on one or more target hosts. Sites using this approach can introduce containerized workloads with minimal disruption to their environment. Leading HPC workload managers such as [Univa Grid Engine Container Edition](http://blogs.univa.com/2016/05/new-version-of-univa-grid-engine-now-supports-docker-containers/) and [IBM Spectrum LSF](http://blogs.univa.com/2016/05/new-version-of-univa-grid-engine-now-supports-docker-containers/) are adding native support for Docker containers. [Shifter](https://github.com/NERSC/shifter) and [Singularity](http://singularity.lbl.gov/) are important open source tools that also support this type of deployment. While this is a good solution for sites with simple requirements that want to stick with their HPC scheduler, they will not have access to native Kubernetes features, and this may constrain flexibility in managing long-running services where Kubernetes excels.
+
+- Use native job scheduling features in Kubernetes
+
+Sites less invested in existing HPC applications can use existing scheduling facilities in Kubernetes for [jobs that run to completion](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/). While this is an option, it may be impractical for many HPC users. HPC applications are often optimized either for massive throughput or for large-scale parallelism. In both cases, startup and teardown latencies have a decisive impact. Latencies that appear acceptable for containerized microservices today would render such applications unable to scale to the required levels.
+
+All of these solutions involve tradeoffs. The first option doesn’t allow resources to be shared (increasing costs) and the second and third options require customers to pick a single scheduler, constraining future flexibility.
+
+## Mixed workloads on Kubernetes
+
+A better approach is to support HPC and container workloads natively in the same shared environment. Ideally, users should see the environment appropriate to their workload or workflow type.
+
+One approach to supporting mixed workloads is to allow Kubernetes and the HPC workload manager to co-exist on the same cluster, throttling resources to avoid conflicts. While simple, this means that neither workload manager can fully utilize the cluster.
+
+Another approach is to use a peer scheduler that coordinates with the Kubernetes scheduler. Navops Command by Univa is a solution that takes this peer-scheduler approach, augmenting the functionality of the Kubernetes scheduler. Navops Command provides its own web interface and CLI and allows additional scheduling policies to be enabled on Kubernetes without impacting the operation of the Kubernetes scheduler and existing containerized applications. Navops Command plugs into the Kubernetes architecture via the 'schedulerName' attribute in the pod spec, as a peer scheduler that workloads can choose to use instead of the stock Kubernetes scheduler, as shown below.
+
+ 
+
+With this approach, Kubernetes acts as a resource manager, making resources available to a separate HPC scheduler. Cluster administrators can use a visual interface to allocate resources based on policy or simply drag sliders via a web UI to allocate different proportions of the Kubernetes environment to non-container (HPC) workloads, and native Kubernetes applications and services.
+
+ {:.big-img}
+
+From a client perspective, the HPC scheduler runs as a service deployed in Kubernetes pods, operating just as it would on a bare metal cluster. Navops Command provides additional scheduling features including things like resource reservation, run-time quotas, workload preemption and more. This environment works equally well for on-premise, cloud-based or hybrid deployments.
+
+## Deploying mixed workloads at IHME
+
+One client having success with mixed workloads is the Institute for Health Metrics & Evaluation (IHME), an independent health research center at the University of Washington. In support of their globally recognized Global Health Data Exchange (GHDx), IHME operates a significantly sized environment comprising 500 nodes and 20,000 cores, running a mix of analytic, HPC, and container-based applications on Kubernetes. [This case study](http://navops.io/ihme-case-study.html) describes IHME’s success hosting existing HPC workloads on a shared Kubernetes cluster using Navops Command.
+
+ 
+
+
+
+
+For sites deploying new clusters that want access to the rich capabilities in Kubernetes but need the flexibility to run non-containerized workloads, this approach is worth a look. It offers the opportunity for sites to share infrastructure between Kubernetes and HPC workloads without disrupting existing applications and business processes. It also allows them to migrate their HPC workloads to use Docker containers at their own pace.
diff --git a/blog/_posts/2017-09-00-Introducing-Resource-Management-Working.md b/blog/_posts/2017-09-00-Introducing-Resource-Management-Working.md
new file mode 100644
index 00000000000..301553a3fb4
--- /dev/null
+++ b/blog/_posts/2017-09-00-Introducing-Resource-Management-Working.md
@@ -0,0 +1,93 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Introducing the Resource Management Working Group "
+date: Friday, September 21, 2017
+pagination:
+ enabled: true
+---
+_**Editor's note: today's post is by Jeremy Eder, Senior Principal Software Engineer at Red Hat, on the formation of the Resource Management Working Group**_
+
+## Why are we here?
+Kubernetes has evolved to support diverse and increasingly complex classes of applications. We can onboard and scale out modern, cloud-native web applications based on microservices, batch jobs, and stateful applications with persistent storage requirements.
+
+However, there are still opportunities to improve Kubernetes; for example, the ability to run workloads that require specialized hardware or those that perform measurably better when hardware topology is taken into account. These gaps can make it difficult for application classes (particularly in established verticals) to adopt Kubernetes.
+
+ We see an unprecedented opportunity here, with a high cost if it’s missed. The Kubernetes ecosystem must create a consumable path forward to the next generation of system architectures by catering to needs of as-yet unserviced workloads in meaningful ways. The Resource Management Working Group, along with other SIGs, must demonstrate the vision customers want to see, while enabling solutions to run well in a fully integrated, thoughtfully planned end-to-end stack.
+
+Kubernetes Working Groups are created when a particular challenge requires cross-SIG collaboration. The Resource Management Working Group, for example, works primarily with sig-node and sig-scheduling to drive support for additional resource management capabilities in Kubernetes. We make sure that key contributors from across SIGs are frequently consulted because working groups are not meant to make system-level decisions on behalf of any SIG.
+
+An example and key benefit of this is the working group’s relationship with sig-node. We were able to ensure completion of several releases of node reliability work (complete in 1.6) before contemplating feature design on top. Those designs are use-case driven: research into technical requirements for a variety of workloads, then sorting based on measurable impact to the largest cross-section.
+
+## Target Workloads and Use-cases
+One of the working group’s key design tenets is that user experience must remain clean and portable, while still surfacing infrastructure capabilities that are required by businesses and applications.
+
+While not representing any commitment, we hope in the fullness of time that Kubernetes can optimally run financial services workloads, machine learning/training, grid schedulers, map-reduce, animation workloads, and more. As a use-case driven group, we account for potential application integrations that can also help an ecosystem of complementary independent software vendors flourish on top of Kubernetes.
+
+
+
+ 
+
+## Why do this?
+Kubernetes covers generic web hosting capabilities very well, so why go through the effort of expanding workload coverage for Kubernetes at all? The fact is that workloads elegantly covered by Kubernetes today only represent a fraction of the world’s compute usage. We have a tremendous opportunity to safely and methodically expand upon the set of workloads that can run optimally on Kubernetes.
+
+ To date, there’s demonstrable progress in the areas of expanded workload coverage:
+
+- Stateful applications such as Zookeeper, etcd, MySQL, Cassandra, ElasticSearch
+- Jobs, such as timed events to process the day’s logs or any other batch processing
+- Machine Learning and compute-bound workload acceleration through Alpha GPU support
+
+Collectively, the folks working on Kubernetes are hearing from their customers that we need to go further. Following the tremendous popularity of containers in 2014, industry rhetoric circled around a more modern, container-based, datacenter-level workload orchestrator as folks looked to plan their next architectures.
+
+As a consequence, we began advocating for increasing the scope of workloads covered by Kubernetes, from overall concepts to specific features. Our aim is to put control and choice in users' hands, helping them move with confidence towards whatever infrastructure strategy they choose. In this advocacy, we quickly found a large group of like-minded companies interested in broadening the types of workloads that Kubernetes can orchestrate. And thus the working group was born.
+
+## Genesis of the Resource Management Working Group
+After extensive development/feature [discussions](https://docs.google.com/document/d/1p7scsTPzPyouktBFTxu4RhRwW8yUn5Lj7VGY9SaOo-8/edit?ts=5824ee1f) during the Kubernetes Developer Summit 2016 after [CloudNativeCon | KubeCon Seattle](http://events.linuxfoundation.org/events/kubecon/program/schedule), we decided to [formalize](https://groups.google.com/d/msg/kubernetes-dev/Sb0VlXOM8eQ/La3YCe2-CgAJ) our loosely organized group. In January 2017, the Kubernetes _[Resource Management Working Group](https://github.com/kubernetes/community/tree/master/wg-resource-management)_ was formed. This group (led by Derek Carr from Red Hat and Vishnu Kannan from Google) was originally cast as a temporary initiative to provide guidance back to sig-node and sig-scheduling (primarily). However, due to the cross-cutting nature of the goals within the working group, and the depth of [roadmap](https://docs.google.com/spreadsheets/d/1NWarIgtSLsq3izc5wOzV7ItdhDNRd-6oBVawmvs-LGw/edit) quickly uncovered, the Resource Management Working Group became its own entity within the first few months.
+
+ Recently, Brian Grant from Google (@bgrant0607) posted the following image on his [Twitter feed](https://twitter.com/bgrant0607/status/862091393723842561). This image helps to explain the role of each SIG, and shows where the Resource Management Working Group fits into the overall project organization.
+
+ {:.big-img}
+
+ To help bootstrap this effort, the Resource Management Working Group had its first face-to-face kickoff meeting in May 2017. Thanks to Google for hosting!
+
+ {:.big-img}
+
+Folks from Intel, NVIDIA, Google, IBM, Red Hat, and Microsoft (among others) participated.
+You can read the outcomes of that 3-day meeting [here](https://docs.google.com/document/d/13_nk75eItkpbgZOt62In3jj0YuPbGPC_NnvSCHpgvUM/edit).
+
+The group’s prioritized list of features for increasing workload coverage on Kubernetes, as enumerated in the [charter](https://github.com/kubernetes/community/tree/master/wg-resource-management) of the Resource Management Working Group, includes:
+
+- Support for performance sensitive workloads (exclusive cores, CPU pinning strategies, NUMA)
+- Integrating new hardware devices (GPUs, FPGAs, Infiniband, etc.)
+- Improving resource isolation (local storage, hugepages, caches, etc.)
+- Improving Quality of Service (performance SLOs)
+- Performance benchmarking
+- APIs and extensions related to the features mentioned above
+
+The discussions made it clear that there was tremendous overlap between the needs of various workloads, and that we ought to de-duplicate requirements and plumb them generically.
+
+## Workload Characteristics
+The set of initially targeted use-cases share one or more of the following characteristics:
+
+- Deterministic performance (address long tail latencies)
+- Isolation within a single node, as well as within groups of nodes sharing a control plane
+- Requirements on advanced hardware and/or software capabilities
+- Predictable, reproducible placement: applications need granular guarantees around placement
+
+The Resource Management Working Group is spearheading the feature design and development in support of these workload requirements. Our goal is to provide best practices and patterns for these scenarios.
+
+## Initial Scope
+In the months leading up to our recent face-to-face, we had discussed how to safely abstract resources in a way that retains portability and clean user experience, while still meeting application requirements. The working group came away with a multi-release [roadmap](https://docs.google.com/spreadsheets/d/1NWarIgtSLsq3izc5wOzV7ItdhDNRd-6oBVawmvs-LGw/edit) that included 4 short- to mid-term targets with great overlap between target workloads:
+
+- [Device Manager (Plugin) Proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/device-plugin.md)
+
+ - Kubernetes should provide access to hardware devices such as NICs, GPUs, FPGA, Infiniband and so on.
+- [CPU Manager](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/cpu-manager.md)
+
+ - Kubernetes should provide a way for users to request static CPU assignment via the Guaranteed QoS tier. No support for NUMA in this phase.
+- [HugePages support in Kubernetes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/hugepages.md)
+
+ - Kubernetes should provide a way for users to consume huge pages of any size.
+- [Resource Class proposal](https://github.com/kubernetes/community/pull/782)
+
+ - Kubernetes should implement an abstraction layer (analogous to StorageClasses) for devices other than CPU and memory that allows a user to consume a resource in a portable way. For example, how can a pod request a GPU that has a minimum amount of memory?
+
+## Getting Involved & Summary
+Our charter document includes a [Contact Us](https://github.com/kubernetes/community/tree/master/wg-resource-management#contact-us) section with links to our mailing list, Slack channel, and Zoom meetings. Recordings of previous meetings are uploaded to [Youtube](https://www.youtube.com/channel/UCyfvrmhAGcsFlJeGgZQvZ6g). We plan to discuss these topics and more at the 2017 Kubernetes Developer Summit at [CloudNativeCon | KubeCon](http://events.linuxfoundation.org/events/cloudnativecon-and-kubecon-north-america) in Austin. Please come and join one of our meetings (users, customers, software and hardware vendors are all welcome) and contribute to the working group!
diff --git a/blog/_posts/2017-09-00-Kubernetes-18-Security-Workloads-And.md b/blog/_posts/2017-09-00-Kubernetes-18-Security-Workloads-And.md
new file mode 100644
index 00000000000..a88f12a7020
--- /dev/null
+++ b/blog/_posts/2017-09-00-Kubernetes-18-Security-Workloads-And.md
@@ -0,0 +1,95 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes 1.8: Security, Workloads and Feature Depth "
+date: Saturday, September 29, 2017
+pagination:
+ enabled: true
+---
+_Editor's note: today's post is by Aparna Sinha, Group Product Manager, Kubernetes, Google; Ihor Dvoretskyi, Developer Advocate, CNCF; Jaice Singer DuMars, Kubernetes Ambassador, Microsoft; and Caleb Miles, Technical Program Manager, CoreOS on the latest release of Kubernetes 1.8._
+
+
+We’re pleased to announce the delivery of Kubernetes 1.8, our third release this year. Kubernetes 1.8 represents a snapshot of many exciting enhancements and refinements underway. In addition to functional improvements, we’re increasing project-wide focus on maturing [process](https://github.com/kubernetes/sig-release), formalizing [architecture](https://github.com/kubernetes/community/tree/master/sig-architecture), and strengthening Kubernetes’ [governance model](https://github.com/kubernetes/community/tree/master/community/elections/2017). The evolution of mature processes clearly signals that sustainability is a driving concern, and helps to ensure that Kubernetes is a viable and thriving project far into the future.
+
+
+## Spotlight on security
+
+Kubernetes 1.8 graduates support for [role based access control](https://en.wikipedia.org/wiki/Role-based_access_control) (RBAC) to stable. RBAC allows cluster administrators to [dynamically define roles](https://kubernetes.io/docs/admin/authorization/rbac/) to enforce access policies through the Kubernetes API. Beta support for filtering outbound traffic through [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) augments existing support for filtering inbound traffic to a pod. RBAC and Network Policies are two powerful tools for enforcing organizational and regulatory security requirements within Kubernetes.
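+
+As a brief sketch of a dynamically defined role (names are hypothetical), an administrator might grant read-only access to pods in a namespace:
+
+```
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  namespace: default
+  name: pod-reader        # hypothetical role name
+rules:
+- apiGroups: [""]         # "" refers to the core API group
+  resources: ["pods"]
+  verbs: ["get", "list", "watch"]
+```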
+
+
+Transport Layer Security (TLS) [certificate rotation](https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/) for the Kubelet graduates to beta. Automatic certificate rotation eases secure cluster operation.
+
+
+## Spotlight on workload support
+
+Kubernetes 1.8 promotes the core Workload APIs to beta with the apps/v1beta2 group and version. The beta contains the current version of Deployment, DaemonSet, ReplicaSet, and StatefulSet. The Workloads APIs provide a stable foundation for migrating existing workloads to Kubernetes as well as developing cloud native applications that target Kubernetes natively.
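+
+As a minimal sketch (name and image are hypothetical), existing workloads opt into the beta group by changing apiVersion:
+
+```
+apiVersion: apps/v1beta2   # the new Workloads API group/version
+kind: Deployment
+metadata:
+  name: web                # hypothetical name
+spec:
+  replicas: 2
+  selector:                # required in apps/v1beta2
+    matchLabels:
+      app: web
+  template:
+    metadata:
+      labels:
+        app: web
+    spec:
+      containers:
+      - name: web
+        image: nginx:1.13  # hypothetical image
+```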
+
+For those considering running Big Data workloads on Kubernetes, the Workloads API now enables [native Kubernetes support](https://apache-spark-on-k8s.github.io/userdocs/) in Apache Spark.
+
+Batch workloads, such as nightly ETL jobs, will benefit from the graduation of [CronJobs](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/) to beta.
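+
+A sketch of such a job (name, schedule, and image are hypothetical):
+
+```
+apiVersion: batch/v1beta1
+kind: CronJob
+metadata:
+  name: nightly-etl        # hypothetical name
+spec:
+  schedule: "0 2 * * *"    # every night at 02:00
+  jobTemplate:
+    spec:
+      template:
+        spec:
+          restartPolicy: OnFailure
+          containers:
+          - name: etl
+            image: example/etl:latest   # hypothetical image
+```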
+
+[Custom Resource Definitions](https://kubernetes.io/docs/concepts/api-extension/custom-resources/) (CRDs) remain in beta for Kubernetes 1.8. A CRD provides a powerful mechanism to extend Kubernetes with user-defined API objects. One use case for CRDs is the automation of complex stateful applications such as [key-value stores](https://github.com/coreos/etcd-operator), databases and [storage engines](https://rook.io/) through the Operator Pattern. Expect continued enhancements to CRDs such as [validation](https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation) as stabilization continues.
+
+
+## Spoilers ahead
+
+Volume snapshots, PV resizing, automatic taints, priority pods, kubectl plugins, oh my!
+
+In addition to stabilizing existing functionality, Kubernetes 1.8 offers a number of alpha features that preview new functionality.
+
+Each Special Interest Group (SIG) in the community continues to deliver the most requested user features for their area. For a complete list, please visit the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#v180).
+
+
+#### Availability
+
+Kubernetes 1.8 is available for [download on GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.8.0). To get started with Kubernetes, check out these [interactive tutorials](https://kubernetes.io/docs/tutorials/kubernetes-basics/).
+
+
+## Release team
+
+The [Release team](https://github.com/kubernetes/features/blob/master/release-1.8/release_team.md) for 1.8 was led by Jaice Singer DuMars, Kubernetes Ambassador at Microsoft, and was comprised of 14 individuals responsible for managing all aspects of the release, from documentation to testing, validation, and feature completeness.
+
+As the Kubernetes community has grown, our release process has become an amazing demonstration of collaboration in open source software development. Kubernetes continues to gain new users at a rapid clip. This growth creates a positive feedback cycle where more contributors commit code creating a more vibrant ecosystem.
+
+
+## User Highlights
+
+According to [Redmonk](http://redmonk.com/fryan/2017/09/10/cloud-native-technologies-in-the-fortune-100/), 54 percent of Fortune 100 companies are running Kubernetes in some form with adoption coming from every sector across the world. Recent user stories from the community include:
+
+
+- Ancestry.com currently holds 20 billion historical records and 90 million family trees, making it the largest consumer genomics DNA network in the world. With the move to Kubernetes, its deployment time for its Shaky Leaf icon service was [cut down from 50 minutes to 2 or 5 minutes](https://kubernetes.io/case-studies/ancestry/).
+- Wink, provider of smart home devices and apps, runs [80 percent of its workloads on a unified stack of Kubernetes-Docker-CoreOS](https://kubernetes.io/case-studies/wink/), allowing them to continually innovate and improve its products and services.
+- Pear Deck, a teacher communication app for students, ported their Heroku apps into Kubernetes, allowing them to deploy the exact same configuration in [lots of different clusters in 30 seconds](https://kubernetes.io/case-studies/peardeck/).
+- Buffer, social media management for agencies and marketers, has a remote team of 80 spread across a dozen different time zones. Kubernetes has provided the kind of [liquid infrastructure](https://kubernetes.io/case-studies/buffer/) where a developer could create an app and deploy it and scale it horizontally as necessary.
+
+
+Is Kubernetes helping your team? [Share your story](https://docs.google.com/a/google.com/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform) with the community.
+
+
+## Ecosystem updates
+
+Announced on September 11, [Kubernetes Certified Service Providers](https://www.cncf.io/certification/kcsp/) (KCSPs) are pre-qualified [organizations](https://kubernetes.io/partners/#kcsp) with deep experience helping enterprises successfully adopt Kubernetes. Individual professionals can now [register](https://www.cncf.io/certification/expert/) for the new Certified Kubernetes Administrator (CKA) program and exam, which requires passing an online, proctored, performance-based exam that tests one’s ability to solve multiple issues in a hands-on, command-line environment.
+CNCF also offers [online training](https://www.cncf.io/certification/training/) that teaches the skills needed to create and configure a real-world Kubernetes cluster.
+
+
+## KubeCon
+
+Join the community at [KubeCon + CloudNativeCon](http://events.linuxfoundation.org/events/cloudnativecon-and-kubecon-north-america) in Austin, December 6-8 for the largest Kubernetes gathering ever. The premier Kubernetes event will feature technical sessions, case studies, developer deep dives, salons and more! A full schedule of events and speakers will be available [here](http://events.linuxfoundation.org/events/kubecon-and-cloudnativecon-north-america/program/schedule) on September 28. Discounted [registration](https://www.regonline.com/registration/Checkin.aspx?EventID=1903774&_ga=2.224109086.464556664.1498490094-1623727562.1496428006) ends October 6.
+
+
+## Open Source Summit EU
+
+Ihor Dvoretskyi, Kubernetes 1.8 features release lead, will [present](https://osseu17.sched.com/event/C4AA) new features and enhancements at Open Source Summit EU in Prague, October 23. Registration is [still open](http://events.linuxfoundation.org/events/open-source-summit-europe/attend/register).
+
+
+## Get involved
+
+The simplest way to get involved with Kubernetes is by joining one of the many [Special Interest Groups](https://github.com/kubernetes/community/blob/master/sig-list.md) (SIGs) that align with your interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at our weekly [community meeting](https://github.com/kubernetes/community/blob/master/communication.md#weekly-meeting), and through the channels below.
+
+
+- Thank you for your continued feedback and support.
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Join the community portal for advocates on [K8sPort](http://k8sport.org/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+- Chat with the community on [Slack](http://slack.k8s.io/).
+- [Share your Kubernetes story.](https://docs.google.com/a/linuxfoundation.org/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform)
diff --git a/blog/_posts/2017-09-00-Kubernetes-Statefulsets-Daemonsets.md b/blog/_posts/2017-09-00-Kubernetes-Statefulsets-Daemonsets.md
new file mode 100644
index 00000000000..5259970d21b
--- /dev/null
+++ b/blog/_posts/2017-09-00-Kubernetes-Statefulsets-Daemonsets.md
@@ -0,0 +1,1003 @@
+---
+permalink: /blog/:year/:month/:title
+
+layout: blog
+title: " Kubernetes StatefulSets & DaemonSets Updates "
+date: Thursday, September 27, 2017
+pagination:
+ enabled: true
+---
+Editor's note: today's post is by Janet Kuo and Kenneth Owens, Software Engineers at Google.
+
+
+This post talks about recent updates to the [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) and [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) API objects for Kubernetes. We explore these features using [Apache ZooKeeper](https://zookeeper.apache.org/) and [Apache Kafka](https://kafka.apache.org/) StatefulSets and a [Prometheus node exporter](https://github.com/prometheus/node_exporter) DaemonSet.
+
+
+
+In Kubernetes 1.6, we added the [RollingUpdate](https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/) update strategy to the DaemonSet API Object. Configuring your DaemonSets with the RollingUpdate strategy causes the DaemonSet controller to perform automated rolling updates to the Pods in your DaemonSets when their spec.template is updated.
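+
+A brief sketch (name and image are hypothetical) of where the strategy is set:
+
+```
+apiVersion: extensions/v1beta1   # DaemonSet group/version as of 1.6
+kind: DaemonSet
+metadata:
+  name: node-exporter            # hypothetical name
+spec:
+  updateStrategy:
+    type: RollingUpdate          # pods are replaced when spec.template changes
+  template:
+    metadata:
+      labels:
+        app: node-exporter
+    spec:
+      containers:
+      - name: node-exporter
+        image: prom/node-exporter  # hypothetical image
+```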
+
+
+
+In Kubernetes 1.7, we enhanced the DaemonSet controller to track a history of revisions to the PodTemplateSpecs of DaemonSets. This allows the DaemonSet controller to roll back an update. We also added the [RollingUpdate](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies) strategy to the [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) API Object, and implemented revision history tracking for the StatefulSet controller. Additionally, we added the [Parallel](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#parallel-pod-management) pod management policy to support stateful applications that require Pods with unique identities but not ordered Pod creation and termination.
+
+# StatefulSet rolling update and Pod management policy
+
+First, we’re going to demonstrate how to use StatefulSet rolling updates and Pod management policies by deploying a ZooKeeper ensemble and a Kafka cluster.
+
+## Prerequisites
+
+To follow along, you’ll need to set up a Kubernetes 1.7 cluster with at least 3 schedulable nodes. Each node needs 1 CPU and 2 GiB of memory available. You will also need either a dynamic provisioner to allow the StatefulSet controller to provision 6 persistent volumes (PVs) with 10 GiB each, or you will need to manually provision the PVs prior to deploying the ZooKeeper ensemble or deploying the Kafka cluster.
+
+## Deploying a ZooKeeper ensemble
+
+Apache ZooKeeper is a strongly consistent, distributed system used by other distributed systems for cluster coordination and configuration management.
+
+
+
+Note: You can create a ZooKeeper ensemble using this [zookeeper\_mini.yaml](https://kow3ns.github.io/kubernetes-zookeeper/manifests/zookeeper_mini.yaml) manifest. You can learn more about running a ZooKeeper ensemble on Kubernetes [here](https://kow3ns.github.io/kubernetes-zookeeper/), along with a more in-depth explanation of [the manifest and its contents](https://kow3ns.github.io/kubernetes-zookeeper/manifests/).
+
+
+
+When you apply the manifest, you will see output like the following.
+
+
+
+```
+$ kubectl apply -f zookeeper_mini.yaml
+
+service "zk-hs" created
+
+service "zk-cs" created
+
+poddisruptionbudget "zk-pdb" created
+
+statefulset "zk" created
+ ```
+
+
+
+
+The manifest creates an ensemble of three ZooKeeper servers using a StatefulSet, zk; a Headless Service, zk-hs, to control the domain of the ensemble; a Service, zk-cs, that clients can use to connect to the ready ZooKeeper instances; and a PodDisruptionBudget, zk-pdb, that allows for one planned disruption. (Note that while this ensemble is suitable for demonstration purposes, it isn’t sized correctly for production use.)
+
+
+
+If you use kubectl get to watch Pod creation in another terminal you will see that, in contrast to the [OrderedReady](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#orderedready-pod-management) strategy (the default policy that implements the full version of the StatefulSet guarantees), all of the Pods in the zk StatefulSet are created in parallel.
+
+
+
+```
+$ kubectl get po -lapp=zk -w
+
+NAME READY STATUS RESTARTS AGE
+
+
+zk-0 0/1 Pending 0 0s
+
+
+zk-0 0/1 Pending 0 0s
+
+
+zk-1 0/1 Pending 0 0s
+
+
+zk-1 0/1 Pending 0 0s
+
+
+zk-0 0/1 ContainerCreating 0 0s
+
+
+zk-2 0/1 Pending 0 0s
+
+
+zk-1 0/1 ContainerCreating 0 0s
+
+
+zk-2 0/1 Pending 0 0s
+
+
+zk-2 0/1 ContainerCreating 0 0s
+
+
+zk-0 0/1 Running 0 10s
+
+
+zk-2 0/1 Running 0 11s
+
+
+zk-1 0/1 Running 0 19s
+
+
+zk-0 1/1 Running 0 20s
+
+
+zk-1 1/1 Running 0 30s
+
+
+zk-2 1/1 Running 0 30s
+
+ ```
+
+
+
+
+This is because the zookeeper\_mini.yaml manifest sets the podManagementPolicy of the StatefulSet to Parallel.
+
+
+
+```
+apiVersion: apps/v1beta1
+kind: StatefulSet
+metadata:
+  name: zk
+spec:
+  serviceName: zk-hs
+  replicas: 3
+  updateStrategy:
+    type: RollingUpdate
+  podManagementPolicy: Parallel
+  ...
+ ```
+
+
+
+
+Many distributed systems, like ZooKeeper, do not require ordered creation and termination for their processes. You can use the Parallel Pod management policy to accelerate the creation and deletion of StatefulSets that manage these systems. Note that, when Parallel Pod management is used, the StatefulSet controller will not block when it fails to create a Pod. Ordered, sequential Pod creation and termination is performed when a StatefulSet’s podManagementPolicy is set to OrderedReady.
+
+
+## Deploying a Kafka Cluster
+
+Apache Kafka is a popular distributed streaming platform. Kafka producers write data to partitioned topics which are stored, with a configurable replication factor, on a cluster of brokers. Consumers consume the produced data from the partitions stored on the brokers.
+
+
+
+Note: Details of the manifest's contents can be found [here](https://kow3ns.github.io/kubernetes-kafka/manifests/). You can learn more about running a Kafka cluster on Kubernetes [here](https://kow3ns.github.io/kubernetes-kafka/).
+
+
+
+To create a cluster, you only need to download and apply the [kafka\_mini.yaml](https://kow3ns.github.io/kubernetes-kafka/manifests/kafka_mini.yaml) manifest. When you apply the manifest, you will see output like the following:
+
+
+
+```
+$ kubectl apply -f kafka_mini.yaml
+
+service "kafka-hs" created
+
+poddisruptionbudget "kafka-pdb" created
+
+statefulset "kafka" created
+ ```
+
+
+
+
+The manifest creates a three-broker cluster using the kafka StatefulSet; a Headless Service, kafka-hs, to control the domain of the brokers; and a PodDisruptionBudget, kafka-pdb, that allows for one planned disruption. The brokers are configured to use the ZooKeeper ensemble we created above by connecting through the zk-cs Service. As with the ZooKeeper ensemble deployed above, this Kafka cluster is fine for demonstration purposes, but it’s probably not sized correctly for production use.
+
+
+
+If you watch Pod creation, you will notice that, like the ZooKeeper ensemble created above, the Kafka cluster uses the Parallel podManagementPolicy.
+
+
+
+```
+$ kubectl get po -lapp=kafka -w
+
+NAME READY STATUS RESTARTS AGE
+
+
+kafka-0 0/1 Pending 0 0s
+
+
+kafka-0 0/1 Pending 0 0s
+
+
+kafka-1 0/1 Pending 0 0s
+
+
+kafka-1 0/1 Pending 0 0s
+
+
+kafka-2 0/1 Pending 0 0s
+
+
+kafka-0 0/1 ContainerCreating 0 0s
+
+
+kafka-2 0/1 Pending 0 0s
+
+
+kafka-1 0/1 ContainerCreating 0 0s
+
+
+kafka-1 0/1 Running 0 11s
+
+
+kafka-0 0/1 Running 0 19s
+
+
+kafka-1 1/1 Running 0 23s
+
+
+kafka-0 1/1 Running 0 32s
+
+ ```
+
+## Producing and consuming data
+
+You can use kubectl run to execute the kafka-topics.sh script to create a topic named test.
+
+
+
+```
+$ kubectl run -ti --image=gcr.io/google_containers/kubernetes-kafka:1.0-10.2.1 createtopic --restart=Never --rm -- kafka-topics.sh --create \
+
+> --topic test \
+
+> --zookeeper zk-cs.default.svc.cluster.local:2181 \
+
+> --partitions 1 \
+
+> --replication-factor 3
+ ```
+
+
+
+
+Now you can use kubectl run to execute the kafka-console-consumer.sh command to listen for messages.
+
+
+
+```
+$ kubectl run -ti --image=gcr.io/google_containers/kubernetes-kafka:1.0-10.2.1 consume --restart=Never --rm -- kafka-console-consumer.sh --topic test --bootstrap-server kafka-0.kafka-hs.default.svc.cluster.local:9093
+ ```
+
+
+
+
+In another terminal, you can run the kafka-console-producer.sh command.
+
+
+
+```
+$ kubectl run -ti --image=gcr.io/google_containers/kubernetes-kafka:1.0-10.2.1 produce --restart=Never --rm \
+
+> -- kafka-console-producer.sh --topic test --broker-list kafka-0.kafka-hs.default.svc.cluster.local:9093,kafka-1.kafka-hs.default.svc.cluster.local:9093,kafka-2.kafka-hs.default.svc.cluster.local:9093
+
+ ```
+
+
+
+
+Output from the second terminal appears in the first terminal. If you continue to produce and consume messages while updating the cluster, you will notice that no messages are lost. You may see error messages as the leader for the partition changes when individual brokers are updated, but the client retries until the message is committed. This is due to the ordered, sequential nature of StatefulSet rolling updates which we will explore further in the next section.
+
+
+
+## Updating the Kafka cluster
+
+StatefulSet updates are like DaemonSet updates in that they are both configured by setting the spec.updateStrategy of the corresponding API object. When the update strategy is set to OnDelete, the respective controllers will only create new Pods when a Pod in the StatefulSet or DaemonSet has been deleted. When the update strategy is set to RollingUpdate, the controllers will delete and recreate Pods when a modification is made to the spec.template field of a DaemonSet or StatefulSet. You can use rolling updates to change the configuration (via environment variables or command line parameters), resource requests, resource limits, container images, labels, and/or annotations of the Pods in a StatefulSet or DaemonSet. Note that all updates are destructive, always requiring that each Pod in the DaemonSet or StatefulSet be destroyed and recreated. StatefulSet rolling updates differ from DaemonSet rolling updates in that Pod termination and creation is ordered and sequential.
+
+
+
+You can patch the kafka StatefulSet to reduce the CPU resource request to 250m.
+
+
+
+```
+$ kubectl patch sts kafka --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/cpu", "value":"250m"}]'
+
+statefulset "kafka" patched
+ ```
+
+
+
+
+If you watch the status of the Pods in the StatefulSet, you will see that each Pod is deleted and recreated in reverse ordinal order (starting with the Pod with the largest ordinal and progressing to the smallest). The controller waits for each updated Pod to be running and ready before updating the subsequent Pod.
+
+
+
+```
+$ kubectl get po -lapp=kafka -w
+
+NAME READY STATUS RESTARTS AGE
+
+
+kafka-0 1/1 Running 0 13m
+
+
+kafka-1 1/1 Running 0 13m
+
+
+kafka-2 1/1 Running 0 13m
+
+
+kafka-2 1/1 Terminating 0 14m
+
+
+kafka-2 0/1 Terminating 0 14m
+
+
+kafka-2 0/1 Terminating 0 14m
+
+
+kafka-2 0/1 Terminating 0 14m
+
+
+kafka-2 0/1 Pending 0 0s
+
+
+kafka-2 0/1 Pending 0 0s
+
+
+kafka-2 0/1 ContainerCreating 0 0s
+
+
+kafka-2 0/1 Running 0 10s
+
+
+kafka-2 1/1 Running 0 21s
+
+
+kafka-1 1/1 Terminating 0 14m
+
+
+kafka-1 0/1 Terminating 0 14m
+
+
+kafka-1 0/1 Terminating 0 14m
+
+
+kafka-1 0/1 Terminating 0 14m
+
+
+kafka-1 0/1 Pending 0 0s
+
+
+kafka-1 0/1 Pending 0 0s
+
+
+kafka-1 0/1 ContainerCreating 0 0s
+
+
+kafka-1 0/1 Running 0 11s
+
+
+kafka-1 1/1 Running 0 21s
+
+
+kafka-0 1/1 Terminating 0 14m
+
+
+kafka-0 0/1 Terminating 0 14m
+
+
+kafka-0 0/1 Terminating 0 14m
+
+
+kafka-0 0/1 Terminating 0 14m
+
+
+kafka-0 0/1 Pending 0 0s
+
+
+kafka-0 0/1 Pending 0 0s
+
+
+kafka-0 0/1 ContainerCreating 0 0s
+
+
+kafka-0 0/1 Running 0 10s
+
+
+kafka-0 1/1 Running 0 22s
+
+ ```
+
+
+
+
+Note that unplanned disruptions will not lead to unintentional updates during the update process. That is, the StatefulSet controller will always recreate the Pod at the correct version to ensure the ordering of the update is preserved. If a Pod is deleted, and if it has already been updated, it will be created from the updated version of the StatefulSet’s spec.template. If the Pod has not already been updated, it will be created from the previous version of the StatefulSet’s spec.template. We will explore this further in the following sections.
+
+
+## Staging an update
+
+Depending on how your organization handles deployments and configuration modifications, you may want or need to stage updates to a StatefulSet prior to allowing the roll out to progress. You can accomplish this by setting a partition for the RollingUpdate. When the StatefulSet controller detects a partition in the updateStrategy of a StatefulSet, it will only apply the updated version of the StatefulSet’s spec.template to Pods whose ordinal is greater than or equal to the value of the partition.
+
+
+
+You can patch the kafka StatefulSet to add a partition to the RollingUpdate update strategy. If you set the partition to a number greater than or equal to the StatefulSet’s spec.replicas (as below), any subsequent updates you perform to the StatefulSet’s spec.template will be staged for roll out, but the StatefulSet controller will not start a rolling update.
+
+
+
+```
+$ kubectl patch sts kafka -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
+
+statefulset "kafka" patched
+ ```
+
+
+
+
+If you patch the StatefulSet to set the requested CPU to 0.3, you will notice that none of the Pods are updated.
+
+
+
+```
+$ kubectl patch sts kafka --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/cpu", "value":"0.3"}]'
+
+statefulset "kafka" patched
+ ```
+
+
+
+
+Even if you delete a Pod and wait for the StatefulSet controller to recreate it, you will notice that the Pod is recreated with the current CPU request.
+
+
+
+```
+$ kubectl delete po kafka-1
+pod "kafka-1" deleted
+
+$ kubectl get po kafka-1 -w
+NAME      READY     STATUS              RESTARTS   AGE
+kafka-1   0/1       ContainerCreating   0          10s
+kafka-1   0/1       Running             0          19s
+kafka-1   1/1       Running             0          21s
+
+$ kubectl get po kafka-1 -o yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  ...
+    resources:
+      requests:
+        cpu: 250m
+        memory: 1Gi
+```
+
+## Rolling out a canary
+
+Often, we want to verify an image update or configuration change on a single instance of an application before rolling it out globally. If you modify the partition created above to be 2, the StatefulSet controller will roll out a [canary](http://whatis.techtarget.com/definition/canary-canary-testing) that can be used to verify that the update is working as intended.
+
+
+
+```
+$ kubectl patch sts kafka -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
+
+statefulset "kafka" patched
+ ```
+
+
+
+
+You can watch the StatefulSet controller update the kafka-2 Pod and pause after the update is complete.
+
+
+
+```
+$ kubectl get po -lapp=kafka -w
+NAME      READY     STATUS              RESTARTS   AGE
+kafka-0   1/1       Running             0          50m
+kafka-1   1/1       Running             0          10m
+kafka-2   1/1       Running             0          29s
+kafka-2   1/1       Terminating         0          34s
+kafka-2   0/1       Terminating         0          38s
+kafka-2   0/1       Terminating         0          39s
+kafka-2   0/1       Terminating         0          39s
+kafka-2   0/1       Pending             0          0s
+kafka-2   0/1       Pending             0          0s
+kafka-2   0/1       Terminating         0          20s
+kafka-2   0/1       Terminating         0          20s
+kafka-2   0/1       Pending             0          0s
+kafka-2   0/1       Pending             0          0s
+kafka-2   0/1       ContainerCreating   0          0s
+kafka-2   0/1       Running             0          19s
+kafka-2   1/1       Running             0          22s
+```
+
+## Phased roll outs
+
+Similar to rolling out a canary, you can roll out updates based on a phased progression (e.g. linear, geometric, or exponential roll outs).
+
+
+
+If you patch the kafka StatefulSet to set the partition to 1, the StatefulSet controller updates one more broker.
+
+
+
+```
+$ kubectl patch sts kafka -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":1}}}}'
+
+statefulset "kafka" patched
+ ```
+
+
+
+
+If you set it to 0, the StatefulSet controller updates the final broker and completes the update.
+
+
+
+```
+$ kubectl patch sts kafka -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":0}}}}'
+
+statefulset "kafka" patched
+ ```
+
+
+
+
+Note that you don’t have to decrement the partition by one. For a larger StatefulSet--for example, one with 100 replicas--you might use a progression more like 100, 99, 90, 50, 0. In this case, you would stage your update, deploy a canary, roll out to 10 instances, update fifty percent of the Pods, and then complete the update.
+
+## Cleaning up
+
+To delete the API Objects created above, you can use kubectl delete on the two manifests you used to create the ZooKeeper ensemble and the Kafka cluster.
+
+
+
+```
+$ kubectl delete -f kafka_mini.yaml
+service "kafka-hs" deleted
+poddisruptionbudget "kafka-pdb" deleted
+statefulset "kafka" deleted
+
+$ kubectl delete -f zookeeper_mini.yaml
+service "zk-hs" deleted
+service "zk-cs" deleted
+poddisruptionbudget "zk-pdb" deleted
+statefulset "zk" deleted
+```
+
+
+
+
+By design, the StatefulSet controller does not delete any persistent volume claims (PVCs): the PVCs created for the ZooKeeper ensemble and the Kafka cluster must be manually deleted. Depending on the storage reclamation policy of your cluster, you may also need to manually delete the backing PVs.
+
+# DaemonSet rolling update, history, and rollback
+
+In this section, we’re going to show you how to perform a rolling update on a DaemonSet, look at its history, and then perform a rollback after a bad rollout. We will use a DaemonSet to deploy a [Prometheus node exporter](https://github.com/prometheus/node_exporter) on each Kubernetes node in the cluster. These node exporters export node metrics to the Prometheus monitoring system. For the sake of simplicity, we’ve omitted the installation of the [Prometheus server](https://github.com/prometheus/prometheus) and the service for [communication with DaemonSet pods](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#communicating-with-daemon-pods) from this blog post.
+
+## Prerequisites
+
+To follow along with this section of the blog, you need a working Kubernetes 1.7 cluster and kubectl version 1.7 or later. If you followed along with the first section, you can use the same cluster.
+
+## DaemonSet rolling update: Prometheus node exporters
+
+First, prepare the node exporter DaemonSet manifest to run a v0.13 Prometheus node exporter on every node in the cluster, apply it, and then generate a v0.14 manifest from it with sed:
+
+
+
+```
+$ cat >> node-exporter-v0.13.yaml <<EOF
+# ... DaemonSet manifest for prom/node-exporter:v0.13.0 ...
+EOF
+
+$ kubectl apply -f node-exporter-v0.13.yaml --record
+
+$ cat node-exporter-v0.13.yaml | sed "s/v0.13.0/v0.14.0/g" > node-exporter-v0.14.yaml
+ ```
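+
+The manifest body itself is elided above; a minimal sketch consistent with this walkthrough could look like the following. The apiVersion and the explicit RollingUpdate strategy are assumptions about the Kubernetes 1.7-era schema (extensions/v1beta1 DaemonSets default to OnDelete, so RollingUpdate must be set explicitly); the label, image, and port match the revision history shown later in this section:
+
+```
+apiVersion: extensions/v1beta1
+kind: DaemonSet
+metadata:
+  name: node-exporter
+spec:
+  updateStrategy:
+    type: RollingUpdate        # must be set explicitly in extensions/v1beta1
+  template:
+    metadata:
+      labels:
+        app: node-exporter     # matches the -l app=node-exporter selector
+    spec:
+      containers:
+      - name: node-exporter
+        image: prom/node-exporter:v0.13.0
+        ports:
+        - containerPort: 9100  # the node exporter's metrics port
+```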
+
+
+
+
+Then apply the v0.14 node exporter DaemonSet:
+
+
+
+```
+$ kubectl apply -f node-exporter-v0.14.yaml --record
+
+daemonset "node-exporter" configured
+ ```
+
+
+
+
+Wait for the DaemonSet rolling update to complete:
+
+
+
+```
+$ kubectl rollout status ds node-exporter
+
+...
+
+Waiting for rollout to finish: 3 out of 4 new pods have been updated...
+Waiting for rollout to finish: 3 of 4 updated pods are available...
+daemon set "node-exporter" successfully rolled out
+ ```
+
+
+
+
+We just triggered a DaemonSet rolling update by updating the DaemonSet template. By default, one old DaemonSet pod will be killed and one new DaemonSet pod will be created at a time.
+
+
+
+Now we’ll cause a rollout to fail by updating the image to an invalid value:
+
+
+
+```
+$ cat node-exporter-v0.13.yaml | sed "s/v0.13.0/bad/g" > node-exporter-bad.yaml
+
+$ kubectl apply -f node-exporter-bad.yaml --record
+daemonset "node-exporter" configured
+ ```
+
+
+
+
+Notice that the rollout never finishes:
+
+
+
+```
+$ kubectl rollout status ds node-exporter
+Waiting for rollout to finish: 0 out of 4 new pods have been updated...
+Waiting for rollout to finish: 1 out of 4 new pods have been updated…
+
+# Use ^C to exit
+ ```
+
+
+
+
+This behavior is expected. We mentioned earlier that a DaemonSet rolling update kills and creates one pod at a time. Because the new pod never becomes available, the rollout is halted, preventing the invalid specification from propagating to more than one node. StatefulSet rolling updates implement the same behavior with respect to failed deployments. Unsuccessful updates are blocked until they are corrected by rolling back or by rolling forward with a fixed specification.
+
+
+
+```
+$ kubectl get pods -l app=node-exporter
+NAME                  READY     STATUS         RESTARTS   AGE
+node-exporter-f2n14   0/1       ErrImagePull   0          3m
+...
+
+# N = number of nodes
+$ kubectl get ds node-exporter
+NAME            DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
+node-exporter   N         N         N-1       1            N           <none>          46m
+ ```
+
+
+
+
+## DaemonSet history, rollbacks, and rolling forward
+
+Next, perform a rollback. Take a look at the node exporter DaemonSet rollout history:
+
+
+
+
+```
+$ kubectl rollout history ds node-exporter
+daemonsets "node-exporter"
+REVISION        CHANGE-CAUSE
+1               kubectl apply --filename=node-exporter-v0.13.yaml --record=true
+2               kubectl apply --filename=node-exporter-v0.14.yaml --record=true
+3               kubectl apply --filename=node-exporter-bad.yaml --record=true
+ ```
+
+
+
+
+Check the details of the revision you want to roll back to:
+
+
+
+```
+$ kubectl rollout history ds node-exporter --revision=2
+daemonsets "node-exporter" with revision #2
+Pod Template:
+  Labels:       app=node-exporter
+  Containers:
+   node-exporter:
+    Image:      prom/node-exporter:v0.14.0
+    Port:       9100/TCP
+    Environment:        <none>
+    Mounts:     <none>
+  Volumes:      <none>
+ ```
+
+
+
+
+You can quickly roll back to any DaemonSet revision you found through kubectl rollout history:
+
+
+
+```
+# Roll back to the last revision
+$ kubectl rollout undo ds node-exporter
+daemonset "node-exporter" rolled back
+
+# Or use --to-revision to roll back to a specific revision
+$ kubectl rollout undo ds node-exporter --to-revision=2
+daemonset "node-exporter" rolled back
+ ```
+
+
+
+
+A DaemonSet rollback is done by rolling forward. Therefore, after the rollback, DaemonSet revision 2 becomes revision 4 (current revision):
+
+
+
+```
+$ kubectl rollout history ds node-exporter
+daemonsets "node-exporter"
+REVISION        CHANGE-CAUSE
+1               kubectl apply --filename=node-exporter-v0.13.yaml --record=true
+3               kubectl apply --filename=node-exporter-bad.yaml --record=true
+4               kubectl apply --filename=node-exporter-v0.14.yaml --record=true
+ ```
+
+
+
+
+The node exporter DaemonSet is now healthy again:
+
+
+
+```
+$ kubectl rollout status ds node-exporter
+daemon set "node-exporter" successfully rolled out
+
+# N = number of nodes
+$ kubectl get ds node-exporter
+NAME            DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
+node-exporter   N         N         N         N            N           <none>          46m
+ ```
+
+
+
+
+If the current DaemonSet revision is specified while performing a rollback, the rollback is skipped:
+
+
+
+```
+$ kubectl rollout undo ds node-exporter --to-revision=4
+daemonset "node-exporter" skipped rollback (current template already matches revision 4)
+ ```
+
+
+
+
+You will see this complaint from kubectl if the DaemonSet revision is not found:
+
+
+
+```
+$ kubectl rollout undo ds node-exporter --to-revision=10
+error: unable to find specified revision 10 in history
+ ```
+
+
+
+
+Note that kubectl rollout history and kubectl rollout status support StatefulSets, too!
+
+## Cleaning up
+
+```
+$ kubectl delete ds node-exporter
+ ```
+
+
+
+
+# What’s next for DaemonSet and StatefulSet
+
+Rolling updates and rollbacks close an important feature gap for DaemonSets and StatefulSets. As we plan for Kubernetes 1.8, we want to continue to focus on advancing the core controllers to GA. This likely means that some advanced feature requests (e.g. automatic rollback, infant mortality detection) will be deferred in favor of ensuring the consistency, usability, and stability of the core controllers. We welcome feedback and contributions, so please feel free to reach out on [Slack](http://slack.k8s.io/), to ask questions on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes), or open issues or pull requests on [GitHub](https://github.com/kubernetes/kubernetes).
+
+
+
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Join the community portal for advocates on [K8sPort](http://k8sport.org/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
diff --git a/blog/_posts/2017-09-00-Windows-Networking-At-Parity-With-Linux.md b/blog/_posts/2017-09-00-Windows-Networking-At-Parity-With-Linux.md
new file mode 100644
index 00000000000..c89050dfce6
--- /dev/null
+++ b/blog/_posts/2017-09-00-Windows-Networking-At-Parity-With-Linux.md
@@ -0,0 +1,60 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Windows Networking at Parity with Linux for Kubernetes "
+date: Saturday, September 08, 2017
+
+---
+_**Editor's note: today's post is by Jason Messer, Principal PM Manager at Microsoft, on improvements to the Windows network stack to support the Kubernetes CNI model.**_
+
+ Since I last blogged about [Kubernetes Networking for Windows](https://blogs.technet.microsoft.com/networking/2017/04/04/windows-networking-for-kubernetes/) four months ago, the Windows Core Networking team has made tremendous progress in both the platform and open source Kubernetes projects. With the updates, Windows is now on par with Linux in terms of networking. Customers can now deploy mixed-OS Kubernetes clusters in any environment including Azure, on-premises, and on 3rd-party cloud stacks with the same network primitives and topologies supported on Linux without any workarounds, “hacks”, or 3rd-party switch extensions.
+
+ "So what?", you may ask. There are multiple application and infrastructure-related reasons why these platform improvements make a substantial difference in the lives of developers and operations teams wanting to run Kubernetes. Read on to learn more!
+
+
+
+## Tightly-Coupled Communication
+These improvements enable tightly-coupled communication between multiple Windows Server containers (without Hyper-V isolation) within a single "[Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/)". Think of Pods as the scheduling unit for the Kubernetes cluster, inside of which one or more application containers are co-located and able to share storage and networking resources. All containers within a Pod share the same IP address and port range and are able to communicate with each other using localhost. This enables applications to easily leverage "helper" programs for tasks such as monitoring, configuration updates, log management, and proxies. Another way to think of a Pod is as a compute host with the app containers representing processes.
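+
+As an illustration of that model, here is a hedged sketch of a two-container Windows Pod; the names, images, and the helper's purpose are hypothetical rather than taken from the original post, and the node selector reflects the beta OS label in use at the time:
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: app-with-helper          # hypothetical name
+spec:
+  nodeSelector:
+    beta.kubernetes.io/os: windows
+  containers:
+  - name: app
+    image: my-app:latest         # hypothetical application image
+  - name: log-helper
+    image: my-log-helper:latest  # hypothetical sidecar; reaches the app via localhost
+```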
+
+## Simplified Network Topology
+We also simplified the network topology on Windows nodes in a Kubernetes cluster by reducing the number of endpoints required per container (or more generally, per pod) to one. Previously, Windows containers (pods) running in a Kubernetes cluster required two endpoints - one for external (internet) communication and a second for intra-cluster communication with other nodes or pods in the cluster. This was due to the fact that external communication from containers attached to a host network with local scope (i.e. not publicly routable) required a NAT operation which could only be provided through the Windows NAT (WinNAT) component on the host. Intra-cluster communication required containers to be attached to a separate network with "global" (cluster-level) scope through a second endpoint. Recent platform improvements now enable NAT'ing to occur directly on a container endpoint which is implemented with the Microsoft Virtual Filtering Platform (VFP) Hyper-V switch extension. Now, both external and intra-cluster traffic can flow through a single endpoint.
+
+
+
+## Load-Balancing using VFP in Windows kernel
+Kubernetes worker nodes rely on the kube-proxy to load-balance ingress network traffic to Service IPs between pods in a cluster. Previous versions of Windows implemented the Kube-proxy's load-balancing through a user-space proxy. We recently added support for "Proxy mode: iptables" which is implemented using VFP in the Windows kernel so that any IP traffic can be load-balanced more efficiently by the Windows OS kernel. Users can also configure an external load balancer by specifying the externalIP parameter in a service definition (sketched just after the list below). In addition to the aforementioned improvements, we have also added platform support for the following:
+
+
+
+- Support for DNS search suffixes per container / Pod (Docker improvement - removes additional work previously done by kube-proxy to append DNS suffixes)
+- [Platform Support] 5-tuple rules for creating ACLs (Looking for help from community to integrate this with support for K8s Network Policy)
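+
+ A sketch of the externalIP option mentioned above, expressed as the Service API's externalIPs field; the service name, selector, and address are placeholders:
+
+```
+apiVersion: v1
+kind: Service
+metadata:
+  name: win-app                  # placeholder name
+spec:
+  selector:
+    app: win-app                 # placeholder selector for the backing pods
+  ports:
+  - port: 80
+  externalIPs:
+  - 203.0.113.10                 # placeholder external load balancer IP
+```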
+
+ Now that Windows Server has [joined](https://blogs.technet.microsoft.com/hybridcloud/2017/07/13/new-windows-server-preview-release-available-to-windows-insiders/) the [Windows Insider Program](https://insider.windows.com/), customers and partners can take advantage of these new platform features today, ahead of the eagerly anticipated feature release later this year and the new builds that follow every six months. The latest Windows Server insider [build](https://www.microsoft.com/en-us/software-download/windowsinsiderpreviewserver) now includes support for all of these platform improvements.
+
+ In addition to the platform improvements for Windows, the team submitted code (PRs) for CNI, kubelet, and kube-proxy with the goal of mainlining Windows support into the Kubernetes v1.8 release. These PRs remove previous work-arounds required on Windows for items such as user-mode proxy for internal load balancing, appending additional DNS suffixes to each Kube-DNS request, and a separate container endpoint for external (internet) connectivity.
+
+
+
+- [https://github.com/kubernetes/kubernetes/pull/51063](https://github.com/kubernetes/kubernetes/pull/51063)
+- [https://github.com/kubernetes/kubernetes/pull/51064](https://github.com/kubernetes/kubernetes/pull/51064)
+
+ These new platform features and work on kubelet and kube-proxy align with the CNI network model used by Kubernetes on Linux and simplify the deployment of a K8s cluster without additional configuration or custom (Azure) resource templates. To this end, we completed work on CNI network and IPAM plugins to create/remove endpoints and manage IP addresses. The CNI plugin works through kubelet to target the Windows Host Networking Service (HNS) APIs to create an 'l2bridge' network (analogous to macvlan on Linux) which is enforced by the VFP switch extension.
+
+ The 'l2bridge' network driver re-writes the MAC address of container network traffic on ingress and egress to use the container host's MAC address. This obviates the need for multiple MAC addresses (one per container running on the host) to be "learned" by the upstream network switch port to which the container host is connected. This preserves memory space in physical switch TCAM tables and relies on the Hyper-V virtual switch to do MAC address translation in the host to forward traffic to the correct container. IP addresses are managed by a default, Windows IPAM plug-in which requires that POD CIDR IPs be taken from the container host's network IP space.
+
+ The team demoed ([link](https://files.slack.com/files-pri/T09NY5SBT-F6KTG30E8/download/sigwindows-2017-08-08.mp4) to video) these new platform features and open-source updates to the SIG-Windows group on 8/8. We are working with the community to merge the kubelet and kube-proxy PRs to mainline these changes in time for the Kubernetes v1.8 release due out this September. These capabilities can then be used on current Windows Server insider builds and the [Windows Server, version 1709](https://blogs.technet.microsoft.com/windowsserver/2017/08/24/sneak-peek-1-windows-server-version-1709/).
+
+ Soon after RTM, we will also introduce these improvements into the Azure Container Service (ACS) so that Windows worker nodes and the containers hosted are first-class, Azure VNet citizens. An Azure IPAM plugin for Windows CNI will enable these endpoints to directly attach to Azure VNets with network policies for Windows containers enforced the same way as VMs.
+
+
+
+| Feature | Windows Server 2016 (In-Market) | Next Windows Server Feature Release, Semi-Annual Channel | Linux |
+| --- | --- | --- | --- |
+| Multiple Containers per Pod with shared network namespace (Compartment) | One Container per Pod | ✔ | ✔ |
+| Single (Shared) Endpoint per Pod | Two endpoints: WinNAT (External) + Transparent (Intra-Cluster) | ✔ | ✔ |
+| User-Mode, Load Balancing | ✔ | ✔ | ✔ |
+| Kernel-Mode, Load Balancing | Not Supported | ✔ | ✔ |
+| Support for DNS search suffixes per Pod (Docker update) | Kube-Proxy added multiple DNS suffixes to each request | ✔ | ✔ |
+| CNI Plugin Support | Not Supported | ✔ | ✔ |
+ {:.post-table}
+
+ The Kubernetes SIG Windows group meets bi-weekly on Tuesdays at 12:30 PM ET. To join or view notes from previous meetings, check out this [document](https://docs.google.com/document/d/1Tjxzjjuy4SQsFSUVXZbvqVb64hjNAG5CQX8bK7Yda9w/edit#heading=h.kbz22d1yc431).
diff --git a/blog/_posts/2017-10-00-Enforcing-Network-Policies-In-Kubernetes.md b/blog/_posts/2017-10-00-Enforcing-Network-Policies-In-Kubernetes.md
new file mode 100644
index 00000000000..a7492e9be47
--- /dev/null
+++ b/blog/_posts/2017-10-00-Enforcing-Network-Policies-In-Kubernetes.md
@@ -0,0 +1,119 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Enforcing Network Policies in Kubernetes "
+date: Tuesday, October 30, 2017
+pagination:
+ enabled: true
+---
+_**Editor's note: this post is part of a [series of in-depth articles](http://blog.kubernetes.io/2017/10/five-days-of-kubernetes-18.html) on what's new in Kubernetes 1.8. Today’s post comes from Ahmet Alp Balkan, Software Engineer, Google.**_
+
+
+
+Kubernetes now offers functionality to enforce rules about which pods can communicate with each other using [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/). This feature became stable in Kubernetes 1.7 and is ready to use with supported networking plugins. The Kubernetes 1.8 release has added better capabilities to this feature.
+
+
+## Network policy: What does it mean?
+In a Kubernetes cluster configured with default settings, all pods can discover and communicate with each other without any restrictions. The new Kubernetes object type NetworkPolicy lets you allow and block traffic to pods.
+
+If you’re running multiple applications in a Kubernetes cluster or sharing a cluster among multiple teams, it’s a security best practice to create firewalls that permit pods to talk to each other while blocking other network traffic. Network policy corresponds to the Security Groups concept in the Virtual Machines world.
+
+
+
+## How do I add Network Policy to my cluster?
+Networking Policies are implemented by networking plugins. These plugins typically install an overlay network in your cluster to enforce the Network Policies configured. A number of networking plugins, including [Calico](https://kubernetes.io/docs/tasks/configure-pod-container/calico-network-policy/), [Romana](https://kubernetes.io/docs/tasks/configure-pod-container/romana-network-policy/) and [Weave Net](https://kubernetes.io/docs/tasks/configure-pod-container/weave-network-policy/), support using Network Policies.
+
+Google Container Engine (GKE) also provides beta support for [Network Policies](https://cloud.google.com/container-engine/docs/network-policy) using the Calico networking plugin when you create clusters with the following command:
+
+```
+gcloud beta container clusters create --enable-network-policy
+```
+
+## How do I configure a Network Policy?
+Once you install a networking plugin that implements Network Policies, you need to create a Kubernetes resource of type NetworkPolicy. This object describes two sets of label-based pod selector fields, matching:
+
+1. a set of pods the network policy applies to (required)
+2. a set of pods allowed to access the first set (optional). If you omit this field, it matches no pods; therefore, no pods are allowed. If you specify an empty pod selector, it matches all pods; therefore, all pods are allowed.
+
+## Example: restricting traffic to a pod
+The following network policy example blocks all in-cluster traffic to a set of web server pods, except from the pods allowed by the policy configuration.
+
+
+
+To achieve this setup, create a NetworkPolicy with the following manifest:
+
+
+```
+kind: NetworkPolicy
+apiVersion: networking.k8s.io/v1
+metadata:
+  name: access-nginx
+spec:
+  podSelector:
+    matchLabels:
+      app: nginx
+  ingress:
+  - from:
+    - podSelector:
+        matchLabels:
+          app: foo
+ ```
+
+
+Once you apply this configuration, only pods with label **app: foo** can talk to the pods with the label **app: nginx**. For a more detailed tutorial, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/).
+
+
+## Example: restricting traffic between all pods by default
+If you specify an empty spec.podSelector field, the network policy matches all pods in the namespace, blocking all traffic between pods by default. In this case, you must explicitly create network policies whitelisting all communication between the pods.
+
+ 
+
+You can enable a policy like this by applying the following manifest in your Kubernetes cluster:
+
+
+```
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny
+spec:
+  podSelector:
+ ```
+
+
+
+## Other Network Policy features
+In addition to the previous examples, you can make the Network Policy API enforce more complicated rules:
+
+
+
+- Egress network policies: Introduced in Kubernetes 1.8, you can restrict your workloads from establishing connections to resources outside specified IP ranges.
+- IP blocks support: In addition to using podSelector/namespaceSelector, you can specify IP ranges with CIDR blocks to allow/deny traffic in ingress or egress rules.
+- Cross-namespace policies: Using the ingress.namespaceSelector field, you can enforce Network Policies for particular namespaces or for all namespaces in the cluster. For example, you can create privileged/system namespaces that can communicate with pods even though the default policy is to block traffic.
+- Restricting traffic to port numbers: Using the ingress.ports field, you can specify port numbers for the policy to enforce. If you omit this field, the policy matches all ports by default. For example, you can use this to allow a monitoring pod to query only the monitoring port number of an application.
+- Multiple ingress rules on a single policy: Because spec.ingress field is an array, you can use the same NetworkPolicy object to give access to different ports using different pod selectors. For example, a NetworkPolicy can have one ingress rule giving pods with the kind: monitoring label access to port 9000, and another ingress rule for the label app: foo giving access to port 80, without creating an additional NetworkPolicy resource (see the sketch after this list).
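+
+A hedged sketch of that last example, combining port restrictions with multiple ingress rules in one NetworkPolicy; the policy name and target pod labels are illustrative:
+
+```
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: multi-rule-example   # illustrative name
+spec:
+  podSelector:
+    matchLabels:
+      app: nginx             # illustrative target pods
+  ingress:
+  - from:
+    - podSelector:
+        matchLabels:
+          kind: monitoring   # monitoring pods may reach port 9000
+    ports:
+    - port: 9000
+  - from:
+    - podSelector:
+        matchLabels:
+          app: foo           # "app: foo" pods may reach port 80
+    ports:
+    - port: 80
+```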
+
+## Learn more
+
+- Read more: [Networking Policy documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
+- Read more: [Unofficial Network Policy Guide](https://ahmet.im/blog/kubernetes-network-policy/)
+- Hands-on: [Declare a Network Policy](https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/)
+- Try: [Network Policy Recipes](https://github.com/ahmetb/kubernetes-networkpolicy-tutorial)
diff --git a/blog/_posts/2017-10-00-Five-Days-Of-Kubernetes-18.md b/blog/_posts/2017-10-00-Five-Days-Of-Kubernetes-18.md
new file mode 100644
index 00000000000..ac875440860
--- /dev/null
+++ b/blog/_posts/2017-10-00-Five-Days-Of-Kubernetes-18.md
@@ -0,0 +1,29 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Five Days of Kubernetes 1.8 "
+date: Wednesday, October 24, 2017
+pagination:
+ enabled: true
+---
+Kubernetes 1.8 is live, made possible by hundreds of contributors pushing thousands of commits in this latest release.
+
+The community has tallied more than 66,000 commits in the main repo and continues rapid growth outside of the main repo, which signals growing maturity and stability for the project. The community has logged more than 120,000 commits across all repos, including 17,839 commits between v1.7.0 and v1.8.0 alone.
+
+With the help of our growing community of 1,400 plus contributors, we issued more than 3,000 PRs and pushed more than 5,000 commits to deliver Kubernetes 1.8 with significant security and workload support updates. This all points to increased stability, a result of our project-wide focus on maturing [process](https://github.com/kubernetes/sig-release), formalizing [architecture](https://github.com/kubernetes/community/tree/master/sig-architecture), and strengthening Kubernetes’ [governance model](https://github.com/kubernetes/community/tree/master/community/elections/2017).
+
+While many improvements have been contributed, we highlight key features in this series of in-depth posts listed below. [Follow along](https://twitter.com/kubernetesio) and see what’s new and improved with storage, security and more.
+
+**Day 1:** [5 Days of Kubernetes 1.8](http://blog.kubernetes.io/2017/10/five-days-of-kubernetes-18.html)
+**Day 2:** [kubeadm v1.8 Introduces Easy Upgrades for Kubernetes Clusters](http://blog.kubernetes.io/2017/10/kubeadm-v18-released.html)
+**Day 3:** [Kubernetes v1.8 Retrospective: It Takes a Village to Raise a Kubernetes](http://blog.kubernetes.io/2017/10/it-takes-village-to-raise-kubernetes.html)
+**Day 4:** [Using RBAC, Generally Available in Kubernetes v1.8](http://blog.kubernetes.io/2017/10/using-rbac-generally-available-18.html)
+**Day 5:** [Enforcing Network Policies in Kubernetes](http://blog.kubernetes.io/2017/10/enforcing-network-policies-in-kubernetes.html)
+
+**Connect**
+
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Join the community portal for advocates on [K8sPort](http://k8sport.org/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
diff --git a/blog/_posts/2017-10-00-It-Takes-Village-To-Raise-Kubernetes.md b/blog/_posts/2017-10-00-It-Takes-Village-To-Raise-Kubernetes.md
new file mode 100644
index 00000000000..deac67c0f75
--- /dev/null
+++ b/blog/_posts/2017-10-00-It-Takes-Village-To-Raise-Kubernetes.md
@@ -0,0 +1,39 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " It Takes a Village to Raise a Kubernetes "
+date: Friday, October 26, 2017
+pagination:
+ enabled: true
+---
+**_Editor’s note: this post is part of a [series of in-depth articles](http://blog.kubernetes.io/2017/10/five-days-of-kubernetes-18.html) on what's new in Kubernetes 1.8, written by Jaice Singer DuMars from Microsoft._**
+
+
+Each time we release a new version of Kubernetes, it’s enthralling to see how the community responds to all of the hard work that went into it. Blogs on new or enhanced capabilities crop up all over the web like wildflowers in the spring. Talks, videos, webinars, and demos are not far behind. As soon as the community seems to take this all in, we turn around and add more to the mix. It’s a thrilling time to be a part of this project, and even more so, the movement. It’s not just software anymore.
+
+When circumstances opened the door for me to lead the 1.8 release, I signed on despite a minor case of the butterflies. In a private conversation with another community member, they assured me that “being organized, following up, and knowing when to ask for help” were the keys to being a successful lead. That’s when I knew I could do it — and so I did.
+
+From that point forward, I was wrapped in a patchwork quilt of community that magically appeared at just the right moments. The community’s commitment and earnest passion for quality, consistency, and accountability formed a bedrock from which the release itself was chiseled.
+
+The 1.8 release team proved incredibly cohesive despite a late start. We approached even the most difficult situations with humor, diligence, and sincere curiosity. My experience leading large teams served me well, and underscored another difference about this release: it was more valuable for me to focus on leadership than diving into the technical weeds to solve every problem.
+
+
+Also, the uplifting power of [emoji in Slack](https://kubernetes.slack.com/archives/C2C40FMNF/p1506659664000090) cannot be overestimated.
+
+An important inflection point is underway in the Kubernetes project. If you’ve taken a ride on a “startup rollercoaster,” this is a familiar story. You come up with an idea so crazy that it might work. You build it, get traction, and slowly clickity-clack up that first big hill. The view from the top is dizzying, as you’ve poured countless hours of life into something completely unknown. Once you go over the top of that hill, everything changes. Breakneck acceleration defines or destroys what has been built.
+
+In my experience, that zero gravity point is where everyone in the company (or in this case, project) has to get serious about not only building something, but also maintaining it. Without a commitment to maintenance, things go awry really quickly. From codebases that resemble the Winchester Mystery House to epidemics of crashing production implementations, a fiery descent into chaos can happen quickly despite the outward appearance of success. Thankfully, the Kubernetes community seems to be riding our growth rollercoaster with increasing success at each release.
+
+As software startups mature, there is a natural evolution reflected in the increasing distribution of labor. Explosive adoption means that full-time security, operations, quality, documentation, and project management staff become necessary to deliver stability, reliability, and extensibility. Also, you know things are getting serious when intentional architecture becomes necessary to ensure consistency over time.
+
+Kubernetes has followed a similar path. In the absence of company departments or skill-specific teams, Special Interest Groups (SIGs) have organically formed around core project needs like storage, networking, API machinery, applications, and the operational lifecycle. As SIGs have proliferated, the Kubernetes governance model has crystallized around them, providing a framework for code ownership and shared responsibility. SIGs also help ensure the community is sustainable because success is often more about people than code.
+
+At the Kubernetes [leadership summit](https://github.com/kubernetes/community/tree/master/community/2017-events/05-leadership-summit) in June, a proposed SIG architecture was ratified with a unanimous vote, underscoring a stability theme that seemed to permeate every conversation in one way or another. The days of filling in major functionality gaps appear to be over, and a new era of feature depth has emerged in its place.
+
+Another change is the move away from project-level release “feature themes” to SIG-level initiatives delivered in increments over the course of several releases. That’s an important shift: SIGs have a mission, and everything they deliver should ultimately serve that. As a community, we need to provide facilitation and support so SIGs are empowered to do their best work with minimal overhead and maximum transparency.
+
+Wisely, the community also spotted the opportunity to provide safe mechanisms for innovation that are increasingly less dependent on the code in kubernetes/kubernetes. This in turn creates a flourishing habitat for experimentation without hampering overall velocity. The project can also address technical debt created during the initial ride up the rollercoaster. However, new mechanisms for innovation present an architectural challenge in defining what is and is not Kubernetes. SIG Architecture addresses the challenge of defining Kubernetes’ boundaries. It’s a work in progress that trends toward continuous improvement.
+
+This can be a little overwhelming at the individual level. In reality, it’s not that much different from any other successful startup, save for the fact that authority does not come from a traditional org chart. It comes from SIGs, community technical leaders, the newly-formed steering committee, and ultimately you.
+
+The Kubernetes release process provides a special opportunity to see everything that makes this project tick. I’ll tell you what I saw: people, working together, to do the best they can, in service to everyone who sets out on the cloud native journey.
diff --git a/blog/_posts/2017-10-00-Kubeadm-V18-Released.md b/blog/_posts/2017-10-00-Kubeadm-V18-Released.md
new file mode 100644
index 00000000000..07c41339b1e
--- /dev/null
+++ b/blog/_posts/2017-10-00-Kubeadm-V18-Released.md
@@ -0,0 +1,107 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " kubeadm v1.8 Released: Introducing Easy Upgrades for Kubernetes Clusters "
+date: Thursday, October 25, 2017
+pagination:
+ enabled: true
+---
+**_Editor’s note: this post is part of a [series of in-depth articles](http://blog.kubernetes.io/2017/10/five-days-of-kubernetes-18.html) on what's new in Kubernetes 1.8_**
+
+Since its debut in [September 2016](http://blog.kubernetes.io/2016/09/how-we-made-kubernetes-easy-to-install.html), the Cluster Lifecycle Special Interest Group (SIG) has established kubeadm as the easiest Kubernetes bootstrap method. Now, we’re releasing kubeadm v1.8.0 in tandem with the release of [Kubernetes v1.8.0](http://blog.kubernetes.io/2017/09/kubernetes-18-security-workloads-and.html). In this blog post, I’ll walk you through the changes we’ve made to kubeadm since the last update, the scope of kubeadm, and how you can contribute to this effort.
+
+
+## Security first: kubeadm v1.6 & v1.7
+Previously, we discussed [planned updates for kubeadm v1.6](http://blog.kubernetes.io/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters.html). Our primary focus for v1.6 was security. We started enforcing role based access control (RBAC) as it graduated to beta, gave unique identities and locked-down privileges for different system components in the cluster, disabled the insecure `localhost:8080` API server port, started authorizing all API calls to the kubelets, and [improved the token discovery](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md) method used formerly in v1.5. Token discovery (aka Bootstrap Tokens) graduated to beta in v1.8.
+
+In number of features, kubeadm v1.7.0 was a much smaller release compared to v1.6.0 and v1.8.0. The main additions were enforcing [the Node Authorizer](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/kubelet-authorizer.md), which significantly reduces the attack surface for a Kubernetes cluster, and initial, limited upgrading support from v1.6 clusters.
+
+
+## Easier upgrades, extensibility, and stabilization in v1.8
+
+We had eight weeks between Kubernetes v1.7.0 and our stabilization period (code freeze) to implement new features and to stabilize the upcoming v1.8.0 release. Our goal for kubeadm v1.8.0 was to make it more extensible. We wanted to add a lot of new features and improvements in this cycle, and we succeeded, delivering upgrades along with better introspectability. The most important update in kubeadm v1.8.0 (and my favorite new feature) is **one-command upgrades** of the control plane. While v1.7.0 had the ability to upgrade clusters, the user experience was far from optimal, and the process was risky.
+
+Now, you can easily check to see if your system can handle an upgrade by entering:
+
+
+
+ ```
+$ kubeadm upgrade plan
+ ```
+
+
+This gives you information about which versions you can upgrade to, as well as the health of your cluster.
+
+You can examine the effects an upgrade will have on your system by specifying the --dry-run flag. In previous versions of kubeadm, upgrades were essentially blind in that you could only make assumptions about how an upgrade would impact your cluster. With the new dry run feature, there is no more mystery. You can see exactly what applying an upgrade would do before applying it.
+
+After checking to see how an upgrade will affect your cluster, you can apply the upgrade by typing:
+
+
+
+ ```
+$ kubeadm upgrade apply v1.8.0
+ ```
+
+
+This is a much cleaner and safer way of performing an upgrade than the previous version offered. As with any type of upgrade or downgrade, it’s a good idea to back up your cluster first using your preferred solution.
+
+
+## Self-hosting
+Self-hosting in this context refers to a specific way of setting up the control plane. The self-hosting concept was initially developed by CoreOS in their [bootkube](https://github.com/kubernetes-incubator/bootkube) project. The long-term goal is to move this functionality (currently in an alpha stage) to the generic kubeadm toolbox. Self-hosting means that the control plane components (the API server, controller manager, and scheduler) are themselves workloads in the cluster they run. This means the control plane components can be managed using Kubernetes primitives, which has numerous advantages. For instance, leader-elected components like the scheduler and controller-manager will automatically be run on all masters when HA is implemented, if they are run in a DaemonSet. Rolling upgrades in Kubernetes can be used for upgrades of the control plane components, and next to no extra code has to be written for that to work; it’s one of Kubernetes’ built-in primitives!
+
+Self-hosting won’t be the default until v1.9.0, but users can easily test the feature in experimental clusters. If you test this feature, we’d love your feedback!
+
+You can test out self-hosting by enabling its feature gate:
+
+
+ ```
+$ kubeadm init --feature-gates=SelfHosting=true
+ ```
+
+
+
+## Extensibility
+We’ve added some new extensibility features. You can delegate some tasks, like generating certificates or writing control plane arguments to kubeadm, but still drive the control plane bootstrap process yourself. Basically, you can let kubeadm do some parts and fill in yourself where you need customizations. Previously, you could only use kubeadm init to perform “the full meal deal.” The inclusion of the kubeadm alpha phase command supports our aim to make kubeadm more modular, letting you invoke atomic sub-steps of the bootstrap process.
+
+In v1.8.0, kubeadm alpha phase is just that: an alpha preview. We hope that we can graduate the command to beta as kubeadm phase in v1.9.0. We can’t wait for feedback from the community on how to better improve this feature!
+
+
+## Improvements
+Along with our new kubeadm features, we’ve also made improvements to existing ones. The Bootstrap Token feature that makes `kubeadm join` so short and sweet has graduated from alpha to beta and gained even more security features.
+
+If you made customizations to your system in v1.6 or v1.7, you had to remember what those customizations were when you upgraded your cluster. No longer: beginning with v1.8.0, kubeadm uploads your configuration to a ConfigMap inside of the cluster, and later reads that configuration when upgrading for a seamless user experience.
+
+The first certificate rotation feature has graduated to beta in v1.8, which is great to see. Thanks to the [Auth Special Interest Group](https://github.com/kubernetes/community/tree/master/sig-auth), the Kubernetes node component kubelet can now [rotate its client certificate](https://github.com/kubernetes/features/issues/266) automatically. We expect this area to improve continuously, and will continue to be a part of this cross-SIG effort to easily rotate all certificates in any cluster.
+
+Last but not least, kubeadm is more resilient now. kubeadm init will detect even more faulty environments earlier, and time out instead of waiting forever for the expected condition.
+
+
+## The scope of kubeadm
+As there are so many different end-to-end installers for Kubernetes, there is some fragmentation in the ecosystem. With each new release of Kubernetes, these installers naturally become more divergent. This can create problems down the line if users rely on installer-specific variations and hooks that aren’t standardized in any way. Our goal from the beginning has been to make kubeadm a building block for deploying Kubernetes clusters and to provide kubeadm init and kubeadm join as best-practice “fast paths” for new Kubernetes users. Ideally, using kubeadm as the basis of all deployments will make it easier to create conformant clusters.
+
+kubeadm performs the actions necessary to get a minimum viable cluster up and running. It only cares about bootstrapping, not about provisioning machines, by design. Likewise, installing various nice-to-have addons by default like the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard), some monitoring solution, cloud provider-specific addons, etc. is not in scope. Instead, we expect higher-level and more tailored tooling to be built on top of kubeadm, that installs the software the end user needs.
+
+
+## v1.9.0 and beyond
+What’s in store for the future of kubeadm?
+
+
+#### Planned features
+We plan to address high availability (replicated etcd and multiple, redundant API servers and other control plane components) as an alpha feature in v1.9.0. This has been a regular request from our user base.
+
+Also, we want to make self-hosting the default way to deploy your control plane: Kubernetes becomes much easier to manage if we can rely on Kubernetes' own tools to manage the cluster components.
+
+
+#### Promoting kubeadm adoption and getting involved
+The [kubeadm adoption working group](https://github.com/kubernetes/community/tree/master/wg-kubeadm-adoption) is an ongoing effort between SIG Cluster Lifecycle and other parties in the Kubernetes ecosystem. This working group focuses on making kubeadm more extensible in order to promote adoption of it for other end-to-end installers in the community. Everyone is welcome to join. So far, we’re glad to announce that [kubespray](https://github.com/kubernetes-incubator/kubespray) started using kubeadm under the hood, and gained new features at the same time! We’re excited to see others follow and make the ecosystem stronger.
+
+kubeadm is a great way to learn about Kubernetes: it binds all of Kubernetes’ components together in a single package. To learn more about what kubeadm really does under the hood, [this document](https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.8.md) describes kubeadm functions in v1.8.0.
+
+If you want to get involved in these efforts, join SIG Cluster Lifecycle. We [meet on Zoom](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle) once a week on Tuesdays at 16:00 UTC. For more information about what we talk about in our weekly meetings, [check out our meeting notes](https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#). Meetings are a great educational opportunity, even if you don’t want to jump in and present your own ideas right away. You can also sign up for our [mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle), join our [Slack channel](https://kubernetes.slack.com/messages/sig-cluster-lifecycle), or check out the [video archive](https://www.youtube.com/playlist?list=PL69nYSiGNLP29D0nYgAGWt1ZFqS9Z7lw4&disable_polymer=true) of our past meetings. Even if you’re only interested in watching the video calls initially, we’re excited to welcome you as a new member to SIG Cluster Lifecycle!
+
+If you want to know what a kubeadm developer does at a given time in the Kubernetes release cycle, check out [this doc](https://github.com/kubernetes/kubeadm/blob/master/docs/release-cycle.md). Finally, don’t hesitate to join if any of our upcoming projects are of interest to you!
+
+Thank you,
+Lucas Käldström
+Kubernetes maintainer & SIG Cluster Lifecycle co-lead
+[Weaveworks](https://www.weave.works/?utm_source=k8&utm_medium=ww&utm_campaign=blog) contractor
diff --git a/blog/_posts/2017-10-00-Kubernetes-Community-Steering-Committee-Election-Results.md b/blog/_posts/2017-10-00-Kubernetes-Community-Steering-Committee-Election-Results.md
new file mode 100644
index 00000000000..bbe3f7f8259
--- /dev/null
+++ b/blog/_posts/2017-10-00-Kubernetes-Community-Steering-Committee-Election-Results.md
@@ -0,0 +1,23 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes Community Steering Committee Election Results "
+date: Friday, October 05, 2017
+pagination:
+ enabled: true
+---
+Beginning with the announcement of Kubernetes 1.0 at OSCON in 2015, there has been a concerted effort to share the power and burden of leadership across the Kubernetes community.
+
+With the work of the Bootstrap Governance Committee, consisting of Brandon Philips, Brendan Burns, Brian Grant, Clayton Coleman, Joe Beda, Sarah Novotny and Tim Hockin - a cross section of long-time leaders representing 5 different companies with major investments of talent and effort in the Kubernetes Ecosystem - we wrote an initial [Steering Committee Charter](https://github.com/kubernetes/steering/blob/master/charter.md) and launched a community wide election to seat a Kubernetes Steering Committee.
+
+To quote from the Charter -
+
+_The initial role of the steering committee is to **instantiate the formal process for Kubernetes governance**. In addition to defining the initial governance process, the bootstrap committee strongly believes that **it is important to provide a means for iterating** the processes defined by the steering committee. We do not believe that we will get it right the first time, or possibly ever, and won’t even complete the governance development in a single shot. The role of the steering committee is to be a live, responsive body that can refactor and reform as necessary to adapt to a changing project and community._
+
+This is our largest step yet toward making an implicit governance structure explicit. The Kubernetes vision has been one of an inclusive and broad community seeking to build software which empowers our users with the portability of containers. The Steering Committee will be a strong leadership voice guiding the project toward success.
+
+The Kubernetes Community is pleased to announce the results of the 2017 Steering Committee Elections. **Please congratulate Aaron Crickenberger, Derek Carr, Michelle Noorali, Phillip Wittrock, Quinton Hoole and Timothy St. Clair** , who will be joining the members of the Bootstrap Governance committee on the newly formed Kubernetes Steering Committee. Derek, Michelle, and Phillip will serve for 2 years. Aaron, Quinton, and Timothy will serve for 1 year.
+
+This group will meet regularly in order to clarify and streamline the structure and operation of the project. Early work will include electing a representative to the CNCF Governing Board, evolving project processes, refining and documenting the vision and scope of the project, and chartering and delegating to more topical community groups.
+
+Please see [the full Steering Committee backlog](https://github.com/kubernetes/steering/blob/master/backlog.md) for more details.
diff --git a/blog/_posts/2017-10-00-Request-Routing-And-Policy-Management.md b/blog/_posts/2017-10-00-Request-Routing-And-Policy-Management.md
new file mode 100644
index 00000000000..de32100f0ca
--- /dev/null
+++ b/blog/_posts/2017-10-00-Request-Routing-And-Policy-Management.md
@@ -0,0 +1,452 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Request Routing and Policy Management with the Istio Service Mesh "
+date: Wednesday, October 10, 2017
+pagination:
+ enabled: true
+---
+ **_Editor's note: Today’s post by Frank Budinsky, Software Engineer, IBM, Andra Cismaru, Software Engineer, Google, and Israel Shalom, Product Manager, Google, is the second post in a three-part series on Istio. It offers a closer look at request routing and policy management._**
+
+In a [previous article](http://blog.kubernetes.io/2017/05/managing-microservices-with-istio-service-mesh.html), we looked at a [simple application (Bookinfo)](https://istio.io/docs/guides/bookinfo.html) that is composed of four separate microservices. The article showed how to deploy an application with Kubernetes and an Istio-enabled cluster without changing any application code. The article also outlined how to view Istio provided L7 metrics on the running services.
+
+This article follows up by taking a deeper look at Istio using Bookinfo. Specifically, we’ll look at two more features of Istio: request routing and policy management.
+
+
+## Running the Bookinfo Application
+As before, we run the v1 version of the Bookinfo application. After [installing Istio](https://istio.io/docs/setup/kubernetes/quick-start.html) in our cluster, we start the app defined in [bookinfo-v1.yaml](https://raw.githubusercontent.com/istio/istio/master/samples/kubernetes-blog/bookinfo-v1.yaml) using the following command:
+
+
+
+ ```
+kubectl apply -f <(istioctl kube-inject -f bookinfo-v1.yaml)
+ ```
+
+We created an Ingress resource for the app:
+
+
+
+
+ ```
+cat <<EOF | kubectl create -f -
+ ```
+
+Go to _Account Settings \> Integrations \> Kubernetes_ and click **Authenticate**. This prompts you to log in with your Google credentials.
+
+Once you log in, all of your clusters are available within Codefresh.
+
+ 
+
+
+
+### Add Cluster
+To add your cluster, click the down arrow, then click **Add Cluster** and select the project and cluster name. You can now deploy images!
+
+
+
+### Optional: Use an Alternative Cluster
+To connect a non-GKE cluster we’ll need to add a token and certificate to Codefresh. Go to _Account Settings (bottom left) \> Integrations \> Kubernetes \> Configure \> Add Provider \> Custom Providers_. Expand the dropdown and click **Add Cluster**.
+
+
+ 
+
+Follow the instructions on how to generate the needed information and click Save. Your cluster now appears under the Kubernetes tab.
+
+
+
+### Deploy Static Image to Kubernetes
+Now for the fun part! Codefresh provides an easily modifiable boilerplate that takes care of the heavy lifting of configuring Kubernetes for your application.
+
+1. Click on the **Kubernetes** tab: this shows a list of namespaces.
+
+Think of namespaces as acting a bit like VLANs on a Kubernetes cluster. Each namespace can contain all the services that need to talk to each other on a Kubernetes cluster. For now, we’ll just work off the default namespace (the easy way!).
+
+2. Click **Add Service** and fill in the details.
+
+You can use the [demo application I mentioned earlier](https://github.com/containers101/demochat) that has a Node.js frontend with a MongoDB.
+
+
+Here’s the info we need to pass:
+
+**Cluster** - This is the cluster we added earlier, our application will be deployed there.
+**Namespace** - We’ll use default for our namespace but you can create and use a new one if you’d prefer. Namespaces are discrete units for grouping all the services associated with an application.
+**Service name** - You can name the service whatever you like. Since we’re deploying Mongo, I’ll just name it mongo!
+**Expose port** - We don’t need to expose the port outside of our cluster so we won’t check the box for now but we will specify a port where other containers can talk to this service. Mongo’s default port is ‘27017’.
+**Image** - Mongo is a public image on Dockerhub, so I can reference it by name and tag, ‘mongo:latest’.
+**Internal Ports** - This is the port the mongo application listens on, in this case it’s ‘27017’ again.
+
+We can ignore the other options for now.
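+
+For readers who prefer raw manifests, here is a plain-Kubernetes sketch roughly equivalent to these settings. This is an assumption about the kind of resources generated, not Codefresh's actual output (which you can inspect under Edit \> Advanced, as noted below):
+
+```
+apiVersion: apps/v1beta1
+kind: Deployment
+metadata:
+  name: mongo
+spec:
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        app: mongo
+    spec:
+      containers:
+      - name: mongo
+        image: mongo:latest
+        ports:
+        - containerPort: 27017   # Mongo's default port, as entered above
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: mongo
+spec:
+  selector:
+    app: mongo
+  ports:
+  - port: 27017                  # cluster-internal only; not exposed externally
+```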
+
+3. Scroll down and click **Deploy**.
+
+ 
+
+Boom! You’ve just deployed this image to Kubernetes. You can see by clicking on the status that the service, deployment, replicas, and pods are all configured and running. If you click Edit \> Advanced, you can see and edit all the raw YAML files associated with this application, or copy them and put them into your repository for use on any cluster.
+
+
+
+### Build and Deploy an Image
+To get the rest of our demo application up and running we need to build and deploy the Node.js portion of the application. To do that we’ll need to add our repository to Codefresh.
+
+1. Click on _Repositories \> Add Repository_, then copy and paste the [demochat repo url](https://github.com/containers101/demochat) (or use your own repo).
+
+ 
+
+We have the option to use a dockerfile, or to use a template if we need help creating a dockerfile. In this case, the demochat repo already has a dockerfile so we’ll select that. Click through the next few screens until the image builds.
+
+Once the build is finished the image is automatically saved inside of the Codefresh docker registry. You can also add any [other registry to your account](https://docs.codefresh.io/v1.0/docs/docker-registry) and use that instead.
+
+To deploy the image we’ll need:
+
+- a pull secret
+- the image name and registry
+- the ports that will be used
+
+
+### Creating the Pull Secret
+The pull secret is a token that the Kubernetes cluster can use to access a private Docker registry. To create one, we’ll need to generate the token and save it to Codefresh.
+
+1. Click on **User Settings** (bottom left) and generate a new token.
+
+2. Copy the token to your clipboard.
+
+ 
+
+3. Go to _Account Settings \> Integrations \> Docker Registry \> Add Registry_ and select **Codefresh Registry**. Paste in your token and enter your username (entry is case sensitive). Your username must match your name displayed at the bottom left of the screen.
+
+4. Test and save it.
+
+We’ll now be able to create our secret later on when we deploy our image.
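+
+If you’d rather create the pull secret by hand with kubectl, the generic recipe looks roughly like this; the secret name and credential placeholders are examples:
+
+```
+kubectl create secret docker-registry codefresh-pull \
+  --docker-server=r.cfcr.io \
+  --docker-username=<your-codefresh-username> \
+  --docker-password=<your-token> \
+  --docker-email=<your-email>
+```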
+
+
+
+### Get the image name
+1. Click on **Images** and open the image you just built. Under _Comment_ you’ll see the image name starting with r.cfcr.io.
+
+ 
+
+2. Copy the image name; we’ll need to paste it in later.
+
+
+
+### Deploy the private image to Kubernetes
+We’re now ready to deploy the image we built.
+
+1. Go to the Kubernetes page and, like we did with mongo, click **Add Service** and fill out the page. Make sure to select the same namespace you used to deploy mongo earlier.
+
+
+ 
+
+Now let’s expose the port so we can access this application. This provisions an IP address and automatically configures ingress.
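+
+Conceptually, exposing the port this way is similar to putting a LoadBalancer-type Service in front of the deployment; a hand-rolled sketch, with the service name and ports assumed:
+
+```
+kubectl expose deployment demochat --type=LoadBalancer --port=80 --target-port=5000
+```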
+
+2. Click **Deploy**: your application will be up and running within a few seconds! The IP address may take longer to provision depending on your cluster location.
+
+ 
+
+From this view you can scale the replicas, see application status, and similar tasks.
+
+3. Click on the IP address to view the running application.
+
+ 
+
+At this point you should have your entire application up and running! Not so bad huh? Now to automate deployment!
+
+
+
+### Automate Deployment to Kubernetes
+Every time we make a change to our application, we want to build a new image and deploy it to our cluster. We’ve already set up automated builds, but to automate deployment:
+
+1. Click on **Repositories** (top left).
+
+2. Click on the pipeline for the demochat repo (the gear icon).
+
+ 
+
+3. It’s a good idea to run some tests before deploying. Under _Build and Unit Test_, add npm test for the unit test script.
+
+4. Click **Deploy Script** and select **Kubernetes (Beta)**. Enter the information for the service you’ve already deployed.
+
+ 
+
+You can see the option to use a deployment file from your repo, or to use the deployment file that you just generated.
+
+5. Click **Save**.
+
+You’re done with deployment automation! Now whenever a change is made, the image will build, test, and deploy.
+
+
+
+## Conclusions
+We want to make it easy for every team, not just big enterprise teams, to adopt Kubernetes while preserving all of Kubernetes’ power and flexibility. At any point on the Kubernetes service screen you can switch to YAML to view all of the YAML files generated by the configuration you performed in this walkthrough. You can tweak the file content, copy and paste them into local files, etc.
+
+This walkthrough gives everyone a solid base to start with. When you’re ready, you can tweak the entities directly to specify the exact configuration you’d like.
+
+We’d love your feedback! Please share with us on [Twitter](https://twitter.com/codefresh), or [reach out directly](https://codefresh.io/contact-us/).
+
+
+
+## Addendums
+**Do you have a video to walk me through this?** [You bet](https://www.youtube.com/watch?v=oFwFuUxxFdI&list=PL8mgsmlx4BWV_j_L5oq-q8JdPnlJc3bUv).
+
+**Does this work with Helm Charts?** Yes! We’re currently piloting Helm Charts with a limited set of users. Ping us if you’d like to try it early.
+
+**Does this work with any Kubernetes cluster?** It should work with any Kubernetes cluster and is tested for Kubernetes 1.5 forward.
+
+**Can I deploy Codefresh in my own data center?** Sure, Codefresh is built on top of Kubernetes using Helm Charts. Codefresh cloud is free for open source and includes 200 builds/mo. Codefresh on prem is currently for enterprise users only.
+
+**Won’t the database be wiped every time we update?** Yes, in this case we skipped creating a persistent volume. It’s a bit more work to get the persistent volume configured; if you’d like help, [feel free to reach out](https://codefresh.io/contact-us/) and we’re happy to assist!
diff --git a/blog/_posts/2017-11-00-Kubernetes-Is-Still-Hard-For-Developers.md b/blog/_posts/2017-11-00-Kubernetes-Is-Still-Hard-For-Developers.md
new file mode 100644
index 00000000000..d5086b61736
--- /dev/null
+++ b/blog/_posts/2017-11-00-Kubernetes-Is-Still-Hard-For-Developers.md
@@ -0,0 +1,16 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Kubernetes is Still Hard (for Developers) "
+date: Thursday, November 15, 2017
+pagination:
+ enabled: true
+---
+
+Kubernetes has made the Ops experience much easier, but how does the developer experience compare? Ops teams can deploy a Kubernetes cluster in a matter of minutes. But developers need to understand a host of new concepts before beginning to work with Kubernetes. This can be a tedious and manual process, but it doesn’t have to be. In this talk, [Michelle Noorali](https://twitter.com/michellenoorali), co-lead of SIG-Apps, reimagines the Kubernetes developer experience. She shares her top 3 tips for building a successful developer experience including:
+
+1. {:.blog-content} A framework for thinking about cloud native applications
+2. {:.blog-content} An integrated experience for debugging and fine-tuning cloud native applications
+3. {:.blog-content} A way to get a cloud native application out the door quickly
+
+Interested in learning how far the Kubernetes developer experience has come? Join us at KubeCon in Austin on December 6-8. [Register Now \>\>](https://goo.gl/TK9ET3)
+
+[Check out Michelle’s keynote](http://sched.co/CUCC) to learn about exciting new updates from CNCF projects.
diff --git a/blog/_posts/2017-11-00-Securing-Software-Supply-Chain-Grafeas.md b/blog/_posts/2017-11-00-Securing-Software-Supply-Chain-Grafeas.md
new file mode 100644
index 00000000000..84f1f29144e
--- /dev/null
+++ b/blog/_posts/2017-11-00-Securing-Software-Supply-Chain-Grafeas.md
@@ -0,0 +1,234 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Securing Software Supply Chain with Grafeas "
+date: Saturday, November 03, 2017
+pagination:
+ enabled: true
+---
+ **_Editor's note: This post is written by Kelsey Hightower, Staff Developer Advocate at Google, and Sandra Guo, Product Manager at Google._**
+
+Kubernetes has evolved to support increasingly complex classes of applications, enabling the development of two major industry trends: hybrid cloud and microservices. With increasing complexity in production environments, customers—especially enterprises—are demanding better ways to manage their software supply chain with more centralized visibility and control over production deployments.
+
+On October 12th, Google and partners [announced](https://cloudplatform.googleblog.com/2017/10/introducing-grafeas-open-source-api-.html) Grafeas, an open source initiative to define a best practice for auditing and governing the modern software supply chain. With Grafeas (“scribe” in Greek), developers can plug in components of the CI/CD pipeline into a central source of truth for tracking and enforcing policies. Google is also working on [Kritis](https://github.com/Grafeas/Grafeas/blob/master/case-studies/binary-authorization.md) (“judge” in Greek), allowing DevOps teams to enforce deploy-time image policy using metadata and attestations stored in Grafeas.
+
+Grafeas allows build, auditing and compliance tools to exchange comprehensive metadata on container images using a central API. This allows enforcing policies that provide central control over the software supply process.
+
+
+[](http://2.bp.blogspot.com/-TDD4slMA7gg/WfzDeKVLr2I/AAAAAAAAAGw/dhfWOrCMdmogSNhGr5RrA2ovr02K5nn8ACK4BGAYYCw/s1600/Screen%2BShot%2B2017-11-03%2Bat%2B12.28.13%2BPM.png)
+
+
+## Example application: PaymentProcessor
+
+Let’s consider a simple application, _PaymentProcessor_, that retrieves, processes and updates payment info stored in a database. This application is made up of two containers: a standard Ruby container and one containing custom logic.
+
+
+Due to the sensitive nature of the payment data, the developers and DevOps team really want to make sure that the code meets certain security and compliance requirements, with detailed records on the provenance of this code. There are CI/CD stages that validate the quality of the PaymentProcessor release, but there is no easy way to centrally view/manage this information:
+
+
+[](http://1.bp.blogspot.com/-WeI6zpGd42A/WfzDkkIonFI/AAAAAAAAAG4/wKUaNaXYvaQ-an9p4_9T9J3EQB_zHkRXwCK4BGAYYCw/s1600/Screen%2BShot%2B2017-11-03%2Bat%2B12.28.23%2BPM.png)
+
+
+## Visibility and governance over the PaymentProcessor Code
+Grafeas provides an API for customers to centrally manage metadata created by various CI/CD components and enables deploy time policy enforcement through a Kritis implementation.
+
+[](http://4.bp.blogspot.com/-SRMfm5z606M/WfzDpHqlz-I/AAAAAAAAAHA/y2suaInhr9E0hU0u78PacBT_kZj2D7DKgCK4BGAYYCw/s1600/Screen%2BShot%2B2017-11-03%2Bat%2B12.28.34%2BPM.png)
+
+
+Let’s consider a basic example of how Grafeas can provide deploy time control for the PaymentProcessor app using a demo verification pipeline.
+
+Assume that a PaymentProcessor container image has been created and pushed to Google Container Registry. This example uses the gcr.io/exampleApp/PaymentProcessor container for testing. You as the QA engineer want to create an attestation certifying this image for production usage. Instead of trusting an image tag like 0.0.1, which can be reused and point to a different container image later, we can trust the image digest to ensure the attestation links to the full image contents.
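+
+As an aside, one common way to resolve a tag to its digest locally is via docker, assuming the image has been pulled; the 0.0.1 tag here is just the example from above:
+
+```
+docker pull gcr.io/exampleApp/PaymentProcessor:0.0.1
+docker inspect --format='{{index .RepoDigests 0}}' gcr.io/exampleApp/PaymentProcessor:0.0.1
+```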
+
+
+
+**1. Set up the environment**
+
+
+Generate a signing key:
+
+
+
+```
+gpg --quick-generate-key --yes qa_bob@example.com
+```
+
+
+Export the image signer's public key:
+
+
+
+```
+gpg --armor --export image.signer@example.com > ${GPG_KEY_ID}.pub
+```
+
+
+Create the ‘qa’ AttestationAuthority note via the Grafeas API:
+
+
+
+```
+curl -X POST \
+ "http://127.0.0.1:8080/v1alpha1/projects/image-signing/notes?noteId=qa" \
+ -d @note.json
+ ```
+
+
+Create the Kubernetes ConfigMap for admissions control and store the QA signer's public key:
+
+
+
+```
+kubectl create configmap image-signature-webhook \
+  --from-file ${GPG_KEY_ID}.pub
+
+kubectl get configmap image-signature-webhook -o yaml
+```
+
+
+Set up an admissions control webhook to require QA signature during deployment.
+
+
+
+
+```
+kubectl apply -f kubernetes/image-signature-webhook.yaml
+ ```
+
+
+
+
+
+**2. Attempt to deploy an image without QA attestation**
+
+Attempt to run the image in paymentProcessor.yaml before it is QA attested:
+
+
+
+```
+# pods/paymentProcessor.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: payment
+spec:
+  containers:
+  - name: payment
+    image: "gcr.io/hightowerlabs/payment@sha256:aba48d60ba4410ec921f9d2e8169236c57660d121f9430dc9758d754eec8f887"
+```
+
+
+Create the paymentProcessor pod:
+
+
+
+```
+kubectl apply -f pods/paymentProcessor.yaml
+ ```
+
+
+Notice the paymentProcessor pod was not created and the following error was returned:
+
+
+
+```
+The "" is invalid: : No matched signatures for container image: gcr.io/hightowerlabs/payment@sha256:aba48d60ba4410ec921f9d2e8169236c57660d121f9430dc9758d754eec8f887
+ ```
+
+
+**3. Create an image signature**
+
+Assuming the image digest is stored in Image-digest.txt, sign the digest:
+
+
+
+```
+gpg -u qa_bob@example.com \
+  --armor \
+  --clearsign \
+  --output=signature.gpg \
+  Image-digest.txt
+```
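+
+You can sanity-check the result locally before uploading; because --clearsign embeds both the message and the signature in one file, verification needs only that file:
+
+```
+gpg --verify signature.gpg
+```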
+
+
+
+**4. Upload the signature to the Grafeas API**
+
+Generate a pgpSignedAttestation occurrence from the signature:
+
+
+
+
+```
+cat > occurrence.json <<EOF
+```
+
+Running a Pod N times isn’t the right hammer for every nail in your cluster. Sometimes, you need to run a Pod on every Node, or on a subset of Nodes (for example, shared side cars like log shippers and metrics collectors, Kubernetes add-ons, and Distributed File Systems). The state of the art was Pods combined with NodeSelectors, or static Pods, but this is unwieldy. After having grown used to the ease of automation provided by Deployments, users demanded the same features for this category of application, so [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) was added to extensions/v1beta1 as well.
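+
+A minimal DaemonSet sketch in its modern apps/v1 form; the log-shipper image is a placeholder:
+
+```
+cat <<EOF | kubectl apply -f -
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: log-shipper
+spec:
+  selector:
+    matchLabels:
+      name: log-shipper
+  template:
+    metadata:
+      labels:
+        name: log-shipper
+    spec:
+      containers:
+      - name: log-shipper
+        image: example/log-shipper:1.0   # placeholder image
+EOF
+```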
+
+For a time, users were content, until they decided that Kubernetes needed to be able to orchestrate more than just 12 factor apps and cluster infrastructure. Whether your architecture is N-tier, service oriented, or micro-service oriented, your 12 factor apps depend on stateful workloads (for example, RDBMSs, distributed key value stores, and messaging queues) to provide services to end users and other applications. These stateful workloads can have availability and durability requirements that can only be achieved by distributed systems, and users were ready to use Kubernetes to orchestrate the entire stack.
+
+While Deployments are great for stateless workloads, they don’t provide the right guarantees for the orchestration of distributed systems. These applications can require stable network identities, ordered, sequential deployment, updates, and deletion, and stable, durable storage. [PetSet](https://kubernetes.io/docs/tasks/run-application/upgrade-pet-set-to-stateful-set/) was added to the apps/v1beta1 group version to address this category of application. Unfortunately, [we were less than thoughtful with its naming](https://github.com/kubernetes/kubernetes/issues/27430), and, as we always strive to be an inclusive community, we renamed the kind to [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/).
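+
+A bare-bones apps/v1 StatefulSet sketch showing the stable-identity and durable-storage pieces; the names, image, and storage size are examples, and the headless Service is assumed to exist:
+
+```
+cat <<EOF | kubectl apply -f -
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: kv-store
+spec:
+  serviceName: kv-store   # headless Service providing stable network identities
+  replicas: 3
+  selector:
+    matchLabels:
+      app: kv-store
+  template:
+    metadata:
+      labels:
+        app: kv-store
+    spec:
+      containers:
+      - name: kv-store
+        image: example/kv-store:1.0   # placeholder image
+  volumeClaimTemplates:
+  - metadata:
+      name: data
+    spec:
+      accessModes: ["ReadWriteOnce"]
+      resources:
+        requests:
+          storage: 1Gi
+EOF
+```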
+
+Finally, we were done.
+
+ 
+
+...Or were we?
+
+
+
+## Kubernetes 1.8 and apps/v1beta2
+Pod, ReplicationController, ReplicaSet, Deployment, DaemonSet, and StatefulSet came to collectively be known as the core workloads API. We could finally orchestrate all of the things, but the API surface was spread across three groups, had many inconsistencies, and left users wondering about the stability of each of the core workloads kinds. It was time to stop adding new features and focus on consistency and stability.
+
+Pod and ReplicationController were at GA stability, and even though you can run a workload in a Pod, it’s a nucleus primitive that belongs in core. As Deployments are the recommended way to manage your stateless apps, moving ReplicationController would serve no purpose. In Kubernetes 1.8, we moved all the other core workloads API kinds (Deployment, DaemonSet, ReplicaSet, and StatefulSet) to the apps/v1beta2 group version. This had the benefit of providing a better aggregation across the API surface, and allowing us to break backward compatibility to fix inconsistencies. Our plan was to promote this new surface to GA, wholesale and as is, when we were satisfied with its completeness. The modifications in this release, which are also implemented in apps/v1, are described below.
+
+
+
+### Selector Defaulting Deprecated
+In prior versions of the apps and extensions groups, label selectors of the core workloads API kinds were, when left unspecified, defaulted to a label selector generated from the kind’s template’s labels.
+
+This was completely incompatible with strategic merge patch and kubectl apply. Moreover, we’ve found that defaulting the value of a field from the value of another field of the same object is an anti-pattern, in general, and particularly dangerous for the API objects used to orchestrate workloads.
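+
+In practice this means spelling the selector out explicitly; a minimal apps/v1 Deployment sketch, with names and image chosen purely for illustration:
+
+```
+cat <<EOF | kubectl apply -f -
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: web
+spec:
+  replicas: 3
+  selector:
+    matchLabels:   # must be specified explicitly; no longer defaulted from template labels
+      app: web
+  template:
+    metadata:
+      labels:
+        app: web
+    spec:
+      containers:
+      - name: web
+        image: nginx:1.13   # example image
+EOF
+```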
+
+
+
+### Immutable Selectors
+Selector mutation, while allowing for some use cases like promotable Deployment canaries, is not handled gracefully by our workload controllers, and we have always [strongly cautioned users against it](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#label-selector-updates). To provide a consistent, usable, and stable API, selectors were made immutable for all kinds in the workloads API.
+
+We believe that there are better ways to support features like promotable canaries and orchestrated Pod relabeling, but, if restricted selector mutation is a necessary feature for our users, we can relax immutability in the future without breaking backward compatibility.
+
+The development of features like promotable canaries, orchestrated Pod relabeling, and restricted selector mutability is driven by demand signals from our users. If you are currently modifying the selectors of your core workload API objects, please tell us about your use case via a GitHub issue, or by participating in SIG apps.
+
+
+
+### Default Rolling Updates
+Prior to apps/v1beta2, some kinds defaulted their update strategy to something other than RollingUpdate (e.g. apps/v1beta1/StatefulSet uses OnDelete by default). We wanted to be confident that RollingUpdate worked well prior to making it the default update strategy, and we couldn’t change the default behavior in released versions without breaking our promise with respect to backward compatibility. In apps/v1beta2 we enabled RollingUpdate for all core workloads kinds by default.
+
+
+
+### CreatedBy Annotation Deprecated
+The "kubernetes.io/created-by" was a legacy hold over from the days before garbage collection. Users should use an object’s ControllerRef from its ownerReferences to determine object ownership. We deprecated this feature in 1.8 and removed it in 1.9.
+
+
+
+### Scale Subresources
+A scale subresource was added to all of the applicable kinds in apps/v1beta2 (DaemonSet scales based on its node selector).
+
+
+## Kubernetes 1.9 and apps/v1
+In Kubernetes 1.9, as planned, we promoted the entire core workloads API surface to GA in the apps/v1 group version. We made a few more changes to make the API consistent, but apps/v1 is mostly identical to apps/v1beta2. The reality is that most users have been treating the beta versions of the core workloads API as GA for some time now. Anyone who is still using ReplicationControllers and shying away from DaemonSets, Deployments, and StatefulSets, due to a perceived lack of stability, should plan to migrate their workloads (where applicable) to apps/v1. The minor changes that were made during promotion are described below.
+
+
+
+### Garbage Collection Defaults to Delete
+Prior to apps/v1 the default garbage collection policy for Pods in a DaemonSet, Deployment, ReplicaSet, or StatefulSet, was to orphan the Pods. That is, if you deleted one of these kinds, the Pods that they owned would not be deleted automatically unless cascading deletion was explicitly specified. If you use kubectl, you probably didn’t notice this, as these kinds are scaled to zero prior to deletion. In apps/v1 all core workloads API objects will now, by default, be deleted when their owner is deleted. For most users, this change is transparent.
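+
+If you need the old orphaning behavior for a one-off deletion, kubectl lets you opt out of cascading deletion; a sketch, with the deployment name assumed:
+
+```
+kubectl delete deployment web --cascade=false
+```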
+
+### Status Conditions
+Prior to apps/v1 only Deployment and ReplicaSet had Conditions in their Status objects. For consistency's sake, either all of the objects or none of them should have conditions. After some debate, we decided that Conditions are useful, and we added Conditions to StatefulSetStatus and DaemonSetStatus. The StatefulSet and DaemonSet controllers currently don’t populate them, but we may choose to communicate conditions to clients, via this mechanism, in the future.
+
+### Scale Subresource Migrated to autoscaling/v1
+We originally added a scale subresource to the apps group. This was the wrong direction for integration with autoscaling, and, at some point, we would like to use custom metrics to autoscale StatefulSets. So the apps/v1 group version uses the autoscaling/v1 scale subresource.
+
+
+
+## Migration and Deprecation
+The question most of you are probably asking now is, “What’s my migration path onto apps/v1 and how soon should I plan on migrating?” All of the group versions prior to apps/v1 are deprecated as of Kubernetes 1.9, and all new code should be developed against apps/v1, but, as discussed above, many of our users treat extensions/v1beta1 as if it were GA. We realize this, and the minimum support timelines in our [deprecation policy](https://kubernetes.io/docs/reference/deprecation-policy/) are just that, minimums.
+
+In future releases, before completely removing any of the group versions, we will disable them by default in the API Server. At this point, you will still be able to use the group version, but you will have to explicitly enable it. We will also provide utilities to upgrade the storage version of the API objects to apps/v1. Remember, all of the versions of the core workloads kinds are bidirectionally convertible. If you want to manually update your core workloads API objects now, you can use [kubectl convert](https://kubernetes.io/docs/user-guide/kubectl/v1.7/#convert) to convert manifests between group versions.
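+
+For example, converting an existing manifest to the new group version could look like this, with the file name assumed:
+
+```
+kubectl convert -f deployment.yaml --output-version apps/v1
+```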
+
+
+
+## What’s Next?
+The core workloads API surface is stable, but it’s still software, and software is never complete. We often add features to stable APIs to support new use cases, and we will likely do so for the core workloads API as well. GA stability means that any new features that we do add will be strictly backward compatible with the existing API surface. From this point forward, nothing we do will break our backwards compatibility guarantees. If you’re looking to participate in the evolution of this portion of the API, please feel free to get involved in [GitHub](https://github.com/kubernetes/kubernetes) or to participate in [SIG Apps](https://github.com/kubernetes/community/tree/master/sig-apps).
+
+--Kenneth Owens, Software Engineer, Google
+
+
+- [Download](http://get.k8s.io/) Kubernetes
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
diff --git a/blog/_posts/2018-01-00-Extensible-Admission-Is-Beta.md b/blog/_posts/2018-01-00-Extensible-Admission-Is-Beta.md
new file mode 100644
index 00000000000..e64a340975a
--- /dev/null
+++ b/blog/_posts/2018-01-00-Extensible-Admission-Is-Beta.md
@@ -0,0 +1,135 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: "Extensible Admission is Beta"
+date: Friday, January 11, 2018
+pagination:
+ enabled: true
+---
+In this post we review a feature, available in the Kubernetes API server, that allows you to implement arbitrary control decisions and which has matured considerably in Kubernetes 1.9.
+
+The admission stage of API server processing is one of the most powerful tools for securing a Kubernetes cluster by restricting the objects that can be created, but it has always been limited to compiled code. In 1.9, we promoted webhooks for admission to beta, allowing you to leverage admission from outside the API server process.
+
+
+
+## What is Admission?
+[Admission](https://kubernetes.io/docs/admin/admission-controllers/#what-are-they) is the phase of [handling an API server request](https://blog.openshift.com/kubernetes-deep-dive-api-server-part-1/) that happens before a resource is persisted, but after authorization. Admission gets access to the same information as authorization (user, URL, etc) and the complete body of an API request (for most requests).
+
+[](http://2.bp.blogspot.com/-p8WGg2BATsY/WlfywbD_tAI/AAAAAAAAAJw/mDqZV0dB4_Y0gXXQp_1tQ7CtMRSd6lHVwCK4BGAYYCw/s1600/Screen%2BShot%2B2018-01-11%2Bat%2B3.22.07%2BPM.png)
+
+The admission phase is composed of individual plugins, each of which are narrowly focused and have semantic knowledge of what they are inspecting. Examples include: PodNodeSelector (influences scheduling decisions), PodSecurityPolicy (prevents escalating containers), and ResourceQuota (enforces resource allocation per namespace).
+
+Admission is split into two phases:
+
+1. Mutation, which allows modification of the body content itself as well as rejection of an API request.
+2. Validation, which allows introspection queries and rejection of an API request.
+
+An admission plugin can be in both phases, but all mutation happens before validation.
+
+
+
+### Mutation
+The mutation phase of admission allows modification of the resource content before it is persisted. Because the same field can be mutated multiple times while in the admission chain, the order of the admission plugins in the mutation matters.
+
+One example of a mutating admission plugin is the `PodNodeSelector` plugin, which uses an annotation on a namespace `namespace.annotations["scheduler.alpha.kubernetes.io/node-selector"]` to find a label selector and add it to the `pod.spec.nodeSelector` field. This positively restricts which nodes the pods in a particular namespace can land on, as opposed to taints, which provide negative restriction (also with an admission plugin).
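+
+Feeding that plugin might look roughly like the following, assuming PodNodeSelector is enabled on the API server; the namespace name and the env=dev selector are examples:
+
+```
+kubectl annotate namespace dev "scheduler.alpha.kubernetes.io/node-selector=env=dev"
+```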
+
+
+
+### Validation
+The validation phase of admission allows the enforcement of invariants on particular API resources. The validation phase runs after all mutators finish to ensure that the resource isn’t going to change again.
+
+One example of a validation admission plugin is also the `PodNodeSelector` plugin, which ensures that all pods’ `spec.nodeSelector` fields are constrained by the node selector restrictions on the namespace. Even if a mutating admission plugin tries to change the `spec.nodeSelector` field after the PodNodeSelector runs in the mutating chain, the PodNodeSelector in the validating chain prevents the API resource from being created because it fails validation.
+
+
+## What are admission webhooks?
+Admission webhooks allow a Kubernetes installer or a cluster-admin to add mutating and validating admission plugins to the admission chain of `kube-apiserver` as well as any extensions apiserver based on k8s.io/apiserver 1.9, like [metrics](https://github.com/kubernetes/metrics), [service-catalog](https://github.com/kubernetes-incubator/service-catalog), or [kube-projects](https://github.com/openshift/kube-projects), without recompiling them. Both kinds of admission webhooks run at the end of their respective chains and have the same powers and limitations as compiled admission plugins.
+
+
+### What are they good for?
+Webhook admission plugins allow for mutation and validation of any resource on any API server, so the possible applications are vast. Some common use-cases include:
+
+1. Mutation of resources like pods. Istio has talked about doing this to inject side-car containers into pods. You could also write a plugin which forcefully resolves image tags into image SHAs.
+2. Name restrictions. On multi-tenant systems, reserving namespaces has emerged as a use-case.
+3. Complex CustomResource validation. Because the entire object is visible, a clever admission plugin can perform complex validation on dependent fields (A requires B) and even external resources (compare to LimitRanges).
+4. Security response. If you forced image tags into image SHAs, you could write an admission plugin that prevents certain SHAs from running.
+
+### Registration
+Webhook admission plugins of both types are registered in the API, and all API servers (kube-apiserver and all extension API servers) share a common config for them. During the registration process, a webhook admission plugin describes:
+
+
+1. How to connect to the webhook admission server
+2. How to verify the webhook admission server (Is it really the server I expect?)
+3. Where to send the data at that server (which URL path)
+4. Which resources and which HTTP verbs it will handle
+5. What an API server should do on connection failures (for example, if the admission webhook server goes down)
+
+```
+1 apiVersion: admissionregistration.k8s.io/v1beta1
+2 kind: ValidatingWebhookConfiguration
+3 metadata:
+4 name: namespacereservations.admission.online.openshift.io
+5 webhooks:
+6 - name: namespacereservations.admission.online.openshift.io
+7 clientConfig:
+8 service:
+9 namespace: default
+10 name: kubernetes
+11 path: /apis/admission.online.openshift.io/v1alpha1/namespacereservations
+12 caBundle: KUBE\_CA\_HERE
+13 rules:
+14 - operations:
+15 - CREATE
+16 apiGroups:
+17 - ""
+18 apiVersions:
+19 - "\*"
+20 resources:
+21 - namespaces
+22 failurePolicy: Fail
+```
+Line 6: `name` - the name for the webhook itself. For mutating webhooks, these are sorted to provide ordering.
+Line 7: `clientConfig` - provides information about how to connect to, trust, and send data to the webhook admission server.
+Line 13: `rules` - describe when an API server should call this admission plugin. In this case, only for creates of namespaces. You can specify any resource here so specifying creates of `serviceinstances.servicecatalog.k8s.io` is also legal.
+Line 22: `failurePolicy` - says what to do if the webhook admission server is unavailable. Choices are “Ignore” (fail open) or “Fail” (fail closed). Failing open makes for unpredictable behavior for all clients.
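+
+For completeness, the webhook answers each call with an AdmissionReview object of its own; a minimal rejection response might look like the following sketch, where the uid must echo the one from the incoming request and the message is just an example:
+
+```
+{
+  "apiVersion": "admission.k8s.io/v1beta1",
+  "kind": "AdmissionReview",
+  "response": {
+    "uid": "<uid copied from the incoming request>",
+    "allowed": false,
+    "status": {
+      "message": "this namespace name is reserved"
+    }
+  }
+}
+```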
+
+
+
+### Authentication and trust
+
+Because webhook admission plugins have a lot of power (remember, they get to see the API resource content of any request sent to them and might modify them for mutating plugins), it is important to consider:
+
+- How individual API servers verify their connection to the webhook admission server
+- How the webhook admission server authenticates precisely which API server is contacting it
+- Whether that particular API server has authorization to make the request
+
+There are three major categories of connection:
+
+1. From kube-apiserver or extension-apiservers to externally hosted admission webhooks (webhooks not hosted in the cluster)
+2. From kube-apiserver to self-hosted admission webhooks
+3. From extension-apiservers to self-hosted admission webhooks
+
+To support these categories, the webhook admission plugins accept a kubeconfig file which describes how to connect to individual servers. For interacting with externally hosted admission webhooks, there is really no alternative to configuring that file manually since the authentication/authorization and access paths are owned by the server you’re hooking to.
+
+For the self-hosted category, a cleverly built webhook admission server and topology can take advantage of the safe defaulting built into the admission plugin and have a secure, portable, zero-config topology that works from any API server.
+
+
+
+### Simple, secure, portable, zero-config topology
+If you build your webhook admission server to also be an extension API server, it becomes possible to aggregate it as a normal API server. This has a number of advantages:
+
+- Your webhook becomes available like any other API under default kube-apiserver service `kubernetes.default.svc` (e.g. [https://kubernetes.default.svc/apis/admission.example.com/v1/mymutatingadmissionreviews](https://kubernetes.default.svc/apis/admission.example.com/v1/mymutatingadmissionreviews)). Among other benefits, you can test using `kubectl`.
+- Your webhook automatically (without any config) makes use of the in-cluster authentication and authorization provided by kube-apiserver. You can restrict access to your webhook with normal RBAC rules.
+- Your extension API servers and kube-apiserver automatically (without any config) make use of their in-cluster credentials to communicate with the webhook.
+- Extension API servers do not leak their service account token to your webhook because they go through kube-apiserver, which is a secure front proxy.
+
+_Source: [https://drive.google.com/a/redhat.com/file/d/12nC9S2fWCbeX\_P8nrmL6NgOSIha4HDNp](https://drive.google.com/a/redhat.com/file/d/12nC9S2fWCbeX_P8nrmL6NgOSIha4HDNp)_
+
+In short: a secure topology makes use of all security mechanisms of API server aggregation and requires no additional configuration.
+
+Other topologies are possible but require additional manual configuration as well as a lot of effort to create a secure setup, especially when extension API servers like service catalog come into play. The topology above is zero-config and portable to every Kubernetes cluster.
+
+
+
+### How do I write a webhook admission server?
+Writing a full server complete with authentication and authorization can be intimidating. To make it easier, there are projects based on Kubernetes 1.9 that provide a library for building your webhook admission server in 200 lines or less. Take a look at the [generic-admission-apiserver](https://github.com/openshift/generic-admission-server) and the [kubernetes-namespace-reservation](https://github.com/openshift/kubernetes-namespace-reservation) projects for the library and an example of how to build your own secure and portable webhook admission server.
+
+With the admission webhooks introduced in 1.9 we’ve made Kubernetes even more adaptable to your needs. We hope this work, driven by both Red Hat and Google, will enable many more workloads and support ecosystem components. (Istio is one example.) Now is a good time to give it a try!
+
+If you’re interested in giving feedback or contributing to this area, join us in the [SIG API machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery).
diff --git a/blog/_posts/2018-01-00-Five-Days-Of-Kubernetes-19.md b/blog/_posts/2018-01-00-Five-Days-Of-Kubernetes-19.md
new file mode 100644
index 00000000000..388caa93ac1
--- /dev/null
+++ b/blog/_posts/2018-01-00-Five-Days-Of-Kubernetes-19.md
@@ -0,0 +1,32 @@
+---
+layout: blog
+permalink: /blog/:year/:month/:title
+title: " Five Days of Kubernetes 1.9 "
+date: Tuesday, January 08, 2018
+pagination:
+ enabled: true
+---
+Kubernetes 1.9 is live, made possible by hundreds of contributors pushing thousands of commits in this latest release.
+
+The community has tallied around 32,300 commits in the main repo and continues to grow rapidly outside of the main repo, which signals growing maturity and stability for the project. The community has logged more than 90,700 commits across all repos, including 7,800 commits between v1.8.0 and v1.9.0 alone.
+
+With the help of our growing community of 1,400 plus contributors, we issued more than 4,490 PRs and pushed more than 7,800 commits to deliver Kubernetes 1.9 with many notable updates, including enhancements for the workloads and stateful application support areas. This all points to an increasingly extensible and standards-based Kubernetes ecosystem.
+
+While many improvements have been contributed, we highlight key features in this series of in-depth posts listed below. [Follow along](https://twitter.com/kubernetesio) and see what’s new and improved with workloads, Windows support and more.
+
+**Day 1:** 5 Days of Kubernetes 1.9
+**Day 2:** Windows and Docker support for Kubernetes (beta)
+**Day 3:** Storage, CSI framework (alpha)
+**Day 4:** Web Hook and Mission Critical, Dynamic Admission Control
+**Day 5:** Introducing client-go version 6
+**Day 6:** Workloads API
+
+
+
+**Connect**
+
+- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+- Join the community portal for advocates on [K8sPort](http://k8sport.org/)
+- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
+- Connect with the community on [Slack](http://slack.k8s.io/)
+- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
diff --git a/blog/_posts/2018-01-00-Introducing-Client-Go-Version-6.md b/blog/_posts/2018-01-00-Introducing-Client-Go-Version-6.md
new file mode 100644
index 00000000000..36ebb8c6daa
--- /dev/null
+++ b/blog/_posts/2018-01-00-Introducing-Client-Go-Version-6.md
@@ -0,0 +1,423 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: "Introducing client-go version 6"
+date: Saturday, January 12, 2018
+pagination:
+ enabled: true
+---
+
+The Kubernetes API server [exposes a REST interface](https://blog.openshift.com/tag/api-server/) consumable by any client. [client-go](https://github.com/kubernetes/client-go) is the official client library for the Go programming language. It is used both internally by Kubernetes itself (for example, inside kubectl) as well as by [numerous external consumers](https://github.com/search?q=k8s.io%2Fclient-go&type=Code&utf8=%E2%9C%93): operators like the [etcd-operator](https://github.com/coreos/etcd-operator) or [prometheus-operator](https://github.com/coreos/prometheus-operator); higher level frameworks like [KubeLess](https://github.com/kubeless/kubeless) and [OpenShift](https://openshift.io/); and many more.
+
+The version 6 update to client-go adds support for Kubernetes 1.9, allowing access to the latest Kubernetes features. While the [changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md) contains all the gory details, this blog post highlights the most prominent changes and offers guidance on upgrading from version 5.
+
+This blog post is one of a number of efforts to make client-go more accessible to third party consumers. Easier access is a joint effort by a number of people from numerous companies, all meeting in the #client-go-docs channel of the [Kubernetes Slack](http://slack.k8s.io/). We are happy to hear feedback and ideas for further improvement, and of course appreciate anybody who wants to contribute.
+
+
+## API group changes
+
+The following API group promotions are part of Kubernetes 1.9:
+
+- Workload objects (Deployments, DaemonSets, ReplicaSets, and StatefulSets) have been [promoted to the apps/v1 API group in Kubernetes 1.9](https://kubernetes.io/docs/reference/workloads-18-19/). client-go follows this transition and allows developers to use the latest version by importing the k8s.io/api/apps/v1 package instead of k8s.io/api/apps/v1beta1 and by using Clientset.AppsV1().
+- Admission Webhook Registration has been promoted to the admissionregistration.k8s.io/v1beta1 API group in Kubernetes 1.9. The former ExternalAdmissionHookConfiguration type has been replaced by the incompatible ValidatingWebhookConfiguration and MutatingWebhookConfiguration types. Moreover, the webhook admission payload type AdmissionReview in admission.k8s.io has been promoted to v1beta1. Note that versioned objects are now passed to webhooks. Refer to the admission webhook [documentation](https://kubernetes.io/docs/admin/extensible-admission-controllers/#external-admission-webhooks) for details.
+
+
+
+## Validation for CustomResources
+In Kubernetes 1.8 we introduced CustomResourceDefinitions (CRD) [pre-persistence schema validation](https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation) as an alpha feature. With 1.9, the feature got promoted to beta and will be enabled by default. As a client-go user, you will find the API types at k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1.
+
+The [OpenAPI v3 schema](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.0.md#schemaObject) can be defined in the CRD spec as:
+
+
+```
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata: ...
+spec:
+  ...
+  validation:
+    openAPIV3Schema:
+      properties:
+        spec:
+          properties:
+            version:
+              type: string
+              enum:
+              - "v1.0.0"
+              - "v1.0.1"
+            replicas:
+              type: integer
+              minimum: 1
+              maximum: 10
+```
+
+
+The schema in the above CRD applies the following validations for the instance:
+
+1. spec.version must be a string and must be either “v1.0.0” or “v1.0.1”.
+2. spec.replicas must be an integer and must have a minimum value of 1 and a maximum value of 10.
+
+A CustomResource with invalid values for spec.version (v1.0.2) and spec.replicas (15) will be rejected:
+
+
+```
+apiVersion: mygroup.example.com/v1
+kind: App
+metadata:
+  name: example-app
+spec:
+  version: "v1.0.2"
+  replicas: 15
+```
+
+```
+$ kubectl create -f app.yaml
+
+The App "example-app" is invalid: []: Invalid value: map[string]interface {}{"apiVersion":"mygroup.example.com/v1", "kind":"App", "metadata":map[string]interface {}{"creationTimestamp":"2017-08-31T20:52:54Z", "uid":"5c674651-8e8e-11e7-86ad-f0761cb232d1", "selfLink":"", "clusterName":"", "name":"example-app", "namespace":"default", "deletionTimestamp":interface {}(nil), "deletionGracePeriodSeconds":(*int64)(nil)}, "spec":map[string]interface {}{"replicas":15, "version":"v1.0.2"}}:
+validation failure list:
+spec.replicas in body should be less than or equal to 10
+spec.version in body should be one of [v1.0.0 v1.0.1]
+ ```
+
+
+
+
+
+Note that with [Admission Webhooks](https://kubernetes.io/docs/admin/extensible-admission-controllers/#external-admission-webhooks), Kubernetes 1.9 provides another beta feature to validate objects before they are created or updated. Starting with 1.9, these webhooks also allow mutation of objects (for example, to set defaults or to inject values). Of course, webhooks work with CRDs as well. Moreover, webhooks can be used to implement validations that are not easily expressible with CRD validation. Note that webhooks are harder to implement than CRD validation, so for many purposes, CRD validation is the right tool.
+
+
+
+## Creating namespaced informers
+Often a controller only processes objects in a single namespace or objects carrying certain labels. Informers [now allow](https://github.com/kubernetes/kubernetes/pull/54660) you to tweak the ListOptions used to query the API server to list and watch objects. Uninitialized objects (for consumption by [initializers](https://kubernetes.io/docs/admin/extensible-admission-controllers/#what-are-initializers)) can be made visible by setting IncludeUninitialized to true. All this can be done using the new NewFilteredSharedInformerFactory constructor for shared informers:
+
+```
+import "k8s.io/client-go/informers"
+...
+sharedInformers := informers.NewFilteredSharedInformerFactory(
+    client,
+    30*time.Minute,
+    "some-namespace",
+    func(opt *metav1.ListOptions) {
+        opt.LabelSelector = "foo=bar"
+    },
+)
+```
+
+
+
+Note that the corresponding lister will only know about the objects matching the namespace and the given ListOptions. The same restrictions apply to List and Watch calls on a client.
+
+This [production code example](https://github.com/jetstack/cert-manager/blob/b978faa28c9f0fb0414b5d7293fab7bde65bde76/cmd/controller/app/controller.go#L123) from cert-manager demonstrates how namespace informers can be used in real code.
+
+
+
+## Polymorphic scale client
+Historically, only types in the extensions API group would work with autogenerated Scale clients. Furthermore, different API groups use different Scale types for their /scale subresources. To remedy these issues, k8s.io/client-go/scale provides a [polymorphic scale client](https://github.com/kubernetes/client-go/tree/master/scale) to scale different resources in different API groups in a coherent way:
+
+
+```
+import (
+    apimeta "k8s.io/apimachinery/pkg/api/meta"
+    discocache "k8s.io/client-go/discovery/cached"
+    "k8s.io/client-go/discovery"
+    "k8s.io/client-go/dynamic"
+    "k8s.io/client-go/scale"
+)
+
+...
+
+cachedDiscovery := discocache.NewMemCacheClient(client.Discovery())
+restMapper := discovery.NewDeferredDiscoveryRESTMapper(
+    cachedDiscovery,
+    apimeta.InterfacesForUnstructured,
+)
+scaleKindResolver := scale.NewDiscoveryScaleKindResolver(client.Discovery())
+scaleClient, err := scale.NewForConfig(
+    client, restMapper,
+    dynamic.LegacyAPIPathResolverFunc,
+    scaleKindResolver,
+)
+scale, err := scaleClient.Scales("default").Get(groupResource, "foo")
+```
+
+
+
+The returned scale object is generic and is exposed as the autoscaling/v1.Scale object. It is backed by an internal Scale type, with conversions defined to and from all the special Scale types in the API groups supporting scaling. We plan to [extend this to CustomResources in 1.10](https://github.com/kubernetes/kubernetes/pull/55168).
+
+If you’re implementing support for the scale subresource, we recommend that you expose the autoscaling/v1.Scale object.
+
+
+
+## Type-safe DeepCopy
+Deeply copying an object formerly required a call to Scheme.Copy(Object) with the notable disadvantage of losing type safety. A typical piece of code from client-go version 5 required type casting:
+
+
+```
+newObj, err := runtime.NewScheme().Copy(node)
+if err != nil {
+    return fmt.Errorf("failed to copy node %v: %s", node, err)
+}
+
+newNode, ok := newObj.(*v1.Node)
+if !ok {
+    return fmt.Errorf("failed to type-assert node %v", newObj)
+}
+```
+
+
+
+Thanks to [k8s.io/code-generator](https://github.com/kubernetes/code-generator), Copy has now been replaced by a type-safe DeepCopy method living on each object, allowing you to simplify code significantly both in terms of volume and API error surface:
+
+```
+newNode := node.DeepCopy()
+```
+
+No error handling is necessary: this call never fails. If and only if the node is nil does DeepCopy() return nil.
+
+To copy runtime.Objects there is an additional DeepCopyObject() method in the runtime.Object interface.
+
+With the old method gone for good, clients need to update their copy invocations accordingly.
+
+
+## Code generation and CustomResources
+Using client-go’s dynamic client to access CustomResources is discouraged and superseded by type-safe code using the generators in [k8s.io/code-generator](https://github.com/kubernetes/code-generator). Check out the [Deep Dive on the Open Shift blog](https://blog.openshift.com/kubernetes-deep-dive-code-generation-customresources/) to learn about using code generation with client-go.
+
+
+### Comment Blocks
+You can now place tags in the comment block just above a type or function, or in the second block above. There is no distinction anymore between these two comment blocks. This used to be a source of [subtle errors when using the generators](https://github.com/kubernetes/kubernetes/issues/53893):
+```
+// second block above
+// +k8s:some-tag
+
+// first block above
+// +k8s:another-tag
+type Foo struct {}
+```
+
+
+### Custom Client Methods
+You can now use extended tag definitions to create custom verbs. This lets you expand beyond the verbs defined by HTTP. This opens the door to higher levels of customization.
+
+For example, this block leads to the generation of the method UpdateScale(s \*autoscaling.Scale) (\*autoscaling.Scale, error):
+```
+// genclient:method=UpdateScale,verb=update,subresource=scale,input=k8s.io/kubernetes/pkg/apis/autoscaling.Scale,result=k8s.io/kubernetes/pkg/apis/autoscaling.Scale
+```
+
+
+### Resolving Golang Naming Conflicts
+In more complex API groups it’s possible for Kinds, the group name, the Go package name, and the Go group alias name to conflict. This was not handled correctly prior to 1.9. The following tags resolve naming conflicts and make the generated code prettier:
+
+```
+// +groupName=example2.example.com
+// +groupGoName=SecondExample
+```
+
+These are usually [in the doc.go file of an API package](https://github.com/kubernetes/code-generator/blob/release-1.9/_examples/crd/apis/example2/v1/doc.go#L18). The first is used as the CustomResource group name when RESTfully speaking to the API server using HTTP. The second is used in the generated Golang code (for example, in the clientset) to access the group version:
+
+```
+clientset.SecondExampleV1()
+```
+
+It’s finally possible to have dots in Go package names. In this section’s example, you would put the groupName snippet into the pkg/apis/example2.example.com directory of your project.
+
+
+## Example projects
+Kubernetes 1.9 includes a number of example projects which can serve as a blueprint for your own projects:
+
+- [k8s.io/sample-apiserver](https://github.com/kubernetes/sample-apiserver) is a simple user-provided API server that is integrated into a cluster via [API aggregation](https://kubernetes.io/docs/concepts/api-extension/apiserver-aggregation/).
+- [k8s.io/sample-controller](https://github.com/kubernetes/sample-controller) is a full-featured [controller](https://github.com/kubernetes/community/blob/master/contributors/devel/controllers.md) (also called an operator) with shared informers and a workqueue to process created, changed or deleted objects. It is based on CustomResourceDefinitions and uses [k8s.io/code-generator](https://github.com/kubernetes/code-generator) to generate deepcopy functions, typed clientsets, informers, and listers.
+
+
+
+## Vendoring
+In order to update from the previous version 5 to version 6 of client-go, the library itself as well as certain third-party dependencies must be updated. Previously, this process had been tedious due to the fact that a lot of code got refactored or relocated within the existing package layout across releases. Fortunately, far less code had to move in the latest version, which should ease the upgrade procedure for most users.
+
+
+
+### State of the published repositories
+In the past [k8s.io/client-go](https://github.com/kubernetes/client-go), [k8s.io/api](https://github.com/kubernetes/api), and [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery) were updated infrequently. Tags (for example, v4.0.0) were created quite some time after the Kubernetes releases. With the 1.9 release we resumed running a nightly bot that updates all the repositories for public consumption, even before manual tagging. This includes the branches:
+
+- master
+- release-1.8 / release-5.0
+- release-1.9 / release-6.0
+Kubernetes tags (for example, v1.9.1-beta1) are also applied automatically to the published repositories, prefixed with kubernetes- (for example, kubernetes-1.9.1-beta1).
+
+These tags have limited test coverage, but can be used by early adopters of client-go and the other libraries. Moreover, they help to vendor the correct version of [k8s.io/api](https://github.com/kubernetes/api) and [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery). Note that we only create a v6.0.3-like semantic versioning tag on [k8s.io/client-go](https://github.com/kubernetes/client-go). The corresponding tag for k8s.io/api and k8s.io/apimachinery is kubernetes-1.9.3.
+
+Also note that only these tags correspond to tested releases of Kubernetes. If you depend on the release branch, e.g., release-1.9, your client is running on unreleased Kubernetes code.
+
+
+### State of vendoring of client-go
+In general, the list of which dependencies to vendor is automatically generated and written to the file Godeps/Godeps.json. Only the revisions listed there are tested. In particular, this means that we do not and cannot test the code-base against the master branches of our dependencies. The situation depends on the vendoring tool you use:
+
+- [godep](https://github.com/tools/godep) reads Godeps/Godeps.json by running godep restore from k8s.io/client-go in your GOPATH. Then use godep save to vendor in your project. godep will choose the correct versions from your GOPATH.
+- [glide](https://github.com/Masterminds/glide) reads Godeps/Godeps.json automatically from its dependencies including from k8s.io/client-go, both on init and on update. Hence, glide should be mostly automatic as long as there are no conflicts.
+- [dep](https://github.com/golang/dep) does not currently respect Godeps/Godeps.json in a consistent way, especially not on updates. It is crucial to specify client-go dependencies manually as constraints or overrides, also for non k8s.io/\* dependencies. Without those, dep simply chooses the dependency master branches, which can cause problems as they are updated frequently.
+- The Kubernetes and golang/dep community are aware of the problems [[issue #1124](https://github.com/golang/dep/issues/1124), [issue #1236](https://github.com/golang/dep/issues/1236)] and [are working together on solutions](https://github.com/kubernetes-dep-experiment/client-go). Until then special care must be taken.
+
+Please see client-go’s [INSTALL.md](https://github.com/kubernetes/client-go/blob/master/INSTALL.md) for more details.
+
+
+### Updating dependencies – golang/dep
+Even with the deficiencies of golang/dep today, dep is slowly becoming the de-facto standard in the Go ecosystem. With the necessary care and the awareness of the missing features, dep can be (and is!) used successfully. Here’s a demonstration of how to update a project with client-go 5 to the latest version 6 using dep:
+
+(If you are still running client-go version 4 and want to play it safe by not skipping a release, now is a good time to check out [this excellent blog post](https://medium.com/@andy.goldstein/upgrading-kubernetes-client-go-from-v4-to-v5-bbd5025fe381) describing how to upgrade to version 5, put together by our friends at Heptio.)
+
+Before starting, it is important to understand that client-go depends on two other Kubernetes projects: [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery) and [k8s.io/api](https://github.com/kubernetes/api). In addition, if you are using CRDs, you probably also depend on [k8s.io/apiextensions-apiserver](https://github.com/kubernetes/apiextensions-apiserver) for the CRD client. The first exposes lower-level API mechanics (such as schemes, serialization, and type conversion), the second holds API definitions, and the third provides APIs related to CustomResourceDefinitions. In order for client-go to operate correctly, it needs to have its companion libraries vendored in correspondingly matching versions. Each library repository provides a branch named release-\<version\>, where \<version\> refers to a particular Kubernetes version; for client-go version 6, it is imperative to refer to the release-1.9 branch on each repository.
+
+Assuming the latest version 5 patch release of client-go being vendored through dep, the Gopkg.toml manifest file should look something like this (possibly using branches instead of versions):
+
+
+
+```
+[[constraint]]
+  name = "k8s.io/api"
+  version = "kubernetes-1.8.1"
+
+[[constraint]]
+  name = "k8s.io/apimachinery"
+  version = "kubernetes-1.8.1"
+
+[[constraint]]
+  name = "k8s.io/apiextensions-apiserver"
+  version = "kubernetes-1.8.1"
+
+[[constraint]]
+  name = "k8s.io/client-go"
+  version = "5.0.1"
+```
+
+
+Note that some of the libraries could be missing if they are not actually needed by the client.
+
+Upgrading to client-go version 6 means bumping the version and tag identifiers as follows:
+
+
+
+
+```
+[[constraint]]
+  name = "k8s.io/api"
+  version = "kubernetes-1.9.0"
+
+[[constraint]]
+  name = "k8s.io/apimachinery"
+  version = "kubernetes-1.9.0"
+
+[[constraint]]
+  name = "k8s.io/apiextensions-apiserver"
+  version = "kubernetes-1.9.0"
+
+[[constraint]]
+  name = "k8s.io/client-go"
+  version = "6.0.0"
+```
+
+
+
+The result of the upgrade can be found [here](https://github.com/ncdc/client-go-4-to-5/tree/v5-to-v6).
+
+A note of caution: dep cannot capture the complete set of dependencies in a reliable and reproducible fashion as described above. This means that for a 100% future-proof project you have to add constraints (or even overrides) to many other packages listed in client-go’s Godeps/Godeps.json. Be prepared to add them if something breaks. We are working with the golang/dep community to make this an easier and more smooth experience.
+
+Finally, we need to tell dep to upgrade to the specified versions by executing dep ensure. If everything goes well, the output of the command invocation should be empty, with the only indication that it was successful being a number of updated files inside the vendor folder.
+
+If you are using CRDs, you probably also use code-generation. The following block for Gopkg.toml will add the required code-generation packages to your project:
+
+
+```
+required = [
+  "k8s.io/code-generator/cmd/client-gen",
+  "k8s.io/code-generator/cmd/conversion-gen",
+  "k8s.io/code-generator/cmd/deepcopy-gen",
+  "k8s.io/code-generator/cmd/defaulter-gen",
+  "k8s.io/code-generator/cmd/informer-gen",
+  "k8s.io/code-generator/cmd/lister-gen",
+]
+
+[[constraint]]
+  branch = "kubernetes-1.9.0"
+  name = "k8s.io/code-generator"
+```
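+
+After another dep ensure run, the generators can be invoked from the vendor folder. As a hedged sketch, assuming the generate-groups.sh helper shipped with code-generator and a hypothetical project layout under github.com/example/project:
+
+```
+./vendor/k8s.io/code-generator/generate-groups.sh all \
+  github.com/example/project/pkg/client \
+  github.com/example/project/pkg/apis \
+  "examplegroup:v1alpha1"
+```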
+
+
+
+
+Whether you would also like to prune unneeded packages (such as test files) through dep or commit the changes into the VCS at this point is up to you -- but from an upgrade perspective, you should now be ready to harness all the fancy new features that Kubernetes 1.9 brings through client-go.
diff --git a/blog/_posts/2018-01-00-Introducing-Container-Storage-Interface.md b/blog/_posts/2018-01-00-Introducing-Container-Storage-Interface.md
new file mode 100644
index 00000000000..b69c3306135
--- /dev/null
+++ b/blog/_posts/2018-01-00-Introducing-Container-Storage-Interface.md
@@ -0,0 +1,246 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: " Introducing Container Storage Interface (CSI) Alpha for Kubernetes "
+date: Thursday, January 10, 2018
+pagination:
+ enabled: true
+---
+
+One of the key differentiators for Kubernetes has been a powerful [volume plugin system](https://kubernetes.io/docs/concepts/storage/volumes/) that enables many different types of storage systems to:
+
+1. Automatically create storage when required.
+2. Make storage available to containers wherever they’re scheduled.
+3. Automatically delete the storage when no longer needed.
+
+Adding support for new storage systems to Kubernetes, however, has been challenging.
+
+Kubernetes 1.9 introduces an [alpha implementation of the Container Storage Interface (CSI)](https://github.com/kubernetes/features/issues/178) which makes installing new volume plugins as easy as deploying a pod. It also enables third-party storage providers to develop solutions without the need to add to the core Kubernetes codebase.
+
+Because the feature is alpha in 1.9, it must be explicitly enabled. Alpha features are not recommended for production usage, but are a good indication of the direction the project is headed (in this case, toward a more extensible and standards-based Kubernetes storage ecosystem).
+
+
+### Why Kubernetes CSI?
+Kubernetes volume plugins are currently “in-tree”, meaning they’re linked, compiled, built, and shipped with the core Kubernetes binaries. Adding support for a new storage system to Kubernetes (a volume plugin) requires checking code into the core Kubernetes repository. But aligning with the Kubernetes release process is painful for many plugin developers.
+
+The existing [Flex Volume plugin](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md) attempted to address this pain by exposing an exec based API for external volume plugins. Although it enables third party storage vendors to write drivers out-of-tree, in order to deploy the third party driver files it requires access to the root filesystem of node and master machines.
+
+In addition to being difficult to deploy, Flex did not address the pain of plugin dependencies: volume plugins tend to have many external requirements (on mount and filesystem tools, for example). These dependencies are assumed to be available on the underlying host OS, which is often not the case (and installing them requires access to the root filesystem of the node machine).
+
+CSI addresses all of these issues by enabling storage plugins to be developed out-of-tree, containerized, deployed via standard Kubernetes primitives, and consumed through the Kubernetes storage primitives users know and love (PersistentVolumeClaims, PersistentVolumes, StorageClasses).
+
+
+### What is CSI?
+The goal of CSI is to establish a standardized mechanism for Container Orchestration Systems (COs) to expose arbitrary storage systems to their containerized workloads. The CSI specification emerged from cooperation between community members from various COs--including Kubernetes, Mesos, Docker, and Cloud Foundry. The specification is developed independently of Kubernetes and maintained at [https://github.com/container-storage-interface/spec/blob/master/spec.md](https://github.com/container-storage-interface/spec/blob/master/spec.md).
+
+Kubernetes v1.9 exposes an alpha implementation of the CSI specification enabling CSI compatible volume drivers to be deployed on Kubernetes and consumed by Kubernetes workloads.
+
+
+### How do I deploy a CSI driver on a Kubernetes Cluster?
+CSI plugin authors will provide their own instructions for deploying their plugin on Kubernetes.
+
+
+### How do I use a CSI Volume?
+Assuming a CSI storage plugin is already deployed on your cluster, you can use it through the familiar Kubernetes storage primitives: PersistentVolumeClaims, PersistentVolumes, and StorageClasses.
+
+CSI is an alpha feature in Kubernetes v1.9. To enable it, set the following flags:
+
+
+```
+API server binary:
+--feature-gates=CSIPersistentVolume=true
+--runtime-config=storage.k8s.io/v1alpha1=true
+
+API server binary and kubelet binaries:
+--feature-gates=MountPropagation=true
+--allow-privileged=true
+```
+
+### Dynamic Provisioning
+You can enable automatic creation/deletion of volumes for CSI Storage plugins that support dynamic provisioning by creating a StorageClass pointing to the CSI plugin.
+
+The following StorageClass, for example, enables dynamic creation of “fast-storage” volumes by a CSI volume plugin called “com.example.team/csi-driver”.
+
+
+```
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: fast-storage
+provisioner: com.example.team/csi-driver
+parameters:
+  type: pd-ssd
+```
+
+
+To trigger dynamic provisioning, create a PersistentVolumeClaim object. The following PersistentVolumeClaim, for example, triggers dynamic provisioning using the StorageClass above.
+
+
+```
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-request-for-storage
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 5Gi
+  storageClassName: fast-storage
+```
+
+
+When volume provisioning is invoked, the parameter “type: pd-ssd” is passed to the CSI plugin “com.example.team/csi-driver” via a “CreateVolume” call. In response, the external volume plugin provisions a new volume and then automatically creates a PersistentVolume object to represent the new volume. Kubernetes then binds the new PersistentVolume object to the PersistentVolumeClaim, making it ready to use.
+
+If the “fast-storage” StorageClass is marked default, there is no need to include the storageClassName in the PersistentVolumeClaim; it will be used by default.
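+
+As a hedged aside, one way to mark a StorageClass as the default is via its annotation (using the fast-storage class from above):
+
+```
+kubectl patch storageclass fast-storage -p \
+  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
+```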
+
+
+### Pre-Provisioned Volumes
+You can always expose a pre-existing volume in Kubernetes by manually creating a PersistentVolume object to represent the existing volume. The following PersistentVolume, for example, exposes a volume with the name “existingVolumeName” belonging to a CSI storage plugin called “com.example.team/csi-driver”.
+
+
+```
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: my-manually-created-pv
+spec:
+  capacity:
+    storage: 5Gi
+  accessModes:
+  - ReadWriteOnce
+  persistentVolumeReclaimPolicy: Retain
+  csi:
+    driver: com.example.team/csi-driver
+    volumeHandle: existingVolumeName
+    readOnly: false
+```
+
+
+
+### Attaching and Mounting
+You can reference a PersistentVolumeClaim that is bound to a CSI volume in any pod or pod template.
+
+
+```
+kind: Pod
+apiVersion: v1
+metadata:
+  name: my-pod
+spec:
+  containers:
+  - name: my-frontend
+    image: dockerfile/nginx
+    volumeMounts:
+    - mountPath: "/var/www/html"
+      name: my-csi-volume
+  volumes:
+  - name: my-csi-volume
+    persistentVolumeClaim:
+      claimName: my-request-for-storage
+```
+
+
+When the pod referencing a CSI volume is scheduled, Kubernetes will trigger the appropriate operations against the external CSI plugin (ControllerPublishVolume, NodePublishVolume, etc.) to ensure the specified volume is attached, mounted, and ready to use by the containers in the pod.
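+
+One convenient, if informal, way to follow those operations from the outside is to watch the pod's events while it starts up (my-pod refers to the example above):
+
+```
+kubectl describe pod my-pod
+kubectl get events --watch
+```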
+
+For more details please see the CSI implementation [design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md) and [documentation](https://github.com/kubernetes-csi/docs/wiki/Setup).
+
+
+### How do I create a CSI driver?
+Kubernetes is as minimally prescriptive on the packaging and deployment of a CSI Volume Driver as possible. The minimum requirements for deploying a CSI Volume Driver on Kubernetes are documented [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md#third-party-csi-volume-drivers).
+
+The minimum requirements document also contains a [section](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md#recommended-mechanism-for-deploying-csi-drivers-on-kubernetes) outlining the suggested mechanism for deploying an arbitrary containerized CSI driver on Kubernetes. This mechanism can be used by a Storage Provider to simplify deployment of containerized CSI compatible volume drivers on Kubernetes.
+
+As part of this recommended deployment process, the Kubernetes team provides the following sidecar (helper) containers:
+
+
+- [external-attacher](https://github.com/kubernetes-csi/external-attacher)
+
+ - Sidecar container that watches Kubernetes VolumeAttachment objects and triggers ControllerPublish and ControllerUnpublish operations against a CSI endpoint.
+- [external-provisioner](https://github.com/kubernetes-csi/external-provisioner)
+
+ - Sidecar container that watches Kubernetes PersistentVolumeClaim objects and triggers CreateVolume and DeleteVolume operations against a CSI endpoint.
+- [driver-registrar](https://github.com/kubernetes-csi/driver-registrar)
+
+ - Sidecar container that registers the CSI driver with kubelet (in the future), and adds the driver's custom NodeId (retrieved via a GetNodeID call against the CSI endpoint) to an annotation on the Kubernetes Node API object
+
+Storage vendors can build Kubernetes deployments for their plugins using these components, while leaving their CSI driver completely unaware of Kubernetes.
+
+
+### Where can I find CSI drivers?
+CSI drivers are developed and maintained by third-parties. You can find example CSI drivers [here](https://github.com/kubernetes-csi/drivers), but these are provided purely for illustrative purposes, and are not intended to be used for production workloads.
+
+
+
+### What about Flex?
+The [Flex Volume plugin](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md) exists as an exec based mechanism to create “out-of-tree” volume plugins. Although it has some drawbacks (mentioned above), the Flex volume plugin coexists with the new CSI Volume plugin. SIG Storage will continue to maintain the Flex API so that existing third-party Flex drivers (already deployed in production clusters) continue to work. In the future, new volume features will only be added to CSI, not Flex.
+
+
+### What will happen to the in-tree volume plugins?
+Once CSI reaches stability, we plan to migrate most of the in-tree volume plugins to CSI. Stay tuned for more details as the Kubernetes CSI implementation approaches stable.
+
+
+
+### What are the limitations of alpha?
+The alpha implementation of CSI has the following limitations:
+
+- The credential fields in CreateVolume, NodePublishVolume, and ControllerPublishVolume calls are not supported.
+- Block volumes are not supported; only file.
+- Specifying filesystems is not supported, and defaults to ext4.
+- CSI drivers must be deployed with the provided “external-attacher,” even if they don’t implement “ControllerPublishVolume”.
+- Kubernetes scheduler topology awareness is not supported for CSI volumes; in short, the scheduler cannot yet use information about where a volume is provisioned (zones, regions, etc.) to make smarter scheduling decisions.
+
+
+
+### What’s next?
+Depending on feedback and adoption, the Kubernetes team plans to push the CSI implementation to beta in either 1.10 or 1.11.
+
+
+### How Do I Get Involved?
+This project, like all of Kubernetes, is the result of hard work by many contributors from diverse backgrounds working together. A huge thank you to Vladimir Vivien ([vladimirvivien](https://github.com/vladimirvivien)), Jan Šafránek ([jsafrane](https://github.com/jsafrane)), Chakravarthy Nelluri ([chakri-nelluri](https://github.com/chakri-nelluri)), Bradley Childs ([childsb](https://github.com/childsb)), Luis Pabón ([lpabon](https://github.com/lpabon)), and Saad Ali ([saad-ali](https://github.com/saad-ali)) for their tireless efforts in bringing CSI to life in Kubernetes.
+
+If you’re interested in getting involved with the design and development of CSI or any part of the Kubernetes Storage system, join the [Kubernetes Storage Special-Interest-Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG). We’re rapidly growing and always welcome new contributors.
diff --git a/blog/_posts/2018-01-00-Kubernetes-V19-Beta-Windows-Support.md b/blog/_posts/2018-01-00-Kubernetes-V19-Beta-Windows-Support.md
new file mode 100644
index 00000000000..dde9cb8c912
--- /dev/null
+++ b/blog/_posts/2018-01-00-Kubernetes-V19-Beta-Windows-Support.md
@@ -0,0 +1,69 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: Kubernetes v1.9 releases beta support for Windows Server Containers
+date: Wednesday, January 09, 2018
+pagination:
+ enabled: true
+---
+
+With the release of Kubernetes v1.9, our mission of ensuring Kubernetes works well everywhere and for everyone takes a great step forward. We’ve advanced support for Windows Server to beta along with continued feature and functional advancements on both the Kubernetes and Windows platforms. SIG-Windows has been working since March of 2016 to open the door for many Windows-specific applications and workloads to run on Kubernetes, significantly expanding the implementation scenarios and the enterprise reach of Kubernetes.
+
+Enterprises of all sizes have made significant investments in .NET and Windows based applications. Many enterprise portfolios today contain .NET and Windows, with Gartner claiming that [80%](http://www.gartner.com/document/3446217) of enterprise apps run on Windows. According to StackOverflow Insights, 40% of professional developers use the .NET programming languages (including .NET Core).
+
+But why is all this information important? It means that enterprises have both legacy and new born-in-the-cloud (microservice) applications that utilize a wide array of programming frameworks. There is a big push in the industry to modernize existing/legacy applications to containers, using an approach similar to “lift and shift”. Modernizing existing applications into containers also provides added flexibility for new functionality to be introduced in additional Windows or Linux containers. Containers are becoming the de facto standard for packaging, deploying, and managing both existing and microservice applications. IT organizations are looking for an easier and homogenous way to orchestrate and manage containers across their Linux and Windows environments. Kubernetes v1.9 now offers beta support for Windows Server containers, making it the clear choice for orchestrating containers of any kind.
+
+
+
+### Features
+Alpha support for Windows Server containers in Kubernetes was great for proof-of-concept projects and visualizing the road map for support of Windows in Kubernetes. The alpha release had significant drawbacks, however, and lacked many features, especially in networking. SIG-Windows, Microsoft, Cloudbase Solutions, Apprenda, and other community members banded together to create a comprehensive beta release, enabling Kubernetes users to start evaluating and using Windows.
+
+Some key feature improvements for Windows Server containers on Kubernetes include:
+
+- Improved support for pods! Multiple Windows Server containers in a pod can now share the network namespace using network compartments in Windows Server. This feature brings the concept of a pod to parity with Linux-based containers
+- Reduced network complexity by using a single network endpoint per pod
+- Kernel-Based load-balancing using the Virtual Filtering Platform (VFP) Hyper-v Switch Extension (analogous to Linux iptables)
+- Container Runtime Interface (CRI) pod and node level statistics. Windows Server containers can now be profiled for Horizontal Pod Autoscaling using performance metrics gathered from the pod and the node
+- Support for kubeadm commands to add Windows Server nodes to a Kubernetes environment. Kubeadm simplifies the provisioning of a Kubernetes cluster, and with the support for Windows Server, you can use a single tool to deploy Kubernetes in your infrastructure (a join sketch follows this list)
+- Support for ConfigMaps, Secrets, and Volumes. These are key features that allow you to separate, and in some cases secure, the configuration of the containers from the implementation
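+
+As a hedged sketch of the kubeadm item above, adding a node uses the standard kubeadm join flow (the token, address, and hash below are placeholders printed by the first command on the master):
+
+```
+# on the master: print the full join command, including a fresh token
+kubeadm token create --print-join-command
+
+# on the new node (placeholders shown):
+kubeadm join --token <token> <master-ip>:6443 --discovery-token-ca-cert-hash sha256:<hash>
+```
+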
+The crown jewels of Kubernetes 1.9 Windows support, however, are the networking enhancements. With the release of Windows Server 1709, Microsoft has enabled key networking capabilities in the operating system and the Windows Host Networking Service (HNS) that paved the way to produce a number of CNI plugins that work with Windows Server containers in Kubernetes. The Layer-3 routed and network overlay plugins that are supported with Kubernetes 1.9 are listed below:
+
+1. Upstream L3 Routing - IP routes configured in upstream ToR
+2. Host-Gateway - IP routes configured on each host
+3. Open vSwitch (OVS) & Open Virtual Network (OVN) with Overlay - Supports STT and Geneve tunneling types
+
+You can read more about each of their [configuration, setup, and runtime capabilities](https://kubernetes.io/docs/getting-started-guides/windows/) to make an informed selection for your networking stack in Kubernetes.
+
+Even though you have to continue running the Kubernetes control plane and master components on Linux, you are now able to introduce Windows Server as a node in Kubernetes. As a community, this is a huge milestone and achievement. We will now start seeing .NET, .NET Core, ASP.NET, IIS, Windows Services, Windows executables, and many more Windows-based applications in Kubernetes.
+
+### What’s coming next
+A lot of work went into this beta release, but the community realizes there are more areas of investment needed before we can release Windows support as GA (General Availability) for production workloads. Some key areas of focus for the first two quarters of 2018 include:
+
+1. Continue to make progress in the area of networking. Additional CNI plugins are under development and nearing completion:
+ - Overlay - win-overlay (vxlan or IP-in-IP encapsulation using Flannel)
+ - Win-l2bridge (host-gateway)
+ - OVN using cloud networking - without overlays
+ - Support for Kubernetes network policies in ovn-kubernetes
+2. Support for Hyper-V Isolation
+3. Support for StatefulSet functionality for stateful applications
+4. Produce installation artifacts and documentation that work on any infrastructure and across many public cloud providers like Microsoft Azure, Google Cloud, and Amazon AWS
+5. Continuous Integration/Continuous Delivery (CI/CD) infrastructure for SIG-Windows
+6. Scalability and Performance testing
+
+Even though we have not committed to a timeline for GA, SIG-Windows estimates a GA release in the first half of 2018.
+
+
+
+### Get Involved
+As we continue to make progress towards General Availability of this feature in Kubernetes, we welcome you to get involved, contribute code, provide feedback, deploy Windows Server containers to your Kubernetes cluster, or simply join our community.
+
+- If you want to get started on deploying Windows Server containers in Kubernetes, read our getting started guide at [https://kubernetes.io/docs/getting-started-guides/windows/](https://kubernetes.io/docs/getting-started-guides/windows/)
+- We meet every other Tuesday at 12:30 Eastern Standard Time (EST) at [https://zoom.us/my/sigwindows](https://zoom.us/my/sigwindows). All our meetings are recorded on YouTube and referenced at [https://www.youtube.com/playlist?list=PL69nYSiGNLP2OH9InCcNkWNu2bl-gmIU4](https://www.youtube.com/playlist?list=PL69nYSiGNLP2OH9InCcNkWNu2bl-gmIU4)
+- Chat with us on Slack at [https://kubernetes.slack.com/messages/sig-windows](https://kubernetes.slack.com/messages/sig-windows)
+- Find us on GitHub at [https://github.com/kubernetes/community/tree/master/sig-windows](https://github.com/kubernetes/community/tree/master/sig-windows)
+
+
+
+Thank you,
+
+Michael Michael (@michmike77)
+SIG-Windows Lead
+Senior Director of Product Management, Apprenda
diff --git a/blog/_posts/2018-01-00-Reporting-Errors-Using-Kubernetes-Events.md b/blog/_posts/2018-01-00-Reporting-Errors-Using-Kubernetes-Events.md
new file mode 100644
index 00000000000..d6f0a596ed1
--- /dev/null
+++ b/blog/_posts/2018-01-00-Reporting-Errors-Using-Kubernetes-Events.md
@@ -0,0 +1,91 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: "Reporting Errors from Control Plane to Applications Using Kubernetes Events"
+date: Friday, January 25, 2018
+published: true
+pagination:
+ enabled: true
+---
+
+At [Box](https://www.box.com/), we manage several large scale Kubernetes clusters that serve as an internal platform as a service (PaaS) for hundreds of deployed microservices. The majority of those microservices are applications that power box.com for over 80,000 customers. The PaaS team also deploys several services affiliated with the platform infrastructure as the _control plane_.
+
+One use case of Box’s control plane is [public key infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) (_PKI_) processing. In our infrastructure, applications needing a new SSL certificate also need to trigger some processing in the control plane. The majority of our applications are not allowed to generate new SSL certificates due to security reasons. The control plane has a different security boundary and network access, and is therefore allowed to generate certificates.
+
+
+Figure 1: Block diagram of the PKI flow
+
+
+If an application needs a new certificate, the application owner explicitly adds a [Custom Resource Definition](https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/) (CRD) to the application’s Kubernetes config [1]. This CRD specifies parameters for the SSL certificate: _name, common name, and others_. A microservice in the control plane watches CRDs and triggers some processing for SSL certificate generation [2]. Once the certificate is ready, the same control plane service sends it to the API server in a Kubernetes [Secret](https://kubernetes.io/docs/concepts/configuration/secret/) [3]. After that, the application containers access their certificates using Kubernetes [Secret VolumeMounts](https://kubernetes.io/docs/concepts/storage/volumes/#secret) [4]. You can see a working demo of this system in our [example application](https://github.com/box/error-reporting-with-kubernetes-events) on GitHub.
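+
+As a small, hedged illustration of step [4], an application owner can verify that the generated certificate arrived by inspecting the delivered Secret (the Secret name below is a hypothetical placeholder):
+
+```
+kubectl get secret app1-certificate -o yaml
+```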
+
+The rest of this post covers the error scenarios in this “triggered” processing in the control plane. In particular, we are concerned with user input errors. Because the SSL certificate parameters come from the application’s config file in a CRD format, what should happen if there is an error in that CRD specification? Even a typo results in a failure of the SSL certificate creation. The error information is available in the control plane even though the root cause is most probably inside the application’s config file. The application owner does not have access to the control plane’s state or logs.
+
+Providing the right diagnosis to the application owner so she can fix the mistake becomes a serious productivity problem at scale. Box’s rapid migration to microservices results in several new deployments every week. Numerous first time users, who do not know every detail of the infrastructure, need to succeed in deploying their services and troubleshooting problems easily. As the owners of the infrastructure, we do not want to be the bottleneck while reading the errors from the control plane logs and passing them on to application owners. If something in an owner’s configuration causes an error somewhere else, owners need a fully empowering diagnosis. This error data must flow automatically without any human involvement.
+
+After considerable thought and experimentation, we found that [Kubernetes Events](https://v1-7.docs.kubernetes.io/docs/api-reference/v1.7/#event-v1-core) work great to automatically communicate these kinds of errors. If the error information is placed in a pod’s event stream, it shows up in the kubectl describe output. Even beginner users can execute kubectl describe pod and obtain an error diagnosis.
+
+
+We experimented with a status web page for the control plane service as an alternative to Kubernetes Events. The status page would update after each SSL certificate processing attempt, and application owners could probe it to get the diagnosis. After initial experiments, we saw that this does not work as effectively as the Kubernetes Events solution. The status page becomes a new interface to learn for the application owner, a new web address to remember, and one more context switch to a distinct tool during troubleshooting efforts. On the other hand, Kubernetes Events show up cleanly in the kubectl describe output, which is easily recognized by the developers.
+
+Here is a simplified example showing how we used Kubernetes Events for error reporting across distinct services. We have open sourced a [sample golang application](https://github.com/box/error-reporting-with-kubernetes-events) representative of the previously mentioned control plane service. It watches changes on CRDs and does input parameter checking. If an error is discovered, a Kubernetes Event is generated and the relevant pod’s event stream is updated.
+
+The sample application executes this [code](https://github.com/box/error-reporting-with-kubernetes-events/blob/master/cmd/controlplane/main.go#L201) to set up the Kubernetes Event generation:
+
+
+```
+// eventRecorder returns an EventRecorder type that can be
+// used to post Events to different object's lifecycles.
+func eventRecorder(
+   kubeClient *kubernetes.Clientset) (record.EventRecorder, error) {
+   eventBroadcaster := record.NewBroadcaster()
+   eventBroadcaster.StartLogging(glog.Infof)
+   eventBroadcaster.StartRecordingToSink(
+      &typedcorev1.EventSinkImpl{
+         Interface: kubeClient.CoreV1().Events("")})
+   recorder := eventBroadcaster.NewRecorder(
+      scheme.Scheme,
+      v1.EventSource{Component: "controlplane"})
+   return recorder, nil
+}
+```
+
+
+After the one-time setup, the following [code](https://github.com/box/error-reporting-with-kubernetes-events/blob/master/cmd/controlplane/main.go#L163) generates events affiliated with pods:
+
+
+```
+ref, err := reference.GetReference(scheme.Scheme, &pod)
+if err != nil {
+ glog.Fatalf("Could not get reference for pod %v: %v\n",
+ pod.Name, err)
+}
+recorder.Event(ref, v1.EventTypeWarning, "pki ServiceName error",
+ fmt.Sprintf("ServiceName: %s in pki: %s is not found in"+
+ " allowedNames: %s", pki.Spec.ServiceName, pki.Name,
+ allowedNames))
+```
+
+
+Further implementation details can be understood by running the sample application.
+
+As mentioned previously, here is the relevant kubectl describe output for the application owner.
+
+
+```
+Events:
+  FirstSeen   LastSeen   Count   From           SubObjectPath   Type      Reason                  Message
+  ---------   --------   -----   ----           -------------   ----      ------                  -------
+  ....
+  1d          1m         24      controlplane                   Warning   pki ServiceName error   ServiceName: appp1 in pki: app1-pki is not found in allowedNames: [app1 app2]
+  ....
+```
+
+
+We have demonstrated a practical use case with Kubernetes Events. The automated feedback to programmers in the case of configuration errors has significantly improved our troubleshooting efforts. In the future, we plan to use Kubernetes Events in various other applications under similar use cases. The recently created [sample-controller](https://github.com/kubernetes/sample-controller) example also utilizes Kubernetes Events in a similar scenario. It is great to see there are more sample applications to guide the community. We are excited to continue exploring other use cases for Events and the rest of the Kubernetes API to make development easier for our engineers.
+
+_If you have a Kubernetes experience you’d like to share, [submit your story](https://docs.google.com/a/google.com/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform). If you use Kubernetes in your organization and want to voice your experience more directly, consider joining the [CNCF End User Community](https://www.cncf.io/people/end-user-community/) that Box and dozens of like-minded companies are part of._
+
+Special thanks to Greg Lyons and Mohit Soni for their contributions.
+Hakan Baba, Sr. Software Engineer, Box
diff --git a/blog/_posts/2018-03-00-Apache-Spark-23-With-Native-Kubernetes.md b/blog/_posts/2018-03-00-Apache-Spark-23-With-Native-Kubernetes.md
new file mode 100644
index 00000000000..df73314fa00
--- /dev/null
+++ b/blog/_posts/2018-03-00-Apache-Spark-23-With-Native-Kubernetes.md
@@ -0,0 +1,112 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: "Apache Spark 2.3 with Native Kubernetes Support"
+date: Tuesday, March 6, 2018
+---
+### Kubernetes and Big Data
+
+The open source community has been working over the past year to enable first-class support for data processing, data analytics and machine learning workloads in Kubernetes. New extensibility features in Kubernetes, such as [custom resources][1] and [custom controllers][2], can be used to create deep integrations with individual applications and frameworks.
+
+Traditionally, data processing workloads have been run in dedicated setups like the YARN/Hadoop stack. However, unifying the control plane for all workloads on Kubernetes simplifies cluster management and can improve resource utilization.
+
+"Bloomberg has invested heavily in machine learning and NLP to give our clients a competitive edge when it comes to the news and financial information that powers their investment decisions. By building our Data Science Platform on top of Kubernetes, we're making state-of-the-art data science tools like Spark, TensorFlow, and our sizable GPU footprint accessible to the company's 5,000+ software engineers in a consistent, easy-to-use way." - Steven Bower, Team Lead, Search and Data Science Infrastructure at Bloomberg
+
+### Introducing Apache Spark + Kubernetes
+
+[Apache Spark 2.3][3] with native Kubernetes support combines the best of the two prominent open source projects — Apache Spark, a framework for large-scale data processing; and Kubernetes.
+
+Apache Spark is an essential tool for data scientists, offering a robust platform for a variety of applications ranging from large scale data transformation to analytics to machine learning. Data scientists are adopting containers en masse to improve their workflows by realizing benefits such as packaging of dependencies and creating reproducible artifacts. Given that Kubernetes is the de facto standard for managing containerized environments, it is a natural fit to have support for Kubernetes APIs within Spark.
+
+Starting with Spark 2.3, users can run Spark workloads in an existing Kubernetes 1.7+ cluster and take advantage of Apache Spark's ability to manage distributed data processing tasks. Apache Spark workloads can make direct use of Kubernetes clusters for multi-tenancy and sharing through [Namespaces][4] and [Quotas][5], as well as administrative features such as [Pluggable Authorization][6] and [Logging][7]. Best of all, it requires no changes or new installations on your Kubernetes cluster; simply [create a container image][8] and set up the right [RBAC roles][9] for your Spark Application and you're all set.
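+
+As a hedged example of that RBAC step, the Spark documentation suggests creating a dedicated service account along these lines (the account name spark and the edit role are illustrative defaults):
+
+```
+kubectl create serviceaccount spark
+kubectl create clusterrolebinding spark-role --clusterrole=edit \
+  --serviceaccount=default:spark --namespace=default
+```
+
+The driver can then be pointed at that account via the spark.kubernetes.authenticate.driver.serviceAccountName configuration property.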
+
+Concretely, a native Spark Application in Kubernetes acts as a [custom controller][2], which creates Kubernetes resources in response to requests made by the Spark scheduler. In contrast with [deploying Apache Spark in Standalone Mode][10] in Kubernetes, the native approach offers fine-grained management of Spark Applications, improved elasticity, and seamless integration with logging and monitoring solutions. The community is also exploring advanced use cases such as managing streaming workloads and leveraging service meshes like [Istio][11].
+
+![][12]
+
+To try this yourself on a Kubernetes cluster, simply download the binaries for the official [Apache Spark 2.3 release][13]. For example, below, we describe running a simple Spark application to compute the mathematical constant Pi across five Spark executors, each running in a separate pod. Please note that this requires a cluster running Kubernetes 1.7 or above, a [kubectl][14] client that is configured to access it, and the necessary [RBAC rules][9] for the default namespace and service account.
+
+
+
+```
+$ kubectl cluster-info
+
+Kubernetes master is running at https://xx.yy.zz.ww
+
+$ bin/spark-submit \
+    --master k8s://https://xx.yy.zz.ww \
+    --deploy-mode cluster \
+    --name spark-pi \
+    --class org.apache.spark.examples.SparkPi \
+    --conf spark.executor.instances=5 \
+    --conf spark.kubernetes.container.image=<spark-image> \
+    --conf spark.kubernetes.driver.pod.name=spark-pi-driver \
+    local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
+```
+
+To watch Spark resources that are created on the cluster, you can use the following kubectl command in a separate terminal window.
+
+
+
+```
+$ kubectl get pods -l 'spark-role in (driver, executor)' -w
+
+NAME                                               READY     STATUS    RESTARTS   AGE
+spark-pi-driver                                    1/1       Running   0          14s
+spark-pi-da1968a859653d6bab93f8e6503935f2-exec-1   0/1       Pending   0          0s
+```
+
+
+The results can be streamed during job execution by running:
+
+
+
+```
+$ kubectl logs -f spark-pi-driver
+```
+
+When the application completes, you should see the computed value of Pi in the driver logs.
+
+In Spark 2.3, we're starting with support for Spark applications written in Java and Scala with support for resource localization from a variety of data sources including HTTP, GCS, HDFS, and more. We have also paid close attention to failure and recovery semantics for Spark executors to provide a strong foundation to build upon in the future. Get started with [the open-source documentation][15] today.
+
+### Get Involved
+
+There's lots of exciting work to be done in the near future. We're actively working on features such as dynamic resource allocation, in-cluster staging of dependencies, support for PySpark & SparkR, support for Kerberized HDFS clusters, as well as client mode and interactive execution environments for popular notebooks. For people who fell in love with the Kubernetes way of managing applications declaratively, we've also been working on a [Kubernetes Operator][16] for spark-submit, which allows users to declaratively specify and submit Spark Applications.
+
+And we're just getting started! We would love for you to get involved and help us evolve the project further.
+
+Huge thanks to the Apache Spark and Kubernetes contributors spread across multiple organizations who spent many hundreds of hours working on this effort. We look forward to seeing more of you contribute to the project and help it evolve further.
+
+Anirudh Ramanathan and Palak Bhatia
+Google
+
+[1]: https://kubernetes.io/docs/concepts/api-extension/custom-resources/
+[2]: https://kubernetes.io/docs/concepts/api-extension/custom-resources/#custom-controllers
+[3]: http://spark.apache.org/releases/spark-release-2-3-0.html
+[4]: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
+[5]: https://kubernetes.io/docs/concepts/policy/resource-quotas/
+[6]: https://kubernetes.io/docs/admin/authorization/
+[7]: https://kubernetes.io/docs/concepts/cluster-administration/logging/
+[8]: https://spark.apache.org/docs/latest/running-on-kubernetes.html#docker-images
+[9]: https://spark.apache.org/docs/latest/running-on-kubernetes.html#rbac
+[10]: http://blog.kubernetes.io/2016/03/using-Spark-and-Zeppelin-to-process-Big-Data-on-Kubernetes.html
+[11]: https://istio.io/
+[12]: https://1.bp.blogspot.com/-hl4pnOqiH4M/Wp4w9QmzghI/AAAAAAAAAL4/jcWoDOKEp3Y6lCzGxzTOlbvl2Mq1-2YeQCK4BGAYYCw/s1600/Screen%2BShot%2B2018-03-05%2Bat%2B10.10.14%2BPM.png
+[13]: https://spark.apache.org/downloads.html
+[14]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
+[15]: https://spark.apache.org/docs/latest/running-on-kubernetes.html
+[16]: https://coreos.com/operators/
diff --git a/blog/_posts/2018-03-00-Expanding-User-Support-With-Office-Hours.md b/blog/_posts/2018-03-00-Expanding-User-Support-With-Office-Hours.md
new file mode 100644
index 00000000000..04ec690ac90
--- /dev/null
+++ b/blog/_posts/2018-03-00-Expanding-User-Support-With-Office-Hours.md
@@ -0,0 +1,44 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: "Expanding User Support with Office Hours"
+date: Wednesday, March 14, 2018
+pagination:
+ enabled: true
+---
+
+**Today's post is by Jorge Castro and Ilya Dmitrichenko on Kubernetes office hours.**
+
+Today's developer has an almost overwhelming amount of resources available for learning. Kubernetes development teams use [StackOverflow][1], [user documentation][2], [Slack][3], and the [mailing lists][4]. Additionally, the community itself continues to amass an [awesome list][5] of resources.
+
+One of the challenges of large projects is keeping user resources relevant and useful. While documentation can be useful, great learning also happens in Q&A sessions at conferences, or by learning with someone whose explanation matches your learning style. Consider that learning Kung Fu from Morpheus would be a lot more fun than reading a book about Kung Fu!
+
+![][6]
+
+
+We as Kubernetes developers want to create an interactive experience: where Kubernetes users can get their questions answered by experts in real time, or at least referred to the best known documentation or code example.
+
+Having discussed a few broad ideas, we eventually decided to make [Kubernetes Office Hours][7] a live stream where we take user questions from the audience and present them to our panel of contributors and expert users. We run two sessions: one for European time zones, and one for the Americas. These [streaming setup guidelines][8] make office hours extensible—for example, if someone wants to run office hours for Asia/Pacific timezones, or for another CNCF project.
+
+To give you an idea of what Kubernetes office hours are like, here's Josh Berkus answering a question on running databases on Kubernetes. Despite the popularity of this topic, it's still difficult for a new user to get a constructive answer. Here's an excellent response from Josh:
+
+[](https://www.youtube.com/embed/Aj0yozuQ0ME?ecver=2)
+
+It's often easier to field this kind of question in office hours than it is to ask a developer to write a full-length blog post. \[Editor's note: That's legit!\] Because we don't have infinite developers with infinite time, this kind of focused communication creates high-bandwidth help while limiting developer commitments to 1 hour per month. This allows a rotating set of experts to share the load without overwhelming any one person.
+
+We hold office hours the third Wednesday of every month on the [Kubernetes YouTube Channel][9]. You can post questions on the [#office-hours channel][10] on Slack, or you can submit your question to Stack Overflow and post a link on Slack. If you post a question in advance, you might get better answers, as volunteers have more time to research and prepare. If a question can't be fully solved during the call, the team will try their best to point you in the right direction and/or ping other people in the community to take a look. [Check out this page][7] for more details on what's off- and on topic as well as meeting information for your time zone. We hope to hear your questions soon!
+
+Special thanks to Amazon, Bitnami, Giant Swarm, Heptio, Liquidweb, Northwestern Mutual, Packet.net, Pivotal, Red Hat, Weaveworks, and VMWare for donating engineering time to office hours.
+
+And thanks to Alan Pope, Joe Beda, and Charles Butler for technical support in making our livestream better.
+
+[1]: https://stackoverflow.com/questions/tagged/kubernetes
+[2]: https://kubernetes.io/docs/home
+[3]: http://slack.k8s.io/
+[4]: https://groups.google.com/forum/#!forum/kubernetes-users
+[5]: https://github.com/ramitsurana/awesome-kubernetes
+[6]: https://3.bp.blogspot.com/-Iy2GaddJp78/WqnFbVUu9FI/AAAAAAAAAM4/xUzhOSIlRDEMMZNl3SzPBd1Pa0T5y0pKQCLcBGAs/s400/24xkey.jpg
+[7]: https://github.com/kubernetes/community/blob/master/events/office-hours.md
+[8]: https://docs.google.com/document/d/1jHSnRzoOxwd1urgxwbANhNgXjMV8fb0B4NS3ZUL10IY/edit
+[9]: https://www.youtube.com/c/kubernetescommunity
+[10]: https://kubernetes.slack.com/messages/office-hours
diff --git a/blog/_posts/2018-03-00-First-Beta-Version-Of-Kubernetes-1-10.md b/blog/_posts/2018-03-00-First-Beta-Version-Of-Kubernetes-1-10.md
new file mode 100644
index 00000000000..febc7541a2d
--- /dev/null
+++ b/blog/_posts/2018-03-00-First-Beta-Version-Of-Kubernetes-1-10.md
@@ -0,0 +1,104 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: "Kubernetes: First Beta Version of Kubernetes 1.10 is Here"
+date: Friday, March 2, 2018
+---
+
+**Editor's note: Today's post is by Nick Chase. Nick is Head of Content at [Mirantis][1].**
+The Kubernetes community has released the first beta version of Kubernetes 1.10, which means you can now try out some of the new features and give your feedback to the release team ahead of the official release. The release, currently scheduled for March 21, 2018, is targeting the inclusion of more than a dozen brand new alpha features and more mature versions of more than two dozen more.
+
+Specifically, Kubernetes 1.10 will include production-ready versions of Kubelet TLS Bootstrapping, API aggregation, and more detailed storage metrics.
+
+Some of these features will look familiar because they emerged at earlier stages in previous releases. Each stage has specific meanings:
+
+* **stable**: The same as "generally available", features in this stage have been thoroughly tested and can be used in production environments.
+* **beta**: The feature has been around long enough that the team is confident that the feature itself is on track to be included as a stable feature, and any API calls aren't going to change. You can use and test these features, but including them in mission-critical production environments is not advised because they are not completely hardened.
+* **alpha**: New features generally come in at this stage. These features are still being explored. APIs and options may change in future versions, or the feature itself may disappear. Definitely not for production environments.
+
+You can download the latest release of Kubernetes 1.10 from . To give feedback to the development community, [create an issue in the Kubernetes 1.10 milestone][2] and tag the appropriate SIG before March 9.
+
+Here's what to look for, though you should remember that while this is the current plan as of this writing, there's always a possibility that one or more features may be held for a future release. We'll start with authentication.
+
+### Authentication (SIG-Auth)
+
+1. [Kubelet TLS Bootstrap][3] (stable): Kubelet TLS bootstrapping is probably the "headliner" of the Kubernetes 1.10 release as it becomes available for production environments. It provides the ability for a new kubelet to create a certificate signing request, which enables you to add new nodes to your cluster without having to either manually add security certificates or use self-signed certificates that eliminate many of the benefits of having certificates in the first place.
+2. [Pod Security Policy moves to its own API group][4] (beta): The beta release of the Pod Security Policy lets administrators decide what contexts pods can run in. In other words, you have the ability to prevent unprivileged users from creating privileged pods -- that is, pods that can perform actions such as writing files or accessing Secrets -- in particular namespaces.
+3. [Limit node access to API][5] (beta): Also in beta, you now have the ability to limit calls to the API on a node to just that specific node, and to ensure that a node is only calling its own API, and not those on other nodes.
+4. [External client-go credential providers][6] (alpha): client-go is the Go language client for accessing the Kubernetes API. This feature adds the ability to add external credential providers. For example, Amazon might want to create its own authenticator to validate interaction with EKS clusters; this feature enables them to do that without having to include their authenticator in the Kubernetes codebase.
+5. [TokenRequest API][7] (alpha): The TokenRequest API provides the groundwork for much needed improvements to service account tokens; this feature enables creation of tokens that aren't persisted in the Secrets API, that are targeted for specific audiences (such as external secret stores), have configurable expiries, and are bindable to specific pods.
+
+### Networking (SIG-Network)
+
+1. [Support configurable pod resolv.conf][8] (beta): You now have the ability to specifically control DNS for a single pod, rather than relying on the overall cluster DNS (a manifest sketch follows this list).
+2. Although the feature is called [Switch default DNS plugin to CoreDNS][9] (beta), that's not actually what will happen in this cycle. The community has been working on the switch from kube-dns, which includes dnsmasq, to CoreDNS, another CNCF project with fewer moving parts, for several releases. In Kubernetes 1.10, the default will still be kube-dns, but when CoreDNS reaches feature parity with kube-dns, the team will look at making it the default.
+3. [Topology aware routing of services][10] (alpha): The ability to distribute workloads is one of the advantages of Kubernetes, but one thing that has been missing until now is the ability to keep workloads and services geographically close together for latency purposes. Topology aware routing will help with this problem. (This functionality may be delayed until Kubernetes 1.11.)
+4. [Make NodePort IP address configurable][11] (alpha): Not having to specify IP addresses in a Kubernetes cluster is great -- until you actually need to know what one of those addresses is ahead of time, such as for setting up database replication or other tasks. You will now have the ability to specifically configure NodePort IP addresses to solve this problem. (This functionality may be delayed until Kubernetes 1.11.)
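+
+As a hedged sketch of the per-pod DNS item above (field names follow the PodDNSConfig API; the nameserver and search values are placeholders):
+
+```
+cat <<EOF | kubectl create -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: custom-dns-pod
+spec:
+  containers:
+  - name: main
+    image: busybox
+    command: ["sleep", "3600"]
+  dnsPolicy: "None"
+  dnsConfig:
+    nameservers:
+    - 1.2.3.4
+    searches:
+    - my.dns.search.suffix
+EOF
+```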
+
+### Kubernetes APIs (SIG-API-machinery)
+
+1. [API Aggregation][12] (stable): Kubernetes makes it possible to extend its API by creating your own functionality and registering your functions so that they can be served alongside the core K8s functionality. This capability will be upgraded to "stable" in Kubernetes 1.10, so you can use it in production. Additionally, SIG-CLI is adding a feature called [kubectl get and describe should work well with extensions][13] (alpha) to make the server, rather than the client, return this information for a smoother user experience.
+2. [Support for self-hosting authorizer webhook][14] (alpha): Earlier versions of Kubernetes brought us the authorizer webhooks, which make it possible to customize the enforcement of permissions before commands are executed. Those webhooks, however, have to live somewhere, and this new feature makes it possible to host them in the cluster itself.
+
+### Storage (SIG-Storage)
+
+1. [Detailed storage metrics of internal state][15] (stable): With a distributed system such as Kubernetes, it's particularly important to know what's going on inside the system at any given time, either for troubleshooting purposes or simply for automation. This release brings to general availability detailed metrics of what's going on inside the storage systems, including metrics such as mount and unmount time, number of volumes in a particular state, and number of orphaned pod directories. You can find a [full list in this design document][16].
+2. [Mount namespace propagation][17] (beta): This feature allows a container to mount a volume as rslave so that host mounts can be seen inside the container, or as rshared so that any mounts from inside the container are reflected in the host's mount namespace. The default for this feature is rslave.
+3. [Local Ephemeral Storage Capacity Isolation][18] (beta): Without this feature in place, every pod on a node that is using ephemeral storage is pulling from the same pool, and allocating storage is on a "best-effort" basis; in other words, a pod never knows for sure how much space it has available. This function provides the ability for a pod to reserve its own storage.
+4. [Out-of-tree CSI Volume Plugins][19] (beta): Kubernetes 1.9 announced the release of the Container Storage Interface, which provides a standard way for vendors to provide storage to Kubernetes. This function makes it possible for them to create drivers that live "out-of-tree", or out of the normal Kubernetes core. This means that vendors can control their own plugins and don't have to rely on the community for code reviews and approvals.
+5. [Local Persistent Storage][20] (beta): This feature enables PersistentVolumes to be created with locally attached disks, and not just network volumes.
+6. [Prevent deletion of Persistent Volume Claims that are used by a pod][21] (beta) and [Prevent deletion of Persistent Volume that is bound to a Persistent Volume Claim][22] (beta): In previous versions of Kubernetes it was possible to delete storage that is in use by a pod, causing massive problems for the pod. These features provide validation that prevents that from happening.
+7. Running out of storage space on your Persistent Volume? If you are, you can use [Add support for online resizing of PVs][23] (alpha) to enlarge the underlying volume without disrupting existing data. This also works in conjunction with the new [Add resize support for FlexVolume][24] (alpha); FlexVolumes are vendor-supported volumes implemented through [FlexVolume][25] plugins.
+8. [Topology Aware Volume Scheduling][26] (beta): This feature enables you to specify topology constraints on PersistentVolumes and have those constraints evaluated by the scheduler. It also delays the initial PersistentVolumeClaim binding until the Pod has been scheduled so that the volume binding decision is smarter and considers all Pod scheduling constraints as well. At the moment, this feature is most useful for local persistent volumes, but support for dynamic provisioning is under development.
+
+
+
+### Node management (SIG-Node)
+
+1. [Dynamic Kubelet Configuration][27] (beta): Kubernetes makes it easy to make changes to existing clusters, such as increasing the number of replicas or making a service available over the network. This feature makes it possible to change Kubernetes itself (or rather, the Kubelet that runs Kubernetes behind the scenes) without bringing down the node on which Kubelet is running.
+2. [CRI validation test suite][28] (beta): The Container Runtime Interface (CRI) makes it possible to run containers other than Docker (such as Rkt containers or even virtual machines using Virtlet) on Kubernetes. This feature provides a suite of validation tests to make certain that these CRI implementations are compliant, enabling developers to more easily find problems.
+3. [Configurable Pod Process Namespace Sharing][29] (alpha): Although pods can easily share the Kubernetes namespace, the process, or PID namespace has been a more difficult issue due to lack of support in Docker. This feature enables you to set a parameter on the pod to determine whether containers get their own operating system processes or share a single process.
+4. [Add support for Windows Container Configuration in CRI][30] (alpha): The Container Runtime Interface was originally designed with Linux-based containers in mind, and it was impossible to implement support for Windows-based containers using CRI. This feature solves that problem, making it possible to specify a WindowsContainerConfig.
+5. [Debug Containers][31] (alpha): It's easy to debug a container if you have the appropriate utilities. But what if you don't? This feature makes it possible to run debugging tools on a container even if those tools weren't included in the original container image.
+
+### Other changes
+
+1. Deployment (SIG-Cluster Lifecycle): [Support out-of-process and out-of-tree cloud providers][32] (beta): As Kubernetes gains acceptance, more and more cloud providers will want to make it available. To do that more easily, the community is working on extracting provider-specific binaries so that they can be more easily replaced.
+2. Kubernetes on Azure (SIG-Azure): Kubernetes has a cluster-autoscaler that automatically adds nodes to your cluster if you're running too many workloads, but until now it wasn't available on Azure. The [Add Azure support to cluster-autoscaler][33] (alpha) feature aims to fix that. Closely related, the [Add support for Azure Virtual Machine Scale Sets][34] (alpha) feature makes use of Azure's own autoscaling capabilities to make resources available.
+
+You can download the Kubernetes 1.10 beta from . Again, if you've got feedback (and the community hopes you do) please add an issue to the [1.10 milestone][2] and tag the relevant SIG before March 9.
+_(Many thanks to community members Michelle Au, Jan Šafránek, Eric Chiang, Michał Nasiadka, Radosław Pieczonka, Xing Yang, Daniel Smith, sylvain boily, Leo Sunmo, Michal Masłowski, Fernando Ripoll, ayodele abejide, Brett Kochendorfer, Andrew Randall, Casey Davenport, Duffie Cooley, Bryan Venteicher, Mark Ayers, Christopher Luciano, and Sandor Szuecs for their invaluable help in reviewing this article for accuracy.)_
+
+[1]: https://www.mirantis.com/
+[2]: https://github.com/kubernetes/kubernetes/milestone/37
+[3]: https://github.com/kubernetes/features/issues/43
+[4]: https://github.com/kubernetes/features/issues/5
+[5]: https://github.com/kubernetes/features/issues/279
+[6]: https://github.com/kubernetes/features/issues/541
+[7]: https://github.com/kubernetes/features/issues/542
+[8]: https://github.com/kubernetes/features/issues/504
+[9]: https://github.com/kubernetes/features/issues/427
+[10]: https://github.com/kubernetes/features/issues/536
+[11]: https://github.com/kubernetes/features/issues/539
+[12]: https://github.com/kubernetes/features/issues/263
+[13]: https://github.com/kubernetes/features/issues/515
+[14]: https://github.com/kubernetes/features/issues/516
+[15]: https://github.com/kubernetes/features/issues/496
+[16]: https://docs.google.com/document/d/1Fh0T60T_y888LsRwC51CQHO75b2IZ3A34ZQS71s_F0g/edit#heading=h.ys6pjpbasqdu
+[17]: https://github.com/kubernetes/features/issues/432
+[18]: https://github.com/kubernetes/features/issues/361
+[19]: https://github.com/kubernetes/features/issues/178
+[20]: https://github.com/kubernetes/features/issues/121
+[21]: https://github.com/kubernetes/features/issues/498
+[22]: https://github.com/kubernetes/features/issues/499
+[23]: https://github.com/kubernetes/features/issues/531
+[24]: https://github.com/kubernetes/features/issues/304
+[25]: http://leebriggs.co.uk/blog/2017/03/12/kubernetes-flexvolumes.html
+[26]: https://github.com/kubernetes/features/issues/490
+[27]: https://github.com/kubernetes/features/issues/281
+[28]: https://github.com/kubernetes/features/issues/292
+[29]: https://github.com/kubernetes/features/issues/495
+[30]: https://github.com/kubernetes/features/issues/547
+[31]: https://github.com/kubernetes/features/issues/277
+[32]: https://github.com/kubernetes/features/issues/88
+[33]: https://github.com/kubernetes/features/issues/514
+[34]: https://github.com/kubernetes/features/issues/513
diff --git a/blog/_posts/2018-03-00-How-To-Integrate-Rollingupdate-Strategy.md b/blog/_posts/2018-03-00-How-To-Integrate-Rollingupdate-Strategy.md
new file mode 100644
index 00000000000..f195194ba12
--- /dev/null
+++ b/blog/_posts/2018-03-00-How-To-Integrate-Rollingupdate-Strategy.md
@@ -0,0 +1,251 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: "How to Integrate RollingUpdate Strategy for TPR in Kubernetes"
+date: Tuesday, March 13, 2018
+---
+
+With Kubernetes, it's easy to manage and scale stateless applications like web apps and API services right out of the box. To date, almost all of the talk about Kubernetes has centered on microservices and stateless applications.
+
+With the popularity of container-based microservice architectures, there is a strong need to deploy and manage RDBMSs (Relational Database Management Systems). RDBMSs require deep database-specific knowledge to scale, upgrade, and reconfigure correctly while protecting against data loss and unavailability.
+
+For example, MySQL (the most popular open source RDBMS) needs to store its data in files that are persistent and exclusive to each MySQL database. Each MySQL database must be individually distinct; a more complex case is a cluster, which must additionally distinguish the role each MySQL database plays, such as master, slave, or shard. High availability and zero data loss are also hard to achieve when replacing database nodes on failed machines.
+
+Using Kubernetes' powerful API extension mechanisms, we can encode this RDBMS domain knowledge into software, named WQ-RDS, that runs atop Kubernetes like a built-in resource.
+
+WQ-RDS leverages Kubernetes primitive resources and controllers to deliver a number of enterprise-grade features, bringing a significantly more reliable way to automate time-consuming operational tasks like database setup, patching, backups, and setting up high availability clusters. WQ-RDS supports mainstream versions of Oracle and MySQL (both compatible with MariaDB).
+
+Let's demonstrate how to manage a MySQL sharding cluster.
+
+### MySQL Sharding Cluster
+
+A MySQL Sharding Cluster is a scale-out database architecture. Based on a hash algorithm, the architecture distributes data across all the shards of the cluster. Sharding is entirely transparent to clients: the proxy can connect to any shard in the cluster and issues queries to the correct shards directly.
+
+| ----- |
+| ![][1]{: .big-img} |
+| Note: Each shard corresponds to a single MySQL instance. Currently, WQ-RDS supports a maximum of 64 shards. |
+
+
+All of the shards are built from Kubernetes StatefulSets, Services, StorageClasses, ConfigMaps, Secrets, and MySQL. WQ-RDS manages the entire lifecycle of the sharding cluster. The advantages of the sharding cluster are obvious:
+
+* Scale out queries per second (QPS) and transactions per second (TPS)
+* Scale out storage capacity: gain more storage by distributing data to multiple nodes
+
+### Create a MySQL Sharding Cluster
+
+Let's create a MySQL sharding cluster with 8 shards:
+
+```
+ kubectl create -f mysqlshardingcluster.yaml
+```
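+
+The manifest itself isn't shown in the original post. As a rough sketch only: the apiVersion and kind match the TPR output below, while the spec field names are assumptions modeled on the status output later in this post.
+
+```
+# mysqlshardingcluster.yaml (illustrative sketch, not the real WQ-RDS schema)
+apiVersion: mysql.orain.com/v1
+kind: MysqlCluster
+metadata:
+  name: clustershard-c
+  namespace: default
+spec:
+  shards: 8                 # hypothetical field: desired number of shards
+  dbresourcespec:
+    requestcpu: 1000m
+    requestmemory: 400Mi
+    limitedcpu: 1200m
+    limitedmemory: 400Mi
+```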
+
+The cluster is defined by two kinds of TPR:
+
+* TPR: MysqlCluster and MysqlDatabase
+
+```
+[root@k8s-master ~]# kubectl get mysqlcluster
+NAME             KIND
+clustershard-c   MysqlCluster.v1.mysql.orain.com
+```
+
+MysqlDatabases clustershard-c0 through clustershard-c7 belong to the MysqlCluster clustershard-c.
+
+```
+[root@k8s-master ~]# kubectl get mysqldatabase
+NAME              KIND
+clustershard-c0   MysqlDatabase.v1.mysql.orain.com
+clustershard-c1   MysqlDatabase.v1.mysql.orain.com
+clustershard-c2   MysqlDatabase.v1.mysql.orain.com
+clustershard-c3   MysqlDatabase.v1.mysql.orain.com
+clustershard-c4   MysqlDatabase.v1.mysql.orain.com
+clustershard-c5   MysqlDatabase.v1.mysql.orain.com
+clustershard-c6   MysqlDatabase.v1.mysql.orain.com
+clustershard-c7   MysqlDatabase.v1.mysql.orain.com
+```
+
+
+Next, let's look at two main features: high availability and RollingUpdate strategy.
+
+To demonstrate, we'll start by running sysbench to generate some load on the cluster. In this example, QPS metrics are exposed by the MySQL exporter, collected by Prometheus, and visualized in Grafana.
+
+### Feature: High Availability
+
+WQ-RDS handles MySQL instance crashes while protecting against data loss.
+
+When clustershard-c0 is killed, WQ-RDS detects that it is unavailable and replaces it, which takes about 35 seconds on average.
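+
+To reproduce this, you could delete the shard's Pod and watch the replacement come up (the exact Pod name here is hypothetical):
+
+```
+kubectl delete pod clustershard-c0-0   # simulate a crash of shard c0
+kubectl get pods -w                    # watch WQ-RDS bring the shard back
+```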
+
+![][2]
+
+At the same time, there is zero data loss.
+
+![][3]
+
+
+### Feature: RollingUpdate Strategy
+
+MySQL Sharding Cluster brings us not only strong scalability but also a degree of maintenance complexity. For example, when updating a MySQL configuration like innodb_buffer_pool_size, a DBA has to perform a number of steps:
+
+1\. Schedule a time to apply the change.
+2\. Disable client access to the database proxies.
+3\. Start a rolling upgrade.
+
+Rolling upgrades must proceed in order and are the most demanding step of the process: the upgrade cannot continue until the previously updated MySQL instances are running and ready.
+
+4\. Verify the cluster.
+5\. Enable client access to the database proxies.
+
+Possible problems with a rolling upgrade include:
+
+* node reboots
+* MySQL instance restarts
+* human error
+
+Instead, WQ-RDS enables a DBA to perform rolling upgrades automatically.
+
+### StatefulSet RollingUpdate in Kubernetes
+
+Kubernetes 1.7 includes a major feature that adds automated updates to StatefulSets and supports a range of update strategies including rolling updates.
+
+**Note:** For more information about [StatefulSet RollingUpdate][4], see the Kubernetes docs.
+
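+For reference, here is how the strategy is enabled on a plain StatefulSet (standard Kubernetes API; names and image are placeholders):
+
+```
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: web                       # placeholder name
+spec:
+  serviceName: web
+  replicas: 8
+  selector:
+    matchLabels: {app: web}
+  podManagementPolicy: Parallel   # Parallel or OrderedReady; see below
+  updateStrategy:
+    type: RollingUpdate           # replace Pods one at a time, in reverse ordinal order
+  template:
+    metadata:
+      labels: {app: web}
+    spec:
+      containers:
+      - name: app
+        image: nginx:1.13         # placeholder image
+```
+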
+Because TPR (currently CRD) does not support the rolling upgrade strategy, we needed to integrate the RollingUpdate strategy into WQ-RDS. Fortunately, the [Kubernetes repo][5] is a treasure trove for learning. In the process of implementation, here are some points worth sharing:
+
+* **MySQL Sharding Cluster has changed:** Each StatefulSet has corresponding ControllerRevisions, which record all of its revision data and ordering (like git). Whenever a StatefulSet syncs, the StatefulSet controller first compares its spec to the latest corresponding ControllerRevision data (similar to git diff). If the spec has changed, a new ControllerRevision is generated and the revision number is incremented by 1. WQ-RDS borrows this process: the MySQL Sharding Cluster object records all revisions and their ordering in ControllerRevisions.
+* **How to initialize a MySQL Sharding Cluster to meet the requested replicas:** StatefulSets support two [Pod management policies][4]: Parallel and OrderedReady. Because a MySQL Sharding Cluster doesn't require ordered creation for its initial processes, we use the Parallel policy to accelerate initialization of the cluster.
+* **How to perform a rolling upgrade:** StatefulSets recreate Pods in strictly decreasing ordinal order. The difference is that WQ-RDS updates shards instead of recreating them, as shown below:
+![][6]
+
+* **When a RollingUpdate ends:** Kubernetes signals termination clearly. A rolling update completes when all of a set's Pods have been updated to the updateRevision. The status's currentRevision is set to updateRevision and its updateRevision is set to the empty string; the status's currentReplicas is set to updateReplicas and its updateReplicas is set to 0.
+
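+These status fields can be read directly as a sanity check; a sketch, assuming the shard's StatefulSet is also named clustershard-c0:
+
+```
+# The rolling update is complete when the two revisions match:
+kubectl get statefulset clustershard-c0 \
+  -o jsonpath='current={.status.currentRevision} update={.status.updateRevision}'
+```
+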
+### Controller revision in WQ-RDS
+
+Revision information is stored in MysqlCluster.Status and is no different from StatefulSet.Status.
+
+```
+[root@k8s-master ~]# kubectl get mysqlcluster -o yaml clustershard-c
+apiVersion: v1
+items:
+- apiVersion: mysql.orain.com/v1
+  kind: MysqlCluster
+  metadata:
+    creationTimestamp: 2017-10-20T08:19:41Z
+    labels:
+      AppName: clustershard-crm
+      Createdby: orain.com
+      DBType: MySQL
+    name: clustershard-c
+    namespace: default
+    resourceVersion: "415852"
+    selfLink: /apis/mysql.orain.com/v1/namespaces/default/mysqlclusters/clustershard-c
+    uid: 6bb089bb-b56f-11e7-ae02-525400e717a6
+  spec:
+    dbresourcespec:
+      limitedcpu: 1200m
+      limitedmemory: 400Mi
+      requestcpu: 1000m
+      requestmemory: 400Mi
+  status:
+    currentReplicas: 8
+    currentRevision: clustershard-c-648d878965
+    replicas: 8
+    updateRevision: clustershard-c-648d878965
+kind: List
+```
+
+### Example: Perform a rolling upgrade
+
+Finally, we can update clustershard-c to change the innodb_buffer_pool_size configuration from 6 GB to 7 GB and reboot.
+
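+The original post doesn't show the command used to apply the change; with a TPR, the usual approach would be to edit the object directly (where the parameter lives inside the spec is not documented here and is assumed):
+
+```
+# Sketch only: edit the MysqlCluster object, change
+# innodb_buffer_pool_size from 6G to 7G, then save.
+kubectl edit mysqlcluster clustershard-c
+```
+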
+The process takes 480 seconds.
+
+![][7]
+
+The upgrade proceeds in a monotonically decreasing manner:
+
+![][8]
+
+![][9]
+
+![][10]
+
+![][11]
+
+![][12]
+
+![][13]
+
+![][14]
+
+![][15]
+
+### Conclusion
+
+The RollingUpdate strategy is meaningful to database administrators: it provides a far more effective way to operate databases.
+
+\--Orain Xiong, co-founder, Woqutech
+
+[1]: https://lh5.googleusercontent.com/4WiSkxX-XBqARVqQ0No-1tZ31op90LAUkTco3FdIO1mFScNOTVtMCgnjaO8SRUmms-6MAb46CzxlXDhLBqAAAmbx26atJnu4t1FTTALZx_CbUPqrCxjL746DW4TD42-03Ac9VB2c
+[2]: https://lh3.googleusercontent.com/sXqVqfTu6rMWn0mlHLgHHqATe_qsx1tNmMfX60HoTwyhd5HCL4A_ViFBQAZfOoVGioeXcI_XXbzVFUdq2hbKGwS0OXH6PFGqgpZshfBwrT088bz4KqeyTbHpQR2olyzE6eRo1fan
+[3]: https://lh6.googleusercontent.com/7xnN_sODa-3Ch3ScAUlggCTeYfnE3-wxRaCIHrljHCB7LnXgth8zeCv0gk_UU1jbSDBQuACQ2Mf1FO1-E7GvMWwGKjp7irenAKp4DkHlA5LR9OVuLXqubPFhhksA8kfBUh4Z4OuN
+[4]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#rolling-updates
+[5]: https://github.com/kubernetes/kubernetes
+[6]: https://lh6.googleusercontent.com/B4ig8krCsXwvMeBy8NamQi1DrihUEzBcRTHCqhn9kUvlcpPrFoYUNAxn61qh8S2HXcdg31QpOhWSsYHP0jI4QxPkKpZ5oY-k9gFp1eK63qt6rwTMMWMiBs45DObY6rw2R7c0lNPu
+[7]: https://lh4.googleusercontent.com/LOxFDdojYxnPvSHDYwivVge6vGImK7uTdyvCsKrxCMF3rIlVkw7mHeNhJiNJwz1aGzVhZXpqrgzHC6pIbkPk3JPAtuSqX9ovAYBzK01BGfzwkXvMGZomAh4L0DahGyD3QB715B-Z
+[8]: https://lh6.googleusercontent.com/DhFbY3JDh23A_91c04TujxWM9xCX_xq1xOCXHi7XAd75LzKwDtbH6Gr_2VXCscg8AeVCQzw3Inw4M-uvssWq8od4va0wd-fIyClVY63FjRfeU16fQda_XqzBYRIhrG5W3tDnCAwC
+[9]: https://lh6.googleusercontent.com/bAJtLRRl2TqQrfBOooNm9DIEuezoBhT3f-XuOyGxp8sKePzfRaQYcJ7PFvL30xw9jeUpc-3rVw6Qjr46dFRk7mmUsf3oichNEuC-BFwCEtpbxK0_BjSJxtIE4B5xR4CGw1m6Hf0D
+[10]: https://lh4.googleusercontent.com/ALYk-EP_rYibA95nIIo8TKx8BYuSY9w1Pqw4JLEiV89K9i06uBhkTrYWX26FjYtheGKVwwVMTtDKH7UTBovGf8AEpK97T3RT23RSAUTs4GyDFaDOGmlRAczbGLm0UjQglbB_NPdF
+[11]: https://lh4.googleusercontent.com/gmM6UbgVOBWPJBpIMutxeTxGiwtjFv25KAHQw3ebVAF5Kxm-uxkPKEiYKhpwYUTyDe5knYlGmQDDHiN8eefBJx0fbK7jg4IlpG5_DXMUG6rNNFIbpP7Q94ANROIeUfe5JP6t-k37
+[12]: https://lh5.googleusercontent.com/aiczeqRNRls8lh-LYbnx112kgZI2gvaBMimAk74KlLhR3EVicuKAemTKr2eKUSFPjmKbsg_gw_nY1G4YU0-3J1EjDPOhz55UUri47Py-s-jRf0dF-lAKn6TRrF6IvGtv2aldWa3k
+[13]: https://lh6.googleusercontent.com/mwRQP_wXMCzpXsC5sqb0nJ9jU4KdUl4FiUE26gQZMQbrn5zcgqSYZB03CLmGsT2Nuq-7x00W4Ar3IUAh7hxEksQEGl6ugAmY0wo7xjzisNH9VE1qto9Afx8QW2Sr6NR-SBDeJfTt
+[14]: https://lh6.googleusercontent.com/joraaJ-qX-K8zTdAFBJWeOswQQtNeX6yezKGkSM56FNYQT-XYrgsxvNLYBE0askw9huAmJhebCVU4AMvjz4B6xlIjdLwO3vMX7_dWBzkfu05HZ3-NOsFnqg-jvkLknl-ldRzUcFO
+[15]: https://lh5.googleusercontent.com/ayoUAhD-azUjUqjut7iSiW8FBFJpCJZLRJDT9mXJoy4QTutAsGgr4yPvbFumaXasOqpsmJ_zZ2k7nrQl2YrjGqPr83PXe-tXjj9OLc-GYhhtJTzBEeddWpZn5pDyBpdw9I4sD-O0
diff --git a/blog/_posts/2018-03-00-Principles-Of-Container-App-Design.md b/blog/_posts/2018-03-00-Principles-Of-Container-App-Design.md
new file mode 100644
index 00000000000..54ec6aad783
--- /dev/null
+++ b/blog/_posts/2018-03-00-Principles-Of-Container-App-Design.md
@@ -0,0 +1,50 @@
+---
+permalink: /blog/:year/:month/:title
+layout: blog
+title: "Principles of Container-based Application Design"
+date: Thursday, March 15, 2018
+---
+
+It's possible nowadays to put almost any application in a container and run it. Creating cloud-native applications, however—containerized applications that are automated and orchestrated effectively by a cloud-native platform such as Kubernetes—requires additional effort. Cloud-native applications anticipate failure; they run and scale reliably even when their infrastructure experiences outages. To offer such capabilities, cloud-native platforms like Kubernetes impose a set of contracts and constraints on applications. These contracts ensure that the applications they run conform to certain constraints and allow the platform to automate application management.
+
+I've outlined [seven principles][1] for containerized applications to follow in order to be fully cloud-native.
+
+| ----- |
+| ![][2]{: .big-img} |
+| Container Design Principles |
+
+
+These seven principles cover both build time and runtime concerns.
+
+#### Build time
+
+* **Single Concern:** Each container addresses a single concern and does it well.
+* **Self-Containment:** A container relies only on the presence of the Linux kernel. Additional libraries are added when the container is built.
+* **Image Immutability:** Containerized applications are meant to be immutable, and once built are not expected to change between different environments.
+
+#### Runtime
+
+* **High Observability:** Every container must implement all necessary APIs to help the platform observe and manage the application in the best way possible.
+* **Lifecycle Conformance:** A container must have a way to read events coming from the platform and conform by reacting to those events.
+* **Process Disposability:** Containerized applications must be as ephemeral as possible and ready to be replaced by another container instance at any point in time.
+* **Runtime Confinement:** Every container must declare its resource requirements and restrict resource use to the requirements indicated.
+
+The build time principles ensure that containers have the right granularity, consistency, and structure in place. The runtime principles dictate what functionalities must be implemented in order for containerized applications to possess cloud-native capabilities. Adhering to these principles helps ensure that your applications are suitable for automation in Kubernetes.
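+
+Several of the runtime principles map directly onto standard Kubernetes Pod fields; a brief illustration, with placeholder names and image:
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example-app             # placeholder name
+spec:
+  containers:
+  - name: app
+    image: example/app:1.0.3    # Image Immutability: pin an exact, immutable tag
+    livenessProbe:              # High Observability: expose health to the platform
+      httpGet: {path: /healthz, port: 8080}
+    readinessProbe:
+      httpGet: {path: /ready, port: 8080}
+    resources:                  # Runtime Confinement: declare and restrict resources
+      requests: {cpu: 100m, memory: 128Mi}
+      limits: {cpu: 200m, memory: 256Mi}
+```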
+
+The [white paper][1] is freely available for download.
+
+
+To read more about designing cloud-native applications for Kubernetes, check out my [Kubernetes Patterns][3] book.
+
+— [Bilgin Ibryam][4], Principal Architect, Red Hat
+
+Twitter: [@bibryam][4]
+Blog: [http://www.ofbizian.com][5]
+Linkedin:
+
+Bilgin Ibryam (@bibryam) is a principal architect at Red Hat, an open source committer at ASF, a blogger, an author, and a speaker. He is the author of the books Camel Design Patterns and Kubernetes Patterns. In his day-to-day job, Bilgin enjoys mentoring, training, and leading teams to be successful with distributed systems, microservices, containers, and cloud-native applications in general.
+
+[1]: https://www.redhat.com/en/resources/cloud-native-container-design-whitepaper
+[2]: https://lh5.googleusercontent.com/1XqojkVC0CET1yKCJqZ3-0VWxJ3W8Q74zPLlqnn6eHSJsjHOiBTB7EGUX5o_BOKumgfkxVdgBeLyoyMfMIXwVm9p2QXkq_RRy2mDJG1qEExJDculYL5PciYcWfPAKxF2-DGIdiLw
+[3]: http://leanpub.com/k8spatterns/
+[4]: http://twitter.com/bibryam
+[5]: http://www.ofbizian.com/
diff --git a/blog/index.html b/blog/index.html
new file mode 100644
index 00000000000..304b2f7cb4a
--- /dev/null
+++ b/blog/index.html
@@ -0,0 +1,28 @@
+---
+layout: blog
+---
+
+ {% for post in paginator.posts %}
+
+
+
+
+
+ {{ post.content | markdownify }}
+
+ {% endfor %}
+
+
+
diff --git a/css/blog.css b/css/blog.css
new file mode 100644
index 00000000000..570c0a9357e
--- /dev/null
+++ b/css/blog.css
@@ -0,0 +1,493 @@
+/* Overall I tried to maintain as much of the look of the old site as */
+/* possible. When I made changes they were generally to accommodate jekyll */
+/* or to conform with the styling on the rest of k8s.io */
+
+
+/* Content
+----------------------------------------------- */
+.container-fluid {
+ padding: 0px;
+ margin: 0px;
+  padding-left: .01em;
+}
+body {
+ font: normal normal 16px 'Trebuchet MS', Trebuchet, Verdana, sans-serif;
+ color: #666666;
+ background: #ffffff none repeat scroll top left;
+ padding: 0 0 0 0;
+}
+html body .region-inner {
+ min-width: 0;
+ max-width: 100%;
+ width: auto;
+}
+
+a:link {
+ text-decoration:none;
+ color: #2288bb;
+}
+a:visited {
+ text-decoration:none;
+ color: #888888;
+}
+a:hover {
+ text-decoration:underline;
+ color: #33aaff;
+}
+
+/* Header
+----------------------------------------------- */
+
+.header-outer {
+ background: transparent none repeat-x scroll 0 -400px;
+ _background-image: none;
+}
+.Header h1 {
+ font: normal normal 40px 'Trebuchet MS',Trebuchet,Verdana,sans-serif;
+ color: #000000;
+ text-shadow: 0 0 0 rgba(0, 0, 0, .2);
+}
+
+.Header h1 a {
+ color: #000000;
+}
+.Header .description {
+ font-size: 18px;
+ color: #000000;
+}
+.header-inner .Header .titlewrapper {
+ padding: 22px 0;
+}
+.header-inner .Header .descriptionwrapper {
+ padding: 0 0;
+}
+
+#blog-hero {
+ background-image: url('../images/texture.png');
+ background-color: #303030;
+ padding-left: 50px;
+}
+
+
+header#topHeader {
+ position: absolute;
+ top: 0;
+ left: 0;
+ width: 100%;
+ height: 160px;
+ background-color: #333;
+  background-image: url('../images/texture.png');
+ background-position: center center;
+}
+#headerWrapper {
+ position: relative;
+ width: 1408px;
+ margin: 0px auto;
+ overflow: hidden;
+}
+div.header-title {
+ position: relative;
+ float: left;
+ width: 980px;
+ margin: 0 auto;
+ padding: 40px 0 0 20px;
+}
+div.header-title h6 {
+ margin: 0;
+ padding-left: 108px;
+ font-weight: normal;
+ font-size: 18px;
+ color: white;
+}
+div.header-try {
+ width: 384px;
+ float: right;
+ color: white;
+ padding-top: 70px;
+}
+div.header-try h6 {
+ font-size: 16px;
+ margin: 0 0 20px;
+ color: transparent;
+}
+div.header-try a {
+ display: inline-block;
+ font-size: 16px;
+ padding: 10px 40px;
+ background: none;
+ border: 0;
+ background-color: #326DE6;
+ border-radius: 6px;
+ color: white;
+ outline: none;
+ cursor: pointer;
+}
+
+/* Tabs
+----------------------------------------------- */
+.blog-title {
+ clear: left
+}
+
+.tabs-inner .section:first-child {
+ border-top: 0 solid #dddddd;
+}
+.tabs-inner .section:first-child ul {
+ margin-top: -1px;
+ border-top: 1px solid #dddddd;
+ border-left: 1px solid #dddddd;
+ border-right: 1px solid #dddddd;
+}
+
+
+/* Pagination Links */
+@media (max-width: 449px) {
+ div.pagination {
+ position: relative;
+ left: 30%;
+ }
+ .pagination a {
+ margin-right: 30%;
+ }
+}
+
+@media (min-width: 450px) {
+ h4.pagination {
+ position: relative;
+ left: 30%;
+ }
+}
+
+/* Columns
+----------------------------------------------- */
+.main-outer {
+ border-top: 0 solid transparent;
+}
+@media (min-width: 500px) {
+ .blog-content {
+ padding-left: 100px;
+ }
+}
+
+@media (max-width: 499px) {
+ .blog-content {
+ padding-left: 20px;
+ }
+}
+.fauxcolumn-left-outer .fauxcolumn-inner {
+ border-right: 1px solid transparent;
+}
+.fauxcolumn-right-outer .fauxcolumn-inner {
+ border-left: 1px solid transparent;
+}
+
+/* Headings
+----------------------------------------------- */
+div.widget > h2,
+div.widget h2.title {
+ margin: 0 0 1em 0;
+  font: normal bold 11px 'Roboto', sans-serif;
+ color: #000000;
+}
+
+/* Posts
+----------------------------------------------- */
+h2.date-header {
+ font: normal bold 11px 'Roboto', Tahoma, Helvetica, FreeSans, sans-serif;
+}
+.date-header span {
+ background-color: #bbbbbb;
+ color: #ffffff;
+ padding: 0.4em;
+ letter-spacing: 3px;
+ margin: inherit;
+}
+.main-inner {
+ padding-top: 35px;
+ padding-bottom: 65px;
+}
+.main-inner .column-center-inner {
+ padding: 0 0;
+}
+.main-inner .column-center-inner .section {
+ margin: 0 1em;
+}
+.post {
+ margin: 0 0 15px 0;
+}
+h3.post-title{
+ font: normal normal 22px 'Trebuchet MS',Trebuchet,Verdana,sans-serif;
+ margin: .75em 0 0;
+}
+.post-body {
+ font-size: 110%;
+ line-height: 1.4;
+ position: relative;
+}
+.post-body img, .post-body .tr-caption-container, .Profile img, .Image img,
+.BlogList .item-thumbnail img {
+ padding: 2px;
+ background: #ffffff;
+ border: 1px solid #eeeeee;
+ -moz-box-shadow: 1px 1px 5px rgba(0, 0, 0, .1);
+ -webkit-box-shadow: 1px 1px 5px rgba(0, 0, 0, .1);
+ box-shadow: 1px 1px 5px rgba(0, 0, 0, .1);
+ width: inherit;
+}
+.post-body img, .post-body .tr-caption-container {
+ padding: 5px;
+ width: inherit;
+}
+.post-body .tr-caption-container {
+ color: #666666;
+}
+.post-body .tr-caption-container img {
+ padding: 0;
+ background: transparent;
+ border: none;
+ -moz-box-shadow: 0 0 0 rgba(0, 0, 0, .1);
+ -webkit-box-shadow: 0 0 0 rgba(0, 0, 0, .1);
+ box-shadow: 0 0 0 rgba(0, 0, 0, .1);
+}
+.post-header {
+ margin: 0 0 1.5em;
+ line-height: 1.6;
+ font-size: 90%;
+}
+.post-footer {
+ margin: 20px -2px 0;
+ padding: 5px 10px;
+ color: #666666;
+ background-color: #eeeeee;
+ border-bottom: 1px solid #eeeeee;
+ line-height: 1.6;
+ font-size: 90%;
+}
+
+/* Mobile
+----------------------------------------------- */
+.main-inner .column-right-outer {
+ width: 320px;
+ margin-right: -320px;
+}
+.sidebar .widget a.widget-link:link,
+.sidebar .widget a.widget-link:visited,
+.sidebar .widget a.widget-link:hover,
+.sidebar .widget a.widget-link:active {
+ color: #326DE6;
+ font-size: 22px;
+}
+.widget-link {
+ position: relative;
+ display: block;
+ box-sizing: border-box;
+ padding: 0 0 0 55px;
+ margin: 10px 0;
+ line-height: 40px;
+ white-space: nowrap;
+ font-family: Roboto, sans-serif;
+}
+.widget-link .fa {
+ position: absolute;
+ left: 0; top: 0;
+ width: 40px; height: 40px;
+ color: white;
+ background-color: #326DE6;
+ border-radius: 6px;
+ font-size: 30px;
+ text-align: center;
+ line-height: 40px;
+}
+h2.date-header {
+ font-family: Roboto-Bold;
+ font-weight: bold;
+ font-style: normal;
+ font-size: 16px;
+}
+h2.date-header span {
+ background: none;
+ color: #222;
+}
+h3.post-title {
+ font-size: 30px;
+ color: #326DE6;
+}
+* a:link,
+* a:visited,
+* a:hover,
+* a:active {
+ color: #326DE6;
+}
+
+
+div.post-footer {
+ display: none;
+}
+.post-body {
+ font-family: Roboto-Light;
+ font-size: 16px;
+ line-height: 1.5;
+}
+