Various searches and replaces. Moves the TOC association into a recursive lookup include, tocsearch.html.

parent de1dd5d26d
commit 6c9558f7c2
@@ -0,0 +1,5 @@
tocs:
- guides
- reference
- samples
- support
@ -7,159 +7,159 @@ toc:
|
|||
- title: Quickstarts
|
||||
section:
|
||||
- title: TODO - 5-minute Quickstart
|
||||
path: /v1.1/docs/hellonode
|
||||
path: /v1.1/docs/hellonode/
|
||||
- title: Kubernetes 101
|
||||
path: /v1.1/docs/user-guide/walkthrough/README
|
||||
path: /v1.1/docs/user-guide/walkthrough/README/
|
||||
- title: Kubernetes 201
|
||||
path: /v1.1/docs/user-guide/walkthrough/k8s201
|
||||
path: /v1.1/docs/user-guide/walkthrough/k8s201/
|
||||
|
||||
- title: Running Kubernetes
|
||||
section:
|
||||
- title: Picking the Right Solution
|
||||
path: /v1.1/docs/getting-started-guides/README
|
||||
path: /v1.1/docs/getting-started-guides/README/
|
||||
- title: Running Kubernetes on Your Local Machine
|
||||
section:
|
||||
- title: Running Kubernetes Locally via Docker
|
||||
path: /v1.1/docs/getting-started-guides/docker
|
||||
path: /v1.1/docs/getting-started-guides/docker/
|
||||
- title: Running Kubernetes Locally via Vagrant
|
||||
path: /v1.1/docs/getting-started-guides/vagrant
|
||||
path: /v1.1/docs/getting-started-guides/vagrant/
|
||||
- title: Running Kubernetes Locally with No VM
|
||||
path: /v1.1/docs/getting-started-guides/locally
|
||||
path: /v1.1/docs/getting-started-guides/locally/
|
||||
- title: Running Kubernetes on Turn-key Cloud Solutions
|
||||
section:
|
||||
- title: Running Kubernetes on Google Container Engine
|
||||
path: https://cloud.google.com/container-engine/docs/before-you-begin
|
||||
path: https://cloud.google.com/container-engine/docs/before-you-begin/
|
||||
- title: Running Kubernetes on Google Compute Engine
|
||||
path: /v1.1/docs/getting-started-guides/gce
|
||||
path: /v1.1/docs/getting-started-guides/gce/
|
||||
- title: Running Kubernetes on AWS EC2
|
||||
path: /v1.1/docs/getting-started-guides/aws
|
||||
path: /v1.1/docs/getting-started-guides/aws/
|
||||
- title: Running Kubernetes on Azure
|
||||
path: /v1.1/docs/getting-started-guides/coreos/azure/README
|
||||
path: /v1.1/docs/getting-started-guides/coreos/azure/README/
|
||||
- title: Running Kubernetes on Custom Solutions
|
||||
section:
|
||||
- title: Getting Started From Scratch
|
||||
path: /v1.1/docs/getting-started-guides/scratch
|
||||
path: /v1.1/docs/getting-started-guides/scratch/
|
||||
- title: Custom Cloud Solutions
|
||||
section:
|
||||
- title: AWS or GCE on CoreOS
|
||||
path: /v1.1/docs/getting-started-guides/coreos
|
||||
path: /v1.1/docs/getting-started-guides/coreos/
|
||||
- title: AWS or Joyent on Ubuntu
|
||||
path: /v1.1/docs/getting-started-guides/juju
|
||||
path: /v1.1/docs/getting-started-guides/juju/
|
||||
- title: Rackspace on CoreOS
|
||||
path: /v1.1/docs/getting-started-guides/rackspace
|
||||
path: /v1.1/docs/getting-started-guides/rackspace/
|
||||
- title: On-Premise VMs
|
||||
section:
|
||||
- title: Vagrant or VMware
|
||||
path: /v1.1/docs/getting-started-guides/coreos
|
||||
path: /v1.1/docs/getting-started-guides/coreos/
|
||||
- title: Cloudstack
|
||||
path: /v1.1/docs/getting-started-guides/cloudstack
|
||||
path: /v1.1/docs/getting-started-guides/cloudstack/
|
||||
- title: VMWare
|
||||
path: /v1.1/docs/getting-started-guides/vsphere
|
||||
path: /v1.1/docs/getting-started-guides/vsphere/
|
||||
- title: Juju
|
||||
path: /v1.1/docs/getting-started-guides/juju
|
||||
path: /v1.1/docs/getting-started-guides/juju/
|
||||
- title: libvirt on CoreOS
|
||||
path: /v1.1/docs/getting-started-guides/libvirt-coreos
|
||||
path: /v1.1/docs/getting-started-guides/libvirt-coreos/
|
||||
- title: oVirt
|
||||
path: /v1.1/docs/getting-started-guides/ovirt
|
||||
path: /v1.1/docs/getting-started-guides/ovirt/
|
||||
- title: libvirt or KVM
|
||||
path: /v1.1/docs/getting-started-guides/fedora/flannel_multi_node_cluster
|
||||
path: /v1.1/docs/getting-started-guides/fedora/flannel_multi_node_cluster/
|
||||
- title: Bare Metal
|
||||
section:
|
||||
- title: Offline
|
||||
path: /v1.1/docs/getting-started-guides/coreos/bare_metal_offline
|
||||
path: /v1.1/docs/getting-started-guides/coreos/bare_metal_offline/
|
||||
- title: Fedora via Ansible
|
||||
path: /v1.1/docs/getting-started-guides/fedora/fedora_ansible_config
|
||||
path: /v1.1/docs/getting-started-guides/fedora/fedora_ansible_config/
|
||||
- title: Fedora (Single Node)
|
||||
path: /v1.1/docs/getting-started-guides/fedora/fedora_manual_config
|
||||
path: /v1.1/docs/getting-started-guides/fedora/fedora_manual_config/
|
||||
- title: Fedora (Multi Node)
|
||||
path: /v1.1/docs/getting-started-guides/fedora/flannel_multi_node_cluster
|
||||
path: /v1.1/docs/getting-started-guides/fedora/flannel_multi_node_cluster/
|
||||
- title: Centos
|
||||
path: /v1.1/docs/getting-started-guides/centos/centos_manual_config
|
||||
path: /v1.1/docs/getting-started-guides/centos/centos_manual_config/
|
||||
- title: Ubuntu
|
||||
path: /v1.1/docs/getting-started-guides/ubuntu
|
||||
path: /v1.1/docs/getting-started-guides/ubuntu/
|
||||
- title: Docker (Multi Node)
|
||||
path: /v1.1/docs/getting-started-guides/docker-multinode
|
||||
path: /v1.1/docs/getting-started-guides/docker-multinode/
|
||||
|
||||
- title: Administering Clusters
|
||||
section:
|
||||
- title: Kubernetes Cluster Admin Guide
|
||||
path: /v1.1/docs/admin/introduction
|
||||
path: /v1.1/docs/admin/introduction/
|
||||
- title: Using Multiple Clusters
|
||||
path: /v1.1/docs/admin/multi-cluster
|
||||
path: /v1.1/docs/admin/multi-cluster/
|
||||
- title: Using Large Clusters
|
||||
path: /v1.1/docs/admin/cluster-large
|
||||
path: /v1.1/docs/admin/cluster-large/
|
||||
- title: Building High-Availability Clusters
|
||||
path: /v1.1/docs/admin/high-availability
|
||||
path: /v1.1/docs/admin/high-availability/
|
||||
- title: Accessing Clusters
|
||||
path: /v1.1/docs/user-guide/accessing-the-cluster
|
||||
path: /v1.1/docs/user-guide/accessing-the-cluster/
|
||||
- title: Sharing a Cluster
|
||||
path: /v1.1/docs/admin/namespaces/README
|
||||
path: /v1.1/docs/admin/namespaces/README/
|
||||
- title: Changing Cluster Size
|
||||
path: https://github.com/kubernetes/kubernetes/wiki/User-FAQ#how-do-i-change-the-size-of-my-cluster
|
||||
path: https://github.com/kubernetes/kubernetes/wiki/User-FAQ#how-do-i-change-the-size-of-my-cluster/
|
||||
- title: Creating a Custom Cluster from Scratch
|
||||
path: /v1.1/docs/getting-started-guides/scratch
|
||||
path: /v1.1/docs/getting-started-guides/scratch/
|
||||
- title: Authenticating Across Clusters with kubeconfig
|
||||
path: /v1.1/docs/user-guide/kubeconfig-file
|
||||
path: /v1.1/docs/user-guide/kubeconfig-file/
|
||||
- title: Configuring Garbage Collection
|
||||
path: /v1.1/docs/admin/garbage-collection
|
||||
path: /v1.1/docs/admin/garbage-collection/
|
||||
- title: Configuring Kubernetes with Salt
|
||||
path: /v1.1/docs/admin/salt
|
||||
path: /v1.1/docs/admin/salt/
|
||||
- title: Common Tasks
|
||||
section:
|
||||
- title: Using Nodes
|
||||
path: /v1.1/docs/admin/node
|
||||
path: /v1.1/docs/admin/node/
|
||||
- title: Assigning Pods to Nodes
|
||||
path: /v1.1/docs/user-guide/node-selection/README
|
||||
path: /v1.1/docs/user-guide/node-selection/README/
|
||||
- title: Using Configuration Files
|
||||
path: /v1.1/docs/user-guide/simple-yaml
|
||||
path: /v1.1/docs/user-guide/simple-yaml/
|
||||
- title: Configuring Containers
|
||||
path: /v1.1/docs/user-guide/configuring-containers
|
||||
path: /v1.1/docs/user-guide/configuring-containers/
|
||||
- title: Using Environment Variables
|
||||
path: /v1.1/docs/user-guide/environment-guide/README
|
||||
path: /v1.1/docs/user-guide/environment-guide/README/
|
||||
|
||||
- title: Managing Compute Resources
|
||||
path: /v1.1/docs/user-guide/compute-resources
|
||||
path: /v1.1/docs/user-guide/compute-resources/
|
||||
- title: Applying Resource Quotas and Limits
|
||||
path: /v1.1/docs/admin/resourcequota/README
|
||||
path: /v1.1/docs/admin/resourcequota/README/
|
||||
- title: Setting Pod CPU and Memory Limits
|
||||
path: /v1.1/docs/admin/limitrange/README
|
||||
path: /v1.1/docs/admin/limitrange/README/
|
||||
|
||||
- title: Managing Deployments
|
||||
path: /v1.1/docs/user-guide/managing-deployments
|
||||
path: /v1.1/docs/user-guide/managing-deployments/
|
||||
- title: Deploying Applications
|
||||
path: /v1.1/docs/user-guide/deploying-applications
|
||||
path: /v1.1/docs/user-guide/deploying-applications/
|
||||
- title: Launching, Exposing, and Killing Applications
|
||||
path: /v1.1/docs/user-guide/quick-start
|
||||
path: /v1.1/docs/user-guide/quick-start/
|
||||
- title: Connecting Applications
|
||||
path: /v1.1/docs/user-guide/connecting-applications
|
||||
path: /v1.1/docs/user-guide/connecting-applications/
|
||||
|
||||
- title: Networking in Kubernetes
|
||||
path: /v1.1/docs/admin/networking
|
||||
path: /v1.1/docs/admin/networking/
|
||||
- title: Creating Servers with External IPs
|
||||
path: /v1.1/examples/simple-nginx
|
||||
path: /v1.1/examples/simple-nginx/
|
||||
- title: Setting Up and Configuring DNS
|
||||
path: /v1.1/examples/cluster-dns/README
|
||||
path: /v1.1/examples/cluster-dns/README/
|
||||
- title: Using DNS Pods and Services
|
||||
path: /v1.1/docs/admin/dns
|
||||
path: /v1.1/docs/admin/dns/
|
||||
- title: Working with Containers
|
||||
path: /v1.1/docs/user-guide/production-pods
|
||||
path: /v1.1/docs/user-guide/production-pods/
|
||||
- title: Creating Pods with the Downward API
|
||||
path: /v1.1/docs/user-guide/downward-api/README
|
||||
path: /v1.1/docs/user-guide/downward-api/README/
|
||||
- title: Using Secrets
|
||||
path: /v1.1/docs/user-guide/secrets/README
|
||||
path: /v1.1/docs/user-guide/secrets/README/
|
||||
- title: Using Persistent Volumes
|
||||
path: /v1.1/docs/user-guide/persistent-volumes/README
|
||||
path: /v1.1/docs/user-guide/persistent-volumes/README/
|
||||
- title: Updating Live Pods
|
||||
path: /v1.1/docs/user-guide/update-demo/README
|
||||
path: /v1.1/docs/user-guide/update-demo/README/
|
||||
|
||||
|
||||
- title: Testing and Monitoring
|
||||
section:
|
||||
- title: Simulating Large Test Loads
|
||||
path: /v1.1/examples/k8petstore/README
|
||||
path: /v1.1/examples/k8petstore/README/
|
||||
- title: Checking Pod Health
|
||||
path: /v1.1/docs/user-guide/liveness/README
|
||||
path: /v1.1/docs/user-guide/liveness/README/
|
||||
- title: Using Explorer to Examine the Runtime Environment
|
||||
path: /v1.1/examples/explorer/README
|
||||
path: /v1.1/examples/explorer/README/
|
||||
- title: Resource Usage Monitoring
|
||||
path: /v1.1/docs/user-guide/monitoring
|
||||
path: /v1.1/docs/user-guide/monitoring/
|
|
@ -1,107 +1,109 @@
|
|||
bigheader: "Reference Documentation"
|
||||
abstract: "Design docs, concept definitions, and references for APIs and CLIs."
|
||||
toc:
|
||||
- title: Reference Documentation
|
||||
path: /v1.1/reference/
|
||||
|
||||
- title: Kubernetes API
|
||||
path: /v1.1/reference
|
||||
section:
|
||||
- title: Kubernetes API Overview
|
||||
path: /v1.1/docs/api
|
||||
path: /v1.1/docs/api/
|
||||
- title: Kubernetes API Reference
|
||||
path: /v1.1/api-ref
|
||||
path: /v1.1/api-ref/
|
||||
|
||||
- title: kubectl
|
||||
section:
|
||||
- title: kubectl Overview
|
||||
path: /v1.1/docs/user-guide/kubectl-overview
|
||||
path: /v1.1/docs/user-guide/kubectl-overview/
|
||||
- title: kubectl for Docker Users
|
||||
path: /v1.1/reference/docker-cli-to-kubectl
|
||||
path: /v1.1/reference/docker-cli-to-kubectl/
|
||||
- title: kubectl Reference (pretend the child nodes are here)
|
||||
path: /v1.1/docs/user-guide/kubectl/kubectl
|
||||
path: /v1.1/docs/user-guide/kubectl/kubectl/
|
||||
|
||||
- title: JSONpath
|
||||
path: /v1.1/docs/user-guide/jsonpath
|
||||
path: /v1.1/docs/user-guide/jsonpath/
|
||||
|
||||
- title: kube-apiserver
|
||||
section:
|
||||
- title: Overview
|
||||
path: /v1.1/docs/admin/kube-apiserver
|
||||
path: /v1.1/docs/admin/kube-apiserver/
|
||||
- title: Authorization Plugins
|
||||
path: /v1.1/docs/admin/authorization
|
||||
path: /v1.1/docs/admin/authorization/
|
||||
- title: Authentication
|
||||
path: /v1.1/docs/admin/authentication
|
||||
path: /v1.1/docs/admin/authentication/
|
||||
- title: Accessing the API
|
||||
path: /v1.1/docs/admin/accessing-the-api
|
||||
path: /v1.1/docs/admin/accessing-the-api/
|
||||
- title: Admission Controllers
|
||||
path: /v1.1/docs/admin/admission-controllers
|
||||
path: /v1.1/docs/admin/admission-controllers/
|
||||
- title: Managing Service Accounts
|
||||
path: /v1.1/docs/admin/service-accounts-admin
|
||||
path: /v1.1/docs/admin/service-accounts-admin/
|
||||
|
||||
- title: kub-scheduler
|
||||
path: /v1.1/docs/admin/kube-scheduler
|
||||
path: /v1.1/docs/admin/kube-scheduler/
|
||||
|
||||
- title: kubelet
|
||||
path: /v1.1/docs/admin/kubelet
|
||||
path: /v1.1/docs/admin/kubelet/
|
||||
|
||||
- title: kube-proxy
|
||||
path: /v1.1/docs/admin/kube-proxy
|
||||
path: /v1.1/docs/admin/kube-proxy/
|
||||
|
||||
- title: etcd
|
||||
path: /v1.1/docs/admin/etcd
|
||||
path: /v1.1/docs/admin/etcd/
|
||||
|
||||
- title: Concept Definitions
|
||||
section:
|
||||
- title: Container Environment
|
||||
path: /v1.1/docs/user-guide/container-environment
|
||||
path: /v1.1/docs/user-guide/container-environment/
|
||||
- title: Images
|
||||
path: /v1.1/docs/user-guide/images
|
||||
path: /v1.1/docs/user-guide/images/
|
||||
- title: Downward API
|
||||
path: /v1.1/docs/user-guide/downward-api
|
||||
path: /v1.1/docs/user-guide/downward-api/
|
||||
- title: Pods
|
||||
path: /v1.1/docs/user-guide/pods
|
||||
path: /v1.1/docs/user-guide/pods/
|
||||
- title: Labels and Selectors
|
||||
path: /v1.1/docs/user-guide/labels
|
||||
path: /v1.1/docs/user-guide/labels/
|
||||
- title: Replication Controller
|
||||
path: /v1.1/docs/user-guide/replication-controller
|
||||
path: /v1.1/docs/user-guide/replication-controller/
|
||||
- title: Services
|
||||
path: /v1.1/docs/user-guide/services
|
||||
path: /v1.1/docs/user-guide/services/
|
||||
- title: Volumes
|
||||
path: /v1.1/docs/user-guide/volumes
|
||||
path: /v1.1/docs/user-guide/volumes/
|
||||
- title: Persistent Volumes
|
||||
path: /v1.1/docs/user-guide/persistent-volumes
|
||||
path: /v1.1/docs/user-guide/persistent-volumes/
|
||||
- title: Secrets
|
||||
path: /v1.1/docs/user-guide/secrets
|
||||
path: /v1.1/docs/user-guide/secrets/
|
||||
- title: Names
|
||||
path: /v1.1/docs/user-guide/identifiers
|
||||
path: /v1.1/docs/user-guide/identifiers/
|
||||
- title: Namespaces
|
||||
path: /v1.1/docs/user-guide/namespaces
|
||||
path: /v1.1/docs/user-guide/namespaces/
|
||||
- title: Service Accounts
|
||||
path: /v1.1/docs/user-guide/service-accounts
|
||||
path: /v1.1/docs/user-guide/service-accounts/
|
||||
- title: Annotations
|
||||
path: /v1.1/docs/user-guide/annotations
|
||||
path: /v1.1/docs/user-guide/annotations/
|
||||
- title: Daemon Sets
|
||||
path: /v1.1/docs/admin/daemons
|
||||
path: /v1.1/docs/admin/daemons/
|
||||
- title: Deployments
|
||||
path: /v1.1/docs/user-guide/deployments
|
||||
path: /v1.1/docs/user-guide/deployments/
|
||||
- title: Ingress Resources
|
||||
path: /v1.1/docs/user-guide/ingress
|
||||
path: /v1.1/docs/user-guide/ingress/
|
||||
- title: Horizontal Pod Autoscaling
|
||||
path: /v1.1/docs/user-guide/horizontal-pod-autoscaler
|
||||
path: /v1.1/docs/user-guide/horizontal-pod-autoscaler/
|
||||
- title: Jobs
|
||||
path: /v1.1/docs/user-guide/jobs
|
||||
path: /v1.1/docs/user-guide/jobs/
|
||||
- title: Resource Quotas
|
||||
path: /v1.1/docs/admin/resource-quota
|
||||
path: /v1.1/docs/admin/resource-quota/
|
||||
|
||||
- title: Kubernetes Design Docs
|
||||
section:
|
||||
- title: Kubernetes Architecture
|
||||
path: /v1.1/docs/design/architecture
|
||||
path: /v1.1/docs/design/architecture/
|
||||
- title: Kubernetes Design Overview
|
||||
path: /v1.1/docs/design/README
|
||||
path: /v1.1/docs/design/README/
|
||||
- title: Security in Kubernetes
|
||||
path: /v1.1/docs/design/security
|
||||
path: /v1.1/docs/design/security/
|
||||
- title: Kubernetes Identity and Access Management
|
||||
path: /v1.1/docs/design/access
|
||||
path: /v1.1/docs/design/access/
|
||||
- title: Security Contexts
|
||||
path: /v1.1/docs/design/security_context
|
||||
path: /v1.1/docs/design/security_context/
|
||||
- title: Kubernetes OpenVSwitch GRE/VxLAN networking
|
||||
path: /v1.1/docs/admin/ovs-networking
|
||||
path: /v1.1/docs/admin/ovs-networking/
|
|
@ -1,50 +1,52 @@
|
|||
bigheader: "Samples"
|
||||
abstract: "A collection of example applications that show how to use Kubernetes."
|
||||
toc:
|
||||
- title: Samples Overview
|
||||
path: /v1.1/samples/
|
||||
|
||||
- title: Clustered Application Samples
|
||||
path: /v1.1/samples
|
||||
section:
|
||||
- title: Apache Cassandra Database
|
||||
path: /v1.1/examples/cassandra/README
|
||||
path: /v1.1/examples/cassandra/README/
|
||||
- title: Apache Spark
|
||||
path: /v1.1/examples/spark/README
|
||||
path: /v1.1/examples/spark/README/
|
||||
- title: Apache Storm
|
||||
path: /v1.1/examples/storm/README
|
||||
path: /v1.1/examples/storm/README/
|
||||
- title: Distributed Task Queue
|
||||
path: /v1.1/examples/celery-rabbitmq/README
|
||||
path: /v1.1/examples/celery-rabbitmq/README/
|
||||
- title: Hazelcast
|
||||
path: /v1.1/examples/hazelcast/README
|
||||
path: /v1.1/examples/hazelcast/README/
|
||||
- title: Meteor Applications
|
||||
path: /v1.1/examples/meteor/README
|
||||
path: /v1.1/examples/meteor/README/
|
||||
- title: Redis
|
||||
path: /v1.1/examples/redis/README
|
||||
path: /v1.1/examples/redis/README/
|
||||
- title: RethinkDB
|
||||
path: /v1.1/examples/rethinkdb/README
|
||||
path: /v1.1/examples/rethinkdb/README/
|
||||
- title: Elasticsearch/Kibana Logging Demonstration
|
||||
path: /v1.1/docs/user-guide/logging-demo/README
|
||||
path: /v1.1/docs/user-guide/logging-demo/README/
|
||||
- title: Elasticsearch
|
||||
path: /v1.1/examples/elasticsearch/README
|
||||
path: /v1.1/examples/elasticsearch/README/
|
||||
- title: OpenShift Origin
|
||||
path: /v1.1/examples/openshift-origin/README
|
||||
path: /v1.1/examples/openshift-origin/README/
|
||||
- title: Ceph
|
||||
path: /v1.1/examples/rbd/README
|
||||
path: /v1.1/examples/rbd/README/
|
||||
|
||||
- title: Persistent Volume Samples
|
||||
section:
|
||||
- title: WordPress on a Kubernetes Persistent Volume
|
||||
path: /v1.1/examples/mysql-wordpress-pd/README
|
||||
path: /v1.1/examples/mysql-wordpress-pd/README/
|
||||
- title: GlusterFS
|
||||
path: /v1.1/examples/glusterfs/README
|
||||
path: /v1.1/examples/glusterfs/README/
|
||||
- title: iSCSI
|
||||
path: /v1.1/examples/iscsi/README
|
||||
path: /v1.1/examples/iscsi/README/
|
||||
- title: NFS
|
||||
path: /v1.1/examples/nfs/README
|
||||
path: /v1.1/examples/nfs/README/
|
||||
|
||||
- title: Multi-tier Application Samples
|
||||
section:
|
||||
- title: Guestbook - Go Server
|
||||
path: /v1.1/examples/guestbook-go/README
|
||||
path: /v1.1/examples/guestbook-go/README/
|
||||
- title: GuestBook - PHP Server
|
||||
path: /v1.1/examples/guestbook/README
|
||||
path: /v1.1/examples/guestbook/README/
|
||||
- title: MySQL - Phabricator Server
|
||||
path: /v1.1/examples/phabricator/README
|
||||
path: /v1.1/examples/phabricator/README/
|
|
@ -1,46 +1,48 @@
|
|||
bigheader: "Support"
|
||||
abstract: "Troubleshooting resources, frequently asked questions, and community support channels."
|
||||
toc:
|
||||
- title: Support Overview
|
||||
path: /v1.1/support/
|
||||
|
||||
- title: Troubleshooting
|
||||
path: /v1.1/support
|
||||
section:
|
||||
- title: Web Interface
|
||||
path: /v1.1/docs/user-guide/ui
|
||||
path: /v1.1/docs/user-guide/ui/
|
||||
- title: Logging
|
||||
path: /v1.1/docs/user-guide/logging
|
||||
path: /v1.1/docs/user-guide/logging/
|
||||
- title: Container Access (exec)
|
||||
path: /v1.1/docs/user-guide/getting-into-containers
|
||||
path: /v1.1/docs/user-guide/getting-into-containers/
|
||||
- title: Connect with Proxies
|
||||
path: /v1.1/docs/user-guide/connecting-to-applications-proxy
|
||||
path: /v1.1/docs/user-guide/connecting-to-applications-proxy/
|
||||
- title: Connect with Port Forwarding
|
||||
path: /v1.1/docs/user-guide/connecting-to-applications-port-forward
|
||||
path: /v1.1/docs/user-guide/connecting-to-applications-port-forward/
|
||||
- title: Troubleshooting Applications
|
||||
path: /v1.1/docs/user-guide/application-troubleshooting
|
||||
path: /v1.1/docs/user-guide/application-troubleshooting/
|
||||
- title: Troubleshooting Clusters
|
||||
path: /v1.1/docs/admin/cluster-troubleshooting
|
||||
path: /v1.1/docs/admin/cluster-troubleshooting/
|
||||
- title: Best Practices for Configuration
|
||||
path: /v1.1/docs/user-guide/config-best-practices
|
||||
path: /v1.1/docs/user-guide/config-best-practices/
|
||||
|
||||
- title: Frequently Asked Questions
|
||||
section:
|
||||
- title: User FAQ
|
||||
path: https://github.com/kubernetes/kubernetes/wiki/User-FAQ
|
||||
path: https://github.com/kubernetes/kubernetes/wiki/User-FAQ/
|
||||
- title: Debugging FAQ
|
||||
path: https://github.com/kubernetes/kubernetes/wiki/Debugging-FAQ
|
||||
path: https://github.com/kubernetes/kubernetes/wiki/Debugging-FAQ/
|
||||
- title: Services FAQ
|
||||
path: https://github.com/kubernetes/kubernetes/wiki/Services-FAQ
|
||||
path: https://github.com/kubernetes/kubernetes/wiki/Services-FAQ/
|
||||
|
||||
- title: Other Resources
|
||||
section:
|
||||
- title: Known Issues
|
||||
path: /v1.1/docs/user-guide/known-issues
|
||||
path: /v1.1/docs/user-guide/known-issues/
|
||||
- title: Kubernetes Issue Tracker on GitHub
|
||||
path: https://github.com/kubernetes/kubernetes/issues
|
||||
path: https://github.com/kubernetes/kubernetes/issues/
|
||||
- title: Report a Security Vulnerability
|
||||
path: /v1.1/docs/reporting-security-issues
|
||||
path: /v1.1/docs/reporting-security-issues/
|
||||
- title: Release Notes
|
||||
path: https://github.com/kubernetes/kubernetes/releases
|
||||
path: https://github.com/kubernetes/kubernetes/releases/
|
||||
- title: Release Roadmap
|
||||
path: /v1.1/docs/roadmap
|
||||
path: /v1.1/docs/roadmap/
|
||||
- title: Contributing to Kubernetes Documentation
|
||||
path: /v1.1/editdocs
|
||||
path: /v1.1/editdocs/
|
||||
|
|
@@ -1 +1,2 @@
<h2>Table of contents</h2>
<div id="pageTOC" style="padding-top:15px; font-weight: bold; width: auto;"></div>

@@ -0,0 +1 @@
{% for item in tree %}{% if item.section %}{% assign tree = item.section %}{% include tocsearch.html %}{% else %}{% if item.path == page.url %}{% assign foundTOC = thistoc %}{% break %}{% endif %}{% endif %}{% endfor %}

@@ -3,4 +3,4 @@
<div class="container">{% assign tree = item.section %}{% include tree.html %}
</div>
</div>{% else %}
<a class="item" data-title="{{ item.title }}" href="{{ item.path }}" ></a>{% endif %}{% endfor %}
<a class="item" data-title="{{ item.title }}" href="{{ item.path }}" ></a>{% endif %}{% endfor %}

@@ -24,8 +24,8 @@ layout: headerfooter
<section id="encyclopedia">
<div id="docsToc">
<div class="pi-accordion">
{% assign tree = site.data[page.versionfilesafe][page.section].toc %}
{% include tree.html %}
{% for thistoc in site.data[page.versionfilesafe].globals.tocs %}{% assign tree = site.data[page.versionfilesafe][thistoc].toc %}{% include tocsearch.html %}{% endfor %}
{% if foundTOC %}{% assign tree = site.data[page.versionfilesafe][foundTOC].toc %}{% include tree.html %}{% endif %}
</div> <!-- /pi-accordion -->
</div> <!-- /docsToc -->
<div id="docsContent">{% if page.showedit == true %}<a href="/{{ page.version }}/editdocs#{{ page.path }}" class="button" style="float:right;margin-top:20px;">Edit This Page</a>{% endif %}
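
Taken together, the hunks above carry the commit's core change: the layout no longer renders a single per-section TOC; it loops over the `tocs` list from the globals data file, and the new recursive `tocsearch.html` include walks each TOC tree until it finds an entry whose `path` matches the current page, recording the owning TOC in `foundTOC` so `tree.html` can render it. The sketch below is not part of the diff — it is the same Liquid reassembled with line breaks and comments for readability, and the file roles are inferred from the include calls rather than from file headers (which this extract does not show).

```
{% comment %} tocsearch.html (sketch): recursively search one TOC tree for the current page {% endcomment %}
{% for item in tree %}
  {% if item.section %}
    {% assign tree = item.section %}   {% comment %} descend into the nested section {% endcomment %}
    {% include tocsearch.html %}       {% comment %} recurse on the subtree {% endcomment %}
  {% else %}
    {% if item.path == page.url %}
      {% assign foundTOC = thistoc %}  {% comment %} remember which top-level TOC matched {% endcomment %}
      {% break %}
    {% endif %}
  {% endif %}
{% endfor %}

{% comment %} layout (sketch): try every TOC named in globals.tocs, then render the one that matched {% endcomment %}
{% for thistoc in site.data[page.versionfilesafe].globals.tocs %}
  {% assign tree = site.data[page.versionfilesafe][thistoc].toc %}
  {% include tocsearch.html %}
{% endfor %}
{% if foundTOC %}
  {% assign tree = site.data[page.versionfilesafe][foundTOC].toc %}
  {% include tree.html %}
{% endif %}
```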

@@ -2,11 +2,9 @@
title: "Kubernetes API Reference"
---

## {{ page.title }} ##

Use these reference documents to learn how to interact with Kubernetes through the REST API.

You can also view details about the *Extensions API*. For more about extensions, see [API versioning](docs/api.html).
You can also view details about the *Extensions API*. For more about extensions, see [API versioning](/{{page.version}}/docs/api).

<p>Table of Contents:</p>
<ul id="toclist"></ul>

@@ -2,8 +2,6 @@
title: "Application Administration: Detailed Walkthrough"
---

## {{ page.title }} ##

The detailed walkthrough covers all the in-depth details and tasks for administering your applications in Kubernetes.

<p>Table of Contents:</p>

@@ -2,8 +2,6 @@
title: "Quick Walkthrough: Kubernetes Basics"
---

## {{ page.title }} ##

Use this quick walkthrough of Kubernetes to learn about the basic application administration tasks.

<p>Table of Contents:</p>

@@ -2,8 +2,6 @@
title: "Examples: Deploying Clusters"
---

## {{ page.title }} ##

Use the following examples to learn how to deploy your application into a Kubernetes cluster.

<p>Table of Contents:</p>

@@ -1,21 +1,17 @@
---
title: "Kubernetes Documentation: releases.k8s.io/release-1.1"
---

# Kubernetes Documentation: releases.k8s.io/release-1.1

* The [User's guide](user-guide/README.html) is for anyone who wants to run programs and
* The [User's guide](user-guide/README) is for anyone who wants to run programs and
  services on an existing Kubernetes cluster.

* The [Cluster Admin's guide](admin/README.html) is for anyone setting up
* The [Cluster Admin's guide](admin/README) is for anyone setting up
  a Kubernetes cluster or administering it.

* The [Developer guide](devel/README.html) is for anyone wanting to write
* The [Developer guide](devel/README) is for anyone wanting to write
  programs that access the Kubernetes API, write plugins or extensions, or
  modify the core code of Kubernetes.

* The [Kubectl Command Line Interface](user-guide/kubectl/kubectl.html) is a detailed reference on
* The [Kubectl Command Line Interface](user-guide/kubectl/kubectl) is a detailed reference on
  the `kubectl` CLI.

* The [API object documentation](http://kubernetes.io/third_party/swagger-ui/)

@@ -26,10 +22,10 @@ title: "Kubernetes Documentation: releases.k8s.io/release-1.1"
* There are example files and walkthroughs in the [examples](../examples/)
  folder.

* If something went wrong, see the [troubleshooting](troubleshooting.html) document for how to debug.
  You should also check the [known issues](user-guide/known-issues.html) for the release you're using.
* If something went wrong, see the [troubleshooting](troubleshooting) document for how to debug.
  You should also check the [known issues](user-guide/known-issues) for the release you're using.

* To report a security issue, see [Reporting a Security Issue](reporting-security-issues.html).
* To report a security issue, see [Reporting a Security Issue](reporting-security-issues).

@@ -1,44 +1,40 @@
---
title: "Kubernetes Cluster Admin Guide"
---

# Kubernetes Cluster Admin Guide

The cluster admin guide is for anyone creating or administering a Kubernetes cluster.
It assumes some familiarity with concepts in the [User Guide](../user-guide/README.html).
It assumes some familiarity with concepts in the [User Guide](../user-guide/README).

## Admin Guide Table of Contents

[Introduction](introduction.html)
[Introduction](introduction)

1. [Components of a cluster](cluster-components.html)
1. [Cluster Management](cluster-management.html)
1. [Components of a cluster](cluster-components)
1. [Cluster Management](cluster-management)
1. Administrating Master Components
1. [The kube-apiserver binary](kube-apiserver.html)
1. [Authorization](authorization.html)
1. [Authentication](authentication.html)
1. [Accessing the api](accessing-the-api.html)
1. [Admission Controllers](admission-controllers.html)
1. [Administrating Service Accounts](service-accounts-admin.html)
1. [Resource Quotas](resource-quota.html)
1. [The kube-scheduler binary](kube-scheduler.html)
1. [The kube-controller-manager binary](kube-controller-manager.html)
1. [Administrating Kubernetes Nodes](node.html)
1. [The kubelet binary](kubelet.html)
1. [Garbage Collection](garbage-collection.html)
1. [The kube-proxy binary](kube-proxy.html)
1. [The kube-apiserver binary](kube-apiserver)
1. [Authorization](authorization)
1. [Authentication](authentication)
1. [Accessing the api](accessing-the-api)
1. [Admission Controllers](admission-controllers)
1. [Administrating Service Accounts](service-accounts-admin)
1. [Resource Quotas](resource-quota)
1. [The kube-scheduler binary](kube-scheduler)
1. [The kube-controller-manager binary](kube-controller-manager)
1. [Administrating Kubernetes Nodes](node)
1. [The kubelet binary](kubelet)
1. [Garbage Collection](garbage-collection)
1. [The kube-proxy binary](kube-proxy)
1. Administrating Addons
1. [DNS](dns.html)
1. [Networking](networking.html)
1. [OVS Networking](ovs-networking.html)
1. [DNS](dns)
1. [Networking](networking)
1. [OVS Networking](ovs-networking)
1. Example Configurations
1. [Multiple Clusters](multi-cluster.html)
1. [High Availability Clusters](high-availability.html)
1. [Large Clusters](cluster-large.html)
1. [Getting started from scratch](../getting-started-guides/scratch.html)
1. [Kubernetes's use of salt](salt.html)
1. [Troubleshooting](cluster-troubleshooting.html)
1. [Multiple Clusters](multi-cluster)
1. [High Availability Clusters](high-availability)
1. [Large Clusters](cluster-large)
1. [Getting started from scratch](../getting-started-guides/scratch)
1. [Kubernetes's use of salt](salt)
1. [Troubleshooting](cluster-troubleshooting)

@@ -1,17 +1,13 @@
---
title: "Configuring APIserver ports"
---

# Configuring APIserver ports

This document describes what ports the Kubernetes apiserver
may serve on and how to reach them. The audience is
cluster administrators who want to customize their cluster
or understand the details.

Most questions about accessing the cluster are covered
in [Accessing the cluster](../user-guide/accessing-the-cluster.html).
in [Accessing the cluster](../user-guide/accessing-the-cluster).

## Ports and IPs Served On

@@ -30,10 +26,10 @@ By default the Kubernetes APIserver serves HTTP on 2 ports:
- default is port 6443, change with `--secure-port` flag.
- default IP is first non-localhost network interface, change with `--bind-address` flag.
- serves HTTPS. Set cert with `--tls-cert-file` and key with `--tls-private-key-file` flag.
- uses token-file or client-certificate based [authentication](authentication.html).
- uses policy-based [authorization](authorization.html).
- uses token-file or client-certificate based [authentication](authentication).
- uses policy-based [authorization](authorization).
3. Removed: ReadOnly Port
- For security reasons, this had to be removed. Use the [service account](../user-guide/service-accounts.html) feature instead.
- For security reasons, this had to be removed. Use the [service account](../user-guide/service-accounts) feature instead.
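
As an aside (not part of the diff): the secure-port flags listed above combine into a single apiserver invocation along these lines. This is a minimal sketch; the certificate and key paths are illustrative assumptions, not values from the document.

```
# Sketch only: flag names come from the list above; file paths are assumed.
kube-apiserver \
  --secure-port=6443 \
  --bind-address=0.0.0.0 \
  --tls-cert-file=/srv/kubernetes/server.cert \
  --tls-private-key-file=/srv/kubernetes/server.key
```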

## Proxies and Firewall rules

@@ -56,7 +52,7 @@ variety of uses cases:
running on the `kubernetes-master` machine. The proxy can use cert-based authentication
or token-based authentication.
2. Processes running in Containers on Kubernetes that need to read from
the apiserver. Currently, these can use a [service account](../user-guide/service-accounts.html).
the apiserver. Currently, these can use a [service account](../user-guide/service-accounts).
3. Scheduler and Controller-manager processes, which need to do read-write
API operations. Currently, these have to run on the same host as the
apiserver and use the Localhost Port. In the future, these will be

@@ -1,33 +1,7 @@
---
title: "Admission Controllers"
---

# Admission Controllers

**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->

- [Admission Controllers](#admission-controllers)
- [What are they?](#what-are-they)
- [Why do I need them?](#why-do-i-need-them)
- [How do I turn on an admission control plug-in?](#how-do-i-turn-on-an-admission-control-plug-in)
- [What does each plug-in do?](#what-does-each-plug-in-do)
- [AlwaysAdmit](#alwaysadmit)
- [AlwaysDeny](#alwaysdeny)
- [DenyExecOnPrivileged (deprecated)](#denyexeconprivileged-deprecated)
- [DenyEscalatingExec](#denyescalatingexec)
- [ServiceAccount](#serviceaccount)
- [SecurityContextDeny](#securitycontextdeny)
- [ResourceQuota](#resourcequota)
- [LimitRanger](#limitranger)
- [InitialResources (experimental)](#initialresources-experimental)
- [NamespaceExists (deprecated)](#namespaceexists-deprecated)
- [NamespaceAutoProvision (deprecated)](#namespaceautoprovision-deprecated)
- [NamespaceLifecycle](#namespacelifecycle)
- [Is there a recommended set of plug-ins to use?](#is-there-a-recommended-set-of-plug-ins-to-use)

<!-- END MUNGE: GENERATED_TOC -->
{% include pagetoc.html %}

## What are they?

@@ -87,12 +61,12 @@ enabling this plug-in.

### ServiceAccount

This plug-in implements automation for [serviceAccounts](../user-guide/service-accounts.html).
This plug-in implements automation for [serviceAccounts](../user-guide/service-accounts).
We strongly recommend using this plug-in if you intend to make use of Kubernetes `ServiceAccount` objects.

### SecurityContextDeny

This plug-in will deny any pod with a [SecurityContext](../user-guide/security-context.html) that defines options that were not available on the `Container`.
This plug-in will deny any pod with a [SecurityContext](../user-guide/security-context) that defines options that were not available on the `Container`.

### ResourceQuota

@@ -100,7 +74,7 @@ This plug-in will observe the incoming request and ensure that it does not viola
enumerated in the `ResourceQuota` object in a `Namespace`. If you are using `ResourceQuota`
objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints.

See the [resourceQuota design doc](../design/admission_control_resource_quota.html) and the [example of Resource Quota](resourcequota/) for more details.
See the [resourceQuota design doc](../design/admission_control_resource_quota) and the [example of Resource Quota](resourcequota/) for more details.

It is strongly encouraged that this plug-in is configured last in the sequence of admission control plug-ins. This is
so that quota is not prematurely incremented only for the request to be rejected later in admission control.

@@ -113,7 +87,7 @@ your Kubernetes deployment, you MUST use this plug-in to enforce those constrain
be used to apply default resource requests to Pods that don't specify any; currently, the default LimitRanger
applies a 0.1 CPU requirement to all Pods in the `default` namespace.

See the [limitRange design doc](../design/admission_control_limit_range.html) and the [example of Limit Range](limitrange/) for more details.
See the [limitRange design doc](../design/admission_control_limit_range) and the [example of Limit Range](limitrange/) for more details.

### InitialResources (experimental)

@@ -122,7 +96,7 @@ then the plug-in auto-populates a compute resource request based on historical u
If there is not enough data to make a decision the Request is left unchanged.
When the plug-in sets a compute resource request, it annotates the pod with information on what compute resources it auto-populated.

See the [InitialResouces proposal](../proposals/initial-resources.html) for more details.
See the [InitialResouces proposal](../proposals/initial-resources) for more details.

### NamespaceExists (deprecated)

@@ -154,9 +128,9 @@ Yes.
For Kubernetes 1.0, we strongly recommend running the following set of admission control plug-ins (order matters):

```
{% raw %}

--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
{% endraw %}

```

@@ -1,10 +1,6 @@
---
title: "Authentication Plugins"
---

# Authentication Plugins

Kubernetes uses client certificates, tokens, or http basic auth to authenticate users for API calls.

**Client certificate authentication** is enabled by passing the `--client-ca-file=SOMEFILE`

@@ -1,13 +1,8 @@
---
title: "Authorization Plugins"
---

# Authorization Plugins

In Kubernetes, authorization happens as a separate step from authentication.
See the [authentication documentation](authentication.html) for an
See the [authentication documentation](authentication) for an
overview of authentication.

Authorization applies to all HTTP accesses on the main (secure) apiserver port.

@@ -94,25 +89,25 @@ To permit an action Policy with an unset namespace applies regardless of namespa
A service account automatically generates a user. The user's name is generated according to the naming convention:

```
{% raw %}

system:serviceaccount:<namespace>:<serviceaccountname>
{% endraw %}

```

Creating a new namespace also causes a new service account to be created, of this form:*

```
{% raw %}

system:serviceaccount:<namespace>:default
{% endraw %}

```

For example, if you wanted to grant the default service account in the kube-system full privilege to the API, you would add this line to your policy file:

{% highlight json %}
{% raw %}

{"user":"system:serviceaccount:kube-system:default"}
{% endraw %}

{% endhighlight %}

The apiserver will need to be restarted to pickup the new policy lines.

@@ -123,11 +118,11 @@ Other implementations can be developed fairly easily.
The APIserver calls the Authorizer interface:

{% highlight go %}
{% raw %}

type Authorizer interface {
Authorize(a Attributes) error
}
{% endraw %}

{% endhighlight %}

to determine whether or not to allow each API action.
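
As an aside (not part of the diff): the policy file referred to above is, in this release, a plain text file with one JSON object per line. The sketch below shows the shape under that assumption; only the service-account line is taken from the text above, the other two entries and their field names are illustrative and should be checked against the authorization doc for your release.

```
{"user":"admin"}
{"user":"alice", "resource": "pods", "readonly": true}
{"user":"system:serviceaccount:kube-system:default"}
```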

@@ -1,10 +1,6 @@
---
title: "Kubernetes Cluster Admin Guide: Cluster Components"
---

# Kubernetes Cluster Admin Guide: Cluster Components

This document outlines the various binary components that need to run to
deliver a functioning Kubernetes cluster.

@@ -19,7 +15,7 @@ unsatisfied).
Master components could in theory be run on any node in the cluster. However,
for simplicity, current set up scripts typically start all master components on
the same VM, and does not run user containers on this VM. See
[high-availability.md](high-availability.html) for an example multi-master-VM setup.
[high-availability.md](high-availability) for an example multi-master-VM setup.

Even in the future, when Kubernetes is fully self-hosting, it will probably be
wise to only allow master components to schedule on a subset of nodes, to limit

@@ -28,19 +24,19 @@ node-compromising security exploit.

### kube-apiserver

[kube-apiserver](kube-apiserver.html) exposes the Kubernetes API; it is the front-end for the
[kube-apiserver](kube-apiserver) exposes the Kubernetes API; it is the front-end for the
Kubernetes control plane. It is designed to scale horizontally (i.e., one scales
it by running more of them-- [high-availability.md](high-availability.html)).
it by running more of them-- [high-availability.md](high-availability)).

### etcd

[etcd](etcd.html) is used as Kubernetes' backing store. All cluster data is stored here.
[etcd](etcd) is used as Kubernetes' backing store. All cluster data is stored here.
Proper administration of a Kubernetes cluster includes a backup plan for etcd's
data.

### kube-controller-manager

[kube-controller-manager](kube-controller-manager.html) is a binary that runs controllers, which are the
[kube-controller-manager](kube-controller-manager) is a binary that runs controllers, which are the
background threads that handle routine tasks in the cluster. Logically, each
controller is a separate process, but to reduce the number of moving pieces in
the system, they are all compiled into a single binary and run in a single

@@ -61,7 +57,7 @@ These controllers include:

### kube-scheduler

[kube-scheduler](kube-scheduler.html) watches newly created pods that have no node assigned, and
[kube-scheduler](kube-scheduler) watches newly created pods that have no node assigned, and
selects a node for them to run on.

### addons

@@ -89,7 +85,7 @@ the Kubernetes runtime environment.

### kubelet

[kubelet](kubelet.html) is the primary node agent. It:
[kubelet](kubelet) is the primary node agent. It:
* Watches for pods that have been assigned to its node (either by apiserver
or via local configuration file) and:
* Mounts the pod's required volumes

@@ -102,7 +98,7 @@ the Kubernetes runtime environment.

### kube-proxy

[kube-proxy](kube-proxy.html) enables the Kubernetes service abstraction by maintaining
[kube-proxy](kube-proxy) enables the Kubernetes service abstraction by maintaining
network rules on the host and performing connection forwarding.

### docker
@ -1,71 +1,68 @@
|
|||
---
|
||||
title: "Using Large Clusters"
|
||||
section: guides
|
||||
---
|
||||
|
||||
## Support
|
||||
|
||||
At v1.0, Kubernetes supports clusters up to 100 nodes with 30 pods per node and 1-2 containers per pod.
|
||||
|
||||
## Table of Contents
|
||||
|
||||
{% include pagetoc.html %}
|
||||
|
||||
## Setup
|
||||
|
||||
A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by a "master" (the cluster-level control plane).
|
||||
|
||||
Normally the number of nodes in a cluster is controlled by the the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/release-1.1/cluster/gce/config-default.sh)).
|
||||
|
||||
Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run in to quota issues and fail to bring the cluster up.
|
||||
|
||||
When setting up a large Kubernetes cluster, the following issues must be considered.
|
||||
|
||||
### Quota Issues
|
||||
|
||||
To avoid running into cloud provider quota issues, when creating a cluster with many nodes, consider:
|
||||
* Increase the quota for things like CPU, IPs, etc.
|
||||
* In [GCE, for example,](https://cloud.google.com/compute/docs/resource-quotas) you'll want to increase the quota for:
|
||||
* CPUs
|
||||
* VM instances
|
||||
* Total persistent disk reserved
|
||||
* In-use IP addresses
|
||||
* Firewall Rules
|
||||
* Forwarding rules
|
||||
* Routes
|
||||
* Target pools
|
||||
* Gating the setup script so that it brings up new node VMs in smaller batches with waits in between, because some cloud providers rate limit the creation of VMs.
|
||||
|
||||
### Addon Resources
|
||||
|
||||
To prevent memory leaks or other resource issues in [cluster addons](https://releases.k8s.io/release-1.1/cluster/addons) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](http://pr.k8s.io/10653/files) and [#10778](http://pr.k8s.io/10778/files)).
|
||||
|
||||
For example:
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
containers:
|
||||
- image: gcr.io/google_containers/heapster:v0.15.0
|
||||
name: heapster
|
||||
resources:
|
||||
limits:
|
||||
cpu: 100m
|
||||
memory: 200Mi
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
These limits, however, are based on data collected from addons running on 4-node clusters (see [#10335](http://issue.k8s.io/10335#issuecomment-117861225)). The addons consume a lot more resources when running on large deployment clusters (see [#5880](http://issue.k8s.io/5880#issuecomment-113984085)). So, if a large cluster is deployed without adjusting these values, the addons may continuously get killed because they keep hitting the limits.
|
||||
|
||||
To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following:
|
||||
* Scale memory and CPU limits for each of the following addons, if used, along with the size of cluster (there is one replica of each handling the entire cluster so memory and CPU usage tends to grow proportionally with size/load on cluster):
|
||||
* Heapster ([GCM/GCL backed](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/google/heapster-controller.yaml), [InfluxDB backed](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/influxdb/heapster-controller.yaml), [InfluxDB/GCL backed](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/googleinfluxdb/heapster-controller-combined.yaml), [standalone](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/standalone/heapster-controller.yaml))
|
||||
* [InfluxDB and Grafana](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
|
||||
* [skydns, kube2sky, and dns etcd](http://releases.k8s.io/release-1.1/cluster/addons/dns/skydns-rc.yaml.in)
|
||||
* [Kibana](http://releases.k8s.io/release-1.1/cluster/addons/fluentd-elasticsearch/kibana-controller.yaml)
|
||||
* Scale number of replicas for the following addons, if used, along with the size of cluster (there are multiple replicas of each so increasing replicas should help handle increased load, but, since load per replica also increases slightly, also consider increasing CPU/memory limits):
|
||||
* [elasticsearch](http://releases.k8s.io/release-1.1/cluster/addons/fluentd-elasticsearch/es-controller.yaml)
|
||||
* Increase memory and CPU limits slightly for each of the following addons, if used, along with the size of cluster (there is one replica per node but CPU/memory usage increases slightly along with cluster load/size as well):
|
||||
* [FluentD with ElasticSearch Plugin](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/fluentd-es/fluentd-es.yaml)
|
||||
* [FluentD with GCP Plugin](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)
|
||||
|
||||
---
|
||||
title: "Using Large Clusters"
|
||||
---
|
||||
|
||||
## Support
|
||||
|
||||
At v1.0, Kubernetes supports clusters up to 100 nodes with 30 pods per node and 1-2 containers per pod.
|
||||
|
||||
|
||||
|
||||
{% include pagetoc.html %}
|
||||
|
||||
## Setup
|
||||
|
||||
A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by a "master" (the cluster-level control plane).
|
||||
|
||||
Normally the number of nodes in a cluster is controlled by the the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/release-1.1/cluster/gce/config-default.sh)).
|
||||
|
||||
Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run in to quota issues and fail to bring the cluster up.
|
||||
|
||||
When setting up a large Kubernetes cluster, the following issues must be considered.
|
||||
|
||||
### Quota Issues
|
||||
|
||||
To avoid running into cloud provider quota issues, when creating a cluster with many nodes, consider:
|
||||
* Increase the quota for things like CPU, IPs, etc.
|
||||
* In [GCE, for example,](https://cloud.google.com/compute/docs/resource-quotas) you'll want to increase the quota for:
|
||||
* CPUs
|
||||
* VM instances
|
||||
* Total persistent disk reserved
|
||||
* In-use IP addresses
|
||||
* Firewall Rules
|
||||
* Forwarding rules
|
||||
* Routes
|
||||
* Target pools
|
||||
* Gating the setup script so that it brings up new node VMs in smaller batches with waits in between, because some cloud providers rate limit the creation of VMs.
|
||||
|
||||
### Addon Resources
|
||||
|
||||
To prevent memory leaks or other resource issues in [cluster addons](https://releases.k8s.io/release-1.1/cluster/addons) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](http://pr.k8s.io/10653/files) and [#10778](http://pr.k8s.io/10778/files)).
|
||||
|
||||
For example:
|
||||
|
||||
{% highlight yaml %}
|
||||
containers:
|
||||
- image: gcr.io/google_containers/heapster:v0.15.0
|
||||
name: heapster
|
||||
resources:
|
||||
limits:
|
||||
cpu: 100m
|
||||
memory: 200Mi
|
||||
{% endhighlight %}
|
||||
|
||||
These limits, however, are based on data collected from addons running on 4-node clusters (see [#10335](http://issue.k8s.io/10335#issuecomment-117861225)). The addons consume a lot more resources when running on large deployment clusters (see [#5880](http://issue.k8s.io/5880#issuecomment-113984085)). So, if a large cluster is deployed without adjusting these values, the addons may continuously get killed because they keep hitting the limits.
|
||||
|
||||
To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following:
|
||||
* Scale memory and CPU limits for each of the following addons, if used, along with the size of cluster (there is one replica of each handling the entire cluster so memory and CPU usage tends to grow proportionally with size/load on cluster):
|
||||
* Heapster ([GCM/GCL backed](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/google/heapster-controller.yaml), [InfluxDB backed](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/influxdb/heapster-controller.yaml), [InfluxDB/GCL backed](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/googleinfluxdb/heapster-controller-combined.yaml), [standalone](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/standalone/heapster-controller.yaml))
|
||||
* [InfluxDB and Grafana](http://releases.k8s.io/release-1.1/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
|
||||
* [skydns, kube2sky, and dns etcd](http://releases.k8s.io/release-1.1/cluster/addons/dns/skydns-rc.yaml.in)
|
||||
* [Kibana](http://releases.k8s.io/release-1.1/cluster/addons/fluentd-elasticsearch/kibana-controller.yaml)
|
||||
* Scale number of replicas for the following addons, if used, along with the size of cluster (there are multiple replicas of each so increasing replicas should help handle increased load, but, since load per replica also increases slightly, also consider increasing CPU/memory limits):
|
||||
* [elasticsearch](http://releases.k8s.io/release-1.1/cluster/addons/fluentd-elasticsearch/es-controller.yaml)
|
||||
* Increase memory and CPU limits slightly for each of the following addons, if used, along with the size of cluster (there is one replica per node but CPU/memory usage increases slightly along with cluster load/size as well):
|
||||
* [FluentD with ElasticSearch Plugin](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/fluentd-es/fluentd-es.yaml)
|
||||
* [FluentD with GCP Plugin](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)
|
||||
|
||||
For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](../user-guide/compute-resources.html#troubleshooting).
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Cluster Management"
|
||||
---
|
||||
|
||||
|
||||
# Cluster Management
|
||||
|
||||
This document describes several topics related to the lifecycle of a cluster: creating a new cluster,
|
||||
upgrading your cluster's
|
||||
master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a
|
||||
|
@ -12,7 +8,7 @@ running cluster.
|
|||
|
||||
## Creating and configuring a Cluster
|
||||
|
||||
To install Kubernetes on a set of machines, consult one of the existing [Getting Started guides](../../docs/getting-started-guides/README.html) depending on your environment.
|
||||
To install Kubernetes on a set of machines, consult one of the existing [Getting Started guides](/{{page.version}}/docs/getting-started-guides/README) depending on your environment.
|
||||
|
||||
## Upgrading a cluster
|
||||
|
||||
|
@ -51,17 +47,17 @@ Get its usage by running `cluster/gce/upgrade.sh -h`.
|
|||
For example, to upgrade just your master to a specific version (v1.0.2):
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
cluster/gce/upgrade.sh -M v1.0.2
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Alternatively, to upgrade your entire cluster to the latest stable release:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
cluster/gce/upgrade.sh release/stable
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Other platforms
|
||||
|
@ -75,9 +71,9 @@ If your cluster runs short on resources you can easily add more machines to it i
|
|||
If you're using GCE or GKE it's done by resizing Instance Group managing your Nodes. It can be accomplished by modifying number of instances on `Compute > Compute Engine > Instance groups > your group > Edit group` [Google Cloud Console page](https://console.developers.google.com) or using gcloud CLI:
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
|
||||
gcloud compute instance-groups managed --zone compute-zone resize my-cluster-minon-group --new-size 42
|
||||
{% endraw %}
|
||||
|
||||
```
|
||||
|
||||
Instance Group will take care of putting appropriate image on new machines and start them, while Kubelet will register its Node with API server to make it available for scheduling. If you scale the instance group down, system will randomly choose Nodes to kill.
|
||||
|
@ -105,9 +101,9 @@ The initial values of the autoscaler parameters set by ``kube-up.sh`` and some m
|
|||
or using gcloud CLI:
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
|
||||
gcloud preview autoscaler --zone compute-zone <command>
|
||||
{% endraw %}
|
||||
|
||||
```
|
||||
|
||||
Note that autoscaling will work properly only if node metrics are accessible in Google Cloud Monitoring.
|
||||
|
@ -127,9 +123,9 @@ If you want more control over the upgrading process, you may use the following w
|
|||
Mark the node to be rebooted as unschedulable:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
kubectl replace nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": true}}'
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
This keeps new pods from landing on the node while you are trying to get them off.
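One way to double-check that the node is now marked unschedulable (assuming `$NODENAME` is still set from the previous command) is to inspect its spec:

{% highlight console %}
{% raw %}
kubectl get nodes $NODENAME -o yaml | grep unschedulable
{% endraw %}
{% endhighlight %}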
|
||||
|
@ -139,9 +135,9 @@ Get the pods off the machine, via any of the following strategies:
|
|||
* Delete pods with:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
kubectl delete pods $PODNAME
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
For pods with a replication controller, the pod will eventually be replaced by a new pod which will be scheduled to a new node. Additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
|
||||
|
@ -153,14 +149,14 @@ Perform maintenance work on the node.
|
|||
Make the node schedulable again:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
kubectl replace nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": false}}'
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
If you deleted the node's VM instance and created a new one, then a new schedulable node resource will
|
||||
be created automatically (if you're using a cloud provider that supports
|
||||
node discovery; currently this is only Google Compute Engine, not including CoreOS on Google Compute Engine using kube-register). See [Node](node.html) for more details.
|
||||
node discovery; currently this is only Google Compute Engine, not including CoreOS on Google Compute Engine using kube-register). See [Node](node) for more details.
|
||||
|
||||
## Advanced Topics
|
||||
|
||||
|
@ -197,10 +193,10 @@ for changes to this variable to take effect.
|
|||
You can use the `kube-version-change` utility to convert config files between different API versions.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ hack/build-go.sh cmd/kube-version-change
|
||||
$ _output/local/go/bin/kube-version-change -i myPod.v1beta3.yaml -o myPod.v1.yaml
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
|
||||
|
|
|
@ -1,14 +1,10 @@
|
|||
---
|
||||
title: "Cluster Troubleshooting"
|
||||
---
|
||||
|
||||
|
||||
# Cluster Troubleshooting
|
||||
|
||||
This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the
|
||||
problem you are experiencing. See
|
||||
the [application troubleshooting guide](../user-guide/application-troubleshooting.html) for tips on application debugging.
|
||||
You may also visit [troubleshooting document](../troubleshooting.html) for more information.
|
||||
the [application troubleshooting guide](../user-guide/application-troubleshooting) for tips on application debugging.
|
||||
You may also visit the [troubleshooting document](../troubleshooting) for more information.
|
||||
|
||||
## Listing your cluster
|
||||
|
||||
|
@ -17,9 +13,9 @@ The first thing to debug in your cluster is if your nodes are all registered cor
|
|||
Run
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
kubectl get nodes
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
And verify that all of the nodes you expect to see are present and that they are all in the `Ready` state.
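If a node you expect is missing or is not `Ready`, a reasonable next step is to inspect it in more detail, substituting one of the names from the listing for `$NODENAME`:

{% highlight sh %}
{% raw %}
kubectl describe nodes $NODENAME
{% endraw %}
{% endhighlight %}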
|
||||
|
@ -92,7 +88,7 @@ Mitigations:
|
|||
- Action: use the IaaS provider's reliable storage (e.g. GCE PD or AWS EBS volume) for VMs with apiserver+etcd
|
||||
- Mitigates: Apiserver backing storage lost
|
||||
|
||||
- Action: Use (experimental) [high-availability](high-availability.html) configuration
|
||||
- Action: Use (experimental) [high-availability](high-availability) configuration
|
||||
- Mitigates: Master VM shutdown or master components (scheduler, API server, controller-manager) crashing
|
||||
- Will tolerate one or more simultaneous node or component failures
|
||||
- Mitigates: Apiserver backing storage (i.e., etcd's data directory) lost
|
||||
|
@ -111,7 +107,7 @@ Mitigations:
|
|||
- Mitigates: Node shutdown
|
||||
- Mitigates: Kubelet software fault
|
||||
|
||||
- Action: [Multiple independent clusters](multi-cluster.html) (and avoid making risky changes to all clusters at once)
|
||||
- Action: [Multiple independent clusters](multi-cluster) (and avoid making risky changes to all clusters at once)
|
||||
- Mitigates: Everything listed above.
|
||||
|
||||
|
||||
|
|
|
@ -1,31 +1,7 @@
|
|||
---
|
||||
title: "Daemon Sets"
|
||||
---
|
||||
|
||||
|
||||
# Daemon Sets
|
||||
|
||||
**Table of Contents**
|
||||
<!-- BEGIN MUNGE: GENERATED_TOC -->
|
||||
|
||||
- [Daemon Sets](#daemon-sets)
|
||||
- [What is a _Daemon Set_?](#what-is-a-daemon-set)
|
||||
- [Writing a DaemonSet Spec](#writing-a-daemonset-spec)
|
||||
- [Required Fields](#required-fields)
|
||||
- [Pod Template](#pod-template)
|
||||
- [Pod Selector](#pod-selector)
|
||||
- [Running Pods on Only Some Nodes](#running-pods-on-only-some-nodes)
|
||||
- [How Daemon Pods are Scheduled](#how-daemon-pods-are-scheduled)
|
||||
- [Communicating with DaemonSet Pods](#communicating-with-daemonset-pods)
|
||||
- [Updating a DaemonSet](#updating-a-daemonset)
|
||||
- [Alternatives to Daemon Set](#alternatives-to-daemon-set)
|
||||
- [Init Scripts](#init-scripts)
|
||||
- [Bare Pods](#bare-pods)
|
||||
- [Static Pods](#static-pods)
|
||||
- [Replication Controller](#replication-controller)
|
||||
- [Caveats](#caveats)
|
||||
|
||||
<!-- END MUNGE: GENERATED_TOC -->
|
||||
{% include pagetoc.html %}
|
||||
|
||||
## What is a _Daemon Set_?
|
||||
|
||||
|
@ -49,8 +25,8 @@ but with different flags and/or different memory and cpu requests for different
|
|||
### Required Fields
|
||||
|
||||
As with all other Kubernetes config, a DaemonSet needs `apiVersion`, `kind`, and `metadata` fields. For
|
||||
general information about working with config files, see [here](../user-guide/simple-yaml.html),
|
||||
[here](../user-guide/configuring-containers.html), and [here](../user-guide/working-with-resources.html).
|
||||
general information about working with config files, see [here](../user-guide/simple-yaml),
|
||||
[here](../user-guide/configuring-containers), and [here](../user-guide/working-with-resources).
|
||||
|
||||
A DaemonSet also needs a [`.spec`](../devel/api-conventions.html#spec-and-status) section.
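To make the required fields concrete, here is a minimal, hypothetical DaemonSet created straight from the shell; it assumes the `extensions/v1beta1` API group is enabled in your cluster, and the name, labels, and image are placeholders to adapt to your own daemon:

{% highlight sh %}
{% raw %}
# A minimal DaemonSet sketch piped to kubectl; the name, labels, and image are placeholders.
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: example-daemon
spec:
  template:
    metadata:
      labels:
        app: example-daemon
    spec:
      containers:
      - name: example-daemon
        image: gcr.io/google_containers/pause
EOF
{% endraw %}
{% endhighlight %}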
|
||||
|
||||
|
@ -59,20 +35,20 @@ A DaemonSet also needs a [`.spec`](../devel/api-conventions.html#spec-and-status
|
|||
The `.spec.template` is the only required field of the `.spec`.
|
||||
|
||||
The `.spec.template` is a [pod template](../user-guide/replication-controller.html#pod-template).
|
||||
It has exactly the same schema as a [pod](../user-guide/pods.html), except
|
||||
It has exactly the same schema as a [pod](../user-guide/pods), except
|
||||
it is nested and does not have an `apiVersion` or `kind`.
|
||||
|
||||
In addition to required fields for a pod, a pod template in a DaemonSet has to specify appropriate
|
||||
labels (see [pod selector](#pod-selector)).
|
||||
|
||||
A pod template in a DaemonSet must have a [`RestartPolicy`](../user-guide/pod-states.html)
|
||||
A pod template in a DaemonSet must have a [`RestartPolicy`](../user-guide/pod-states)
|
||||
equal to `Always`, or be unspecified, which defaults to `Always`.
|
||||
|
||||
### Pod Selector
|
||||
|
||||
The `.spec.selector` field is a pod selector. It works the same as the `.spec.selector` of
|
||||
a [ReplicationController](../user-guide/replication-controller.html) or
|
||||
[Job](../user-guide/jobs.html).
|
||||
a [ReplicationController](../user-guide/replication-controller) or
|
||||
[Job](../user-guide/jobs).
|
||||
|
||||
If the `.spec.selector` is specified, it must equal the `.spec.template.metadata.labels`. If not
|
||||
specified, they default to being equal. Config where these do not match will be rejected by the API.
|
||||
|
@ -87,7 +63,7 @@ a node for testing.
|
|||
|
||||
If you specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will
|
||||
create pods on nodes which match that [node
|
||||
selector](../user-guide/node-selection/README.html).
|
||||
selector](../user-guide/node-selection/README).
|
||||
|
||||
If you do not specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will
|
||||
create pods on all nodes.
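If you go the `nodeSelector` route, the label must already exist on the nodes you want the daemon to run on; a hypothetical example (the node name, label key, and value are placeholders) would be:

{% highlight console %}
{% raw %}
kubectl label nodes <node-name> app=frontend-node
{% endraw %}
{% endhighlight %}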
|
||||
|
@ -163,14 +139,14 @@ use a Daemon Set rather than creating individual pods.
|
|||
### Static Pods
|
||||
|
||||
It is possible to create pods by writing a file to a certain directory watched by Kubelet. These
|
||||
are called [static pods](static-pods.html).
|
||||
are called [static pods](static-pods).
|
||||
Unlike DaemonSet, static pods cannot be managed with kubectl
|
||||
or other Kubernetes API clients. Static pods do not depend on the apiserver, making them useful
|
||||
in cluster bootstrapping cases. Also, static pods may be deprecated in the future.
|
||||
|
||||
### Replication Controller
|
||||
|
||||
Daemon Set are similar to [Replication Controllers](../user-guide/replication-controller.html) in that
|
||||
Daemon Sets are similar to [Replication Controllers](../user-guide/replication-controller) in that
|
||||
they both create pods, and those pods have processes which are not expected to terminate (e.g. web servers,
|
||||
storage servers).
|
||||
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "DNS Integration with Kubernetes"
|
||||
---
|
||||
|
||||
|
||||
# DNS Integration with Kubernetes
|
||||
|
||||
As of Kubernetes 0.8, DNS is offered as a [cluster add-on](http://releases.k8s.io/release-1.1/cluster/addons/README.md).
|
||||
If enabled, a DNS Pod and Service will be scheduled on the cluster, and the kubelets will be
|
||||
configured to tell individual containers to use the DNS Service's IP to resolve DNS names.
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "etcd"
|
||||
---
|
||||
|
||||
|
||||
# etcd
|
||||
|
||||
[etcd](https://coreos.com/etcd/docs/2.0.12/) is a highly-available key value
|
||||
store which Kubernetes uses for persistent storage of all of its REST API
|
||||
objects.
|
||||
|
@ -34,7 +30,7 @@ be run on master VMs. The default location that kubelet scans for manifests is
|
|||
## Kubernetes's usage of etcd
|
||||
|
||||
By default, Kubernetes objects are stored under the `/registry` key in etcd.
|
||||
This path can be prefixed by using the [kube-apiserver](kube-apiserver.html) flag
|
||||
This path can be prefixed by using the [kube-apiserver](kube-apiserver) flag
|
||||
`--etcd-prefix="/foo"`.
|
||||
|
||||
`etcd` is the only place that Kubernetes keeps state.
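As a read-only illustration, you can list the top-level keys under that prefix directly with `etcdctl`, assuming you run it somewhere that can reach the cluster's etcd:

{% highlight sh %}
{% raw %}
etcdctl ls /registry
{% endraw %}
{% endhighlight %}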
|
||||
|
@ -46,9 +42,9 @@ test key. On your master VM (or somewhere with firewalls configured such that
|
|||
you can talk to your cluster's etcd), try:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
curl -fs -X PUT "http://${host}:${port}/v2/keys/_test"
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
|
||||
|
|
|
@ -1,6 +1,5 @@
|
|||
---
|
||||
title: "Configuring Garbage Collection"
|
||||
section: guides
|
||||
---
|
||||
### Introduction
|
||||
|
||||
|
|
|
@ -1,241 +1,230 @@
|
|||
---
|
||||
title: "High Availability Kubernetes Clusters"
|
||||
section: guides
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
This document describes how to build a high-availability (HA) Kubernetes cluster. This is a fairly advanced topic.
|
||||
Users who merely want to experiment with Kubernetes are encouraged to use configurations that are simpler to set up such as
|
||||
the simple [Docker based single node cluster instructions](../../docs/getting-started-guides/docker.html),
|
||||
or try [Google Container Engine](https://cloud.google.com/container-engine/) for hosted Kubernetes.
|
||||
|
||||
Also, at this time high availability support for Kubernetes is not continuously tested in our end-to-end (e2e) testing. We will
|
||||
be working to add this continuous testing, but for now the single-node master installations are more heavily tested.
|
||||
|
||||
## Table of Contents
|
||||
|
||||
{% include pagetoc.html %}
|
||||
|
||||
## Overview
|
||||
|
||||
Setting up a truly reliable, highly available distributed system requires a number of steps, it is akin to
|
||||
wearing underwear, pants, a belt, suspenders, another pair of underwear, and another pair of pants. We go into each
|
||||
of these steps in detail, but a summary is given here to help guide and orient the user.
|
||||
|
||||
The steps involved are as follows:
|
||||
* [Creating the reliable constituent nodes that collectively form our HA master implementation.](#reliable-nodes)
|
||||
* [Setting up a redundant, reliable storage layer with clustered etcd.](#establishing-a-redundant-reliable-data-storage-layer)
|
||||
* [Starting replicated, load balanced Kubernetes API servers](#replicated-api-servers)
|
||||
* [Setting up master-elected Kubernetes scheduler and controller-manager daemons](#master-elected-components)
|
||||
|
||||
Here's what the system should look like when it's finished:
|
||||

|
||||
|
||||
Ready? Let's get started.
|
||||
|
||||
## Initial set-up
|
||||
|
||||
The remainder of this guide assumes that you are setting up a 3-node clustered master, where each machine is running some flavor of Linux.
|
||||
Examples in the guide are given for Debian distributions, but they should be easily adaptable to other distributions.
|
||||
Likewise, this set up should work whether you are running in a public or private cloud provider, or if you are running
|
||||
on bare metal.
|
||||
|
||||
The easiest way to implement an HA Kubernetes cluster is to start with an existing single-master cluster. The
|
||||
instructions at [https://get.k8s.io](https://get.k8s.io)
|
||||
describe easy installation for single-master clusters on a variety of platforms.
|
||||
|
||||
## Reliable nodes
|
||||
|
||||
On each master node, we are going to run a number of processes that implement the Kubernetes API. The first step in making these reliable is
|
||||
to make sure that each automatically restarts when it fails. To achieve this, we need to install a process watcher. We choose to use
|
||||
the `kubelet` that we run on each of the worker nodes. This is convenient, since we can use containers to distribute our binaries, we can
|
||||
establish resource limits, and introspect the resource usage of each daemon. Of course, we also need something to monitor the kubelet
|
||||
itself (insert who watches the watcher jokes here). For Debian systems, we choose monit, but there are a number of alternate
|
||||
choices. For example, on systemd-based systems (e.g. RHEL, CentOS), you can run 'systemctl enable kubelet'.
|
||||
|
||||
If you are extending from a standard Kubernetes installation, the `kubelet` binary should already be present on your system. You can run
|
||||
`which kubelet` to determine if the binary is in fact installed. If it is not installed,
|
||||
you should install the [kubelet binary](https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/linux/amd64/kubelet), the
|
||||
[kubelet init file](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/kubelet/initd) and [high-availability/default-kubelet](high-availability/default-kubelet)
|
||||
scripts.
|
||||
|
||||
If you are using monit, you should also install the monit daemon (`apt-get install monit`) and the [high-availability/monit-kubelet](high-availability/monit-kubelet) and
|
||||
[high-availability/monit-docker](high-availability/monit-docker) configs.
|
||||
|
||||
On systemd systems you `systemctl enable kubelet` and `systemctl enable docker`.
|
||||
|
||||
|
||||
## Establishing a redundant, reliable data storage layer
|
||||
|
||||
The central foundation of a highly available solution is a redundant, reliable storage layer. The number one rule of high-availability is
|
||||
to protect the data. Whatever else happens, whatever catches on fire, if you have the data, you can rebuild. If you lose the data, you're
|
||||
done.
|
||||
|
||||
Clustered etcd already replicates your storage to all master instances in your cluster. This means that to lose data, all three nodes would need
|
||||
to have their physical (or virtual) disks fail at the same time. The probability that this occurs is relatively low, so for many people
|
||||
running a replicated etcd cluster is likely reliable enough. You can add additional reliability by increasing the
|
||||
size of the cluster from three to five nodes. If that is still insufficient, you can add
|
||||
[even more redundancy to your storage layer](#even-more-reliable-storage).
|
||||
|
||||
### Clustering etcd
|
||||
|
||||
The full details of clustering etcd are beyond the scope of this document, lots of details are given on the
|
||||
[etcd clustering page](https://github.com/coreos/etcd/blob/master/Documentation/clustering.md). This example walks through
|
||||
a simple cluster set up, using etcd's built in discovery to build our cluster.
|
||||
|
||||
First, hit the etcd discovery service to create a new token:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
curl https://discovery.etcd.io/new?size=3
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
On each node, copy the [etcd.yaml](high-availability/etcd.yaml) file into `/etc/kubernetes/manifests/etcd.yaml`
|
||||
|
||||
The kubelet on each node actively monitors the contents of that directory, and it will create an instance of the `etcd`
|
||||
server from the definition of the pod specified in `etcd.yaml`.
|
||||
|
||||
Note that in `etcd.yaml` you should substitute the token URL you got above for `${DISCOVERY_TOKEN}` on all three machines,
|
||||
and you should substitute a different name (e.g. `node-1`) for ${NODE_NAME} and the correct IP address
|
||||
for `${NODE_IP}` on each machine.
|
||||
|
||||
|
||||
#### Validating your cluster
|
||||
|
||||
Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate with
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
etcdctl member list
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
and
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
etcdctl cluster-health
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
You can also validate that this is working with `etcdctl set foo bar` on one node, and `etcd get foo`
|
||||
on a different node.
|
||||
|
||||
### Even more reliable storage
|
||||
|
||||
Of course, if you are interested in increased data reliability, there are further options which makes the place where etcd
|
||||
installs it's data even more reliable than regular disks (belts *and* suspenders, ftw!).
|
||||
|
||||
If you use a cloud provider, then they usually provide this
|
||||
for you, for example [Persistent Disk](https://cloud.google.com/compute/docs/disks/persistent-disks) on the Google Cloud Platform. These
|
||||
are block-device persistent storage that can be mounted onto your virtual machine. Other cloud providers provide similar solutions.
|
||||
|
||||
If you are running on physical machines, you can also use network attached redundant storage using an iSCSI or NFS interface.
|
||||
Alternatively, you can run a clustered file system like Gluster or Ceph. Finally, you can also run a RAID array on each physical machine.
|
||||
|
||||
Regardless of how you choose to implement it, if you chose to use one of these options, you should make sure that your storage is mounted
|
||||
to each machine. If your storage is shared between the three masters in your cluster, you should create a different directory on the storage
|
||||
for each node. Throughout these instructions, we assume that this storage is mounted to your machine in `/var/etcd/data`
|
||||
|
||||
|
||||
## Replicated API Servers
|
||||
|
||||
Once you have replicated etcd set up correctly, we will also install the apiserver using the kubelet.
|
||||
|
||||
### Installing configuration files
|
||||
|
||||
First you need to create the initial log file, so that Docker mounts a file instead of a directory:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
touch /var/log/kube-apiserver.log
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Next, you need to create a `/srv/kubernetes/` directory on each node. This directory includes:
|
||||
* basic_auth.csv - basic auth user and password
|
||||
* ca.crt - Certificate Authority cert
|
||||
* known_tokens.csv - tokens that entities (e.g. the kubelet) can use to talk to the apiserver
|
||||
* kubecfg.crt - Client certificate, public key
|
||||
* kubecfg.key - Client certificate, private key
|
||||
* server.cert - Server certificate, public key
|
||||
* server.key - Server certificate, private key
|
||||
|
||||
The easiest way to create this directory, may be to copy it from the master node of a working cluster, or you can manually generate these files yourself.
|
||||
|
||||
### Starting the API Server
|
||||
|
||||
Once these files exist, copy the [kube-apiserver.yaml](high-availability/kube-apiserver.yaml) into `/etc/kubernetes/manifests/` on each master node.
|
||||
|
||||
The kubelet monitors this directory, and will automatically create an instance of the `kube-apiserver` container using the pod definition specified
|
||||
in the file.
|
||||
|
||||
### Load balancing
|
||||
|
||||
At this point, you should have 3 apiservers all working correctly. If you set up a network load balancer, you should
|
||||
be able to access your cluster via that load balancer, and see traffic balancing between the apiserver instances. Setting
|
||||
up a load balancer will depend on the specifics of your platform, for example instructions for the Google Cloud
|
||||
Platform can be found [here](https://cloud.google.com/compute/docs/load-balancing/)
|
||||
|
||||
Note, if you are using authentication, you may need to regenerate your certificate to include the IP address of the balancer,
|
||||
in addition to the IP addresses of the individual nodes.
|
||||
|
||||
For pods that you deploy into the cluster, the `kubernetes` service/dns name should provide a load balanced endpoint for the master automatically.
|
||||
|
||||
For external users of the API (e.g. the `kubectl` command line interface, continuous build pipelines, or other clients) you will want to configure
|
||||
them to talk to the external load balancer's IP address.
|
||||
|
||||
## Master elected components
|
||||
|
||||
So far we have set up state storage, and we have set up the API server, but we haven't run anything that actually modifies
|
||||
cluster state, such as the controller manager and scheduler. To achieve this reliably, we only want to have one actor modifying state at a time, but we want replicated
|
||||
instances of these actors, in case a machine dies. To achieve this, we are going to use a lease-lock in etcd to perform
|
||||
master election. On each of the three apiserver nodes, we run a small utility application named `podmaster`. It's job is to implement a master
|
||||
election protocol using etcd "compare and swap". If the apiserver node wins the election, it starts the master component it is managing (e.g. the scheduler), if it
|
||||
loses the election, it ensures that any master components running on the node (e.g. the scheduler) are stopped.
|
||||
|
||||
In the future, we expect to more tightly integrate this lease-locking into the scheduler and controller-manager binaries directly, as described in the [high availability design proposal](../proposals/high-availability.html)
|
||||
|
||||
### Installing configuration files
|
||||
|
||||
First, create empty log files on each node, so that Docker will mount the files not make new directories:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
touch /var/log/kube-scheduler.log
|
||||
touch /var/log/kube-controller-manager.log
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Next, set up the descriptions of the scheduler and controller manager pods on each node.
|
||||
by copying [kube-scheduler.yaml](high-availability/kube-scheduler.yaml) and [kube-controller-manager.yaml](high-availability/kube-controller-manager.yaml) into the `/srv/kubernetes/`
|
||||
directory.
|
||||
|
||||
### Running the podmaster
|
||||
|
||||
Now that the configuration files are in place, copy the [podmaster.yaml](high-availability/podmaster.yaml) config file into `/etc/kubernetes/manifests/`
|
||||
|
||||
As before, the kubelet on the node monitors this directory, and will start an instance of the podmaster using the pod specification provided in `podmaster.yaml`.
|
||||
|
||||
Now you will have one instance of the scheduler process running on a single master node, and likewise one
|
||||
controller-manager process running on a single (possibly different) master node. If either of these processes fail,
|
||||
the kubelet will restart them. If any of these nodes fail, the process will move to a different instance of a master
|
||||
node.
|
||||
|
||||
## Conclusion
|
||||
|
||||
At this point, you are done (yeah!) with the master components, but you still need to add worker nodes (boo!).
|
||||
|
||||
If you have an existing cluster, this is as simple as reconfiguring your kubelets to talk to the load-balanced endpoint, and
|
||||
restarting the kubelets on each node.
|
||||
|
||||
If you are turning up a fresh cluster, you will need to install the kubelet and kube-proxy on each worker node, and
|
||||
set the `--apiserver` flag to your replicated endpoint.
|
||||
|
||||
## Vagrant up!
|
||||
|
||||
We indeed have an initial proof of concept tester for this, which is available [here](https://releases.k8s.io/release-1.1/examples/high-availability).
|
||||
|
||||
---
|
||||
title: "High Availability Kubernetes Clusters"
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
This document describes how to build a high-availability (HA) Kubernetes cluster. This is a fairly advanced topic.
|
||||
Users who merely want to experiment with Kubernetes are encouraged to use configurations that are simpler to set up such as
|
||||
the simple [Docker based single node cluster instructions](/{{page.version}}/docs/getting-started-guides/docker),
|
||||
or try [Google Container Engine](https://cloud.google.com/container-engine/) for hosted Kubernetes.
|
||||
|
||||
Also, at this time high availability support for Kubernetes is not continuously tested in our end-to-end (e2e) testing. We will
|
||||
be working to add this continuous testing, but for now the single-node master installations are more heavily tested.
|
||||
|
||||
|
||||
|
||||
{% include pagetoc.html %}
|
||||
|
||||
## Overview
|
||||
|
||||
Setting up a truly reliable, highly available distributed system requires a number of steps; it is akin to
|
||||
wearing underwear, pants, a belt, suspenders, another pair of underwear, and another pair of pants. We go into each
|
||||
of these steps in detail, but a summary is given here to help guide and orient the user.
|
||||
|
||||
The steps involved are as follows:
|
||||
* [Creating the reliable constituent nodes that collectively form our HA master implementation.](#reliable-nodes)
|
||||
* [Setting up a redundant, reliable storage layer with clustered etcd.](#establishing-a-redundant-reliable-data-storage-layer)
|
||||
* [Starting replicated, load balanced Kubernetes API servers](#replicated-api-servers)
|
||||
* [Setting up master-elected Kubernetes scheduler and controller-manager daemons](#master-elected-components)
|
||||
|
||||
Here's what the system should look like when it's finished:
|
||||

|
||||
|
||||
Ready? Let's get started.
|
||||
|
||||
## Initial set-up
|
||||
|
||||
The remainder of this guide assumes that you are setting up a 3-node clustered master, where each machine is running some flavor of Linux.
|
||||
Examples in the guide are given for Debian distributions, but they should be easily adaptable to other distributions.
|
||||
Likewise, this set up should work whether you are running in a public or private cloud provider, or if you are running
|
||||
on bare metal.
|
||||
|
||||
The easiest way to implement an HA Kubernetes cluster is to start with an existing single-master cluster. The
|
||||
instructions at [https://get.k8s.io](https://get.k8s.io)
|
||||
describe easy installation for single-master clusters on a variety of platforms.
|
||||
|
||||
## Reliable nodes
|
||||
|
||||
On each master node, we are going to run a number of processes that implement the Kubernetes API. The first step in making these reliable is
|
||||
to make sure that each automatically restarts when it fails. To achieve this, we need to install a process watcher. We choose to use
|
||||
the `kubelet` that we run on each of the worker nodes. This is convenient, since we can use containers to distribute our binaries, we can
|
||||
establish resource limits, and introspect the resource usage of each daemon. Of course, we also need something to monitor the kubelet
|
||||
itself (insert who watches the watcher jokes here). For Debian systems, we choose monit, but there are a number of alternate
|
||||
choices. For example, on systemd-based systems (e.g. RHEL, CentOS), you can run 'systemctl enable kubelet'.
|
||||
|
||||
If you are extending from a standard Kubernetes installation, the `kubelet` binary should already be present on your system. You can run
|
||||
`which kubelet` to determine if the binary is in fact installed. If it is not installed,
|
||||
you should install the [kubelet binary](https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/linux/amd64/kubelet), the
|
||||
[kubelet init file](http://releases.k8s.io/release-1.1/cluster/saltbase/salt/kubelet/initd) and [high-availability/default-kubelet](high-availability/default-kubelet)
|
||||
scripts.
|
||||
|
||||
If you are using monit, you should also install the monit daemon (`apt-get install monit`) and the [high-availability/monit-kubelet](high-availability/monit-kubelet) and
|
||||
[high-availability/monit-docker](high-availability/monit-docker) configs.
|
||||
|
||||
On systemd systems, run `systemctl enable kubelet` and `systemctl enable docker`.
|
||||
|
||||
|
||||
## Establishing a redundant, reliable data storage layer
|
||||
|
||||
The central foundation of a highly available solution is a redundant, reliable storage layer. The number one rule of high-availability is
|
||||
to protect the data. Whatever else happens, whatever catches on fire, if you have the data, you can rebuild. If you lose the data, you're
|
||||
done.
|
||||
|
||||
Clustered etcd already replicates your storage to all master instances in your cluster. This means that to lose data, all three nodes would need
|
||||
to have their physical (or virtual) disks fail at the same time. The probability that this occurs is relatively low, so for many people
|
||||
running a replicated etcd cluster is likely reliable enough. You can add additional reliability by increasing the
|
||||
size of the cluster from three to five nodes. If that is still insufficient, you can add
|
||||
[even more redundancy to your storage layer](#even-more-reliable-storage).
|
||||
|
||||
### Clustering etcd
|
||||
|
||||
The full details of clustering etcd are beyond the scope of this document; many more details are given on the
|
||||
[etcd clustering page](https://github.com/coreos/etcd/blob/master/Documentation/clustering.md). This example walks through
|
||||
a simple cluster setup, using etcd's built-in discovery to build our cluster.
|
||||
|
||||
First, hit the etcd discovery service to create a new token:
|
||||
|
||||
{% highlight sh %}
|
||||
curl https://discovery.etcd.io/new?size=3
|
||||
{% endhighlight %}
|
||||
|
||||
On each node, copy the [etcd.yaml](high-availability/etcd.yaml) file into `/etc/kubernetes/manifests/etcd.yaml`
|
||||
|
||||
The kubelet on each node actively monitors the contents of that directory, and it will create an instance of the `etcd`
|
||||
server from the definition of the pod specified in `etcd.yaml`.
|
||||
|
||||
Note that in `etcd.yaml` you should substitute the token URL you got above for `${DISCOVERY_TOKEN}` on all three machines,
|
||||
and you should substitute a different name (e.g. `node-1`) for ${NODE_NAME} and the correct IP address
|
||||
for `${NODE_IP}` on each machine.
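A hypothetical way to perform these substitutions on one machine is shown below; the discovery token, node name, and IP address are placeholders you must replace with your own values:

{% highlight sh %}
sed -i -e 's|${DISCOVERY_TOKEN}|https://discovery.etcd.io/<your-token>|g' \
       -e 's|${NODE_NAME}|node-1|g' \
       -e 's|${NODE_IP}|10.240.0.1|g' \
       /etc/kubernetes/manifests/etcd.yaml
{% endhighlight %}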
|
||||
|
||||
|
||||
#### Validating your cluster
|
||||
|
||||
Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate with
|
||||
|
||||
{% highlight sh %}
|
||||
etcdctl member list
|
||||
{% endhighlight %}
|
||||
|
||||
and
|
||||
|
||||
{% highlight sh %}
|
||||
etcdctl cluster-health
|
||||
{% endhighlight %}
|
||||
|
||||
You can also validate that this is working with `etcdctl set foo bar` on one node, and `etcdctl get foo`
|
||||
on a different node.
|
||||
|
||||
### Even more reliable storage
|
||||
|
||||
Of course, if you are interested in increased data reliability, there are further options which make the place where etcd
|
||||
installs its data even more reliable than regular disks (belts *and* suspenders, ftw!).
|
||||
|
||||
If you use a cloud provider, then they usually provide this
|
||||
for you; for example, [Persistent Disk](https://cloud.google.com/compute/docs/disks/persistent-disks) on the Google Cloud Platform. These
|
||||
are block-device persistent storage that can be mounted onto your virtual machine. Other cloud providers provide similar solutions.
|
||||
|
||||
If you are running on physical machines, you can also use network attached redundant storage using an iSCSI or NFS interface.
|
||||
Alternatively, you can run a clustered file system like Gluster or Ceph. Finally, you can also run a RAID array on each physical machine.
|
||||
|
||||
Regardless of how you choose to implement it, if you chose to use one of these options, you should make sure that your storage is mounted
|
||||
to each machine. If your storage is shared between the three masters in your cluster, you should create a different directory on the storage
|
||||
for each node. Throughout these instructions, we assume that this storage is mounted to your machine in `/var/etcd/data`.
|
||||
|
||||
|
||||
## Replicated API Servers
|
||||
|
||||
Once you have replicated etcd set up correctly, we will also install the apiserver using the kubelet.
|
||||
|
||||
### Installing configuration files
|
||||
|
||||
First you need to create the initial log file, so that Docker mounts a file instead of a directory:
|
||||
|
||||
{% highlight sh %}
|
||||
touch /var/log/kube-apiserver.log
|
||||
{% endhighlight %}
|
||||
|
||||
Next, you need to create a `/srv/kubernetes/` directory on each node. This directory includes:
|
||||
* basic_auth.csv - basic auth user and password
|
||||
* ca.crt - Certificate Authority cert
|
||||
* known_tokens.csv - tokens that entities (e.g. the kubelet) can use to talk to the apiserver
|
||||
* kubecfg.crt - Client certificate, public key
|
||||
* kubecfg.key - Client certificate, private key
|
||||
* server.cert - Server certificate, public key
|
||||
* server.key - Server certificate, private key
|
||||
|
||||
The easiest way to create this directory may be to copy it from the master node of a working cluster, or you can generate these files manually yourself.
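For example, copying from an existing master might look like the sketch below; it assumes root SSH access and that `existing-master` is the address of a working master node:

{% highlight sh %}
scp -r root@existing-master:/srv/kubernetes /srv/
{% endhighlight %}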
|
||||
|
||||
### Starting the API Server
|
||||
|
||||
Once these files exist, copy the [kube-apiserver.yaml](high-availability/kube-apiserver.yaml) into `/etc/kubernetes/manifests/` on each master node.
|
||||
|
||||
The kubelet monitors this directory, and will automatically create an instance of the `kube-apiserver` container using the pod definition specified
|
||||
in the file.
|
||||
|
||||
### Load balancing
|
||||
|
||||
At this point, you should have 3 apiservers all working correctly. If you set up a network load balancer, you should
|
||||
be able to access your cluster via that load balancer, and see traffic balancing between the apiserver instances. Setting
|
||||
up a load balancer will depend on the specifics of your platform; for example, instructions for the Google Cloud
|
||||
Platform can be found [here](https://cloud.google.com/compute/docs/load-balancing/).
|
||||
|
||||
Note, if you are using authentication, you may need to regenerate your certificate to include the IP address of the balancer,
|
||||
in addition to the IP addresses of the individual nodes.
|
||||
|
||||
For pods that you deploy into the cluster, the `kubernetes` service/dns name should provide a load balanced endpoint for the master automatically.
|
||||
|
||||
For external users of the API (e.g. the `kubectl` command line interface, continuous build pipelines, or other clients) you will want to configure
|
||||
them to talk to the external load balancer's IP address.
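As a hedged illustration for `kubectl` users, the commands below point a new context at the load balancer; the cluster name, context name, and address are placeholders, and in a real setup you would configure credentials and the balancer's certificate rather than skipping TLS verification:

{% highlight sh %}
kubectl config set-cluster ha-cluster --server=https://<load-balancer-address> --insecure-skip-tls-verify=true
kubectl config set-context ha-context --cluster=ha-cluster
kubectl config use-context ha-context
{% endhighlight %}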
|
||||
|
||||
## Master elected components
|
||||
|
||||
So far we have set up state storage, and we have set up the API server, but we haven't run anything that actually modifies
|
||||
cluster state, such as the controller manager and scheduler. To achieve this reliably, we only want to have one actor modifying state at a time, but we want replicated
|
||||
instances of these actors, in case a machine dies. To achieve this, we are going to use a lease-lock in etcd to perform
|
||||
master election. On each of the three apiserver nodes, we run a small utility application named `podmaster`. Its job is to implement a master
|
||||
election protocol using etcd "compare and swap". If the apiserver node wins the election, it starts the master component it is managing (e.g. the scheduler); if it
|
||||
loses the election, it ensures that any master components running on the node (e.g. the scheduler) are stopped.
|
||||
|
||||
In the future, we expect to more tightly integrate this lease-locking into the scheduler and controller-manager binaries directly, as described in the [high availability design proposal](../proposals/high-availability)
|
||||
|
||||
### Installing configuration files
|
||||
|
||||
First, create empty log files on each node, so that Docker will mount the files not make new directories:
|
||||
|
||||
{% highlight sh %}
|
||||
touch /var/log/kube-scheduler.log
|
||||
touch /var/log/kube-controller-manager.log
|
||||
{% endhighlight %}
|
||||
|
||||
Next, set up the descriptions of the scheduler and controller manager pods on each node
|
||||
by copying [kube-scheduler.yaml](high-availability/kube-scheduler.yaml) and [kube-controller-manager.yaml](high-availability/kube-controller-manager.yaml) into the `/srv/kubernetes/`
|
||||
directory.
|
||||
|
||||
### Running the podmaster
|
||||
|
||||
Now that the configuration files are in place, copy the [podmaster.yaml](high-availability/podmaster.yaml) config file into `/etc/kubernetes/manifests/`
|
||||
|
||||
As before, the kubelet on the node monitors this directory, and will start an instance of the podmaster using the pod specification provided in `podmaster.yaml`.
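One quick, assumed sanity check that the podmaster actually came up on a given master is to look for its container directly on that node:

{% highlight sh %}
docker ps | grep podmaster
{% endhighlight %}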
|
||||
|
||||
Now you will have one instance of the scheduler process running on a single master node, and likewise one
|
||||
controller-manager process running on a single (possibly different) master node. If either of these processes fails,
|
||||
the kubelet will restart them. If any of these nodes fail, the process will move to a different instance of a master
|
||||
node.
|
||||
|
||||
## Conclusion
|
||||
|
||||
At this point, you are done (yeah!) with the master components, but you still need to add worker nodes (boo!).
|
||||
|
||||
If you have an existing cluster, this is as simple as reconfiguring your kubelets to talk to the load-balanced endpoint, and
|
||||
restarting the kubelets on each node.
|
||||
|
||||
If you are turning up a fresh cluster, you will need to install the kubelet and kube-proxy on each worker node, and
|
||||
set the `--apiserver` flag to your replicated endpoint.
|
||||
|
||||
## Vagrant up!
|
||||
|
||||
We indeed have an initial proof of concept tester for this, which is available [here](https://releases.k8s.io/release-1.1/examples/high-availability).
|
||||
|
||||
It implements the major concepts of the podmaster HA implementation (with a few minor reductions for simplicity), alongside a quick smoke test using k8petstore.
|
|
@ -1,44 +1,40 @@
|
|||
---
|
||||
title: "Kubernetes Cluster Admin Guide"
|
||||
---
|
||||
|
||||
|
||||
# Kubernetes Cluster Admin Guide
|
||||
|
||||
The cluster admin guide is for anyone creating or administering a Kubernetes cluster.
|
||||
It assumes some familiarity with concepts in the [User Guide](../user-guide/README.html).
|
||||
It assumes some familiarity with concepts in the [User Guide](../user-guide/README).
|
||||
|
||||
## Admin Guide Table of Contents
|
||||
|
||||
[Introduction](introduction.html)
|
||||
[Introduction](introduction)
|
||||
|
||||
1. [Components of a cluster](cluster-components.html)
|
||||
1. [Cluster Management](cluster-management.html)
|
||||
1. [Components of a cluster](cluster-components)
|
||||
1. [Cluster Management](cluster-management)
|
||||
1. Administrating Master Components
|
||||
1. [The kube-apiserver binary](kube-apiserver.html)
|
||||
1. [Authorization](authorization.html)
|
||||
1. [Authentication](authentication.html)
|
||||
1. [Accessing the api](accessing-the-api.html)
|
||||
1. [Admission Controllers](admission-controllers.html)
|
||||
1. [Administrating Service Accounts](service-accounts-admin.html)
|
||||
1. [Resource Quotas](resource-quota.html)
|
||||
1. [The kube-scheduler binary](kube-scheduler.html)
|
||||
1. [The kube-controller-manager binary](kube-controller-manager.html)
|
||||
1. [Administrating Kubernetes Nodes](node.html)
|
||||
1. [The kubelet binary](kubelet.html)
|
||||
1. [Garbage Collection](garbage-collection.html)
|
||||
1. [The kube-proxy binary](kube-proxy.html)
|
||||
1. [The kube-apiserver binary](kube-apiserver)
|
||||
1. [Authorization](authorization)
|
||||
1. [Authentication](authentication)
|
||||
1. [Accessing the api](accessing-the-api)
|
||||
1. [Admission Controllers](admission-controllers)
|
||||
1. [Administrating Service Accounts](service-accounts-admin)
|
||||
1. [Resource Quotas](resource-quota)
|
||||
1. [The kube-scheduler binary](kube-scheduler)
|
||||
1. [The kube-controller-manager binary](kube-controller-manager)
|
||||
1. [Administrating Kubernetes Nodes](node)
|
||||
1. [The kubelet binary](kubelet)
|
||||
1. [Garbage Collection](garbage-collection)
|
||||
1. [The kube-proxy binary](kube-proxy)
|
||||
1. Administrating Addons
|
||||
1. [DNS](dns.html)
|
||||
1. [Networking](networking.html)
|
||||
1. [OVS Networking](ovs-networking.html)
|
||||
1. [DNS](dns)
|
||||
1. [Networking](networking)
|
||||
1. [OVS Networking](ovs-networking)
|
||||
1. Example Configurations
|
||||
1. [Multiple Clusters](multi-cluster.html)
|
||||
1. [High Availability Clusters](high-availability.html)
|
||||
1. [Large Clusters](cluster-large.html)
|
||||
1. [Getting started from scratch](../getting-started-guides/scratch.html)
|
||||
1. [Kubernetes's use of salt](salt.html)
|
||||
1. [Troubleshooting](cluster-troubleshooting.html)
|
||||
1. [Multiple Clusters](multi-cluster)
|
||||
1. [High Availability Clusters](high-availability)
|
||||
1. [Large Clusters](cluster-large)
|
||||
1. [Getting started from scratch](../getting-started-guides/scratch)
|
||||
1. [Kubernetes's use of salt](salt)
|
||||
1. [Troubleshooting](cluster-troubleshooting)
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -1,19 +1,18 @@
|
|||
---
|
||||
title: "Kubernetes Cluster Admin Guide"
|
||||
section: guides
|
||||
---
|
||||
|
||||
The cluster admin guide is for anyone creating or administering a Kubernetes cluster.
|
||||
It assumes some familiarity with concepts in the [User Guide](../user-guide/README.html).
|
||||
It assumes some familiarity with concepts in the [User Guide](../user-guide/README).
|
||||
|
||||
|
||||
## Table of Contents
|
||||
|
||||
{% include pagetoc.html %}
|
||||
|
||||
## Planning a cluster
|
||||
|
||||
There are many different examples of how to set up a Kubernetes cluster. Many of them are listed in this
|
||||
[matrix](../getting-started-guides/README.html). We call each of the combinations in this matrix a *distro*.
|
||||
[matrix](../getting-started-guides/README). We call each of the combinations in this matrix a *distro*.
|
||||
|
||||
Before choosing a particular guide, here are some things to consider:
|
||||
|
||||
|
@ -30,52 +29,52 @@ Before choosing a particular guide, here are some things to consider:
|
|||
- Not all distros are maintained as actively. Prefer ones which are listed as tested on a more recent version of
|
||||
Kubernetes.
|
||||
- If you are configuring Kubernetes on-premises, you will need to consider what [networking
|
||||
model](networking.html) fits best.
|
||||
- If you are designing for very high-availability, you may want [clusters in multiple zones](multi-cluster.html).
|
||||
model](networking) fits best.
|
||||
- If you are designing for very high-availability, you may want [clusters in multiple zones](multi-cluster).
|
||||
- You may want to familiarize yourself with the various
|
||||
[components](cluster-components.html) needed to run a cluster.
|
||||
[components](cluster-components) needed to run a cluster.
|
||||
|
||||
## Setting up a cluster
|
||||
|
||||
Pick one of the Getting Started Guides from the [matrix](../getting-started-guides/README.html) and follow it.
|
||||
Pick one of the Getting Started Guides from the [matrix](../getting-started-guides/README) and follow it.
|
||||
If none of the Getting Started Guides fits, you may want to pull ideas from several of the guides.
|
||||
|
||||
One option for custom networking is *OpenVSwitch GRE/VxLAN networking* ([ovs-networking.md](ovs-networking.html)), which
|
||||
One option for custom networking is *OpenVSwitch GRE/VxLAN networking* ([ovs-networking.md](ovs-networking)), which
|
||||
uses OpenVSwitch to set up networking between pods across
|
||||
Kubernetes nodes.
|
||||
|
||||
If you are modifying an existing guide which uses Salt, this document explains [how Salt is used in the Kubernetes
|
||||
project](salt.html).
|
||||
project](salt).
|
||||
|
||||
## Managing a cluster, including upgrades
|
||||
|
||||
[Managing a cluster](cluster-management.html).
|
||||
[Managing a cluster](cluster-management).
|
||||
|
||||
## Managing nodes
|
||||
|
||||
[Managing nodes](node.html).
|
||||
[Managing nodes](node).
|
||||
|
||||
## Optional Cluster Services
|
||||
|
||||
* **DNS Integration with SkyDNS** ([dns.md](dns.html)):
|
||||
* **DNS Integration with SkyDNS** ([dns.md](dns)):
|
||||
Resolving a DNS name directly to a Kubernetes service.
|
||||
|
||||
* **Logging** with [Kibana](../user-guide/logging.html)
|
||||
* **Logging** with [Kibana](../user-guide/logging)
|
||||
|
||||
## Multi-tenant support
|
||||
|
||||
* **Resource Quota** ([resource-quota.md](resource-quota.html))
|
||||
* **Resource Quota** ([resource-quota.md](resource-quota))
|
||||
|
||||
## Security
|
||||
|
||||
* **Kubernetes Container Environment** ([docs/user-guide/container-environment.md](../user-guide/container-environment.html)):
|
||||
* **Kubernetes Container Environment** ([docs/user-guide/container-environment.md](../user-guide/container-environment)):
|
||||
Describes the environment for Kubelet managed containers on a Kubernetes
|
||||
node.
|
||||
|
||||
* **Securing access to the API Server** [accessing the api](accessing-the-api.html)
|
||||
* **Securing access to the API Server** [accessing the api](accessing-the-api)
|
||||
|
||||
* **Authentication** [authentication](authentication.html)
|
||||
* **Authentication** [authentication](authentication)
|
||||
|
||||
* **Authorization** [authorization](authorization.html)
|
||||
* **Authorization** [authorization](authorization)
|
||||
|
||||
* **Admission Controllers** [admission_controllers](admission-controllers.html)
|
||||
* **Admission Controllers** [admission_controllers](admission-controllers)
|
|
@ -1,88 +0,0 @@
|
|||
---
|
||||
title: "kube-apiserver"
|
||||
---
|
||||
|
||||
|
||||
## kube-apiserver
|
||||
|
||||
|
||||
|
||||
### Synopsis
|
||||
|
||||
|
||||
The Kubernetes API server validates and configures data
|
||||
for the api objects which include pods, services, replicationcontrollers, and
|
||||
others. The API Server services REST operations and provides the frontend to the
|
||||
cluster's shared state through which all other components interact.
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
kube-apiserver
|
||||
{% endraw %}
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
--admission-control="AlwaysAdmit": Ordered list of plug-ins to do admission control of resources into cluster. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, DenyEscalatingExec, DenyExecOnPrivileged, InitialResources, LimitRanger, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, ResourceQuota, SecurityContextDeny, ServiceAccount
|
||||
--admission-control-config-file="": File with admission control configuration.
|
||||
--advertise-address=<nil>: The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
|
||||
--allow-privileged[=false]: If true, allow privileged containers.
|
||||
--authorization-mode="AlwaysAllow": Ordered list of plug-ins to do authorization on secure port. Comma-delimited list of: AlwaysAllow,AlwaysDeny,ABAC
|
||||
--authorization-policy-file="": File with authorization policy in csv format, used with --authorization-mode=ABAC, on the secure port.
|
||||
--basic-auth-file="": If set, the file that will be used to admit requests to the secure port of the API server via http basic authentication.
|
||||
--bind-address=0.0.0.0: The IP address on which to serve the --read-only-port and --secure-port ports. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0).
|
||||
--cert-dir="/var/run/kubernetes": The directory where the TLS certs are located (by default /var/run/kubernetes). If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
|
||||
--client-ca-file="": If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
|
||||
--cloud-config="": The path to the cloud provider configuration file. Empty string for no configuration file.
|
||||
--cloud-provider="": The provider for cloud services. Empty string for no provider.
|
||||
--cluster-name="kubernetes": The instance prefix for the cluster
|
||||
--cors-allowed-origins=[]: List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled.
|
||||
--etcd-config="": The config file for the etcd client. Mutually exclusive with -etcd-servers.
|
||||
--etcd-prefix="/registry": The prefix for all resource paths in etcd.
|
||||
--etcd-servers=[]: List of etcd servers to watch (http://ip:port), comma separated. Mutually exclusive with -etcd-config
|
||||
--etcd-servers-overrides=[]: Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are http://ip:port, semicolon separated.
|
||||
--event-ttl=1h0m0s: Amount of time to retain events. Default 1 hour.
|
||||
--experimental-keystone-url="": If passed, activates the keystone authentication plugin
|
||||
--external-hostname="": The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs.)
|
||||
--google-json-key="": The Google Cloud Platform Service Account JSON Key to use for authentication.
|
||||
--insecure-bind-address=127.0.0.1: The IP address on which to serve the --insecure-port (set to 0.0.0.0 for all interfaces). Defaults to localhost.
|
||||
--insecure-port=8080: The port on which to serve unsecured, unauthenticated access. Default 8080. It is assumed that firewall rules are set up such that this port is not reachable from outside of the cluster and that port 443 on the cluster's public address is proxied to this port. This is performed by nginx in the default setup.
|
||||
--kubelet-certificate-authority="": Path to a cert. file for the certificate authority.
|
||||
--kubelet-client-certificate="": Path to a client cert file for TLS.
|
||||
--kubelet-client-key="": Path to a client key file for TLS.
|
||||
--kubelet-https[=true]: Use https for kubelet connections
|
||||
--kubelet-port=10250: Kubelet port
|
||||
--kubelet-timeout=5s: Timeout for kubelet operations
|
||||
--log-flush-frequency=5s: Maximum number of seconds between log flushes
|
||||
--long-running-request-regexp="(/|^)((watch|proxy)(/|$)|(logs?|portforward|exec|attach)/?$)": A regular expression matching long running requests which should be excluded from maximum inflight request handling.
|
||||
--master-service-namespace="default": The namespace from which the kubernetes master services should be injected into pods
|
||||
--max-connection-bytes-per-sec=0: If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests
|
||||
--max-requests-inflight=400: The maximum number of requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit.
|
||||
--min-request-timeout=1800: An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load.
|
||||
--oidc-ca-file="": If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file, otherwise the host's root CA set will be used
|
||||
--oidc-client-id="": The client ID for the OpenID Connect client, must be set if oidc-issuer-url is set
|
||||
--oidc-issuer-url="": The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT)
|
||||
--oidc-username-claim="sub": The OpenID claim to use as the user name. Note that claims other than the default ('sub') is not guaranteed to be unique and immutable. This flag is experimental, please see the authentication documentation for further details.
|
||||
--profiling[=true]: Enable profiling via web interface host:port/debug/pprof/
|
||||
--runtime-config=: A set of key=value pairs that describe runtime configuration that may be passed to apiserver. apis/<groupVersion> key can be used to turn on/off specific api versions. apis/<groupVersion>/<resource> can be used to turn on/off specific resources. api/all and api/legacy are special keys to control all and legacy api versions respectively.
|
||||
--secure-port=6443: The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.
|
||||
--service-account-key-file="": File containing PEM-encoded x509 RSA private or public key, used to verify ServiceAccount tokens. If unspecified, --tls-private-key-file is used.
|
||||
--service-account-lookup[=false]: If true, validate ServiceAccount tokens exist in etcd as part of authentication.
|
||||
--service-cluster-ip-range=<nil>: A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.
|
||||
--service-node-port-range=: A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range.
|
||||
--ssh-keyfile="": If non-empty, use secure SSH proxy to the nodes, using this user keyfile
|
||||
--ssh-user="": If non-empty, use secure SSH proxy to the nodes, using this user name
|
||||
--storage-versions="extensions/v1beta1,v1": The versions to store resources with. Different groups may be stored in different versions. Specified in the format "group1/version1,group2/version2...". This flag expects a complete list of storage versions of ALL groups registered in the server. It defaults to a list of preferred versions of all registered groups, which is derived from the KUBE_API_VERSIONS environment variable.
|
||||
--tls-cert-file="": File containing x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to /var/run/kubernetes.
|
||||
--tls-private-key-file="": File containing x509 private key matching --tls-cert-file.
|
||||
--token-auth-file="": If set, the file that will be used to secure the secure port of the API server via token authentication.
|
||||
--watch-cache[=true]: Enable watch caching in the apiserver
|
||||
{% endraw %}
|
||||
```
|
||||
|
||||
###### Auto generated by spf13/cobra at 2015-10-29 20:12:33.554980405 +0000 UTC
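
For orientation, an illustrative (not prescriptive) invocation that combines several of the flags documented above might look like the following; the certificate paths and CIDR are placeholders, not values from this commit:

```
kube-apiserver \
  --secure-port=6443 \
  --tls-cert-file=/srv/kubernetes/server.crt \
  --tls-private-key-file=/srv/kubernetes/server.key \
  --service-cluster-ip-range=10.0.0.0/16 \
  --service-node-port-range=30000-32767 \
  --runtime-config=extensions/v1beta1=true
```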
@ -1,75 +0,0 @@
|
|||
---
|
||||
title: "kube-controller-manager"
|
||||
---
|
||||
|
||||
|
||||
## kube-controller-manager
|
||||
|
||||
|
||||
|
||||
### Synopsis
|
||||
|
||||
|
||||
The Kubernetes controller manager is a daemon that embeds
|
||||
the core control loops shipped with Kubernetes. In applications of robotics and
|
||||
automation, a control loop is a non-terminating loop that regulates the state of
|
||||
the system. In Kubernetes, a controller is a control loop that watches the shared
|
||||
state of the cluster through the apiserver and makes changes attempting to move the
|
||||
current state towards the desired state. Examples of controllers that ship with
|
||||
Kubernetes today are the replication controller, endpoints controller, namespace
|
||||
controller, and serviceaccounts controller.
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
kube-controller-manager
|
||||
{% endraw %}
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
--address=127.0.0.1: The IP address to serve on (set to 0.0.0.0 for all interfaces)
|
||||
--allocate-node-cidrs[=false]: Should CIDRs for Pods be allocated and set on the cloud provider.
|
||||
--cloud-config="": The path to the cloud provider configuration file. Empty string for no configuration file.
|
||||
--cloud-provider="": The provider for cloud services. Empty string for no provider.
|
||||
--cluster-cidr=<nil>: CIDR Range for Pods in cluster.
|
||||
--cluster-name="kubernetes": The instance prefix for the cluster
|
||||
--concurrent-endpoint-syncs=5: The number of endpoint syncing operations that will be done concurrently. Larger number = faster endpoint updating, but more CPU (and network) load
|
||||
--concurrent_rc_syncs=5: The number of replication controllers that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load
|
||||
--deleting-pods-burst=10: Number of nodes on which pods are deleted in a burst in case of node failure. For more details, look into RateLimiter.
|
||||
--deleting-pods-qps=0.1: Number of nodes per second on which pods are deleted in case of node failure.
|
||||
--deployment-controller-sync-period=30s: Period for syncing the deployments.
|
||||
--google-json-key="": The Google Cloud Platform Service Account JSON Key to use for authentication.
|
||||
--horizontal-pod-autoscaler-sync-period=30s: The period for syncing the number of pods in horizontal pod autoscaler.
|
||||
--kubeconfig="": Path to kubeconfig file with authorization and master location information.
|
||||
--log-flush-frequency=5s: Maximum number of seconds between log flushes
|
||||
--master="": The address of the Kubernetes API server (overrides any value in kubeconfig)
|
||||
--min-resync-period=12h0m0s: The resync period in reflectors will be random between MinResyncPeriod and 2*MinResyncPeriod
|
||||
--namespace-sync-period=5m0s: The period for syncing namespace life-cycle updates
|
||||
--node-monitor-grace-period=40s: Amount of time which we allow a running Node to be unresponsive before marking it unhealthy. Must be N times more than kubelet's nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet to post node status.
--node-monitor-period=5s: The period for syncing NodeStatus in NodeController.
--node-startup-grace-period=1m0s: Amount of time which we allow a starting Node to be unresponsive before marking it unhealthy.
|
||||
--node-sync-period=10s: The period for syncing nodes from cloudprovider. Longer periods will result in fewer calls to cloud provider, but may delay addition of new nodes to cluster.
|
||||
--pod-eviction-timeout=5m0s: The grace period for deleting pods on failed nodes.
|
||||
--port=10252: The port that the controller-manager's http service runs on
|
||||
--profiling[=true]: Enable profiling via web interface host:port/debug/pprof/
|
||||
--pv-recycler-increment-timeout-nfs=30: the increment of time added per Gi to ActiveDeadlineSeconds for an NFS scrubber pod
|
||||
--pv-recycler-minimum-timeout-hostpath=60: The minimum ActiveDeadlineSeconds to use for a HostPath Recycler pod. This is for development and testing only and will not work in a multi-node cluster.
|
||||
--pv-recycler-minimum-timeout-nfs=300: The minimum ActiveDeadlineSeconds to use for an NFS Recycler pod
|
||||
--pv-recycler-pod-template-filepath-hostpath="": The file path to a pod definition used as a template for HostPath persistent volume recycling. This is for development and testing only and will not work in a multi-node cluster.
|
||||
--pv-recycler-pod-template-filepath-nfs="": The file path to a pod definition used as a template for NFS persistent volume recycling
|
||||
--pv-recycler-timeout-increment-hostpath=30: the increment of time added per Gi to ActiveDeadlineSeconds for a HostPath scrubber pod. This is for development and testing only and will not work in a multi-node cluster.
|
||||
--pvclaimbinder-sync-period=10s: The period for syncing persistent volumes and persistent volume claims
|
||||
--resource-quota-sync-period=10s: The period for syncing quota usage status in the system
|
||||
--root-ca-file="": If set, this root certificate authority will be included in service account's token secret. This must be a valid PEM-encoded CA bundle.
|
||||
--service-account-private-key-file="": Filename containing a PEM-encoded private RSA key used to sign service account tokens.
|
||||
--service-sync-period=5m0s: The period for syncing services with their external load balancers
|
||||
--terminated-pod-gc-threshold=12500: Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. If <= 0, the terminated pod garbage collector is disabled.
|
||||
{% endraw %}
|
||||
```
|
||||
|
||||
###### Auto generated by spf13/cobra at 2015-10-29 20:12:25.539938496 +0000 UTC
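
As a sketch only (paths and provider are placeholders), a typical invocation wires the controller manager to the API server and a cloud provider using the flags listed above:

```
kube-controller-manager \
  --master=https://127.0.0.1:6443 \
  --kubeconfig=/var/lib/kube-controller-manager/kubeconfig \
  --cloud-provider=gce \
  --service-account-private-key-file=/srv/kubernetes/server.key \
  --root-ca-file=/srv/kubernetes/ca.crt
```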
@ -1,53 +0,0 @@
|
|||
---
|
||||
title: "kube-proxy"
|
||||
---
|
||||
|
||||
|
||||
## kube-proxy
|
||||
|
||||
|
||||
|
||||
### Synopsis
|
||||
|
||||
|
||||
The Kubernetes network proxy runs on each node. It
reflects services as defined in the Kubernetes API on each node and can do simple
TCP/UDP stream forwarding or round-robin TCP/UDP forwarding across a set of backends.
Service cluster IPs and ports are currently found through Docker-links-compatible
environment variables specifying ports opened by the service proxy. There is an optional
addon that provides cluster DNS for these cluster IPs. The user must create a service
with the apiserver API to configure the proxy.
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
kube-proxy
|
||||
{% endraw %}
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
--bind-address=0.0.0.0: The IP address for the proxy server to serve on (set to 0.0.0.0 for all interfaces)
|
||||
--cleanup-iptables[=false]: If true cleanup iptables rules and exit.
|
||||
--google-json-key="": The Google Cloud Platform Service Account JSON Key to use for authentication.
|
||||
--healthz-bind-address=127.0.0.1: The IP address for the health check server to serve on, defaulting to 127.0.0.1 (set to 0.0.0.0 for all interfaces)
|
||||
--healthz-port=10249: The port to bind the health check server. Use 0 to disable.
|
||||
--hostname-override="": If non-empty, will use this string as identification instead of the actual hostname.
|
||||
--iptables-sync-period=30s: How often iptables rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.
|
||||
--kubeconfig="": Path to kubeconfig file with authorization information (the master location is set by the master flag).
|
||||
--log-flush-frequency=5s: Maximum number of seconds between log flushes
|
||||
--masquerade-all[=false]: If using the pure iptables proxy, SNAT everything
|
||||
--master="": The address of the Kubernetes API server (overrides any value in kubeconfig)
|
||||
--oom-score-adj=-999: The oom-score-adj value for kube-proxy process. Values must be within the range [-1000, 1000]
|
||||
--proxy-mode="": Which proxy mode to use: 'userspace' (older, stable) or 'iptables' (experimental). If blank, look at the Node object on the Kubernetes API and respect the 'net.experimental.kubernetes.io/proxy-mode' annotation if provided. Otherwise use the best-available proxy (currently userspace, but may change in future versions). If the iptables proxy is selected, regardless of how, but the system's kernel or iptables versions are insufficient, this always falls back to the userspace proxy.
|
||||
--proxy-port-range=: Range of host ports (beginPort-endPort, inclusive) that may be consumed in order to proxy service traffic. If unspecified (0-0) then ports will be randomly chosen.
|
||||
--resource-container="/kube-proxy": Absolute name of the resource-only container to create and run the Kube-proxy in (Default: /kube-proxy).
|
||||
--udp-timeout=250ms: How long an idle UDP connection will be kept open (e.g. '250ms', '2s'). Must be greater than 0. Only applicable for proxy-mode=userspace
|
||||
{% endraw %}
|
||||
```
|
||||
|
||||
###### Auto generated by spf13/cobra at 2015-10-29 20:12:28.465584706 +0000 UTC
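
A minimal sketch, assuming the experimental iptables mode described above is wanted and with a placeholder master address:

```
kube-proxy \
  --master=https://10.0.0.1:6443 \
  --kubeconfig=/var/lib/kube-proxy/kubeconfig \
  --proxy-mode=iptables
```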
@ -1,48 +0,0 @@
|
|||
---
|
||||
title: "kube-scheduler"
|
||||
---
|
||||
|
||||
|
||||
## kube-scheduler
|
||||
|
||||
|
||||
|
||||
### Synopsis
|
||||
|
||||
|
||||
The Kubernetes scheduler is a policy-rich, topology-aware,
|
||||
workload-specific function that significantly impacts availability, performance,
|
||||
and capacity. The scheduler needs to take into account individual and collective
|
||||
resource requirements, quality of service requirements, hardware/software/policy
|
||||
constraints, affinity and anti-affinity specifications, data locality, inter-workload
|
||||
interference, deadlines, and so on. Workload-specific requirements will be exposed
|
||||
through the API as necessary.
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
kube-scheduler
|
||||
{% endraw %}
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
--address=127.0.0.1: The IP address to serve on (set to 0.0.0.0 for all interfaces)
|
||||
--algorithm-provider="DefaultProvider": The scheduling algorithm provider to use, one of: DefaultProvider
|
||||
--bind-pods-burst=100: Number of bindings per second scheduler is allowed to make during bursts
|
||||
--bind-pods-qps=50: Number of bindings per second scheduler is allowed to continuously make
|
||||
--google-json-key="": The Google Cloud Platform Service Account JSON Key to use for authentication.
|
||||
--kubeconfig="": Path to kubeconfig file with authorization and master location information.
|
||||
--log-flush-frequency=5s: Maximum number of seconds between log flushes
|
||||
--master="": The address of the Kubernetes API server (overrides any value in kubeconfig)
|
||||
--policy-config-file="": File with scheduler policy configuration
|
||||
--port=10251: The port that the scheduler's http service runs on
|
||||
--profiling[=true]: Enable profiling via web interface host:port/debug/pprof/
|
||||
{% endraw %}
|
||||
```
|
||||
|
||||
###### Auto generated by spf13/cobra at 2015-10-29 20:12:20.542446971 +0000 UTC
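
As an illustration only, the scheduler is typically pointed at the API server and, optionally, at a policy file; the paths below are placeholders, not files from this commit:

```
kube-scheduler \
  --master=https://127.0.0.1:6443 \
  --policy-config-file=/etc/kubernetes/scheduler-policy.json \
  --port=10251
```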
@ -1,115 +0,0 @@
|
|||
---
|
||||
title: "kubelet"
|
||||
---
|
||||
|
||||
|
||||
## kubelet
|
||||
|
||||
|
||||
|
||||
### Synopsis
|
||||
|
||||
|
||||
The kubelet is the primary "node agent" that runs on each
|
||||
node. The kubelet works in terms of a PodSpec. A PodSpec is a YAML or JSON object
|
||||
that describes a pod. The kubelet takes a set of PodSpecs that are provided through
|
||||
various mechanisms (primarily through the apiserver) and ensures that the containers
|
||||
described in those PodSpecs are running and healthy.
|
||||
|
||||
Apart from the PodSpecs provided by the apiserver, there are three ways that a container
manifest can be provided to the Kubelet.
|
||||
|
||||
File: Path passed as a flag on the command line. This file is rechecked every 20
|
||||
seconds (configurable with a flag).
|
||||
|
||||
HTTP endpoint: HTTP endpoint passed as a parameter on the command line. This endpoint
|
||||
is checked every 20 seconds (also configurable with a flag).
|
||||
|
||||
HTTP server: The kubelet can also listen for HTTP and respond to a simple API
|
||||
(underspec'd currently) to submit a new manifest.
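
For example, the file-based mechanism is commonly used for static pods: the kubelet is pointed at a manifest
directory and re-reads it on the interval set by --file-check-frequency. The directory path and the nginx
manifest below are illustrative placeholders, not part of this commit:

```
kubelet --config=/etc/kubernetes/manifests --file-check-frequency=20s
```

```
# /etc/kubernetes/manifests/static-web.yaml (hypothetical example)
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
```
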
```
|
||||
{% raw %}
|
||||
kubelet
|
||||
{% endraw %}
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
--address=0.0.0.0: The IP address for the Kubelet to serve on (set to 0.0.0.0 for all interfaces)
|
||||
--allow-privileged[=false]: If true, allow containers to request privileged mode. [default=false]
|
||||
--api-servers=[]: List of Kubernetes API servers for publishing events, and reading pods and services. (ip:port), comma separated.
|
||||
--cadvisor-port=4194: The port of the localhost cAdvisor endpoint
|
||||
--cert-dir="/var/run/kubernetes": The directory where the TLS certs are located (by default /var/run/kubernetes). If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
|
||||
--cgroup-root="": Optional root cgroup to use for pods. This is handled by the container runtime on a best effort basis. Default: '', which means use the container runtime default.
|
||||
--chaos-chance=0: If > 0.0, introduce random client errors and latency. Intended for testing. [default=0.0]
|
||||
--cloud-config="": The path to the cloud provider configuration file. Empty string for no configuration file.
|
||||
--cloud-provider="": The provider for cloud services. Empty string for no provider.
|
||||
--cluster-dns=<nil>: IP address for a cluster DNS server. If set, kubelet will configure all containers to use this for DNS resolution in addition to the host's DNS servers
|
||||
--cluster-domain="": Domain for this cluster. If set, kubelet will configure all containers to search this domain in addition to the host's search domains
|
||||
--config="": Path to the config file or directory of files
|
||||
--configure-cbr0[=false]: If true, kubelet will configure cbr0 based on Node.Spec.PodCIDR.
|
||||
--container-runtime="docker": The container runtime to use. Possible values: 'docker', 'rkt'. Default: 'docker'.
|
||||
--containerized[=false]: Experimental support for running kubelet in a container. Intended for testing. [default=false]
|
||||
--cpu-cfs-quota[=false]: Enable CPU CFS quota enforcement for containers that specify CPU limits
|
||||
--docker-endpoint="": If non-empty, use this for the docker endpoint to communicate with
|
||||
--docker-exec-handler="native": Handler to use when executing a command in a container. Valid values are 'native' and 'nsenter'. Defaults to 'native'.
|
||||
--enable-debugging-handlers[=true]: Enables server endpoints for log collection and local running of containers and commands
|
||||
--enable-server[=true]: Enable the Kubelet's server
|
||||
--event-burst=0: Maximum size of a burst of event records; temporarily allows event records to burst to this number, while still not exceeding event-qps. Only used if --event-qps > 0
|
||||
--event-qps=0: If > 0, limit event creations per second to this value. If 0, unlimited. [default=0.0]
|
||||
--file-check-frequency=20s: Duration between checking config files for new data
|
||||
--google-json-key="": The Google Cloud Platform Service Account JSON Key to use for authentication.
|
||||
--healthz-bind-address=127.0.0.1: The IP address for the healthz server to serve on, defaulting to 127.0.0.1 (set to 0.0.0.0 for all interfaces)
|
||||
--healthz-port=10248: The port of the localhost healthz endpoint
|
||||
--host-ipc-sources="*": Comma-separated list of sources from which the Kubelet allows pods to use the host ipc namespace. [default="*"]
|
||||
--host-network-sources="*": Comma-separated list of sources from which the Kubelet allows pods to use of host network. [default="*"]
|
||||
--host-pid-sources="*": Comma-separated list of sources from which the Kubelet allows pods to use the host pid namespace. [default="*"]
|
||||
--hostname-override="": If non-empty, will use this string as identification instead of the actual hostname.
|
||||
--http-check-frequency=20s: Duration between checking http for new data
|
||||
--image-gc-high-threshold=90: The percent of disk usage after which image garbage collection is always run. Default: 90%
--image-gc-low-threshold=80: The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. Default: 80%
|
||||
--kubeconfig="/var/lib/kubelet/kubeconfig": Path to a kubeconfig file, specifying how to authenticate to API server (the master location is set by the api-servers flag).
|
||||
--log-flush-frequency=5s: Maximum number of seconds between log flushes
|
||||
--low-diskspace-threshold-mb=256: The absolute free disk space, in MB, to maintain. When disk space falls below this threshold, new pods would be rejected. Default: 256
|
||||
--manifest-url="": URL for accessing the container manifest
|
||||
--manifest-url-header="": HTTP header to use when accessing the manifest URL, with the key separated from the value with a ':', as in 'key:value'
|
||||
--master-service-namespace="default": The namespace from which the kubernetes master services should be injected into pods
|
||||
--max-open-files=1000000: Number of files that can be opened by Kubelet process. [default=1000000]
|
||||
--max-pods=40: Number of Pods that can run on this Kubelet.
|
||||
--maximum-dead-containers=100: Maximum number of old instances of containers to retain globally. Each container takes up some disk space. Default: 100.
|
||||
--maximum-dead-containers-per-container=2: Maximum number of old instances to retain per container. Each container takes up some disk space. Default: 2.
|
||||
--minimum-container-ttl-duration=1m0s: Minimum age for a finished container before it is garbage collected. Examples: '300ms', '10s' or '2h45m'
|
||||
--network-plugin="": <Warning: Alpha feature> The name of the network plugin to be invoked for various events in kubelet/pod lifecycle
|
||||
--network-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/net/exec/": <Warning: Alpha feature> The full path of the directory in which to search for network plugins
|
||||
--node-status-update-frequency=10s: Specifies how often kubelet posts node status to master. Note: be cautious when changing the constant, it must work with nodeMonitorGracePeriod in nodecontroller. Default: 10s
|
||||
--oom-score-adj=-999: The oom-score-adj value for kubelet process. Values must be within the range [-1000, 1000]
|
||||
--pod-cidr="": The CIDR to use for pod IP addresses, only used in standalone mode. In cluster mode, this is obtained from the master.
|
||||
--pod-infra-container-image="gcr.io/google_containers/pause:0.8.0": The image whose network/ipc namespaces containers in each pod will use.
|
||||
--port=10250: The port for the Kubelet to serve on. Note that "kubectl logs" will not work if you set this flag.
|
||||
--read-only-port=10255: The read-only port for the Kubelet to serve on (set to 0 to disable)
|
||||
--really-crash-for-testing[=false]: If true, when panics occur crash. Intended for testing.
|
||||
--register-node[=true]: Register the node with the apiserver (defaults to true if --api-servers is set)
|
||||
--registry-burst=10: Maximum size of a burst of pulls; temporarily allows pulls to burst to this number, while still not exceeding registry-qps. Only used if --registry-qps > 0
|
||||
--registry-qps=0: If > 0, limit registry pull QPS to this value. If 0, unlimited. [default=0.0]
|
||||
--resolv-conf="/etc/resolv.conf": Resolver configuration file used as the basis for the container DNS resolution configuration.
|
||||
--resource-container="/kubelet": Absolute name of the resource-only container to create and run the Kubelet in (Default: /kubelet).
|
||||
--rkt-path="": Path of rkt binary. Leave empty to use the first rkt in $PATH. Only used if --container-runtime='rkt'
|
||||
--rkt-stage1-image="": image to use as stage1. Local paths and http/https URLs are supported. If empty, the 'stage1.aci' in the same directory as '--rkt-path' will be used
|
||||
--root-dir="/var/lib/kubelet": Directory path for managing kubelet files (volume mounts,etc).
|
||||
--runonce[=false]: If true, exit after spawning pods from local manifests or remote urls. Exclusive with --api-servers, and --enable-server
|
||||
--serialize-image-pulls[=true]: Pull images one at a time. We recommend *not* changing the default value on nodes that run docker daemon with version < 1.9 or an Aufs storage backend. Issue #10959 has more details. [default=true]
|
||||
--streaming-connection-idle-timeout=0: Maximum time a streaming connection can be idle before the connection is automatically closed. Example: '5m'
|
||||
--sync-frequency=10s: Max period between synchronizing running containers and config
|
||||
--system-container="": Optional resource-only container in which to place all non-kernel processes that are not already in a container. Empty for no container. Rolling back the flag requires a reboot. (Default: "").
|
||||
--tls-cert-file="": File containing x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to --cert-dir.
|
||||
--tls-private-key-file="": File containing x509 private key matching --tls-cert-file.
|
||||
{% endraw %}
|
||||
```
|
||||
|
||||
###### Auto generated by spf13/cobra at 2015-10-29 20:12:15.480131233 +0000 UTC
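
Putting a few of these flags together, a node might register against the API servers and enable cluster DNS like so (all addresses are placeholders):

```
kubelet \
  --api-servers=https://10.0.0.1:6443 \
  --cluster-dns=10.0.0.10 \
  --cluster-domain=cluster.local \
  --allow-privileged=false
```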
@ -1,222 +1,202 @@
|
|||
---
|
||||
title: "Limit Range"
|
||||
---
|
||||
|
||||
Limit Range
|
||||
========================================
|
||||
By default, pods run with unbounded CPU and memory limits. This means that any pod in the
system is able to consume as much CPU and memory as is available on the node that executes it.
|
||||
|
||||
Users may want to impose restrictions on the amount of resource a single pod in the system may consume
|
||||
for a variety of reasons.
|
||||
|
||||
For example:
|
||||
|
||||
1. Each node in the cluster has 2GB of memory. The cluster operator does not want to accept pods
|
||||
that require more than 2GB of memory since no node in the cluster can support the requirement. To prevent a
|
||||
pod from being permanently unscheduled to a node, the operator instead chooses to reject pods that exceed 2GB
|
||||
of memory as part of admission control.
|
||||
2. A cluster is shared by two communities in an organization that runs production and development workloads
|
||||
respectively. Production workloads may consume up to 8GB of memory, but development workloads may consume up
|
||||
to 512MB of memory. The cluster operator creates a separate namespace for each workload, and applies limits to
|
||||
each namespace.
|
||||
3. Users may create a pod which consumes resources just below the capacity of a machine. The left over space
|
||||
may be too small to be useful, but big enough for the waste to be costly over the entire cluster. As a result,
|
||||
the cluster operator may want to set limits that a pod must consume at least 20% of the memory and cpu of their
|
||||
average node size in order to provide for more uniform scheduling and to limit waste.
|
||||
|
||||
This example demonstrates how limits can be applied to a Kubernetes namespace to control
|
||||
min/max resource limits per pod. In addition, this example demonstrates how you can
|
||||
apply default resource limits to pods in the absence of an end-user specified value.
|
||||
|
||||
See [LimitRange design doc](../../design/admission_control_limit_range.html) for more information. For a detailed description of the Kubernetes resource model, see [Resources](../../../docs/user-guide/compute-resources.html)
|
||||
|
||||
Step 0: Prerequisites
|
||||
-----------------------------------------
|
||||
This example requires a running Kubernetes cluster. See the [Getting Started guides](../../../docs/getting-started-guides/) for how to get started.
|
||||
|
||||
Change to the `<kubernetes>` directory if you're not already there.
|
||||
|
||||
Step 1: Create a namespace
|
||||
-----------------------------------------
|
||||
This example will work in a custom namespace to demonstrate the concepts involved.
|
||||
|
||||
Let's create a new namespace called limit-example:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl create -f docs/admin/limitrange/namespace.yaml
|
||||
namespace "limit-example" created
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS STATUS AGE
|
||||
default <none> Active 5m
|
||||
limit-example <none> Active 53s
|
||||
{% endraw %}
|
||||
{% endhighlight %}
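
The referenced `namespace.yaml` is not reproduced in this diff; a minimal manifest for a namespace of this
name would look roughly like:

{% highlight yaml %}
apiVersion: v1
kind: Namespace
metadata:
  name: limit-example
{% endhighlight %}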
|
||||
|
||||
Step 2: Apply a limit to the namespace
|
||||
-----------------------------------------
|
||||
Let's create a simple limit in our namespace.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl create -f docs/admin/limitrange/limits.yaml --namespace=limit-example
|
||||
limitrange "mylimits" created
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Let's describe the limits that we have imposed in our namespace.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl describe limits mylimits --namespace=limit-example
|
||||
Name: mylimits
|
||||
Namespace: limit-example
|
||||
Type Resource Min Max Request Limit Limit/Request
|
||||
---- -------- --- --- ------- ----- -------------
|
||||
Pod cpu 200m 2 - - -
|
||||
Pod memory 6Mi 1Gi - - -
|
||||
Container cpu 100m 2 200m 300m -
|
||||
Container memory 3Mi 1Gi 100Mi 200Mi -
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
In this scenario, we have said the following:
|
||||
|
||||
1. If a max constraint is specified for a resource (2 CPU and 1Gi memory in this case), then a limit
|
||||
must be specified for that resource across all containers. Failure to specify a limit will result in
|
||||
a validation error when attempting to create the pod. Note that a default value of limit is set by
|
||||
*default* in file `limits.yaml` (300m CPU and 200Mi memory).
|
||||
2. If a min constraint is specified for a resource (100m CPU and 3Mi memory in this case), then a
|
||||
request must be specified for that resource across all containers. Failure to specify a request will
|
||||
result in a validation error when attempting to create the pod. Note that a default value of request is
|
||||
set by *defaultRequest* in file `limits.yaml` (200m CPU and 100Mi memory).
|
||||
3. For any pod, the sum of all containers' memory requests must be >= 6Mi and the sum of all containers'
memory limits must be <= 1Gi; the sum of all containers' CPU requests must be >= 200m and the sum of all
containers' CPU limits must be <= 2. A LimitRange manifest expressing these constraints is sketched below.
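
The actual `limits.yaml` is not shown in this diff, but a LimitRange that would produce the values described
above (and reported by `kubectl describe`) would look roughly like this reconstruction:

{% highlight yaml %}
apiVersion: v1
kind: LimitRange
metadata:
  name: mylimits
spec:
  limits:
  - type: Pod
    min:
      cpu: 200m
      memory: 6Mi
    max:
      cpu: "2"
      memory: 1Gi
  - type: Container
    min:
      cpu: 100m
      memory: 3Mi
    max:
      cpu: "2"
      memory: 1Gi
    defaultRequest:
      cpu: 200m
      memory: 100Mi
    default:
      cpu: 300m
      memory: 200Mi
{% endhighlight %}
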
Step 3: Enforcing limits at point of creation
|
||||
-----------------------------------------
|
||||
The limits enumerated in a namespace are only enforced when a pod is created or updated in
|
||||
the cluster. If you change the limits to a different value range, it does not affect pods that
|
||||
were previously created in a namespace.
|
||||
|
||||
If a resource (cpu or memory) is being restricted by a limit, the user will get an error at time
|
||||
of creation explaining why.
|
||||
|
||||
Let's first spin up a replication controller that creates a single container pod to demonstrate
|
||||
how default values are applied to each pod.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
|
||||
replicationcontroller "nginx" created
|
||||
$ kubectl get pods --namespace=limit-example
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
nginx-aq0mf 1/1 Running 0 35s
|
||||
$ kubectl get pods nginx-aq0mf --namespace=limit-example -o yaml | grep resources -C 8
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
resourceVersion: "127"
|
||||
selfLink: /api/v1/namespaces/limit-example/pods/nginx-aq0mf
|
||||
uid: 51be42a7-7156-11e5-9921-286ed488f785
|
||||
spec:
|
||||
containers:
|
||||
- image: nginx
|
||||
imagePullPolicy: IfNotPresent
|
||||
name: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: 300m
|
||||
memory: 200Mi
|
||||
requests:
|
||||
cpu: 200m
|
||||
memory: 100Mi
|
||||
terminationMessagePath: /dev/termination-log
|
||||
volumeMounts:
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Note that our nginx container has picked up the namespace default cpu and memory resource *limits* and *requests*.
|
||||
|
||||
Let's create a pod that exceeds our allowed limits by giving it a container that requests 3 CPU cores.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl create -f docs/admin/limitrange/invalid-pod.yaml --namespace=limit-example
|
||||
Error from server: error when creating "docs/admin/limitrange/invalid-pod.yaml": Pod "invalid-pod" is forbidden: [Maximum cpu usage per Pod is 2, but limit is 3., Maximum cpu usage per Container is 2, but limit is 3.]
|
||||
{% endraw %}
|
||||
{% endhighlight %}
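
The `invalid-pod.yaml` referenced above is likewise not part of this diff; a pod that triggers this rejection
would simply declare a CPU limit above the 2-CPU maximum, roughly:

{% highlight yaml %}
apiVersion: v1
kind: Pod
metadata:
  name: invalid-pod
spec:
  containers:
  - name: kubernetes-serve-hostname
    image: gcr.io/google_containers/serve_hostname
    resources:
      limits:
        cpu: "3"
        memory: 100Mi
{% endhighlight %}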
|
||||
|
||||
Let's create a pod that falls within the allowed limit boundaries.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl create -f docs/admin/limitrange/valid-pod.yaml --namespace=limit-example
|
||||
pod "valid-pod" created
|
||||
$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
uid: 162a12aa-7157-11e5-9921-286ed488f785
|
||||
spec:
|
||||
containers:
|
||||
- image: gcr.io/google_containers/serve_hostname
|
||||
imagePullPolicy: IfNotPresent
|
||||
name: kubernetes-serve-hostname
|
||||
resources:
|
||||
limits:
|
||||
cpu: "1"
|
||||
memory: 512Mi
|
||||
requests:
|
||||
cpu: "1"
|
||||
memory: 512Mi
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Note that this pod specifies explicit resource *limits* and *requests* so it did not pick up the namespace
|
||||
default values.
|
||||
|
||||
Note: CPU resource *limits* are not enforced in the default Kubernetes setup on the physical node
that runs the container unless the administrator deploys the kubelet with the following flag:
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
$ kubelet --help
|
||||
Usage of kubelet
|
||||
....
|
||||
--cpu-cfs-quota[=false]: Enable CPU CFS quota enforcement for containers that specify CPU limits
|
||||
$ kubelet --cpu-cfs-quota=true ...
|
||||
{% endraw %}
|
||||
```
|
||||
|
||||
Step 4: Cleanup
|
||||
----------------------------
|
||||
To remove the resources used by this example, you can just delete the limit-example namespace.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl delete namespace limit-example
|
||||
namespace "limit-example" deleted
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS STATUS AGE
|
||||
default <none> Active 20m
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Summary
|
||||
----------------------------
|
||||
Cluster operators that want to restrict the amount of resources a single container or pod may consume
|
||||
are able to define allowable ranges per Kubernetes namespace. In the absence of any explicit assignments,
|
||||
the Kubernetes system is able to apply default resource *limits* and *requests* if desired in order to
|
||||
constrain the amount of resource a pod consumes on a node.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
---
|
||||
title: "Limit Range"
|
||||
---
|
||||
|
||||
Limit Range
|
||||
========================================
|
||||
By default, pods run with unbounded CPU and memory limits. This means that any pod in the
system is able to consume as much CPU and memory as is available on the node that executes it.
|
||||
|
||||
Users may want to impose restrictions on the amount of resource a single pod in the system may consume
|
||||
for a variety of reasons.
|
||||
|
||||
For example:
|
||||
|
||||
1. Each node in the cluster has 2GB of memory. The cluster operator does not want to accept pods
|
||||
that require more than 2GB of memory since no node in the cluster can support the requirement. To prevent a
|
||||
pod from being permanently unscheduled to a node, the operator instead chooses to reject pods that exceed 2GB
|
||||
of memory as part of admission control.
|
||||
2. A cluster is shared by two communities in an organization that runs production and development workloads
|
||||
respectively. Production workloads may consume up to 8GB of memory, but development workloads may consume up
|
||||
to 512MB of memory. The cluster operator creates a separate namespace for each workload, and applies limits to
|
||||
each namespace.
|
||||
3. Users may create a pod which consumes resources just below the capacity of a machine. The left over space
|
||||
may be too small to be useful, but big enough for the waste to be costly over the entire cluster. As a result,
|
||||
the cluster operator may want to set limits that a pod must consume at least 20% of the memory and cpu of their
|
||||
average node size in order to provide for more uniform scheduling and to limit waste.
|
||||
|
||||
This example demonstrates how limits can be applied to a Kubernetes namespace to control
|
||||
min/max resource limits per pod. In addition, this example demonstrates how you can
|
||||
apply default resource limits to pods in the absence of an end-user specified value.
|
||||
|
||||
See [LimitRange design doc](../../design/admission_control_limit_range) for more information. For a detailed description of the Kubernetes resource model, see [Resources](/{{page.version}}/docs/user-guide/compute-resources)
|
||||
|
||||
Step 0: Prerequisites
|
||||
-----------------------------------------
|
||||
This example requires a running Kubernetes cluster. See the [Getting Started guides](/{{page.version}}/docs/getting-started-guides/) for how to get started.
|
||||
|
||||
Change to the `<kubernetes>` directory if you're not already there.
|
||||
|
||||
Step 1: Create a namespace
|
||||
-----------------------------------------
|
||||
This example will work in a custom namespace to demonstrate the concepts involved.
|
||||
|
||||
Let's create a new namespace called limit-example:
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl create -f docs/admin/limitrange/namespace.yaml
|
||||
namespace "limit-example" created
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS STATUS AGE
|
||||
default <none> Active 5m
|
||||
limit-example <none> Active 53s
|
||||
{% endhighlight %}
|
||||
|
||||
Step 2: Apply a limit to the namespace
|
||||
-----------------------------------------
|
||||
Let's create a simple limit in our namespace.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl create -f docs/admin/limitrange/limits.yaml --namespace=limit-example
|
||||
limitrange "mylimits" created
|
||||
{% endhighlight %}
|
||||
|
||||
Let's describe the limits that we have imposed in our namespace.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl describe limits mylimits --namespace=limit-example
|
||||
Name: mylimits
|
||||
Namespace: limit-example
|
||||
Type Resource Min Max Request Limit Limit/Request
|
||||
---- -------- --- --- ------- ----- -------------
|
||||
Pod cpu 200m 2 - - -
|
||||
Pod memory 6Mi 1Gi - - -
|
||||
Container cpu 100m 2 200m 300m -
|
||||
Container memory 3Mi 1Gi 100Mi 200Mi -
|
||||
{% endhighlight %}
|
||||
|
||||
In this scenario, we have said the following:
|
||||
|
||||
1. If a max constraint is specified for a resource (2 CPU and 1Gi memory in this case), then a limit
|
||||
must be specified for that resource across all containers. Failure to specify a limit will result in
|
||||
a validation error when attempting to create the pod. Note that a default value of limit is set by
|
||||
*default* in file `limits.yaml` (300m CPU and 200Mi memory).
|
||||
2. If a min constraint is specified for a resource (100m CPU and 3Mi memory in this case), then a
|
||||
request must be specified for that resource across all containers. Failure to specify a request will
|
||||
result in a validation error when attempting to create the pod. Note that a default value of request is
|
||||
set by *defaultRequest* in file `limits.yaml` (200m CPU and 100Mi memory).
|
||||
3. For any pod, the sum of all containers' memory requests must be >= 6Mi and the sum of all containers'
memory limits must be <= 1Gi; the sum of all containers' CPU requests must be >= 200m and the sum of all
containers' CPU limits must be <= 2.
|
||||
|
||||
Step 3: Enforcing limits at point of creation
|
||||
-----------------------------------------
|
||||
The limits enumerated in a namespace are only enforced when a pod is created or updated in
|
||||
the cluster. If you change the limits to a different value range, it does not affect pods that
|
||||
were previously created in a namespace.
|
||||
|
||||
If a resource (cpu or memory) is being restricted by a limit, the user will get an error at time
|
||||
of creation explaining why.
|
||||
|
||||
Let's first spin up a replication controller that creates a single container pod to demonstrate
|
||||
how default values are applied to each pod.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
|
||||
replicationcontroller "nginx" created
|
||||
$ kubectl get pods --namespace=limit-example
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
nginx-aq0mf 1/1 Running 0 35s
|
||||
$ kubectl get pods nginx-aq0mf --namespace=limit-example -o yaml | grep resources -C 8
|
||||
{% endhighlight %}
|
||||
|
||||
{% highlight yaml %}
|
||||
resourceVersion: "127"
|
||||
selfLink: /api/v1/namespaces/limit-example/pods/nginx-aq0mf
|
||||
uid: 51be42a7-7156-11e5-9921-286ed488f785
|
||||
spec:
|
||||
containers:
|
||||
- image: nginx
|
||||
imagePullPolicy: IfNotPresent
|
||||
name: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: 300m
|
||||
memory: 200Mi
|
||||
requests:
|
||||
cpu: 200m
|
||||
memory: 100Mi
|
||||
terminationMessagePath: /dev/termination-log
|
||||
volumeMounts:
|
||||
{% endhighlight %}
|
||||
|
||||
Note that our nginx container has picked up the namespace default cpu and memory resource *limits* and *requests*.
|
||||
|
||||
Let's create a pod that exceeds our allowed limits by giving it a container that requests 3 CPU cores.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl create -f docs/admin/limitrange/invalid-pod.yaml --namespace=limit-example
|
||||
Error from server: error when creating "docs/admin/limitrange/invalid-pod.yaml": Pod "invalid-pod" is forbidden: [Maximum cpu usage per Pod is 2, but limit is 3., Maximum cpu usage per Container is 2, but limit is 3.]
|
||||
{% endhighlight %}
|
||||
|
||||
Let's create a pod that falls within the allowed limit boundaries.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl create -f docs/admin/limitrange/valid-pod.yaml --namespace=limit-example
|
||||
pod "valid-pod" created
|
||||
$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources
|
||||
{% endhighlight %}
|
||||
|
||||
{% highlight yaml %}
|
||||
uid: 162a12aa-7157-11e5-9921-286ed488f785
|
||||
spec:
|
||||
containers:
|
||||
- image: gcr.io/google_containers/serve_hostname
|
||||
imagePullPolicy: IfNotPresent
|
||||
name: kubernetes-serve-hostname
|
||||
resources:
|
||||
limits:
|
||||
cpu: "1"
|
||||
memory: 512Mi
|
||||
requests:
|
||||
cpu: "1"
|
||||
memory: 512Mi
|
||||
{% endhighlight %}
|
||||
|
||||
Note that this pod specifies explicit resource *limits* and *requests* so it did not pick up the namespace
|
||||
default values.
|
||||
|
||||
Note: CPU resource *limits* are not enforced in the default Kubernetes setup on the physical node
that runs the container unless the administrator deploys the kubelet with the following flag:
|
||||
|
||||
```
|
||||
$ kubelet --help
|
||||
Usage of kubelet
|
||||
....
|
||||
--cpu-cfs-quota[=false]: Enable CPU CFS quota enforcement for containers that specify CPU limits
|
||||
$ kubelet --cpu-cfs-quota=true ...
|
||||
```
|
||||
|
||||
Step 4: Cleanup
|
||||
----------------------------
|
||||
To remove the resources used by this example, you can just delete the limit-example namespace.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl delete namespace limit-example
|
||||
namespace "limit-example" deleted
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS STATUS AGE
|
||||
default <none> Active 20m
|
||||
{% endhighlight %}
|
||||
|
||||
Summary
|
||||
----------------------------
|
||||
Cluster operators that want to restrict the amount of resources a single container or pod may consume
|
||||
are able to define allowable ranges per Kubernetes namespace. In the absence of any explicit assignments,
|
||||
the Kubernetes system is able to apply default resource *limits* and *requests* if desired in order to
|
||||
constrain the amount of resource a pod consumes on a node.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -1,222 +1,202 @@
|
|||
---
|
||||
title: "Limit Range"
|
||||
---
|
||||
|
||||
Limit Range
|
||||
========================================
|
||||
By default, pods run with unbounded CPU and memory limits. This means that any pod in the
|
||||
system will be able to consume as much CPU and memory on the node that executes the pod.
|
||||
|
||||
Users may want to impose restrictions on the amount of resource a single pod in the system may consume
|
||||
for a variety of reasons.
|
||||
|
||||
For example:
|
||||
|
||||
1. Each node in the cluster has 2GB of memory. The cluster operator does not want to accept pods
|
||||
that require more than 2GB of memory since no node in the cluster can support the requirement. To prevent a
|
||||
pod from being permanently unscheduled to a node, the operator instead chooses to reject pods that exceed 2GB
|
||||
of memory as part of admission control.
|
||||
2. A cluster is shared by two communities in an organization that runs production and development workloads
|
||||
respectively. Production workloads may consume up to 8GB of memory, but development workloads may consume up
|
||||
to 512MB of memory. The cluster operator creates a separate namespace for each workload, and applies limits to
|
||||
each namespace.
|
||||
3. Users may create a pod which consumes resources just below the capacity of a machine. The left over space
|
||||
may be too small to be useful, but big enough for the waste to be costly over the entire cluster. As a result,
|
||||
the cluster operator may want to set limits that a pod must consume at least 20% of the memory and cpu of their
|
||||
average node size in order to provide for more uniform scheduling and to limit waste.
|
||||
|
||||
This example demonstrates how limits can be applied to a Kubernetes namespace to control
|
||||
min/max resource limits per pod. In addition, this example demonstrates how you can
|
||||
apply default resource limits to pods in the absence of an end-user specified value.
|
||||
|
||||
See [LimitRange design doc](../../design/admission_control_limit_range.html) for more information. For a detailed description of the Kubernetes resource model, see [Resources](../../../docs/user-guide/compute-resources.html)
|
||||
|
||||
Step 0: Prerequisites
|
||||
-----------------------------------------
|
||||
This example requires a running Kubernetes cluster. See the [Getting Started guides](../../../docs/getting-started-guides/) for how to get started.
|
||||
|
||||
Change to the `<kubernetes>` directory if you're not already there.
|
||||
|
||||
Step 1: Create a namespace
|
||||
-----------------------------------------
|
||||
This example will work in a custom namespace to demonstrate the concepts involved.
|
||||
|
||||
Let's create a new namespace called limit-example:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl create -f docs/admin/limitrange/namespace.yaml
|
||||
namespace "limit-example" created
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS STATUS AGE
|
||||
default <none> Active 5m
|
||||
limit-example <none> Active 53s
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Step 2: Apply a limit to the namespace
|
||||
-----------------------------------------
|
||||
Let's create a simple limit in our namespace.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl create -f docs/admin/limitrange/limits.yaml --namespace=limit-example
|
||||
limitrange "mylimits" created
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Let's describe the limits that we have imposed in our namespace.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl describe limits mylimits --namespace=limit-example
|
||||
Name: mylimits
|
||||
Namespace: limit-example
|
||||
Type Resource Min Max Request Limit Limit/Request
|
||||
---- -------- --- --- ------- ----- -------------
|
||||
Pod cpu 200m 2 - - -
|
||||
Pod memory 6Mi 1Gi - - -
|
||||
Container cpu 100m 2 200m 300m -
|
||||
Container memory 3Mi 1Gi 100Mi 200Mi -
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
In this scenario, we have said the following:
|
||||
|
||||
1. If a max constraint is specified for a resource (2 CPU and 1Gi memory in this case), then a limit
|
||||
must be specified for that resource across all containers. Failure to specify a limit will result in
|
||||
a validation error when attempting to create the pod. Note that a default value of limit is set by
|
||||
*default* in file `limits.yaml` (300m CPU and 200Mi memory).
|
||||
2. If a min constraint is specified for a resource (100m CPU and 3Mi memory in this case), then a
|
||||
request must be specified for that resource across all containers. Failure to specify a request will
|
||||
result in a validation error when attempting to create the pod. Note that a default value of request is
|
||||
set by *defaultRequest* in file `limits.yaml` (200m CPU and 100Mi memory).
|
||||
3. For any pod, the sum of all containers memory requests must be >= 6Mi and the sum of all containers
|
||||
memory limits must be <= 1Gi; the sum of all containers CPU requests must be >= 200m and the sum of all
|
||||
containers CPU limits must be <= 2.
|
||||
|
||||
Step 3: Enforcing limits at point of creation
|
||||
-----------------------------------------
|
||||
The limits enumerated in a namespace are only enforced when a pod is created or updated in
|
||||
the cluster. If you change the limits to a different value range, it does not affect pods that
|
||||
were previously created in a namespace.
|
||||
|
||||
If a resource (cpu or memory) is being restricted by a limit, the user will get an error at time
|
||||
of creation explaining why.
|
||||
|
||||
Let's first spin up a replication controller that creates a single container pod to demonstrate
|
||||
how default values are applied to each pod.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
|
||||
replicationcontroller "nginx" created
|
||||
$ kubectl get pods --namespace=limit-example
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
nginx-aq0mf 1/1 Running 0 35s
|
||||
$ kubectl get pods nginx-aq0mf --namespace=limit-example -o yaml | grep resources -C 8
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
resourceVersion: "127"
|
||||
selfLink: /api/v1/namespaces/limit-example/pods/nginx-aq0mf
|
||||
uid: 51be42a7-7156-11e5-9921-286ed488f785
|
||||
spec:
|
||||
containers:
|
||||
- image: nginx
|
||||
imagePullPolicy: IfNotPresent
|
||||
name: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: 300m
|
||||
memory: 200Mi
|
||||
requests:
|
||||
cpu: 200m
|
||||
memory: 100Mi
|
||||
terminationMessagePath: /dev/termination-log
|
||||
volumeMounts:
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Note that our nginx container has picked up the namespace default cpu and memory resource *limits* and *requests*.
|
||||
|
||||
Let's create a pod that exceeds our allowed limits by having it have a container that requests 3 cpu cores.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl create -f docs/admin/limitrange/invalid-pod.yaml --namespace=limit-example
|
||||
Error from server: error when creating "docs/admin/limitrange/invalid-pod.yaml": Pod "invalid-pod" is forbidden: [Maximum cpu usage per Pod is 2, but limit is 3., Maximum cpu usage per Container is 2, but limit is 3.]
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Let's create a pod that falls within the allowed limit boundaries.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl create -f docs/admin/limitrange/valid-pod.yaml --namespace=limit-example
|
||||
pod "valid-pod" created
|
||||
$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
uid: 162a12aa-7157-11e5-9921-286ed488f785
|
||||
spec:
|
||||
containers:
|
||||
- image: gcr.io/google_containers/serve_hostname
|
||||
imagePullPolicy: IfNotPresent
|
||||
name: kubernetes-serve-hostname
|
||||
resources:
|
||||
limits:
|
||||
cpu: "1"
|
||||
memory: 512Mi
|
||||
requests:
|
||||
cpu: "1"
|
||||
memory: 512Mi
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Note that this pod specifies explicit resource *limits* and *requests* so it did not pick up the namespace
|
||||
default values.
|
||||
|
||||
Note: The *limits* for CPU resource are not enforced in the default Kubernetes setup on the physical node
|
||||
that runs the container unless the administrator deploys the kubelet with the folllowing flag:
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
$ kubelet --help
|
||||
Usage of kubelet
|
||||
....
|
||||
--cpu-cfs-quota[=false]: Enable CPU CFS quota enforcement for containers that specify CPU limits
|
||||
$ kubelet --cpu-cfs-quota=true ...
|
||||
{% endraw %}
|
||||
```
|
||||
|
||||
Step 4: Cleanup
|
||||
----------------------------
|
||||
To remove the resources used by this example, you can just delete the limit-example namespace.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl delete namespace limit-example
|
||||
namespace "limit-example" deleted
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS STATUS AGE
|
||||
default <none> Active 20m
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Summary
|
||||
----------------------------
|
||||
Cluster operators that want to restrict the amount of resources a single container or pod may consume
|
||||
are able to define allowable ranges per Kubernetes namespace. In the absence of any explicit assignments,
|
||||
the Kubernetes system is able to apply default resource *limits* and *requests* if desired in order to
|
||||
constrain the amount of resource a pod consumes on a node.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
---
|
||||
title: "Limit Range"
|
||||
---
|
||||
|
||||
Limit Range
|
||||
========================================
|
||||
By default, pods run with unbounded CPU and memory limits. This means that any pod in the
|
||||
system will be able to consume as much CPU and memory on the node that executes the pod.
|
||||
|
||||
Users may want to impose restrictions on the amount of resource a single pod in the system may consume
|
||||
for a variety of reasons.
|
||||
|
||||
For example:
|
||||
|
||||
1. Each node in the cluster has 2GB of memory. The cluster operator does not want to accept pods
|
||||
that require more than 2GB of memory since no node in the cluster can support the requirement. To prevent a
|
||||
pod from being permanently unscheduled to a node, the operator instead chooses to reject pods that exceed 2GB
|
||||
of memory as part of admission control.
|
||||
2. A cluster is shared by two communities in an organization that runs production and development workloads
|
||||
respectively. Production workloads may consume up to 8GB of memory, but development workloads may consume up
|
||||
to 512MB of memory. The cluster operator creates a separate namespace for each workload, and applies limits to
|
||||
each namespace.
|
||||
3. Users may create a pod which consumes resources just below the capacity of a machine. The left over space
|
||||
may be too small to be useful, but big enough for the waste to be costly over the entire cluster. As a result,
|
||||
the cluster operator may want to set limits that a pod must consume at least 20% of the memory and cpu of their
|
||||
average node size in order to provide for more uniform scheduling and to limit waste.
|
||||
|
||||
This example demonstrates how limits can be applied to a Kubernetes namespace to control
|
||||
min/max resource limits per pod. In addition, this example demonstrates how you can
|
||||
apply default resource limits to pods in the absence of an end-user specified value.
|
||||
|
||||
See [LimitRange design doc](../../design/admission_control_limit_range) for more information. For a detailed description of the Kubernetes resource model, see [Resources](/{{page.version}}/docs/user-guide/compute-resources)
|
||||
|
||||
Step 0: Prerequisites
|
||||
-----------------------------------------
|
||||
This example requires a running Kubernetes cluster. See the [Getting Started guides](/{{page.version}}/docs/getting-started-guides/) for how to get started.
|
||||
|
||||
Change to the `<kubernetes>` directory if you're not already there.
|
||||
|
||||
Step 1: Create a namespace
|
||||
-----------------------------------------
|
||||
This example will work in a custom namespace to demonstrate the concepts involved.
|
||||
|
||||
Let's create a new namespace called limit-example:
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl create -f docs/admin/limitrange/namespace.yaml
|
||||
namespace "limit-example" created
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS STATUS AGE
|
||||
default <none> Active 5m
|
||||
limit-example <none> Active 53s
|
||||
{% endhighlight %}
|
||||
|
||||
Step 2: Apply a limit to the namespace
|
||||
-----------------------------------------
|
||||
Let's create a simple limit in our namespace.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl create -f docs/admin/limitrange/limits.yaml --namespace=limit-example
|
||||
limitrange "mylimits" created
|
||||
{% endhighlight %}
|
||||
|
||||
Let's describe the limits that we have imposed in our namespace.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl describe limits mylimits --namespace=limit-example
|
||||
Name: mylimits
|
||||
Namespace: limit-example
|
||||
Type Resource Min Max Request Limit Limit/Request
|
||||
---- -------- --- --- ------- ----- -------------
|
||||
Pod cpu 200m 2 - - -
|
||||
Pod memory 6Mi 1Gi - - -
|
||||
Container cpu 100m 2 200m 300m -
|
||||
Container memory 3Mi 1Gi 100Mi 200Mi -
|
||||
{% endhighlight %}
|
||||
|
||||
In this scenario, we have said the following:
|
||||
|
||||
1. If a max constraint is specified for a resource (2 CPU and 1Gi memory in this case), then a limit
|
||||
must be specified for that resource across all containers. Failure to specify a limit will result in
|
||||
a validation error when attempting to create the pod. Note that a default value of limit is set by
|
||||
*default* in file `limits.yaml` (300m CPU and 200Mi memory).
|
||||
2. If a min constraint is specified for a resource (100m CPU and 3Mi memory in this case), then a
|
||||
request must be specified for that resource across all containers. Failure to specify a request will
|
||||
result in a validation error when attempting to create the pod. Note that a default value of request is
|
||||
set by *defaultRequest* in file `limits.yaml` (200m CPU and 100Mi memory).
|
||||
3. For any pod, the sum of all containers memory requests must be >= 6Mi and the sum of all containers
|
||||
memory limits must be <= 1Gi; the sum of all containers CPU requests must be >= 200m and the sum of all
|
||||
containers CPU limits must be <= 2.
|
||||
|
||||
Step 3: Enforcing limits at point of creation
|
||||
-----------------------------------------
|
||||
The limits enumerated in a namespace are only enforced when a pod is created or updated in
|
||||
the cluster. If you change the limits to a different value range, it does not affect pods that
|
||||
were previously created in a namespace.
|
||||
|
||||
If a resource (cpu or memory) is being restricted by a limit, the user will get an error at the time
|
||||
of creation explaining why.
|
||||
|
||||
Let's first spin up a replication controller that creates a single container pod to demonstrate
|
||||
how default values are applied to each pod.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
|
||||
replicationcontroller "nginx" created
|
||||
$ kubectl get pods --namespace=limit-example
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
nginx-aq0mf 1/1 Running 0 35s
|
||||
$ kubectl get pods nginx-aq0mf --namespace=limit-example -o yaml | grep resources -C 8
|
||||
{% endhighlight %}
|
||||
|
||||
{% highlight yaml %}
|
||||
resourceVersion: "127"
|
||||
selfLink: /api/v1/namespaces/limit-example/pods/nginx-aq0mf
|
||||
uid: 51be42a7-7156-11e5-9921-286ed488f785
|
||||
spec:
|
||||
containers:
|
||||
- image: nginx
|
||||
imagePullPolicy: IfNotPresent
|
||||
name: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: 300m
|
||||
memory: 200Mi
|
||||
requests:
|
||||
cpu: 200m
|
||||
memory: 100Mi
|
||||
terminationMessagePath: /dev/termination-log
|
||||
volumeMounts:
|
||||
{% endhighlight %}
|
||||
|
||||
Note that our nginx container has picked up the namespace default cpu and memory resource *limits* and *requests*.
|
||||
|
||||
Let's create a pod that exceeds our allowed limits by giving it a container that requests 3 CPU cores.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl create -f docs/admin/limitrange/invalid-pod.yaml --namespace=limit-example
|
||||
Error from server: error when creating "docs/admin/limitrange/invalid-pod.yaml": Pod "invalid-pod" is forbidden: [Maximum cpu usage per Pod is 2, but limit is 3., Maximum cpu usage per Container is 2, but limit is 3.]
|
||||
{% endhighlight %}
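
The `invalid-pod.yaml` file is not shown here; the essential part is a container whose CPU limit of 3 exceeds the maximum of 2 allowed per pod and per container. A sketch along those lines (the container name, image, and memory value are illustrative assumptions; the pod name comes from the error above) might be:

{% highlight yaml %}
apiVersion: v1
kind: Pod
metadata:
  name: invalid-pod
spec:
  containers:
  - name: kubernetes-serve-hostname          # illustrative container name
    image: gcr.io/google_containers/serve_hostname
    resources:
      limits:
        cpu: "3"        # exceeds the namespace maximum of 2 CPU, so admission rejects the pod
        memory: 100Mi
{% endhighlight %}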
|
||||
|
||||
Let's create a pod that falls within the allowed limit boundaries.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl create -f docs/admin/limitrange/valid-pod.yaml --namespace=limit-example
|
||||
pod "valid-pod" created
|
||||
$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources
|
||||
{% endhighlight %}
|
||||
|
||||
{% highlight yaml %}
|
||||
uid: 162a12aa-7157-11e5-9921-286ed488f785
|
||||
spec:
|
||||
containers:
|
||||
- image: gcr.io/google_containers/serve_hostname
|
||||
imagePullPolicy: IfNotPresent
|
||||
name: kubernetes-serve-hostname
|
||||
resources:
|
||||
limits:
|
||||
cpu: "1"
|
||||
memory: 512Mi
|
||||
requests:
|
||||
cpu: "1"
|
||||
memory: 512Mi
|
||||
{% endhighlight %}
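
The `valid-pod.yaml` file is likewise not reproduced here; judging from the output above, its spec presumably looks something like this sketch:

{% highlight yaml %}
apiVersion: v1
kind: Pod
metadata:
  name: valid-pod
spec:
  containers:
  - name: kubernetes-serve-hostname
    image: gcr.io/google_containers/serve_hostname
    resources:
      limits:
        cpu: "1"        # within the 100m-2 container range
        memory: 512Mi   # within the 3Mi-1Gi container range
      requests:
        cpu: "1"
        memory: 512Mi
{% endhighlight %}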
|
||||
|
||||
Note that this pod specifies explicit resource *limits* and *requests*, so it did not pick up the namespace
|
||||
default values.
|
||||
|
||||
Note: The *limits* for the CPU resource are not enforced in the default Kubernetes setup on the physical node
|
||||
that runs the container unless the administrator deploys the kubelet with the following flag:
|
||||
|
||||
```
|
||||
$ kubelet --help
|
||||
Usage of kubelet
|
||||
....
|
||||
--cpu-cfs-quota[=false]: Enable CPU CFS quota enforcement for containers that specify CPU limits
|
||||
$ kubelet --cpu-cfs-quota=true ...
|
||||
```
|
||||
|
||||
Step 4: Cleanup
|
||||
----------------------------
|
||||
To remove the resources used by this example, you can just delete the limit-example namespace.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl delete namespace limit-example
|
||||
namespace "limit-example" deleted
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS STATUS AGE
|
||||
default <none> Active 20m
|
||||
{% endhighlight %}
|
||||
|
||||
Summary
|
||||
----------------------------
|
||||
Cluster operators that want to restrict the amount of resources a single container or pod may consume
|
||||
are able to define allowable ranges per Kubernetes namespace. In the absence of any explicit assignments,
|
||||
the Kubernetes system is able to apply default resource *limits* and *requests* if desired in order to
|
||||
constrain the amount of resources a pod consumes on a node.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -1,6 +1,5 @@
|
|||
---
|
||||
title: "Considerations for running multiple Kubernetes clusters"
|
||||
section: guides
|
||||
---
|
||||
|
||||
You may want to set up multiple Kubernetes clusters, both to
|
||||
|
@ -9,13 +8,13 @@ This document describes some of the issues to consider when making a decision ab
|
|||
|
||||
Note that at present,
|
||||
Kubernetes does not offer a mechanism to aggregate multiple clusters into a single virtual cluster. However,
|
||||
we [plan to do this in the future](../proposals/federation.html).
|
||||
we [plan to do this in the future](../proposals/federation).
|
||||
|
||||
## Scope of a single cluster
|
||||
|
||||
On IaaS providers such as Google Compute Engine or Amazon Web Services, a VM exists in a
|
||||
[zone](https://cloud.google.com/compute/docs/zones) or [availability
|
||||
zone](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html).
|
||||
zone](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones).
|
||||
We suggest that all the VMs in a Kubernetes cluster should be in the same availability zone, because:
|
||||
- compared to having a single global Kubernetes cluster, there are fewer single points of failure
|
||||
- compared to a cluster that spans availability zones, it is easier to reason about the availability properties of a
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Namespaces"
|
||||
---
|
||||
|
||||
|
||||
# Namespaces
|
||||
|
||||
## Abstract
|
||||
|
||||
A Namespace is a mechanism to partition resources created by users into
|
||||
|
@ -50,12 +46,12 @@ Look [here](namespaces/) for an in depth example of namespaces.
|
|||
You can list the current namespaces in a cluster using:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS STATUS
|
||||
default <none> Active
|
||||
kube-system <none> Active
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Kubernetes starts with two initial namespaces:
|
||||
|
@ -65,15 +61,15 @@ Kubernetes starts with two initial namespaces:
|
|||
You can also get the summary of a specific namespace using:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl get namespaces <name>
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Or you can get detailed information with:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl describe namespaces <name>
|
||||
Name: default
|
||||
Labels: <none>
|
||||
|
@ -85,7 +81,7 @@ Resource Limits
|
|||
Type Resource Min Max Default
|
||||
---- -------- --- --- ---
|
||||
Container cpu - - 100m
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Note that these details show both resource quota (if present) and resource limit ranges.
|
||||
|
@ -96,7 +92,7 @@ to define *Hard* resource usage limits that a *Namespace* may consume.
|
|||
A limit range defines min/max constraints on the amount of resources a single entity can consume in
|
||||
a *Namespace*.
|
||||
|
||||
See [Admission control: Limit Range](../design/admission_control_limit_range.html)
|
||||
See [Admission control: Limit Range](../design/admission_control_limit_range)
|
||||
|
||||
A namespace can be in one of two phases:
|
||||
* `Active` the namespace is in use
|
||||
|
@ -109,12 +105,12 @@ See the [design doc](../design/namespaces.html#phases) for more details.
|
|||
To create a new namespace, first create a new YAML file called `my-namespace.yaml` with the contents:
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: <insert-namespace-name-here>
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Note that the name of your namespace must be a DNS-compatible label.
|
||||
|
@ -124,24 +120,24 @@ More information on the `finalizers` field can be found in the namespace [design
|
|||
Then run:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl create -f ./my-namespace.yaml
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Working in namespaces
|
||||
|
||||
See [Setting the namespace for a request](../../docs/user-guide/namespaces.html#setting-the-namespace-for-a-request)
|
||||
and [Setting the namespace preference](../../docs/user-guide/namespaces.html#setting-the-namespace-preference).
|
||||
See [Setting the namespace for a request](/{{page.version}}/docs/user-guide/namespaces.html#setting-the-namespace-for-a-request)
|
||||
and [Setting the namespace preference](/{{page.version}}/docs/user-guide/namespaces.html#setting-the-namespace-preference).
|
||||
|
||||
### Deleting a namespace
|
||||
|
||||
You can delete a namespace with
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl delete namespaces <insert-some-namespace-name>
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
**WARNING, this deletes _everything_ under the namespace!**
|
||||
|
@ -150,7 +146,7 @@ This delete is asynchronous, so for a time you will see the namespace in the `Te
|
|||
|
||||
## Namespaces and DNS
|
||||
|
||||
When you create a [Service](../../docs/user-guide/services.html), it creates a corresponding [DNS entry](dns.html).
|
||||
When you create a [Service](/{{page.version}}/docs/user-guide/services), it creates a corresponding [DNS entry](dns).
|
||||
This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means
|
||||
that if a container just uses `<service-name>` it will resolve to the service which
|
||||
is local to a namespace. This is useful for using the same configuration across
|
||||
|
@ -160,7 +156,7 @@ across namespaces, you need to use the fully qualified domain name (FQDN).
|
|||
## Design
|
||||
|
||||
Details of the design of namespaces in Kubernetes, including a [detailed example](../design/namespaces.html#example-openshift-origin-managing-a-kubernetes-namespace)
|
||||
can be found in the [namespaces design doc](../design/namespaces.html)
|
||||
can be found in the [namespaces design doc](../design/namespaces)
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -1,283 +1,252 @@
|
|||
---
|
||||
title: "Kubernetes Namespaces"
|
||||
section: guides
|
||||
---
|
||||
|
||||
Kubernetes _[namespaces](../../../docs/admin/namespaces.html)_ help different projects, teams, or customers to share a Kubernetes cluster.
|
||||
|
||||
It does this by providing the following:
|
||||
|
||||
1. A scope for [Names](../../user-guide/identifiers.html).
|
||||
2. A mechanism to attach authorization and policy to a subsection of the cluster.
|
||||
|
||||
Use of multiple namespaces is optional.
|
||||
|
||||
This example demonstrates how to use Kubernetes namespaces to subdivide your cluster.
|
||||
|
||||
### Step Zero: Prerequisites
|
||||
|
||||
This example assumes the following:
|
||||
|
||||
1. You have an [existing Kubernetes cluster](../../getting-started-guides/).
|
||||
2. You have a basic understanding of Kubernetes _[pods](../../user-guide/pods.html)_, _[services](../../user-guide/services.html)_, and _[replication controllers](../../user-guide/replication-controller.html)_.
|
||||
|
||||
### Step One: Understand the default namespace
|
||||
|
||||
By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of pods,
|
||||
services, and replication controllers used by the cluster.
|
||||
|
||||
Assuming you have a fresh cluster, you can introspect the available namespaces by doing the following:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS
|
||||
default <none>
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
### Step Two: Create new namespaces
|
||||
|
||||
For this exercise, we will create two additional Kubernetes namespaces to hold our content.
|
||||
|
||||
Let's imagine a scenario where an organization is using a shared Kubernetes cluster for development and production use cases.
|
||||
|
||||
The development team would like to maintain a space in the cluster where they can get a view on the list of pods, services, and replication controllers
|
||||
they use to build and run their application. In this space, Kubernetes resources come and go, and the restrictions on who can or cannot modify resources
|
||||
are relaxed to enable agile development.
|
||||
|
||||
The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of
|
||||
pods, services, and replication controllers that run the production site.
|
||||
|
||||
One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production.
|
||||
|
||||
Let's create two new namespaces to hold our work.
|
||||
|
||||
Use the file [`namespace-dev.json`](namespace-dev.json) which describes a development namespace:
|
||||
|
||||
<!-- BEGIN MUNGE: EXAMPLE namespace-dev.json -->
|
||||
|
||||
{% highlight json %}
|
||||
{% raw %}
|
||||
{
|
||||
"kind": "Namespace",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "development",
|
||||
"labels": {
|
||||
"name": "development"
|
||||
}
|
||||
}
|
||||
}
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
[Download example](namespace-dev.json)
|
||||
<!-- END MUNGE: EXAMPLE namespace-dev.json -->
|
||||
|
||||
Create the development namespace using kubectl.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl create -f docs/admin/namespaces/namespace-dev.json
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
And then let's create the production namespace using kubectl.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl create -f docs/admin/namespaces/namespace-prod.json
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
To be sure things are right, let's list all of the namespaces in our cluster.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS STATUS
|
||||
default <none> Active
|
||||
development name=development Active
|
||||
production name=production Active
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
|
||||
### Step Three: Create pods in each namespace
|
||||
|
||||
A Kubernetes namespace provides the scope for pods, services, and replication controllers in the cluster.
|
||||
|
||||
Users interacting with one namespace do not see the content in another namespace.
|
||||
|
||||
To demonstrate this, let's spin up a simple replication controller and pod in the development namespace.
|
||||
|
||||
We first check the current context:
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
certificate-authority-data: REDACTED
|
||||
server: https://130.211.122.180
|
||||
name: lithe-cocoa-92103_kubernetes
|
||||
contexts:
|
||||
- context:
|
||||
cluster: lithe-cocoa-92103_kubernetes
|
||||
user: lithe-cocoa-92103_kubernetes
|
||||
name: lithe-cocoa-92103_kubernetes
|
||||
current-context: lithe-cocoa-92103_kubernetes
|
||||
kind: Config
|
||||
preferences: {}
|
||||
users:
|
||||
- name: lithe-cocoa-92103_kubernetes
|
||||
user:
|
||||
client-certificate-data: REDACTED
|
||||
client-key-data: REDACTED
|
||||
token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
|
||||
- name: lithe-cocoa-92103_kubernetes-basic-auth
|
||||
user:
|
||||
password: h5M0FtUUIflBSdI7
|
||||
username: admin
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
The next step is to define a context for the kubectl client to work in each namespace. The values of the "cluster" and "user" fields are copied from the current context.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
|
||||
$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
The above commands created two request contexts that you can switch between, depending on which namespace you
|
||||
wish to work against.
|
||||
|
||||
Let's switch to operate in the development namespace.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl config use-context dev
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
You can verify your current context by doing the following:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl config view
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
certificate-authority-data: REDACTED
|
||||
server: https://130.211.122.180
|
||||
name: lithe-cocoa-92103_kubernetes
|
||||
contexts:
|
||||
- context:
|
||||
cluster: lithe-cocoa-92103_kubernetes
|
||||
namespace: development
|
||||
user: lithe-cocoa-92103_kubernetes
|
||||
name: dev
|
||||
- context:
|
||||
cluster: lithe-cocoa-92103_kubernetes
|
||||
user: lithe-cocoa-92103_kubernetes
|
||||
name: lithe-cocoa-92103_kubernetes
|
||||
- context:
|
||||
cluster: lithe-cocoa-92103_kubernetes
|
||||
namespace: production
|
||||
user: lithe-cocoa-92103_kubernetes
|
||||
name: prod
|
||||
current-context: dev
|
||||
kind: Config
|
||||
preferences: {}
|
||||
users:
|
||||
- name: lithe-cocoa-92103_kubernetes
|
||||
user:
|
||||
client-certificate-data: REDACTED
|
||||
client-key-data: REDACTED
|
||||
token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
|
||||
- name: lithe-cocoa-92103_kubernetes-basic-auth
|
||||
user:
|
||||
password: h5M0FtUUIflBSdI7
|
||||
username: admin
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.
|
||||
|
||||
Let's create some content.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
We have just created a replication controller with a replica count of 2 that runs a pod called snowflake, using a basic container that simply serves the hostname.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
snowflake snowflake kubernetes/serve_hostname run=snowflake 2
|
||||
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
snowflake-8w0qn 1/1 Running 0 22s
|
||||
snowflake-jrpzb 1/1 Running 0 22s
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
This is great: developers are able to do what they want without having to worry about affecting content in the production namespace.
|
||||
|
||||
Let's switch to the production namespace and show how resources in one namespace are hidden from the other.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl config use-context prod
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
The production namespace should be empty.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Production likes to run cattle, so let's create some cattle pods.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5
|
||||
|
||||
$ kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
cattle cattle kubernetes/serve_hostname run=cattle 5
|
||||
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
cattle-97rva 1/1 Running 0 12s
|
||||
cattle-i9ojn 1/1 Running 0 12s
|
||||
cattle-qj3yv 1/1 Running 0 12s
|
||||
cattle-yc7vn 1/1 Running 0 12s
|
||||
cattle-zz7ea 1/1 Running 0 12s
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.
|
||||
|
||||
As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different
|
||||
---
|
||||
title: "Kubernetes Namespaces"
|
||||
---
|
||||
|
||||
Kubernetes _[namespaces](/{{page.version}}/docs/admin/namespaces)_ help different projects, teams, or customers to share a Kubernetes cluster.
|
||||
|
||||
It does this by providing the following:
|
||||
|
||||
1. A scope for [Names](../../user-guide/identifiers).
|
||||
2. A mechanism to attach authorization and policy to a subsection of the cluster.
|
||||
|
||||
Use of multiple namespaces is optional.
|
||||
|
||||
This example demonstrates how to use Kubernetes namespaces to subdivide your cluster.
|
||||
|
||||
### Step Zero: Prerequisites
|
||||
|
||||
This example assumes the following:
|
||||
|
||||
1. You have an [existing Kubernetes cluster](../../getting-started-guides/).
|
||||
2. You have a basic understanding of Kubernetes _[pods](../../user-guide/pods)_, _[services](../../user-guide/services)_, and _[replication controllers](../../user-guide/replication-controller)_.
|
||||
|
||||
### Step One: Understand the default namespace
|
||||
|
||||
By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of pods,
|
||||
services, and replication controllers used by the cluster.
|
||||
|
||||
Assuming you have a fresh cluster, you can introspect the available namespaces by doing the following:
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS
|
||||
default <none>
|
||||
{% endhighlight %}
|
||||
|
||||
### Step Two: Create new namespaces
|
||||
|
||||
For this exercise, we will create two additional Kubernetes namespaces to hold our content.
|
||||
|
||||
Let's imagine a scenario where an organization is using a shared Kubernetes cluster for development and production use cases.
|
||||
|
||||
The development team would like to maintain a space in the cluster where they can get a view on the list of pods, services, and replication controllers
|
||||
they use to build and run their application. In this space, Kubernetes resources come and go, and the restrictions on who can or cannot modify resources
|
||||
are relaxed to enable agile development.
|
||||
|
||||
The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of
|
||||
pods, services, and replication controllers that run the production site.
|
||||
|
||||
One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production.
|
||||
|
||||
Let's create two new namespaces to hold our work.
|
||||
|
||||
Use the file [`namespace-dev.json`](namespace-dev.json) which describes a development namespace:
|
||||
|
||||
<!-- BEGIN MUNGE: EXAMPLE namespace-dev.json -->
|
||||
|
||||
{% highlight json %}
|
||||
{
|
||||
"kind": "Namespace",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "development",
|
||||
"labels": {
|
||||
"name": "development"
|
||||
}
|
||||
}
|
||||
}
|
||||
{% endhighlight %}
|
||||
|
||||
[Download example](namespace-dev.json)
|
||||
<!-- END MUNGE: EXAMPLE namespace-dev.json -->
|
||||
|
||||
Create the development namespace using kubectl.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl create -f docs/admin/namespaces/namespace-dev.json
|
||||
{% endhighlight %}
|
||||
|
||||
And then let's create the production namespace using kubectl.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl create -f docs/admin/namespaces/namespace-prod.json
|
||||
{% endhighlight %}
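
The `namespace-prod.json` file presumably mirrors `namespace-dev.json`, with the name and label changed to `production` (the label shows up in the `kubectl get namespaces` output below). Expressed in YAML, the equivalent object would be:

{% highlight yaml %}
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    name: production
{% endhighlight %}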
|
||||
|
||||
To be sure things are right, let's list all of the namespaces in our cluster.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS STATUS
|
||||
default <none> Active
|
||||
development name=development Active
|
||||
production name=production Active
|
||||
{% endhighlight %}
|
||||
|
||||
|
||||
### Step Three: Create pods in each namespace
|
||||
|
||||
A Kubernetes namespace provides the scope for pods, services, and replication controllers in the cluster.
|
||||
|
||||
Users interacting with one namespace do not see the content in another namespace.
|
||||
|
||||
To demonstrate this, let's spin up a simple replication controller and pod in the development namespace.
|
||||
|
||||
We first check the current context:
|
||||
|
||||
{% highlight yaml %}
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
certificate-authority-data: REDACTED
|
||||
server: https://130.211.122.180
|
||||
name: lithe-cocoa-92103_kubernetes
|
||||
contexts:
|
||||
- context:
|
||||
cluster: lithe-cocoa-92103_kubernetes
|
||||
user: lithe-cocoa-92103_kubernetes
|
||||
name: lithe-cocoa-92103_kubernetes
|
||||
current-context: lithe-cocoa-92103_kubernetes
|
||||
kind: Config
|
||||
preferences: {}
|
||||
users:
|
||||
- name: lithe-cocoa-92103_kubernetes
|
||||
user:
|
||||
client-certificate-data: REDACTED
|
||||
client-key-data: REDACTED
|
||||
token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
|
||||
- name: lithe-cocoa-92103_kubernetes-basic-auth
|
||||
user:
|
||||
password: h5M0FtUUIflBSdI7
|
||||
username: admin
|
||||
{% endhighlight %}
|
||||
|
||||
The next step is to define a context for the kubectl client to work in each namespace. The values of the "cluster" and "user" fields are copied from the current context.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
|
||||
$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
|
||||
{% endhighlight %}
|
||||
|
||||
The above commands created two request contexts that you can switch between, depending on which namespace you
|
||||
wish to work against.
|
||||
|
||||
Let's switch to operate in the development namespace.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl config use-context dev
|
||||
{% endhighlight %}
|
||||
|
||||
You can verify your current context by doing the following:
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl config view
|
||||
{% endhighlight %}
|
||||
|
||||
{% highlight yaml %}
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
certificate-authority-data: REDACTED
|
||||
server: https://130.211.122.180
|
||||
name: lithe-cocoa-92103_kubernetes
|
||||
contexts:
|
||||
- context:
|
||||
cluster: lithe-cocoa-92103_kubernetes
|
||||
namespace: development
|
||||
user: lithe-cocoa-92103_kubernetes
|
||||
name: dev
|
||||
- context:
|
||||
cluster: lithe-cocoa-92103_kubernetes
|
||||
user: lithe-cocoa-92103_kubernetes
|
||||
name: lithe-cocoa-92103_kubernetes
|
||||
- context:
|
||||
cluster: lithe-cocoa-92103_kubernetes
|
||||
namespace: production
|
||||
user: lithe-cocoa-92103_kubernetes
|
||||
name: prod
|
||||
current-context: dev
|
||||
kind: Config
|
||||
preferences: {}
|
||||
users:
|
||||
- name: lithe-cocoa-92103_kubernetes
|
||||
user:
|
||||
client-certificate-data: REDACTED
|
||||
client-key-data: REDACTED
|
||||
token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
|
||||
- name: lithe-cocoa-92103_kubernetes-basic-auth
|
||||
user:
|
||||
password: h5M0FtUUIflBSdI7
|
||||
username: admin
|
||||
{% endhighlight %}
|
||||
|
||||
At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.
|
||||
|
||||
Let's create some content.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
|
||||
{% endhighlight %}
|
||||
|
||||
We have just created a replication controller with a replica count of 2 that runs a pod called snowflake, using a basic container that simply serves the hostname.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
snowflake snowflake kubernetes/serve_hostname run=snowflake 2
|
||||
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
snowflake-8w0qn 1/1 Running 0 22s
|
||||
snowflake-jrpzb 1/1 Running 0 22s
|
||||
{% endhighlight %}
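
If you prefer declarative manifests over `kubectl run`, a roughly equivalent replication controller (a sketch, not the exact object kubectl generates) could be created with `kubectl create -f` while the `dev` context is active:

{% highlight yaml %}
apiVersion: v1
kind: ReplicationController
metadata:
  name: snowflake
spec:
  replicas: 2
  selector:
    run: snowflake            # matches the selector shown by `kubectl get rc`
  template:
    metadata:
      labels:
        run: snowflake
    spec:
      containers:
      - name: snowflake
        image: kubernetes/serve_hostname
{% endhighlight %}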
|
||||
|
||||
This is great: developers are able to do what they want without having to worry about affecting content in the production namespace.
|
||||
|
||||
Let's switch to the production namespace and show how resources in one namespace are hidden from the other.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl config use-context prod
|
||||
{% endhighlight %}
|
||||
|
||||
The production namespace should be empty.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
{% endhighlight %}
|
||||
|
||||
Production likes to run cattle, so let's create some cattle pods.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5
|
||||
|
||||
$ kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
cattle cattle kubernetes/serve_hostname run=cattle 5
|
||||
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
cattle-97rva 1/1 Running 0 12s
|
||||
cattle-i9ojn 1/1 Running 0 12s
|
||||
cattle-qj3yv 1/1 Running 0 12s
|
||||
cattle-yc7vn 1/1 Running 0 12s
|
||||
cattle-zz7ea 1/1 Running 0 12s
|
||||
{% endhighlight %}
|
||||
|
||||
At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.
|
||||
|
||||
As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different
|
||||
authorization rules for each namespace.
|
|
@ -1,15 +1,11 @@
|
|||
---
|
||||
title: "Kubernetes Namespaces"
|
||||
---
|
||||
|
||||
|
||||
## Kubernetes Namespaces
|
||||
|
||||
Kubernetes _[namespaces](../../../docs/admin/namespaces.html)_ help different projects, teams, or customers to share a Kubernetes cluster.
|
||||
Kubernetes _[namespaces](/{{page.version}}/docs/admin/namespaces)_ help different projects, teams, or customers to share a Kubernetes cluster.
|
||||
|
||||
It does this by providing the following:
|
||||
|
||||
1. A scope for [Names](../../user-guide/identifiers.html).
|
||||
1. A scope for [Names](../../user-guide/identifiers).
|
||||
2. A mechanism to attach authorization and policy to a subsection of the cluster.
|
||||
|
||||
Use of multiple namespaces is optional.
|
||||
|
@ -21,7 +17,7 @@ This example demonstrates how to use Kubernetes namespaces to subdivide your clu
|
|||
This example assumes the following:
|
||||
|
||||
1. You have an [existing Kubernetes cluster](../../getting-started-guides/).
|
||||
2. You have a basic understanding of Kubernetes _[pods](../../user-guide/pods.html)_, _[services](../../user-guide/services.html)_, and _[replication controllers](../../user-guide/replication-controller.html)_.
|
||||
2. You have a basic understanding of Kubernetes _[pods](../../user-guide/pods)_, _[services](../../user-guide/services)_, and _[replication controllers](../../user-guide/replication-controller)_.
|
||||
|
||||
### Step One: Understand the default namespace
|
||||
|
||||
|
@ -31,11 +27,11 @@ services, and replication controllers used by the cluster.
|
|||
Assuming you have a fresh cluster, you can introspect the available namespaces by doing the following:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS
|
||||
default <none>
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Step Two: Create new namespaces
|
||||
|
@ -60,7 +56,7 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo
|
|||
<!-- BEGIN MUNGE: EXAMPLE namespace-dev.json -->
|
||||
|
||||
{% highlight json %}
|
||||
{% raw %}
|
||||
|
||||
{
|
||||
"kind": "Namespace",
|
||||
"apiVersion": "v1",
|
||||
|
@ -71,7 +67,7 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo
|
|||
}
|
||||
}
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
[Download example](namespace-dev.json)
|
||||
|
@ -80,29 +76,29 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo
|
|||
Create the development namespace using kubectl.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl create -f docs/admin/namespaces/namespace-dev.json
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
And then let's create the production namespace using kubectl.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl create -f docs/admin/namespaces/namespace-prod.json
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
To be sure things are right, let's list all of the namespaces in our cluster.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS STATUS
|
||||
default <none> Active
|
||||
development name=development Active
|
||||
production name=production Active
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
|
||||
|
@ -117,7 +113,7 @@ To demonstrate this, let's spin up a simple replication controller and pod in th
|
|||
We first check the current context:
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
|
@ -142,16 +138,16 @@ users:
|
|||
user:
|
||||
password: h5M0FtUUIflBSdI7
|
||||
username: admin
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
The next step is to define a context for the kubectl client to work in each namespace. The values of the "cluster" and "user" fields are copied from the current context.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
|
||||
$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
The above commands created two request contexts that you can switch between, depending on which namespace you
|
||||
|
@ -160,21 +156,21 @@ wish to work against.
|
|||
Let's switch to operate in the development namespace.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl config use-context dev
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
You can verify your current context by doing the following:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl config view
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
|
@ -209,7 +205,7 @@ users:
|
|||
user:
|
||||
password: h5M0FtUUIflBSdI7
|
||||
username: admin
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.
|
||||
|
@ -217,15 +213,15 @@ At this point, all requests we make to the Kubernetes cluster from the command l
|
|||
Let's create some content.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
We have just created a replication controller with a replica count of 2 that runs a pod called snowflake, using a basic container that simply serves the hostname.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
snowflake snowflake kubernetes/serve_hostname run=snowflake 2
|
||||
|
@ -234,7 +230,7 @@ $ kubectl get pods
|
|||
NAME READY STATUS RESTARTS AGE
|
||||
snowflake-8w0qn 1/1 Running 0 22s
|
||||
snowflake-jrpzb 1/1 Running 0 22s
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
This is great: developers are able to do what they want without having to worry about affecting content in the production namespace.
|
||||
|
@ -242,27 +238,27 @@ And this is great, developers are able to do what they want, and they do not hav
|
|||
Let's switch to the production namespace and show how resources in one namespace are hidden from the other.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl config use-context prod
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
The production namespace should be empty.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Production likes to run cattle, so let's create some cattle pods.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5
|
||||
|
||||
$ kubectl get rc
|
||||
|
@ -276,7 +272,7 @@ cattle-i9ojn 1/1 Running 0 12s
|
|||
cattle-qj3yv 1/1 Running 0 12s
|
||||
cattle-yc7vn 1/1 Running 0 12s
|
||||
cattle-zz7ea 1/1 Running 0 12s
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.
|
||||
|
|
|
@ -1,35 +1,15 @@
|
|||
---
|
||||
title: "Networking in Kubernetes"
|
||||
---
|
||||
|
||||
|
||||
# Networking in Kubernetes
|
||||
|
||||
**Table of Contents**
|
||||
<!-- BEGIN MUNGE: GENERATED_TOC -->
|
||||
|
||||
- [Networking in Kubernetes](#networking-in-kubernetes)
|
||||
- [Summary](#summary)
|
||||
- [Docker model](#docker-model)
|
||||
- [Kubernetes model](#kubernetes-model)
|
||||
- [How to achieve this](#how-to-achieve-this)
|
||||
- [Google Compute Engine (GCE)](#google-compute-engine-gce)
|
||||
- [L2 networks and linux bridging](#l2-networks-and-linux-bridging)
|
||||
- [Flannel](#flannel)
|
||||
- [OpenVSwitch](#openvswitch)
|
||||
- [Weave](#weave)
|
||||
- [Calico](#calico)
|
||||
- [Other reading](#other-reading)
|
||||
|
||||
<!-- END MUNGE: GENERATED_TOC -->
|
||||
{% include pagetoc.html %}
|
||||
|
||||
Kubernetes approaches networking somewhat differently than Docker does by
|
||||
default. There are 4 distinct networking problems to solve:
|
||||
1. Highly-coupled container-to-container communications: this is solved by
|
||||
[pods](../user-guide/pods.html) and `localhost` communications.
|
||||
[pods](../user-guide/pods) and `localhost` communications.
|
||||
2. Pod-to-Pod communications: this is the primary focus of this document.
|
||||
3. Pod-to-Service communications: this is covered by [services](../user-guide/services.html).
|
||||
4. External-to-Service communications: this is covered by [services](../user-guide/services.html).
|
||||
3. Pod-to-Service communications: this is covered by [services](../user-guide/services).
|
||||
4. External-to-Service communications: this is covered by [services](../user-guide/services).
|
||||
|
||||
## Summary
|
||||
|
||||
|
@ -95,7 +75,7 @@ talk to other VMs in your project. This is the same basic model.
|
|||
Until now this document has talked about containers. In reality, Kubernetes
|
||||
applies IP addresses at the `Pod` scope - containers within a `Pod` share their
|
||||
network namespaces - including their IP address. This means that containers
|
||||
within a `Pod` can all reach each other’s ports on `localhost`. This does imply
|
||||
within a `Pod` can all reach each other's ports on `localhost`. This does imply
|
||||
that containers within a `Pod` must coordinate port usage, but this is no
|
||||
different than processes in a VM. We call this the "IP-per-pod" model. This
|
||||
is implemented in Docker as a "pod container" which holds the network namespace
|
||||
|
@ -128,9 +108,9 @@ on that subnet, and is passed to docker's `--bridge` flag.
|
|||
We start Docker with:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false"
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
This bridge is created by Kubelet (controlled by the `--configure-cbr0=true`
|
||||
|
@ -147,18 +127,18 @@ itself) traffic that is bound for IPs outside the GCE project network
|
|||
(10.0.0.0/8).
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o eth0 -j MASQUERADE
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Lastly we enable IP forwarding in the kernel (so the kernel will process
|
||||
packets for bridged containers):
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
sysctl net.ipv4.ip_forward=1
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
The result of all this is that all `Pods` can reach each other and can egress
|
||||
|
@ -185,7 +165,7 @@ people have reported success with Flannel and Kubernetes.
|
|||
|
||||
### OpenVSwitch
|
||||
|
||||
[OpenVSwitch](ovs-networking.html) is a somewhat more mature but also
|
||||
[OpenVSwitch](ovs-networking) is a somewhat more mature but also
|
||||
complicated way to build an overlay network. This is endorsed by several of the
|
||||
"Big Shops" for networking.
|
||||
|
||||
|
@ -203,7 +183,7 @@ IPs.
|
|||
|
||||
The early design of the networking model and its rationale, and some future
|
||||
plans are described in more detail in the [networking design
|
||||
document](../design/networking.html).
|
||||
document](../design/networking).
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -1,35 +1,13 @@
|
|||
---
|
||||
title: "Node"
|
||||
---
|
||||
|
||||
|
||||
# Node
|
||||
|
||||
**Table of Contents**
|
||||
<!-- BEGIN MUNGE: GENERATED_TOC -->
|
||||
|
||||
- [Node](#node)
|
||||
- [What is a node?](#what-is-a-node)
|
||||
- [Node Status](#node-status)
|
||||
- [Node Addresses](#node-addresses)
|
||||
- [Node Phase](#node-phase)
|
||||
- [Node Condition](#node-condition)
|
||||
- [Node Capacity](#node-capacity)
|
||||
- [Node Info](#node-info)
|
||||
- [Node Management](#node-management)
|
||||
- [Node Controller](#node-controller)
|
||||
- [Self-Registration of Nodes](#self-registration-of-nodes)
|
||||
- [Manual Node Administration](#manual-node-administration)
|
||||
- [Node capacity](#node-capacity)
|
||||
- [API Object](#api-object)
|
||||
|
||||
<!-- END MUNGE: GENERATED_TOC -->
|
||||
{% include pagetoc.html %}
|
||||
|
||||
## What is a node?
|
||||
|
||||
`Node` is a worker machine in Kubernetes, previously known as `Minion`. A node
|
||||
may be a VM or physical machine, depending on the cluster. Each node has
|
||||
the services necessary to run [Pods](../user-guide/pods.html) and is managed by the master
|
||||
the services necessary to run [Pods](../user-guide/pods) and is managed by the master
|
||||
components. The services on a node include docker, kubelet and network proxy. See
|
||||
[The Kubernetes Node](../design/architecture.html#the-kubernetes-node) section in the
|
||||
architecture design doc for more details.
|
||||
|
@ -79,14 +57,14 @@ Node condition is represented as a json object. For example,
|
|||
the following conditions mean the node is in a sane state:
|
||||
|
||||
{% highlight json %}
|
||||
{% raw %}
|
||||
|
||||
"conditions": [
|
||||
{
|
||||
"kind": "Ready",
|
||||
"status": "True",
|
||||
},
|
||||
]
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
If the Status of the Ready condition
|
||||
|
@ -105,7 +83,7 @@ The information is gathered by Kubelet from the node.
|
|||
|
||||
## Node Management
|
||||
|
||||
Unlike [Pods](../user-guide/pods.html) and [Services](../user-guide/services.html), a Node is not inherently
|
||||
Unlike [Pods](../user-guide/pods) and [Services](../user-guide/services), a Node is not inherently
|
||||
created by Kubernetes: it is either taken from cloud providers like Google Compute Engine,
|
||||
or from your pool of physical or virtual machines. What this means is that when
|
||||
Kubernetes creates a node, it is really just creating an object that represents the node in its internal state.
|
||||
|
@ -113,7 +91,7 @@ After creation, Kubernetes will check whether the node is valid or not.
|
|||
For example, if you try to create a node from the following content:
|
||||
|
||||
{% highlight json %}
|
||||
{% raw %}
|
||||
|
||||
{
|
||||
"kind": "Node",
|
||||
"apiVersion": "v1",
|
||||
|
@ -124,7 +102,7 @@ For example, if you try to create a node from the following content:
|
|||
}
|
||||
}
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Kubernetes will create a Node object internally (the representation), and
|
||||
|
@ -187,9 +165,9 @@ preparatory step before a node reboot, etc. For example, to mark a node
|
|||
unschedulable, run this command:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
kubectl replace nodes 10.1.2.3 --patch='{"apiVersion": "v1", "unschedulable": true}'
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Note that pods which are created by a daemonSet controller bypass the Kubernetes scheduler,
|
||||
|
@ -212,7 +190,7 @@ If you want to explicitly reserve resources for non-Pod processes, you can creat
|
|||
pod. Use the following template:
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
|
@ -225,7 +203,7 @@ spec:
|
|||
limits:
|
||||
cpu: 100m
|
||||
memory: 100Mi
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Set the `cpu` and `memory` values to the amount of resources you want to reserve.
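
The hunks above elide most of the template; presumably the missing lines declare a single container and its `resources` block. A complete pod along those lines might look like the following sketch (the pod name, container name, image, and command are placeholders; any minimal container that stays running will do):

{% highlight yaml %}
apiVersion: v1
kind: Pod
metadata:
  name: resource-reserver          # placeholder name
spec:
  containers:
  - name: sleep-forever            # placeholder: any container that simply stays running
    image: busybox                 # placeholder image
    command: ["sh", "-c", "while true; do sleep 3600; done"]
    resources:
      limits:
        cpu: 100m                  # amount of CPU to reserve
        memory: 100Mi              # amount of memory to reserve
{% endhighlight %}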
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Kubernetes OpenVSwitch GRE/VxLAN networking"
|
||||
---
|
||||
|
||||
|
||||
# Kubernetes OpenVSwitch GRE/VxLAN networking
|
||||
|
||||
This document describes how OpenVSwitch is used to set up networking between pods across nodes.
|
||||
The tunnel type could be GRE or VxLAN. VxLAN is preferable when large-scale isolation needs to be performed within the network.
|
||||
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Resource Quotas"
|
||||
---
|
||||
|
||||
|
||||
# Resource Quotas
|
||||
|
||||
When several users or teams share a cluster with a fixed number of nodes,
|
||||
there is a concern that one team could use more than its fair share of resources.
|
||||
|
||||
|
@ -21,7 +17,7 @@ work like this:
|
|||
their resource requests defaulted to match their defined limits. The user is only charged for the
|
||||
resources they request in the Resource Quota versus their limits because the request is the minimum
|
||||
amount of resources guaranteed by the cluster during scheduling. For more information on overcommit,
|
||||
see [compute-resources](../user-guide/compute-resources.html).
|
||||
see [compute-resources](../user-guide/compute-resources).
|
||||
- If creating a pod would cause the namespace to exceed any of the limits specified in
|
||||
the Resource Quota for that namespace, then the request will fail with HTTP status
|
||||
code `403 FORBIDDEN`.
|
||||
|
@ -29,7 +25,7 @@ work like this:
|
|||
of the resources for which quota is enabled, then the POST of the pod will fail with HTTP
|
||||
status code `403 FORBIDDEN`. Hint: Use the LimitRange admission controller to force default
|
||||
values of *limits* (then resource *requests* would be equal to *limits* by default, see
|
||||
[admission controller](admission-controllers.html)) before the quota is checked to avoid this problem.
|
||||
[admission controller](admission-controllers)) before the quota is checked to avoid this problem.
|
||||
|
||||
Examples of policies that could be created using namespaces and quotas are:
|
||||
- In a cluster with a capacity of 32 GiB RAM, and 16 cores, let team A use 20 Gib and 10 cores,
|
||||
|
@ -54,7 +50,7 @@ Resource Quota is enforced in a particular namespace when there is a
|
|||
|
||||
## Compute Resource Quota
|
||||
|
||||
The total sum of [compute resources](../user-guide/compute-resources.html) requested by pods
|
||||
The total sum of [compute resources](../user-guide/compute-resources) requested by pods
|
||||
in a namespace can be limited. The following compute resource types are supported:
|
||||
|
||||
| ResourceName | Description |
|
||||
|
@ -91,7 +87,7 @@ supply of Pod IPs.
|
|||
Kubectl supports creating, updating, and viewing quotas:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl namespace myspace
|
||||
$ cat <<EOF > quota.json
|
||||
{
|
||||
|
@ -126,7 +122,7 @@ pods 5 10
|
|||
replicationcontrollers 5 20
|
||||
resourcequotas 1 1
|
||||
services 3 5
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
## Quota and Cluster Capacity
|
||||
|
@ -154,7 +150,7 @@ See a [detailed example for how to use resource quota](resourcequota/)..
|
|||
|
||||
## Read More
|
||||
|
||||
See [ResourceQuota design doc](../design/admission_control_resource_quota.html) for more information.
|
||||
See [ResourceQuota design doc](../design/admission_control_resource_quota) for more information.
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -1,183 +1,165 @@
|
|||
---
|
||||
title: "Resource Quota"
|
||||
---
|
||||
|
||||
Resource Quota
|
||||
========================================
|
||||
This example demonstrates how [resource quota](../../admin/admission-controllers.html#resourcequota) and
|
||||
[limit ranger](../../admin/admission-controllers.html#limitranger) can be applied to a Kubernetes namespace.
|
||||
See [ResourceQuota design doc](../../design/admission_control_resource_quota.html) for more information.
|
||||
|
||||
This example assumes you have a functional Kubernetes setup.
|
||||
|
||||
Step 1: Create a namespace
|
||||
-----------------------------------------
|
||||
This example will work in a custom namespace to demonstrate the concepts involved.
|
||||
|
||||
Let's create a new namespace called quota-example:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl create -f docs/admin/resourcequota/namespace.yaml
|
||||
namespace "quota-example" created
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS STATUS AGE
|
||||
default <none> Active 2m
|
||||
quota-example <none> Active 39s
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Step 2: Apply a quota to the namespace
|
||||
-----------------------------------------
|
||||
By default, a pod will run with unbounded CPU and memory requests/limits. This means that any pod in the
|
||||
system will be able to consume as much CPU and memory as is available on the node that executes the pod.
|
||||
|
||||
Users may want to restrict how much of the cluster resources a given namespace may consume
|
||||
across all of its pods in order to manage cluster usage. To do this, a user applies a quota to
|
||||
a namespace. A quota lets the user set hard limits on the total amount of node resources (cpu, memory)
|
||||
and API resources (pods, services, etc.) that a namespace may consume. In terms of resources, Kubernetes
|
||||
checks the total resource *requests*, not resource *limits* of all containers/pods in the namespace.
|
||||
|
||||
Let's create a simple quota in our namespace:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl create -f docs/admin/resourcequota/quota.yaml --namespace=quota-example
|
||||
resourcequota "quota" created
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Once your quota is applied to a namespace, the system will restrict any creation of content
|
||||
in the namespace until the quota usage has been calculated. This should happen quickly.
|
||||
|
||||
You can describe your current quota usage to see what resources are being consumed in your
|
||||
namespace.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl describe quota quota --namespace=quota-example
|
||||
Name: quota
|
||||
Namespace: quota-example
|
||||
Resource Used Hard
|
||||
-------- ---- ----
|
||||
cpu 0 20
|
||||
memory 0 1Gi
|
||||
persistentvolumeclaims 0 10
|
||||
pods 0 10
|
||||
replicationcontrollers 0 20
|
||||
resourcequotas 1 1
|
||||
secrets 1 10
|
||||
services 0 5
|
||||
{% endraw %}
|
||||
{% endhighlight %}
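
The file `docs/admin/resourcequota/quota.yaml` is not reproduced in this example; judging from the hard limits reported above, it presumably looks something like this sketch:

{% highlight yaml %}
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
spec:
  hard:
    cpu: "20"
    memory: 1Gi
    persistentvolumeclaims: "10"
    pods: "10"
    replicationcontrollers: "20"
    resourcequotas: "1"
    secrets: "10"
    services: "5"
{% endhighlight %}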
|
||||
|
||||
Step 3: Applying default resource requests and limits
|
||||
-----------------------------------------
|
||||
Pod authors rarely specify resource requests and limits for their pods.
|
||||
|
||||
Since we applied a quota to our project, let's see what happens when an end-user creates a pod that has unbounded
|
||||
cpu and memory by creating an nginx container.
|
||||
|
||||
To demonstrate, let's create a replication controller that runs nginx:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl run nginx --image=nginx --replicas=1 --namespace=quota-example
|
||||
replicationcontroller "nginx" created
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Now let's look at the pods that were created.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl get pods --namespace=quota-example
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
What happened? We have no pods! Let's describe the replication controller to get a view of what is happening.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
kubectl describe rc nginx --namespace=quota-example
|
||||
Name: nginx
|
||||
Namespace: quota-example
|
||||
Image(s): nginx
|
||||
Selector: run=nginx
|
||||
Labels: run=nginx
|
||||
Replicas: 0 current / 1 desired
|
||||
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
|
||||
No volumes.
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Reason Message
|
||||
42s 11s 3 {replication-controller } FailedCreate Error creating: Pod "nginx-" is forbidden: Must make a non-zero request for memory since it is tracked by quota.
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
The Kubernetes API server is rejecting the replication controller's requests to create a pod because our pods
|
||||
do not specify any memory usage *request*.
|
||||
|
||||
So let's set some default values for the amount of cpu and memory a pod can consume:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl create -f docs/admin/resourcequota/limits.yaml --namespace=quota-example
|
||||
limitrange "limits" created
|
||||
$ kubectl describe limits limits --namespace=quota-example
|
||||
Name: limits
|
||||
Namespace: quota-example
|
||||
Type Resource Min Max Request Limit Limit/Request
|
||||
---- -------- --- --- ------- ----- -------------
|
||||
Container memory - - 256Mi 512Mi -
|
||||
Container cpu - - 100m 200m -
|
||||
{% endraw %}
|
||||
{% endhighlight %}
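
Again, `docs/admin/resourcequota/limits.yaml` is not shown in the example; based on the defaults reported above, it presumably looks something like this sketch:

{% highlight yaml %}
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
  - type: Container
    default:              # default *limits* applied to containers that specify none
      cpu: 200m
      memory: 512Mi
    defaultRequest:       # default *requests* applied to containers that specify none
      cpu: 100m
      memory: 256Mi
{% endhighlight %}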
|
||||
|
||||
Now any time a pod is created in this namespace, if it has not specified any resource request/limit, the default
|
||||
amount of cpu and memory per container will be applied, and the request will be used as part of admission control.
|
||||
|
||||
Now that we have applied default resource *requests* for our namespace, our replication controller should be able to
|
||||
create its pods.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl get pods --namespace=quota-example
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
nginx-fca65 1/1 Running 0 1m
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
And if we print out our quota usage in the namespace:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl describe quota quota --namespace=quota-example
|
||||
Name: quota
|
||||
Namespace: quota-example
|
||||
Resource Used Hard
|
||||
-------- ---- ----
|
||||
cpu 100m 20
|
||||
memory 256Mi 1Gi
|
||||
persistentvolumeclaims 0 10
|
||||
pods 1 10
|
||||
replicationcontrollers 1 20
|
||||
resourcequotas 1 1
|
||||
secrets 1 10
|
||||
services 0 5
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
You can now see the pod that was created is consuming explicit amounts of resources (specified by resource *request*),
|
||||
and the usage is being tracked by the Kubernetes system properly.
|
||||
|
||||
Summary
|
||||
----------------------------
|
||||
Actions that consume node resources for cpu and memory can be subject to hard quota limits defined
|
||||
by the namespace quota. Resource consumption is measured by the resource *request* in the pod specification.
|
||||
|
||||
Any action that consumes those resources can be tweaked, or can pick up namespace-level defaults to
|
||||
meet your end goal.
|
||||
|
||||
|
||||
|
||||
---
|
||||
title: "Resource Quota"
|
||||
---
|
||||
|
||||
Resource Quota
|
||||
========================================
|
||||
This example demonstrates how [resource quota](../../admin/admission-controllers.html#resourcequota) and
|
||||
[limit ranger](../../admin/admission-controllers.html#limitranger) can be applied to a Kubernetes namespace.
|
||||
See [ResourceQuota design doc](../../design/admission_control_resource_quota) for more information.
|
||||
|
||||
This example assumes you have a functional Kubernetes setup.
|
||||
|
||||
Step 1: Create a namespace
|
||||
-----------------------------------------
|
||||
This example will work in a custom namespace to demonstrate the concepts involved.
|
||||
|
||||
Let's create a new namespace called quota-example:
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl create -f docs/admin/resourcequota/namespace.yaml
|
||||
namespace "quota-example" created
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS STATUS AGE
|
||||
default <none> Active 2m
|
||||
quota-example <none> Active 39s
|
||||
{% endhighlight %}
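For reference, a minimal namespace definition along these lines is all that is needed here; the file in the repository may differ in details:

{% highlight yaml %}
apiVersion: v1
kind: Namespace
metadata:
  name: quota-example
{% endhighlight %}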
|
||||
|
||||
Step 2: Apply a quota to the namespace
|
||||
-----------------------------------------
|
||||
By default, a pod will run with unbounded CPU and memory requests/limits. This means that any pod in the
|
||||
system will be able to consume as much CPU and memory as is available on the node that executes the pod.
|
||||
|
||||
Users may want to restrict how much of the cluster resources a given namespace may consume
|
||||
across all of its pods in order to manage cluster usage. To do this, a user applies a quota to
|
||||
a namespace. A quota lets the user set hard limits on the total amount of node resources (cpu, memory)
|
||||
and API resources (pods, services, etc.) that a namespace may consume. In terms of resources, Kubernetes
|
||||
checks the total resource *requests*, not resource *limits* of all containers/pods in the namespace.
|
||||
|
||||
Let's create a simple quota in our namespace:
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl create -f docs/admin/resourcequota/quota.yaml --namespace=quota-example
|
||||
resourcequota "quota" created
|
||||
{% endhighlight %}
|
||||
|
||||
Once your quota is applied to a namespace, the system will restrict any creation of content
|
||||
in the namespace until the quota usage has been calculated. This should happen quickly.
|
||||
|
||||
You can describe your current quota usage to see what resources are being consumed in your
|
||||
namespace.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl describe quota quota --namespace=quota-example
|
||||
Name: quota
|
||||
Namespace: quota-example
|
||||
Resource Used Hard
|
||||
-------- ---- ----
|
||||
cpu 0 20
|
||||
memory 0 1Gi
|
||||
persistentvolumeclaims 0 10
|
||||
pods 0 10
|
||||
replicationcontrollers 0 20
|
||||
resourcequotas 1 1
|
||||
secrets 1 10
|
||||
services 0 5
|
||||
{% endhighlight %}
|
||||
|
||||
Step 3: Applying default resource requests and limits
|
||||
-----------------------------------------
|
||||
Pod authors rarely specify resource requests and limits for their pods.
|
||||
|
||||
Since we applied a quota to our project, let's see what happens when an end-user creates a pod that has unbounded
|
||||
cpu and memory, using an nginx container as the example.
|
||||
|
||||
To demonstrate, let's create a replication controller that runs nginx:
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl run nginx --image=nginx --replicas=1 --namespace=quota-example
|
||||
replicationcontroller "nginx" created
|
||||
{% endhighlight %}
|
||||
|
||||
Now let's look at the pods that were created.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl get pods --namespace=quota-example
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
{% endhighlight %}
|
||||
|
||||
What happened? I have no pods! Let's describe the replication controller to get a view of what is happening.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl describe rc nginx --namespace=quota-example
|
||||
Name: nginx
|
||||
Namespace: quota-example
|
||||
Image(s): nginx
|
||||
Selector: run=nginx
|
||||
Labels: run=nginx
|
||||
Replicas: 0 current / 1 desired
|
||||
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
|
||||
No volumes.
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Reason Message
|
||||
42s 11s 3 {replication-controller } FailedCreate Error creating: Pod "nginx-" is forbidden: Must make a non-zero request for memory since it is tracked by quota.
|
||||
{% endhighlight %}
|
||||
|
||||
The Kubernetes API server is rejecting the replication controller's requests to create a pod because our pods
|
||||
do not specify any memory usage *request*.
|
||||
|
||||
So let's set some default values for the amount of cpu and memory a pod can consume:
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl create -f docs/admin/resourcequota/limits.yaml --namespace=quota-example
|
||||
limitrange "limits" created
|
||||
$ kubectl describe limits limits --namespace=quota-example
|
||||
Name: limits
|
||||
Namespace: quota-example
|
||||
Type Resource Min Max Request Limit Limit/Request
|
||||
---- -------- --- --- ------- ----- -------------
|
||||
Container memory - - 256Mi 512Mi -
|
||||
Container cpu - - 100m 200m -
|
||||
{% endhighlight %}
|
||||
|
||||
Now any time a pod is created in this namespace, if it has not specified any resource request/limit, the default
|
||||
amount of cpu and memory per container will be applied, and the request will be used as part of admission control.
|
||||
|
||||
Now that we have applied a default resource *request* for our namespace, our replication controller should be able to
|
||||
create its pods.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl get pods --namespace=quota-example
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
nginx-fca65 1/1 Running 0 1m
|
||||
{% endhighlight %}
|
||||
|
||||
And if we print out our quota usage in the namespace:
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl describe quota quota --namespace=quota-example
|
||||
Name: quota
|
||||
Namespace: quota-example
|
||||
Resource Used Hard
|
||||
-------- ---- ----
|
||||
cpu 100m 20
|
||||
memory 256Mi 1Gi
|
||||
persistentvolumeclaims 0 10
|
||||
pods 1 10
|
||||
replicationcontrollers 1 20
|
||||
resourcequotas 1 1
|
||||
secrets 1 10
|
||||
services 0 5
|
||||
{% endhighlight %}
|
||||
|
||||
You can now see the pod that was created is consuming explicit amounts of resources (specified by resource *request*),
|
||||
and the usage is being tracked by the Kubernetes system properly.
|
||||
|
||||
Summary
|
||||
----------------------------
|
||||
Actions that consume node resources for cpu and memory can be subject to hard quota limits defined
|
||||
by the namespace quota. Resource consumption is measured by the resource *request* in the pod specification.
|
||||
|
||||
Any action that consumes those resources can be tweaked, or can pick up namespace-level defaults to
|
||||
meet your end goal.
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -1,183 +1,165 @@
|
|||
---
|
||||
title: "Resource Quota"
|
||||
---
|
||||
|
||||
Resource Quota
|
||||
========================================
|
||||
This example demonstrates how [resource quota](../../admin/admission-controllers.html#resourcequota) and
|
||||
[limit ranger](../../admin/admission-controllers.html#limitranger) can be applied to a Kubernetes namespace.
|
||||
See [ResourceQuota design doc](../../design/admission_control_resource_quota.html) for more information.
|
||||
|
||||
This example assumes you have a functional Kubernetes setup.
|
||||
|
||||
Step 1: Create a namespace
|
||||
-----------------------------------------
|
||||
This example will work in a custom namespace to demonstrate the concepts involved.
|
||||
|
||||
Let's create a new namespace called quota-example:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl create -f docs/admin/resourcequota/namespace.yaml
|
||||
namespace "quota-example" created
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS STATUS AGE
|
||||
default <none> Active 2m
|
||||
quota-example <none> Active 39s
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Step 2: Apply a quota to the namespace
|
||||
-----------------------------------------
|
||||
By default, a pod will run with unbounded CPU and memory requests/limits. This means that any pod in the
|
||||
system will be able to consume as much CPU and memory as is available on the node that executes the pod.
|
||||
|
||||
Users may want to restrict how much of the cluster resources a given namespace may consume
|
||||
across all of its pods in order to manage cluster usage. To do this, a user applies a quota to
|
||||
a namespace. A quota lets the user set hard limits on the total amount of node resources (cpu, memory)
|
||||
and API resources (pods, services, etc.) that a namespace may consume. In terms of resources, Kubernetes
|
||||
checks the total resource *requests*, not resource *limits* of all containers/pods in the namespace.
|
||||
|
||||
Let's create a simple quota in our namespace:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl create -f docs/admin/resourcequota/quota.yaml --namespace=quota-example
|
||||
resourcequota "quota" created
|
||||
{% endraw %}
|
||||
{% endhighlight %}
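For reference, a ResourceQuota definition of the following shape would produce the hard limits shown below. This is a sketch based on the `kubectl describe quota` output (assuming the v1 `ResourceQuota` API), not necessarily the exact contents of `docs/admin/resourcequota/quota.yaml`.

{% highlight yaml %}
{% raw %}
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
spec:
  hard:
    cpu: "20"
    memory: 1Gi
    persistentvolumeclaims: "10"
    pods: "10"
    replicationcontrollers: "20"
    resourcequotas: "1"
    secrets: "10"
    services: "5"
{% endraw %}
{% endhighlight %}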
|
||||
|
||||
Once your quota is applied to a namespace, the system will restrict any creation of content
|
||||
in the namespace until the quota usage has been calculated. This should happen quickly.
|
||||
|
||||
You can describe your current quota usage to see what resources are being consumed in your
|
||||
namespace.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl describe quota quota --namespace=quota-example
|
||||
Name: quota
|
||||
Namespace: quota-example
|
||||
Resource Used Hard
|
||||
-------- ---- ----
|
||||
cpu 0 20
|
||||
memory 0 1Gi
|
||||
persistentvolumeclaims 0 10
|
||||
pods 0 10
|
||||
replicationcontrollers 0 20
|
||||
resourcequotas 1 1
|
||||
secrets 1 10
|
||||
services 0 5
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Step 3: Applying default resource requests and limits
|
||||
-----------------------------------------
|
||||
Pod authors rarely specify resource requests and limits for their pods.
|
||||
|
||||
Since we applied a quota to our project, let's see what happens when an end-user creates a pod that has unbounded
|
||||
cpu and memory, using an nginx container as the example.
|
||||
|
||||
To demonstrate, let's create a replication controller that runs nginx:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl run nginx --image=nginx --replicas=1 --namespace=quota-example
|
||||
replicationcontroller "nginx" created
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Now let's look at the pods that were created.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl get pods --namespace=quota-example
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
What happened? I have no pods! Let's describe the replication controller to get a view of what is happening.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl describe rc nginx --namespace=quota-example
|
||||
Name: nginx
|
||||
Namespace: quota-example
|
||||
Image(s): nginx
|
||||
Selector: run=nginx
|
||||
Labels: run=nginx
|
||||
Replicas: 0 current / 1 desired
|
||||
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
|
||||
No volumes.
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Reason Message
|
||||
42s 11s 3 {replication-controller } FailedCreate Error creating: Pod "nginx-" is forbidden: Must make a non-zero request for memory since it is tracked by quota.
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
The Kubernetes API server is rejecting the replication controller's requests to create a pod because our pods
|
||||
do not specify any memory usage *request*.
|
||||
|
||||
So let's set some default values for the amount of cpu and memory a pod can consume:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl create -f docs/admin/resourcequota/limits.yaml --namespace=quota-example
|
||||
limitrange "limits" created
|
||||
$ kubectl describe limits limits --namespace=quota-example
|
||||
Name: limits
|
||||
Namespace: quota-example
|
||||
Type Resource Min Max Request Limit Limit/Request
|
||||
---- -------- --- --- ------- ----- -------------
|
||||
Container memory - - 256Mi 512Mi -
|
||||
Container cpu - - 100m 200m -
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Now any time a pod is created in this namespace, if it has not specified any resource request/limit, the default
|
||||
amount of cpu and memory per container will be applied, and the request will be used as part of admission control.
|
||||
|
||||
Now that we have applied a default resource *request* for our namespace, our replication controller should be able to
|
||||
create its pods.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl get pods --namespace=quota-example
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
nginx-fca65 1/1 Running 0 1m
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
And if we print out our quota usage in the namespace:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
$ kubectl describe quota quota --namespace=quota-example
|
||||
Name: quota
|
||||
Namespace: quota-example
|
||||
Resource Used Hard
|
||||
-------- ---- ----
|
||||
cpu 100m 20
|
||||
memory 256Mi 1Gi
|
||||
persistentvolumeclaims 0 10
|
||||
pods 1 10
|
||||
replicationcontrollers 1 20
|
||||
resourcequotas 1 1
|
||||
secrets 1 10
|
||||
services 0 5
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
You can now see the pod that was created is consuming explicit amounts of resources (specified by resource *request*),
|
||||
and the usage is being tracked by the Kubernetes system properly.
|
||||
|
||||
Summary
|
||||
----------------------------
|
||||
Actions that consume node resources for cpu and memory can be subject to hard quota limits defined
|
||||
by the namespace quota. Resource consumption is measured by the resource *request* in the pod specification.
|
||||
|
||||
Any action that consumes those resources can be tweaked, or can pick up namespace-level defaults to
|
||||
meet your end goal.
|
||||
|
||||
|
||||
|
||||
---
|
||||
title: "Resource Quota"
|
||||
---
|
||||
|
||||
Resource Quota
|
||||
========================================
|
||||
This example demonstrates how [resource quota](../../admin/admission-controllers.html#resourcequota) and
|
||||
[limit ranger](../../admin/admission-controllers.html#limitranger) can be applied to a Kubernetes namespace.
|
||||
See [ResourceQuota design doc](../../design/admission_control_resource_quota) for more information.
|
||||
|
||||
This example assumes you have a functional Kubernetes setup.
|
||||
|
||||
Step 1: Create a namespace
|
||||
-----------------------------------------
|
||||
This example will work in a custom namespace to demonstrate the concepts involved.
|
||||
|
||||
Let's create a new namespace called quota-example:
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl create -f docs/admin/resourcequota/namespace.yaml
|
||||
namespace "quota-example" created
|
||||
$ kubectl get namespaces
|
||||
NAME LABELS STATUS AGE
|
||||
default <none> Active 2m
|
||||
quota-example <none> Active 39s
|
||||
{% endhighlight %}
|
||||
|
||||
Step 2: Apply a quota to the namespace
|
||||
-----------------------------------------
|
||||
By default, a pod will run with unbounded CPU and memory requests/limits. This means that any pod in the
|
||||
system will be able to consume as much CPU and memory as is available on the node that executes the pod.
|
||||
|
||||
Users may want to restrict how much of the cluster resources a given namespace may consume
|
||||
across all of its pods in order to manage cluster usage. To do this, a user applies a quota to
|
||||
a namespace. A quota lets the user set hard limits on the total amount of node resources (cpu, memory)
|
||||
and API resources (pods, services, etc.) that a namespace may consume. In terms of resources, Kubernetes
|
||||
checks the total resource *requests*, not resource *limits* of all containers/pods in the namespace.
|
||||
|
||||
Let's create a simple quota in our namespace:
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl create -f docs/admin/resourcequota/quota.yaml --namespace=quota-example
|
||||
resourcequota "quota" created
|
||||
{% endhighlight %}
|
||||
|
||||
Once your quota is applied to a namespace, the system will restrict any creation of content
|
||||
in the namespace until the quota usage has been calculated. This should happen quickly.
|
||||
|
||||
You can describe your current quota usage to see what resources are being consumed in your
|
||||
namespace.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl describe quota quota --namespace=quota-example
|
||||
Name: quota
|
||||
Namespace: quota-example
|
||||
Resource Used Hard
|
||||
-------- ---- ----
|
||||
cpu 0 20
|
||||
memory 0 1Gi
|
||||
persistentvolumeclaims 0 10
|
||||
pods 0 10
|
||||
replicationcontrollers 0 20
|
||||
resourcequotas 1 1
|
||||
secrets 1 10
|
||||
services 0 5
|
||||
{% endhighlight %}
|
||||
|
||||
Step 3: Applying default resource requests and limits
|
||||
-----------------------------------------
|
||||
Pod authors rarely specify resource requests and limits for their pods.
|
||||
|
||||
Since we applied a quota to our project, let's see what happens when an end-user creates a pod that has unbounded
|
||||
cpu and memory, using an nginx container as the example.
|
||||
|
||||
To demonstrate, let's create a replication controller that runs nginx:
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl run nginx --image=nginx --replicas=1 --namespace=quota-example
|
||||
replicationcontroller "nginx" created
|
||||
{% endhighlight %}
|
||||
|
||||
Now let's look at the pods that were created.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl get pods --namespace=quota-example
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
{% endhighlight %}
|
||||
|
||||
What happened? I have no pods! Let's describe the replication controller to get a view of what is happening.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl describe rc nginx --namespace=quota-example
|
||||
Name: nginx
|
||||
Namespace: quota-example
|
||||
Image(s): nginx
|
||||
Selector: run=nginx
|
||||
Labels: run=nginx
|
||||
Replicas: 0 current / 1 desired
|
||||
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
|
||||
No volumes.
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Reason Message
|
||||
42s 11s 3 {replication-controller } FailedCreate Error creating: Pod "nginx-" is forbidden: Must make a non-zero request for memory since it is tracked by quota.
|
||||
{% endhighlight %}
|
||||
|
||||
The Kubernetes API server is rejecting the replication controller's requests to create a pod because our pods
|
||||
do not specify any memory usage *request*.
|
||||
|
||||
So let's set some default values for the amount of cpu and memory a pod can consume:
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl create -f docs/admin/resourcequota/limits.yaml --namespace=quota-example
|
||||
limitrange "limits" created
|
||||
$ kubectl describe limits limits --namespace=quota-example
|
||||
Name: limits
|
||||
Namespace: quota-example
|
||||
Type Resource Min Max Request Limit Limit/Request
|
||||
---- -------- --- --- ------- ----- -------------
|
||||
Container memory - - 256Mi 512Mi -
|
||||
Container cpu - - 100m 200m -
|
||||
{% endhighlight %}
|
||||
|
||||
Now any time a pod is created in this namespace, if it has not specified any resource request/limit, the default
|
||||
amount of cpu and memory per container will be applied, and the request will be used as part of admission control.
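Pod authors can also bypass the defaults by declaring requests and limits explicitly in the container spec. The following is a sketch with illustrative values only; the pod name and the numbers are hypothetical:

{% highlight yaml %}
apiVersion: v1
kind: Pod
metadata:
  name: nginx-explicit
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:       # counted against the namespace quota
        cpu: 100m
        memory: 256Mi
      limits:         # upper bound enforced for the container
        cpu: 200m
        memory: 512Mi
{% endhighlight %}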
|
||||
|
||||
Now that we have applied a default resource *request* for our namespace, our replication controller should be able to
|
||||
create its pods.
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl get pods --namespace=quota-example
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
nginx-fca65 1/1 Running 0 1m
|
||||
{% endhighlight %}
|
||||
|
||||
And if we print out our quota usage in the namespace:
|
||||
|
||||
{% highlight console %}
|
||||
$ kubectl describe quota quota --namespace=quota-example
|
||||
Name: quota
|
||||
Namespace: quota-example
|
||||
Resource Used Hard
|
||||
-------- ---- ----
|
||||
cpu 100m 20
|
||||
memory 256Mi 1Gi
|
||||
persistentvolumeclaims 0 10
|
||||
pods 1 10
|
||||
replicationcontrollers 1 20
|
||||
resourcequotas 1 1
|
||||
secrets 1 10
|
||||
services 0 5
|
||||
{% endhighlight %}
|
||||
|
||||
You can now see the pod that was created is consuming explicit amounts of resources (specified by resource *request*),
|
||||
and the usage is being tracked by the Kubernetes system properly.
|
||||
|
||||
Summary
|
||||
----------------------------
|
||||
Actions that consume node resources for cpu and memory can be subject to hard quota limits defined
|
||||
by the namespace quota. Resource consumption is measured by the resource *request* in the pod specification.
|
||||
|
||||
Any action that consumes those resources can be tweaked, or can pick up namespace-level defaults to
|
||||
meet your end goal.
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -1,109 +1,100 @@
|
|||
---
|
||||
title: "Configuring Kubernetes with Salt"
|
||||
section: guides
|
||||
---
|
||||
The Kubernetes cluster can be configured using Salt.
|
||||
|
||||
The Salt scripts are shared across multiple hosting providers, and depending on where you host your Kubernetes cluster you may be using different operating systems and different networking configurations. Before making a Salt change, it's important to understand this background so that your modification does not break Kubernetes hosting in other environments.
|
||||
|
||||
## Salt cluster setup
|
||||
|
||||
The **salt-master** service runs on the kubernetes-master [(except on the default GCE setup)](#standalone-salt-configuration-on-gce).
|
||||
|
||||
The **salt-minion** service runs on the kubernetes-master and each kubernetes-node in the cluster.
|
||||
|
||||
Each salt-minion service is configured to interact with the **salt-master** service hosted on the kubernetes-master via the **master.conf** file [(except on GCE)](#standalone-salt-configuration-on-gce).
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
[root@kubernetes-master] $ cat /etc/salt/minion.d/master.conf
|
||||
master: kubernetes-master
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Each salt-minion contacts the salt-master and, depending upon the machine information presented, the salt-master provisions the machine as either a kubernetes-master or a kubernetes-node with all the capabilities needed to run Kubernetes.
|
||||
|
||||
If you are running the Vagrant based environment, the **salt-api** service is running on the kubernetes-master. It is configured to enable the vagrant user to introspect the salt cluster in order to find out about machines in the Vagrant environment via a REST API.
|
||||
|
||||
## Standalone Salt Configuration on GCE
|
||||
|
||||
On GCE, the master and nodes are all configured as [standalone minions](http://docs.saltstack.com/en/latest/topics/tutorials/standalone_minion.html). The configuration for each VM is derived from the VM's [instance metadata](https://cloud.google.com/compute/docs/metadata) and then stored in Salt grains (`/etc/salt/minion.d/grains.conf`) and pillars (`/srv/salt-overlay/pillar/cluster-params.sls`) that local Salt uses to enforce state.
|
||||
|
||||
All remaining sections that refer to master/minion setups should be ignored for GCE. One consequence of the GCE setup is that the Salt mine doesn't exist: there is no sharing of configuration amongst nodes.
|
||||
|
||||
## Salt security
|
||||
|
||||
*(Not applicable on default GCE setup.)*
|
||||
|
||||
Security is not enabled on the salt-master, and the salt-master is configured to auto-accept incoming requests from minions. It is not recommended to use this security configuration in production environments without deeper study. (In some environments this isn't as bad as it might sound if the salt master port isn't externally accessible and you trust everyone on your network.)
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
[root@kubernetes-master] $ cat /etc/salt/master.d/auto-accept.conf
|
||||
open_mode: True
|
||||
auto_accept: True
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
## Salt minion configuration
|
||||
|
||||
Each minion in the salt cluster has an associated configuration that instructs the salt-master how to provision the required resources on the machine.
|
||||
|
||||
An example file is presented below using the Vagrant based environment.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
[root@kubernetes-master] $ cat /etc/salt/minion.d/grains.conf
|
||||
grains:
|
||||
etcd_servers: $MASTER_IP
|
||||
cloud_provider: vagrant
|
||||
roles:
|
||||
- kubernetes-master
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Each hosting environment has a slightly different grains.conf file that is used to build conditional logic where required in the Salt files.
|
||||
|
||||
The following enumerates the set of defined key/value pairs that are supported today. If you add new ones, please make sure to update this list.
|
||||
|
||||
Key | Value
|
||||
------------- | -------------
|
||||
`api_servers` | (Optional) The IP address / host name where a kubelet can get read-only access to kube-apiserver
|
||||
`cbr-cidr` | (Optional) The minion IP address range used for the docker container bridge.
|
||||
`cloud` | (Optional) Which IaaS platform is used to host Kubernetes, *gce*, *azure*, *aws*, *vagrant*
|
||||
`etcd_servers` | (Optional) Comma-delimited list of IP addresses the kube-apiserver and kubelet use to reach etcd. Uses the IP of the first machine in the kubernetes_master role, or 127.0.0.1 on GCE.
|
||||
`hostnamef` | (Optional) The full host name of the machine, i.e. uname -n
|
||||
`node_ip` | (Optional) The IP address to use to address this node
|
||||
`hostname_override` | (Optional) Mapped to the kubelet hostname-override
|
||||
`network_mode` | (Optional) Networking model to use among nodes: *openvswitch*
|
||||
`networkInterfaceName` | (Optional) Networking interface to use to bind addresses, default value *eth0*
|
||||
`publicAddressOverride` | (Optional) The IP address the kube-apiserver should use to bind against for external read-only access
|
||||
`roles` | (Required) 1. `kubernetes-master` means this machine is the master in the Kubernetes cluster. 2. `kubernetes-pool` means this machine is a kubernetes-node. Depending on the role, the Salt scripts will provision different resources on the machine.
|
||||
|
||||
These keys may be leveraged by the Salt sls files to branch behavior.
|
||||
|
||||
In addition, a cluster may be running a Debian based operating system or Red Hat based operating system (Centos, Fedora, RHEL, etc.). As a result, it's important to sometimes distinguish behavior based on operating system using if branches like the following.
|
||||
|
||||
{% highlight jinja %}
|
||||
{% raw %}
|
||||
{% if grains['os_family'] == 'RedHat' %}
|
||||
// something specific to a RedHat environment (Centos, Fedora, RHEL) where you may use yum, systemd, etc.
|
||||
{% else %}
|
||||
// something specific to Debian environment (apt-get, initd)
|
||||
{% endif %}
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. When configuring default arguments for processes, it's best to avoid the use of EnvironmentFiles (Systemd in Red Hat environments) or init.d files (Debian distributions) to hold default values that should be common across operating system environments. This helps keep our Salt template files easy to understand for editors who may not be familiar with the particulars of each distribution.
|
||||
|
||||
## Future enhancements (Networking)
|
||||
|
||||
Per-pod IP configuration is provider-specific, so when making networking changes, it's important to sandbox these changes, as not all providers use the same mechanisms (iptables, openvswitch, etc.).
|
||||
|
||||
We should define a grains.conf key that captures more specifically what network configuration environment is being used to avoid future confusion across providers.
|
||||
|
||||
## Further reading
|
||||
|
||||
---
|
||||
title: "Configuring Kubernetes with Salt"
|
||||
---
|
||||
The Kubernetes cluster can be configured using Salt.
|
||||
|
||||
The Salt scripts are shared across multiple hosting providers, and depending on where you host your Kubernetes cluster you may be using different operating systems and different networking configurations. Before making a Salt change, it's important to understand this background so that your modification does not break Kubernetes hosting in other environments.
|
||||
|
||||
## Salt cluster setup
|
||||
|
||||
The **salt-master** service runs on the kubernetes-master [(except on the default GCE setup)](#standalone-salt-configuration-on-gce).
|
||||
|
||||
The **salt-minion** service runs on the kubernetes-master and each kubernetes-node in the cluster.
|
||||
|
||||
Each salt-minion service is configured to interact with the **salt-master** service hosted on the kubernetes-master via the **master.conf** file [(except on GCE)](#standalone-salt-configuration-on-gce).
|
||||
|
||||
{% highlight console %}
|
||||
[root@kubernetes-master] $ cat /etc/salt/minion.d/master.conf
|
||||
master: kubernetes-master
|
||||
{% endhighlight %}
|
||||
|
||||
Each salt-minion contacts the salt-master and, depending upon the machine information presented, the salt-master provisions the machine as either a kubernetes-master or a kubernetes-node with all the capabilities needed to run Kubernetes.
|
||||
|
||||
If you are running the Vagrant based environment, the **salt-api** service is running on the kubernetes-master. It is configured to enable the vagrant user to introspect the salt cluster in order to find out about machines in the Vagrant environment via a REST API.
|
||||
|
||||
## Standalone Salt Configuration on GCE
|
||||
|
||||
On GCE, the master and nodes are all configured as [standalone minions](http://docs.saltstack.com/en/latest/topics/tutorials/standalone_minion). The configuration for each VM is derived from the VM's [instance metadata](https://cloud.google.com/compute/docs/metadata) and then stored in Salt grains (`/etc/salt/minion.d/grains.conf`) and pillars (`/srv/salt-overlay/pillar/cluster-params.sls`) that local Salt uses to enforce state.
|
||||
|
||||
All remaining sections that refer to master/minion setups should be ignored for GCE. One consequence of the GCE setup is that the Salt mine doesn't exist: there is no sharing of configuration amongst nodes.
|
||||
|
||||
## Salt security
|
||||
|
||||
*(Not applicable on default GCE setup.)*
|
||||
|
||||
Security is not enabled on the salt-master, and the salt-master is configured to auto-accept incoming requests from minions. It is not recommended to use this security configuration in production environments without deeper study. (In some environments this isn't as bad as it might sound if the salt master port isn't externally accessible and you trust everyone on your network.)
|
||||
|
||||
{% highlight console %}
|
||||
[root@kubernetes-master] $ cat /etc/salt/master.d/auto-accept.conf
|
||||
open_mode: True
|
||||
auto_accept: True
|
||||
{% endhighlight %}
|
||||
|
||||
## Salt minion configuration
|
||||
|
||||
Each minion in the salt cluster has an associated configuration that instructs the salt-master how to provision the required resources on the machine.
|
||||
|
||||
An example file is presented below using the Vagrant based environment.
|
||||
|
||||
{% highlight console %}
|
||||
[root@kubernetes-master] $ cat /etc/salt/minion.d/grains.conf
|
||||
grains:
|
||||
etcd_servers: $MASTER_IP
|
||||
cloud_provider: vagrant
|
||||
roles:
|
||||
- kubernetes-master
|
||||
{% endhighlight %}
|
||||
|
||||
Each hosting environment has a slightly different grains.conf file that is used to build conditional logic where required in the Salt files.
|
||||
|
||||
The following enumerates the set of defined key/value pairs that are supported today. If you add new ones, please make sure to update this list.
|
||||
|
||||
Key | Value
|
||||
------------- | -------------
|
||||
`api_servers` | (Optional) The IP address / host name where a kubelet can get read-only access to kube-apiserver
|
||||
`cbr-cidr` | (Optional) The minion IP address range used for the docker container bridge.
|
||||
`cloud` | (Optional) Which IaaS platform is used to host Kubernetes, *gce*, *azure*, *aws*, *vagrant*
|
||||
`etcd_servers` | (Optional) Comma-delimited list of IP addresses the kube-apiserver and kubelet use to reach etcd. Uses the IP of the first machine in the kubernetes_master role, or 127.0.0.1 on GCE.
|
||||
`hostnamef` | (Optional) The full host name of the machine, i.e. uname -n
|
||||
`node_ip` | (Optional) The IP address to use to address this node
|
||||
`hostname_override` | (Optional) Mapped to the kubelet hostname-override
|
||||
`network_mode` | (Optional) Networking model to use among nodes: *openvswitch*
|
||||
`networkInterfaceName` | (Optional) Networking interface to use to bind addresses, default value *eth0*
|
||||
`publicAddressOverride` | (Optional) The IP address the kube-apiserver should use to bind against for external read-only access
|
||||
`roles` | (Required) 1. `kubernetes-master` means this machine is the master in the Kubernetes cluster. 2. `kubernetes-pool` means this machine is a kubernetes-node. Depending on the role, the Salt scripts will provision different resources on the machine.
|
||||
|
||||
These keys may be leveraged by the Salt sls files to branch behavior.
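For comparison with the master example above, a grains.conf for a node in the Vagrant environment might look roughly like the following. The hostname, `$NODE_IP` placeholder, and exact values are illustrative assumptions rather than output captured from a real cluster:

{% highlight console %}
[root@kubernetes-node-1 ~] $ cat /etc/salt/minion.d/grains.conf
grains:
  etcd_servers: $MASTER_IP
  cloud_provider: vagrant
  node_ip: $NODE_IP
  roles:
    - kubernetes-pool
{% endhighlight %}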
|
||||
|
||||
In addition, a cluster may be running a Debian based operating system or Red Hat based operating system (Centos, Fedora, RHEL, etc.). As a result, it's important to sometimes distinguish behavior based on operating system using if branches like the following.
|
||||
|
||||
{% highlight jinja %}
|
||||
{% if grains['os_family'] == 'RedHat' %}
|
||||
// something specific to a RedHat environment (Centos, Fedora, RHEL) where you may use yum, systemd, etc.
|
||||
{% else %}
|
||||
// something specific to Debian environment (apt-get, initd)
|
||||
{% endif %}
|
||||
{% endhighlight %}
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. When configuring default arguments for processes, it's best to avoid the use of EnvironmentFiles (Systemd in Red Hat environments) or init.d files (Debian distributions) to hold default values that should be common across operating system environments. This helps keep our Salt template files easy to understand for editors who may not be familiar with the particulars of each distribution.
|
||||
|
||||
## Future enhancements (Networking)
|
||||
|
||||
Per-pod IP configuration is provider-specific, so when making networking changes, it's important to sandbox these changes, as not all providers use the same mechanisms (iptables, openvswitch, etc.).
|
||||
|
||||
We should define a grains.conf key that captures more specifically what network configuration environment is being used to avoid future confusion across providers.
|
||||
|
||||
## Further reading
|
||||
|
||||
The [cluster/saltbase](http://releases.k8s.io/release-1.1/cluster/saltbase/) tree has more details on the current SaltStack configuration.
|
|
@ -1,12 +1,8 @@
|
|||
---
|
||||
title: "Cluster Admin Guide to Service Accounts"
|
||||
---
|
||||
|
||||
|
||||
# Cluster Admin Guide to Service Accounts
|
||||
|
||||
*This is a Cluster Administrator guide to service accounts. It assumes knowledge of
|
||||
the [User Guide to Service Accounts](../user-guide/service-accounts.html).*
|
||||
the [User Guide to Service Accounts](../user-guide/service-accounts).*
|
||||
|
||||
*Support for authorization and user accounts is planned but incomplete. Sometimes
|
||||
incomplete features are referred to in order to better describe service accounts.*
|
||||
|
@ -40,7 +36,7 @@ Three separate components cooperate to implement the automation around service a
|
|||
### Service Account Admission Controller
|
||||
|
||||
The modification of pods is implemented via a plugin
|
||||
called an [Admission Controller](admission-controllers.html). It is part of the apiserver.
|
||||
called an [Admission Controller](admission-controllers). It is part of the apiserver.
|
||||
It acts synchronously to modify pods as they are created or updated. When this plugin is active
|
||||
(and it is by default on most distributions), it does the following when a pod is created or modified:
|
||||
1. If the pod does not have a `ServiceAccount` set, it sets the `ServiceAccount` to `default`.
|
||||
|
@ -66,7 +62,7 @@ of type `ServiceAccountToken` with an annotation referencing the service
|
|||
account, and the controller will update it with a generated token:
|
||||
|
||||
{% highlight json %}
|
||||
{% raw %}
|
||||
|
||||
secret.json:
|
||||
{
|
||||
"kind": "Secret",
|
||||
|
@ -79,22 +75,22 @@ secret.json:
|
|||
},
|
||||
"type": "kubernetes.io/service-account-token"
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
kubectl create -f ./secret.json
|
||||
kubectl describe secret mysecretname
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
#### To delete/invalidate a service account token
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
kubectl delete secret mysecretname
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Service Account Controller
|
||||
|
|
|
@ -1,151 +1,130 @@
|
|||
---
|
||||
title: "Static pods (deprecated)"
|
||||
---
|
||||
|
||||
|
||||
# Static pods (deprecated)
|
||||
|
||||
**Static pods are to be deprecated and can be removed in any future Kubernetes release!**
|
||||
|
||||
*Static pods* are managed directly by the kubelet daemon on a specific node, without the API server observing them. They do not have an associated replication controller; the kubelet daemon itself watches them and restarts them when they crash. There is no health check, though. Static pods are always bound to one kubelet daemon and always run on the same node with it.
|
||||
|
||||
The kubelet automatically creates a so-called *mirror pod* on the Kubernetes API server for each static pod, so the pods are visible there, but they cannot be controlled from the API server.
|
||||
|
||||
## Static pod creation
|
||||
|
||||
Static pods can be created in two ways: either by using configuration file(s) or by HTTP.
|
||||
|
||||
### Configuration files
|
||||
|
||||
The configuration files are just standard pod definitions in json or yaml format, placed in a specific directory. Use `kubelet --config=<the directory>` to start the kubelet daemon, which periodically scans the directory and creates/deletes static pods as yaml/json files appear/disappear there.
|
||||
|
||||
For example, this is how to start a simple web server as a static pod:
|
||||
|
||||
1. Choose a node where we want to run the static pod. In this example, it's `my-minion1`.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
[joe@host ~] $ ssh my-minion1
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
2. Choose a directory, say `/etc/kubelet.d`, and place a web server pod definition there, e.g. `/etc/kubelet.d/static-web.yaml`:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
[root@my-minion1 ~] $ mkdir /etc/kubelet.d/
|
||||
[root@my-minion1 ~] $ cat <<EOF >/etc/kubelet.d/static-web.yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: static-web
|
||||
labels:
|
||||
role: myrole
|
||||
spec:
|
||||
containers:
|
||||
- name: web
|
||||
image: nginx
|
||||
ports:
|
||||
- name: web
|
||||
containerPort: 80
|
||||
protocol: tcp
|
||||
EOF
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
3. Configure your kubelet daemon on the node to use this directory by running it with the `--config=/etc/kubelet.d/` argument. On Fedora 21 with Kubernetes 0.17, edit `/etc/kubernetes/kubelet` to include this line:
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --config=/etc/kubelet.d/"
|
||||
{% endraw %}
|
||||
```
|
||||
|
||||
Instructions for other distributions or Kubernetes installations may vary.
|
||||
|
||||
4. Restart kubelet. On Fedora 21, this is:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
[root@my-minion1 ~] $ systemctl restart kubelet
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
## Pods created via HTTP
|
||||
|
||||
The kubelet periodically downloads a file specified by the `--manifest-url=<URL>` argument and interprets it as a json/yaml file with a pod definition. It works the same way as `--config=<directory>`, i.e. the file is re-read periodically and any changes are applied to running static pods (see below).
|
||||
|
||||
## Behavior of static pods
|
||||
|
||||
When the kubelet starts, it automatically starts all pods defined in the directory specified by the `--config=` or `--manifest-url=` argument, i.e. our static-web pod. (It may take some time to pull the nginx image, be patient…):
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
[joe@my-minion1 ~] $ docker ps
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
|
||||
f6d05272b57e nginx:latest "nginx" 8 minutes ago Up 8 minutes k8s_web.6f802af4_static-web-fk-minion1_default_67e24ed9466ba55986d120c867395f3c_378e5f3c
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
If we look at our Kubernetes API server (running on host `my-master`), we see that a new mirror-pod was created there too:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
[joe@host ~] $ ssh my-master
|
||||
[joe@my-master ~] $ kubectl get pods
|
||||
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
|
||||
static-web-my-minion1 172.17.0.3 my-minion1/192.168.100.71 role=myrole Running 11 minutes
|
||||
web nginx Running 11 minutes
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Labels from the static pod are propagated into the mirror-pod and can be used as usual for filtering.
|
||||
|
||||
Notice that we cannot delete the pod through the API server (e.g. via the [`kubectl`](../user-guide/kubectl/kubectl.html) command); the kubelet simply won't remove it.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
[joe@my-master ~] $ kubectl delete pod static-web-my-minion1
|
||||
pods/static-web-my-minion1
|
||||
[joe@my-master ~] $ kubectl get pods
|
||||
POD IP CONTAINER(S) IMAGE(S) HOST ...
|
||||
static-web-my-minion1 172.17.0.3 my-minion1/192.168.100.71 ...
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Back on our `my-minion1` host, we can try to stop the container manually and see that the kubelet automatically restarts it after a while:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
[joe@host ~] $ ssh my-minion1
|
||||
[joe@my-minion1 ~] $ docker stop f6d05272b57e
|
||||
[joe@my-minion1 ~] $ sleep 20
|
||||
[joe@my-minion1 ~] $ docker ps
|
||||
CONTAINER ID IMAGE COMMAND CREATED ...
|
||||
5b920cbaf8b1 nginx:latest "nginx -g 'daemon of 2 seconds ago ...
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
## Dynamic addition and removal of static pods
|
||||
|
||||
The running kubelet periodically scans the configured directory (`/etc/kubelet.d` in our example) for changes and adds/removes pods as files appear/disappear in this directory.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
[joe@my-minion1 ~] $ mv /etc/kubelet.d/static-web.yaml /tmp
|
||||
[joe@my-minion1 ~] $ sleep 20
|
||||
[joe@my-minion1 ~] $ docker ps
|
||||
// no nginx container is running
|
||||
[joe@my-minion1 ~] $ mv /tmp/static-web.yaml /etc/kubelet.d/
|
||||
[joe@my-minion1 ~] $ sleep 20
|
||||
[joe@my-minion1 ~] $ docker ps
|
||||
CONTAINER ID IMAGE COMMAND CREATED ...
|
||||
e7a62e3427f1 nginx:latest "nginx -g 'daemon of 27 seconds ago
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
---
|
||||
title: "Static pods (deprecated)"
|
||||
---
|
||||
|
||||
**Static pods are to be deprecated and can be removed in any future Kubernetes release!**
|
||||
|
||||
*Static pods* are managed directly by the kubelet daemon on a specific node, without the API server observing them. They do not have an associated replication controller; the kubelet daemon itself watches them and restarts them when they crash. There is no health check, though. Static pods are always bound to one kubelet daemon and always run on the same node with it.
|
||||
|
||||
The kubelet automatically creates a so-called *mirror pod* on the Kubernetes API server for each static pod, so the pods are visible there, but they cannot be controlled from the API server.
|
||||
|
||||
## Static pod creation
|
||||
|
||||
Static pods can be created in two ways: either by using configuration file(s) or by HTTP.
|
||||
|
||||
### Configuration files
|
||||
|
||||
The configuration files are just standard pod definitions in json or yaml format, placed in a specific directory. Use `kubelet --config=<the directory>` to start the kubelet daemon, which periodically scans the directory and creates/deletes static pods as yaml/json files appear/disappear there.
|
||||
|
||||
For example, this is how to start a simple web server as a static pod:
|
||||
|
||||
1. Choose a node where we want to run the static pod. In this example, it's `my-minion1`.
|
||||
|
||||
{% highlight console %}
|
||||
[joe@host ~] $ ssh my-minion1
|
||||
{% endhighlight %}
|
||||
|
||||
2. Choose a directory, say `/etc/kubelet.d`, and place a web server pod definition there, e.g. `/etc/kubelet.d/static-web.yaml`:
|
||||
|
||||
{% highlight console %}
|
||||
[root@my-minion1 ~] $ mkdir /etc/kubelet.d/
|
||||
[root@my-minion1 ~] $ cat <<EOF >/etc/kubelet.d/static-web.yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: static-web
|
||||
labels:
|
||||
role: myrole
|
||||
spec:
|
||||
containers:
|
||||
- name: web
|
||||
image: nginx
|
||||
ports:
|
||||
- name: web
|
||||
containerPort: 80
|
||||
protocol: tcp
|
||||
EOF
|
||||
{% endhighlight %}
|
||||
|
||||
3. Configure your kubelet daemon on the node to use this directory by running it with the `--config=/etc/kubelet.d/` argument. On Fedora 21 with Kubernetes 0.17, edit `/etc/kubernetes/kubelet` to include this line:
|
||||
|
||||
```
|
||||
KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --config=/etc/kubelet.d/"
|
||||
```
|
||||
|
||||
Instructions for other distributions or Kubernetes installations may vary.
|
||||
|
||||
4. Restart kubelet. On Fedora 21, this is:
|
||||
|
||||
{% highlight console %}
|
||||
[root@my-minion1 ~] $ systemctl restart kubelet
|
||||
{% endhighlight %}
|
||||
|
||||
## Pods created via HTTP
|
||||
|
||||
The kubelet periodically downloads a file specified by the `--manifest-url=<URL>` argument and interprets it as a json/yaml file with a pod definition. It works the same way as `--config=<directory>`, i.e. the file is re-read periodically and any changes are applied to running static pods (see below).
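For example, on the Fedora setup described in the previous section you could point the kubelet at a manifest served over HTTP by editing `/etc/kubernetes/kubelet`; the URL here is purely hypothetical:

```
KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --manifest-url=http://my-config-server/static-web.yaml"
```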
|
||||
|
||||
## Behavior of static pods
|
||||
|
||||
When the kubelet starts, it automatically starts all pods defined in the directory specified by the `--config=` or `--manifest-url=` argument, i.e. our static-web pod. (It may take some time to pull the nginx image, be patient…):
|
||||
|
||||
{% highlight console %}
|
||||
[joe@my-minion1 ~] $ docker ps
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
|
||||
f6d05272b57e nginx:latest "nginx" 8 minutes ago Up 8 minutes k8s_web.6f802af4_static-web-fk-minion1_default_67e24ed9466ba55986d120c867395f3c_378e5f3c
|
||||
{% endhighlight %}
|
||||
|
||||
If we look at our Kubernetes API server (running on host `my-master`), we see that a new mirror-pod was created there too:
|
||||
|
||||
{% highlight console %}
|
||||
[joe@host ~] $ ssh my-master
|
||||
[joe@my-master ~] $ kubectl get pods
|
||||
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
|
||||
static-web-my-minion1 172.17.0.3 my-minion1/192.168.100.71 role=myrole Running 11 minutes
|
||||
web nginx Running 11 minutes
|
||||
{% endhighlight %}
|
||||
|
||||
Labels from the static pod are propagated into the mirror-pod and can be used as usual for filtering.
|
||||
|
||||
Notice that we cannot delete the pod through the API server (e.g. via the [`kubectl`](../user-guide/kubectl/kubectl) command); the kubelet simply won't remove it.
|
||||
|
||||
{% highlight console %}
|
||||
[joe@my-master ~] $ kubectl delete pod static-web-my-minion1
|
||||
pods/static-web-my-minion1
|
||||
[joe@my-master ~] $ kubectl get pods
|
||||
POD IP CONTAINER(S) IMAGE(S) HOST ...
|
||||
static-web-my-minion1 172.17.0.3 my-minion1/192.168.100.71 ...
|
||||
{% endhighlight %}
|
||||
|
||||
Back on our `my-minion1` host, we can try to stop the container manually and see that the kubelet automatically restarts it after a while:
|
||||
|
||||
{% highlight console %}
|
||||
[joe@host ~] $ ssh my-minion1
|
||||
[joe@my-minion1 ~] $ docker stop f6d05272b57e
|
||||
[joe@my-minion1 ~] $ sleep 20
|
||||
[joe@my-minion1 ~] $ docker ps
|
||||
CONTAINER ID IMAGE COMMAND CREATED ...
|
||||
5b920cbaf8b1 nginx:latest "nginx -g 'daemon of 2 seconds ago ...
|
||||
{% endhighlight %}
|
||||
|
||||
## Dynamic addition and removal of static pods
|
||||
|
||||
The running kubelet periodically scans the configured directory (`/etc/kubelet.d` in our example) for changes and adds/removes pods as files appear/disappear in this directory.
|
||||
|
||||
{% highlight console %}
|
||||
[joe@my-minion1 ~] $ mv /etc/kubelet.d/static-web.yaml /tmp
|
||||
[joe@my-minion1 ~] $ sleep 20
|
||||
[joe@my-minion1 ~] $ docker ps
|
||||
// no nginx container is running
|
||||
[joe@my-minion1 ~] $ mv /tmp/static-web.yaml /etc/kubelet.d/
|
||||
[joe@my-minion1 ~] $ sleep 20
|
||||
[joe@my-minion1 ~] $ docker ps
|
||||
CONTAINER ID IMAGE COMMAND CREATED ...
|
||||
e7a62e3427f1 nginx:latest "nginx -g 'daemon of 27 seconds ago
|
||||
{% endhighlight %}
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -1,19 +1,15 @@
|
|||
---
|
||||
title: "The Kubernetes API"
|
||||
---
|
||||
Primary system and API concepts are documented in the [User guide](user-guide/README).
|
||||
|
||||
|
||||
# The Kubernetes API
|
||||
|
||||
Primary system and API concepts are documented in the [User guide](user-guide/README.html).
|
||||
|
||||
Overall API conventions are described in the [API conventions doc](devel/api-conventions.html).
|
||||
Overall API conventions are described in the [API conventions doc](devel/api-conventions).
|
||||
|
||||
Complete API details are documented via [Swagger](http://swagger.io/). The Kubernetes apiserver (aka "master") exports an API that can be used to retrieve the [Swagger spec](https://github.com/swagger-api/swagger-spec/tree/master/schemas/v1.2) for the Kubernetes API, by default at `/swaggerapi`, and a UI you can use to browse the API documentation at `/swagger-ui`. We also periodically update a [statically generated UI](http://kubernetes.io/third_party/swagger-ui/).
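For example, if your apiserver's local insecure port is enabled (8080 is the usual default on the master in this release; adjust the host and port for your cluster), the spec can be fetched directly:

{% highlight console %}
$ curl http://localhost:8080/swaggerapi/
{% endhighlight %}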
|
||||
|
||||
Remote access to the API is discussed in the [access doc](admin/accessing-the-api.html).
|
||||
Remote access to the API is discussed in the [access doc](admin/accessing-the-api).
|
||||
|
||||
The Kubernetes API also serves as the foundation for the declarative configuration schema for the system. The [Kubectl](user-guide/kubectl/kubectl.html) command-line tool can be used to create, update, delete, and get API objects.
|
||||
The Kubernetes API also serves as the foundation for the declarative configuration schema for the system. The [Kubectl](user-guide/kubectl/kubectl) command-line tool can be used to create, update, delete, and get API objects.
|
||||
|
||||
Kubernetes also stores its serialized state (currently in [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd/)) in terms of the API resources.
|
||||
|
||||
|
@ -23,7 +19,7 @@ Kubernetes itself is decomposed into multiple components, which interact through
|
|||
|
||||
In our experience, any system that is successful needs to grow and change as new use cases emerge or existing ones change. Therefore, we expect the Kubernetes API to continuously change and grow. However, we intend to not break compatibility with existing clients for an extended period of time. In general, new API resources and new resource fields can be expected to be added frequently. Elimination of resources or fields will require following a deprecation process. The precise deprecation policy for eliminating features is TBD, but once we reach our 1.0 milestone, there will be a specific policy.
|
||||
|
||||
What constitutes a compatible change and how to change the API are detailed by the [API change document](devel/api_changes.html).
|
||||
What constitutes a compatible change and how to change the API are detailed by the [API change document](devel/api_changes).
|
||||
|
||||
## API versioning
|
||||
|
||||
|
@ -34,7 +30,7 @@ multiple API versions, each at a different API path, such as `/api/v1` or
|
|||
We chose to version at the API level rather than at the resource or field level to ensure that the API presents a clear, consistent view of system resources and behavior, and to enable controlling access to end-of-lifed and/or experimental APIs.
|
||||
|
||||
Note that API versioning and Software versioning are only indirectly related. The [API and release
|
||||
versioning proposal](design/versioning.html) describes the relationship between API versioning and
|
||||
versioning proposal](design/versioning) describes the relationship between API versioning and
|
||||
software versioning.
|
||||
|
||||
|
||||
|
@ -64,7 +60,7 @@ in more detail in the [API Changes documentation](devel/api_changes.html#alpha-b
|
|||
## API groups
|
||||
|
||||
To make it easier to extend the Kubernetes API, we are in the process of implementing [*API
|
||||
groups*](proposals/api-group.html). These are simply different interfaces to read and/or modify the
|
||||
groups*](proposals/api-group). These are simply different interfaces to read and/or modify the
|
||||
same underlying resources. The API group is specified in a REST path and in the `apiVersion` field
|
||||
of a serialized object.
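As an illustration, an object in the legacy group carries a bare version, while an object in a named group (here the `extensions` group, assuming it is enabled in your cluster) qualifies the version with the group name and is served under `/apis/<group>/<version>` instead of `/api/v1`:

{% highlight yaml %}
# Legacy (core) group -- served under /api/v1
apiVersion: v1
kind: Pod
---
# Named group -- served under /apis/extensions/v1beta1
apiVersion: extensions/v1beta1
kind: Deployment
{% endhighlight %}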
|
||||
|
||||
|
@ -102,7 +98,7 @@ Changes to services are the most significant difference between v1beta3 and v1.
|
|||
|
||||
Some other differences between v1beta3 and v1:
|
||||
|
||||
* The `pod.spec.containers[*].privileged` and `pod.spec.containers[*].capabilities` properties are now nested under the `pod.spec.containers[*].securityContext` property. See [Security Contexts](user-guide/security-context.html).
|
||||
* The `pod.spec.containers[*].privileged` and `pod.spec.containers[*].capabilities` properties are now nested under the `pod.spec.containers[*].securityContext` property (see the sketch after this list). See [Security Contexts](user-guide/security-context).
|
||||
* The `pod.spec.host` property is renamed to `pod.spec.nodeName`.
|
||||
* The `endpoints.subsets[*].addresses.IP` property is renamed to `endpoints.subsets[*].addresses.ip`.
|
||||
* The `pod.status.containerStatuses[*].state.termination` and `pod.status.containerStatuses[*].lastState.termination` properties are renamed to `pod.status.containerStatuses[*].state.terminated` and `pod.status.containerStatuses[*].lastState.terminated` respectively.
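For the `securityContext` change noted above, here is a hedged before/after sketch of the affected container fields (fragments only, with illustrative values):

{% highlight yaml %}
# v1beta3: privileged sat directly on the container
containers:
- name: example
  image: nginx
  privileged: true

# v1: the same setting is nested under securityContext
containers:
- name: example
  image: nginx
  securityContext:
    privileged: true
{% endhighlight %}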
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Kubernetes Design Overview"
|
||||
---
|
||||
|
||||
|
||||
# Kubernetes Design Overview
|
||||
|
||||
Kubernetes is a system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.
|
||||
|
||||
Kubernetes establishes robust declarative primitives for maintaining the desired state requested by the user. We see these primitives as the main value added by Kubernetes. Self-healing mechanisms, such as auto-restarting, re-scheduling, and replicating containers, require active controllers, not just imperative orchestration.
|
||||
|
@ -15,11 +11,11 @@ Kubernetes enables users to ask a cluster to run a set of containers. The system
|
|||
|
||||
Kubernetes is intended to run on a number of cloud providers, as well as on physical hosts.
|
||||
|
||||
A single Kubernetes cluster is not intended to span multiple availability zones. Instead, we recommend building a higher-level layer to replicate complete deployments of highly available applications across multiple zones (see [the multi-cluster doc](../admin/multi-cluster.html) and [cluster federation proposal](../proposals/federation.html) for more details).
|
||||
A single Kubernetes cluster is not intended to span multiple availability zones. Instead, we recommend building a higher-level layer to replicate complete deployments of highly available applications across multiple zones (see [the multi-cluster doc](../admin/multi-cluster) and [cluster federation proposal](../proposals/federation) for more details).
|
||||
|
||||
Finally, Kubernetes aspires to be an extensible, pluggable, building-block OSS platform and toolkit. Therefore, architecturally, we want Kubernetes to be built as a collection of pluggable components and layers, with the ability to use alternative schedulers, controllers, storage systems, and distribution mechanisms, and we're evolving its current code in that direction. Furthermore, we want others to be able to extend Kubernetes functionality, such as with higher-level PaaS functionality or multi-cluster layers, without modification of core Kubernetes source. Therefore, its API isn't just (or even necessarily mainly) targeted at end users, but at tool and extension developers. Its APIs are intended to serve as the foundation for an open ecosystem of tools, automation systems, and higher-level API layers. Consequently, there are no "internal" inter-component APIs. All APIs are visible and available, including the APIs used by the scheduler, the node controller, the replication-controller manager, Kubelet's API, etc. There's no glass to break -- in order to handle more complex use cases, one can just access the lower-level APIs in a fully transparent, composable manner.
|
||||
|
||||
For more about the Kubernetes architecture, see [architecture](architecture.html).
|
||||
For more about the Kubernetes architecture, see [architecture](architecture).
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -1,13 +1,8 @@
|
|||
---
|
||||
title: "K8s Identity and Access Management Sketch"
|
||||
---
|
||||
|
||||
|
||||
# K8s Identity and Access Management Sketch
|
||||
|
||||
This document suggests a direction for identity and access management in the Kubernetes system.
|
||||
|
||||
|
||||
## Background
|
||||
|
||||
High level goals are:
|
||||
|
@ -154,7 +149,7 @@ Improvements:
|
|||
|
||||
K8s will have a `namespace` API object. It is similar to a Google Compute Engine `project`. It provides a namespace for objects created by a group of people co-operating together, preventing name collisions with non-cooperating groups. It also serves as a reference point for authorization policies.
|
||||
|
||||
Namespaces are described in [namespaces.md](namespaces.html).
|
||||
Namespaces are described in [namespaces.md](namespaces).
|
||||
|
||||
In the Enterprise Profile:
|
||||
- a `userAccount` may have permission to access several `namespace`s.
|
||||
|
@ -164,7 +159,7 @@ In the Simple Profile:
|
|||
|
||||
Namespaces vs. userAccounts vs. Labels:
|
||||
- `userAccount`s are intended for audit logging (both name and UID should be logged), and to define who has access to `namespace`s.
|
||||
- `labels` (see [docs/user-guide/labels.md](../../docs/user-guide/labels.html)) should be used to distinguish pods, users, and other objects that cooperate towards a common goal but are different in some way, such as version, or responsibilities.
|
||||
- `labels` (see [docs/user-guide/labels.md](/{{page.version}}/docs/user-guide/labels)) should be used to distinguish pods, users, and other objects that cooperate towards a common goal but are different in some way, such as version, or responsibilities.
|
||||
- `namespace`s prevent name collisions between uncoordinated groups of people, and provide a place to attach common policies for co-operating groups of people.
|
||||
|
||||
|
||||
|
@ -225,7 +220,7 @@ Policy objects may be applicable only to a single namespace or to all namespaces
|
|||
|
||||
## Accounting
|
||||
|
||||
The API should have a `quota` concept (see http://issue.k8s.io/442). A quota object relates a namespace (and optionally a label selector) to a maximum quantity of resources that may be used (see [resources design doc](resources.html)).
|
||||
The API should have a `quota` concept (see http://issue.k8s.io/442). A quota object relates a namespace (and optionally a label selector) to a maximum quantity of resources that may be used (see [resources design doc](resources)).
|
||||
|
||||
Initially:
|
||||
- a `quota` object is immutable.
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Kubernetes Proposal - Admission Control"
|
||||
---
|
||||
|
||||
|
||||
# Kubernetes Proposal - Admission Control
|
||||
|
||||
**Related PR:**
|
||||
|
||||
| Topic | Link |
|
||||
|
@ -41,7 +37,7 @@ The kube-apiserver takes the following OPTIONAL arguments to enable admission co
|
|||
An **AdmissionControl** plug-in is an implementation of the following interface:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
package admission
|
||||
|
||||
// Attributes is an interface used by a plug-in to make an admission decision on an individual request.
|
||||
|
@ -58,18 +54,18 @@ type Interface interface {
|
|||
// An error is returned if it denies the request.
|
||||
Admit(a Attributes) (err error)
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
A **plug-in** must be compiled with the binary, and is registered as an available option by providing a name and an implementation
|
||||
of admission.Interface.
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
func init() {
|
||||
admission.RegisterPlugin("AlwaysDeny", func(client client.Interface, config io.Reader) (admission.Interface, error) { return NewAlwaysDeny(), nil })
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
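A minimal sketch of what such a plug-in could look like, shown in the same `admission` package style as the snippet above; `alwaysDeny` and its error message are illustrative only, not the actual upstream implementation:

{% highlight go %}
package admission

import "fmt"

// alwaysDeny is an illustrative plug-in that rejects every request.
type alwaysDeny struct{}

// Admit returns a non-nil error, which causes the API server to deny the request.
func (alwaysDeny) Admit(a Attributes) error {
	return fmt.Errorf("admission denied: AlwaysDeny rejects all requests")
}

// NewAlwaysDeny constructs the plug-in instance registered by the init() function above.
func NewAlwaysDeny() Interface {
	return alwaysDeny{}
}
{% endhighlight %}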
|
||||
|
||||
Invocation of admission control is handled by the **APIServer** and not individual **RESTStorage** implementations.
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Admission control plugin: LimitRanger"
|
||||
---
|
||||
|
||||
|
||||
# Admission control plugin: LimitRanger
|
||||
|
||||
## Background
|
||||
|
||||
This document proposes a system for enforcing resource requirements constraints as part of admission control.
|
||||
|
@ -25,7 +21,7 @@ The **LimitRange** resource is scoped to a **Namespace**.
|
|||
### Type
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
// LimitType is a type of object that is limited
|
||||
type LimitType string
|
||||
|
||||
|
@ -81,7 +77,7 @@ type LimitRangeList struct {
|
|||
// More info: http://releases.k8s.io/release-1.1/docs/design/admission_control_limit_range.md
|
||||
Items []LimitRange `json:"items"`
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Validation
|
||||
|
@ -95,21 +91,21 @@ Min (if specified) <= DefaultRequest (if specified) <= Default (if specified) <=
|
|||
The following default value behaviors are applied to a LimitRange for a given named resource.
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
|
||||
if LimitRangeItem.Default[resourceName] is undefined
|
||||
if LimitRangeItem.Max[resourceName] is defined
|
||||
LimitRangeItem.Default[resourceName] = LimitRangeItem.Max[resourceName]
|
||||
{% endraw %}
|
||||
|
||||
```
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
|
||||
if LimitRangeItem.DefaultRequest[resourceName] is undefined
|
||||
if LimitRangeItem.Default[resourceName] is defined
|
||||
LimitRangeItem.DefaultRequest[resourceName] = LimitRangeItem.Default[resourceName]
|
||||
else if LimitRangeItem.Min[resourceName] is defined
|
||||
LimitRangeItem.DefaultRequest[resourceName] = LimitRangeItem.Min[resourceName]
|
||||
{% endraw %}
|
||||
|
||||
```
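The same two rules, restated as a compilable Go sketch; `limitRangeItem` with plain int64 quantities is a simplification of the real LimitRangeItem (which uses resource quantity maps), used here only to make the control flow explicit:

{% highlight go %}
package limitranger

// limitRangeItem is a simplified stand-in for LimitRangeItem; quantities are
// plain int64 values and all maps are assumed to be initialized.
type limitRangeItem struct {
	Max, Min, Default, DefaultRequest map[string]int64
}

// applyDefaults fills in Default and DefaultRequest for one named resource,
// following the rules quoted above.
func applyDefaults(item *limitRangeItem, resourceName string) {
	if _, ok := item.Default[resourceName]; !ok {
		if max, ok := item.Max[resourceName]; ok {
			item.Default[resourceName] = max
		}
	}
	if _, ok := item.DefaultRequest[resourceName]; !ok {
		if def, ok := item.Default[resourceName]; ok {
			item.DefaultRequest[resourceName] = def
		} else if min, ok := item.Min[resourceName]; ok {
			item.DefaultRequest[resourceName] = min
		}
	}
}
{% endhighlight %}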
|
||||
|
||||
## AdmissionControl plugin: LimitRanger
|
||||
|
@ -121,9 +117,9 @@ If a constraint is not specified for an enumerated resource, it is not enforced
|
|||
To enable the plug-in and support for LimitRange, the kube-apiserver must be configured as follows:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kube-apiserver --admission-control=LimitRanger
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Enforcement of constraints
|
||||
|
@ -172,7 +168,7 @@ Across all containers in pod, the following must hold true
|
|||
The default ```LimitRange``` that is applied via Salt configuration will be updated as follows:
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
|
||||
apiVersion: "v1"
|
||||
kind: "LimitRange"
|
||||
metadata:
|
||||
|
@ -183,7 +179,7 @@ spec:
|
|||
- type: "Container"
|
||||
defaultRequests:
|
||||
cpu: "100m"
|
||||
{% endraw %}
|
||||
|
||||
```
|
||||
|
||||
## Example
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Admission control plugin: ResourceQuota"
|
||||
---
|
||||
|
||||
|
||||
# Admission control plugin: ResourceQuota
|
||||
|
||||
## Background
|
||||
|
||||
This document describes a system for enforcing hard resource usage limits per namespace as part of admission control.
|
||||
|
@ -20,7 +16,7 @@ This document describes a system for enforcing hard resource usage limits per na
|
|||
The **ResourceQuota** object is scoped to a **Namespace**.
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
// The following identify resource constants for Kubernetes object types
|
||||
const (
|
||||
// Pods, number
|
||||
|
@ -71,7 +67,7 @@ type ResourceQuotaList struct {
|
|||
// Items is a list of ResourceQuota objects
|
||||
Items []ResourceQuota `json:"items" description:"items is a list of ResourceQuota objects; see http://releases.k8s.io/release-1.1/docs/design/admission_control_resource_quota.md#admissioncontrol-plugin-resourcequota"`
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
## Quota Tracked Resources
|
||||
|
@ -152,9 +148,9 @@ The **ResourceQuota** plug-in introspects all incoming admission requests.
|
|||
To enable the plug-in and support for ResourceQuota, the kube-apiserver must be configured as follows:
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
|
||||
$ kube-apiserver --admission-control=ResourceQuota
|
||||
{% endraw %}
|
||||
|
||||
```
|
||||
|
||||
It makes decisions by evaluating the incoming object against all defined **ResourceQuota.Status.Hard** resource limits in the request
|
||||
|
@ -177,7 +173,7 @@ kubectl is modified to support the **ResourceQuota** resource.
|
|||
For example,
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl create -f docs/admin/resourcequota/namespace.yaml
|
||||
namespace "quota-example" created
|
||||
$ kubectl create -f docs/admin/resourcequota/quota.yaml --namespace=quota-example
|
||||
|
@ -195,11 +191,11 @@ replicationcontrollers 0 20
|
|||
resourcequotas 1 1
|
||||
secrets 1 10
|
||||
services 0 5
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
## More information
|
||||
|
||||
See [resource quota document](../admin/resource-quota.html) and the [example of Resource Quota](../admin/resourcequota/) for more information.
|
||||
See [resource quota document](../admin/resource-quota) and the [example of Resource Quota](../admin/resourcequota/) for more information.
|
||||
|
||||
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Kubernetes architecture"
|
||||
---
|
||||
|
||||
|
||||
# Kubernetes architecture
|
||||
|
||||
A running Kubernetes cluster contains node agents (`kubelet`) and master components (APIs, scheduler, etc), on top of a distributed storage solution. This diagram shows our desired eventual state, though we're still working on a few things, like making `kubelet` itself (all our components, really) run within containers, and making the scheduler 100% pluggable.
|
||||
|
||||

|
||||
|
@ -19,13 +15,13 @@ Each node runs Docker, of course. Docker takes care of the details of downloadi
|
|||
|
||||
### `kubelet`
|
||||
|
||||
The `kubelet` manages [pods](../user-guide/pods.html) and their containers, their images, their volumes, etc.
|
||||
The `kubelet` manages [pods](../user-guide/pods) and their containers, their images, their volumes, etc.
|
||||
|
||||
### `kube-proxy`
|
||||
|
||||
Each node also runs a simple network proxy and load balancer (see the [services FAQ](https://github.com/kubernetes/kubernetes/wiki/Services-FAQ) for more details). This reflects `services` (see [the services doc](../user-guide/services.html) for more details) as defined in the Kubernetes API on each node and can do simple TCP and UDP stream forwarding (round robin) across a set of backends.
|
||||
Each node also runs a simple network proxy and load balancer (see the [services FAQ](https://github.com/kubernetes/kubernetes/wiki/Services-FAQ) for more details). This reflects `services` (see [the services doc](../user-guide/services) for more details) as defined in the Kubernetes API on each node and can do simple TCP and UDP stream forwarding (round robin) across a set of backends.
|
||||
|
||||
Service endpoints are currently found via [DNS](../admin/dns.html) or through environment variables (both [Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) and Kubernetes `{FOO}_SERVICE_HOST` and `{FOO}_SERVICE_PORT` variables are supported). These variables resolve to ports managed by the service proxy.
|
||||
Service endpoints are currently found via [DNS](../admin/dns) or through environment variables (both [Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) and Kubernetes `{FOO}_SERVICE_HOST` and `{FOO}_SERVICE_PORT` variables are supported). These variables resolve to ports managed by the service proxy.
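For illustration, a container could resolve a service named `redis-master` (a hypothetical service used only for this example) from those environment variables like this:

{% highlight go %}
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// For a service named "redis-master", the kubelet injects
	// REDIS_MASTER_SERVICE_HOST and REDIS_MASTER_SERVICE_PORT into containers.
	host := os.Getenv("REDIS_MASTER_SERVICE_HOST")
	port := os.Getenv("REDIS_MASTER_SERVICE_PORT")
	fmt.Println("redis-master is reachable at", net.JoinHostPort(host, port))
}
{% endhighlight %}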
|
||||
|
||||
## The Kubernetes Control Plane
|
||||
|
||||
|
@ -37,7 +33,7 @@ All persistent master state is stored in an instance of `etcd`. This provides a
|
|||
|
||||
### Kubernetes API Server
|
||||
|
||||
The apiserver serves up the [Kubernetes API](../api.html). It is intended to be a CRUD-y server, with most/all business logic implemented in separate components or in plug-ins. It mainly processes REST operations, validates them, and updates the corresponding objects in `etcd` (and eventually other stores).
|
||||
The apiserver serves up the [Kubernetes API](../api). It is intended to be a CRUD-y server, with most/all business logic implemented in separate components or in plug-ins. It mainly processes REST operations, validates them, and updates the corresponding objects in `etcd` (and eventually other stores).
|
||||
|
||||
### Scheduler
|
||||
|
||||
|
@ -47,7 +43,7 @@ The scheduler binds unscheduled pods to nodes via the `/binding` API. The schedu
|
|||
|
||||
All other cluster-level functions are currently performed by the Controller Manager. For instance, `Endpoints` objects are created and updated by the endpoints controller, and nodes are discovered, managed, and monitored by the node controller. These could eventually be split into separate components to make them independently pluggable.
|
||||
|
||||
The [`replicationcontroller`](../user-guide/replication-controller.html) is a mechanism that is layered on top of the simple [`pod`](../user-guide/pods.html) API. We eventually plan to port it to a generic plug-in mechanism, once one is implemented.
|
||||
The [`replicationcontroller`](../user-guide/replication-controller) is a mechanism that is layered on top of the simple [`pod`](../user-guide/pods) API. We eventually plan to port it to a generic plug-in mechanism, once one is implemented.
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -1,11 +1,6 @@
|
|||
---
|
||||
title: "Clustering in Kubernetes"
|
||||
---
|
||||
|
||||
|
||||
# Clustering in Kubernetes
|
||||
|
||||
|
||||
## Overview
|
||||
|
||||
The term "clustering" refers to the process of having all members of the Kubernetes cluster find and trust each other. There are multiple different ways to achieve clustering with different security and usability profiles. This document attempts to lay out the user experiences for clustering that Kubernetes aims to address.
|
||||
|
|
|
@ -1,38 +1,34 @@
|
|||
---
|
||||
title: "Building with Docker"
|
||||
---
|
||||
|
||||
This directory contains diagrams for the clustering design doc.
|
||||
|
||||
This depends on the `seqdiag` [utility](http://blockdiag.com/en/seqdiag/index.html). Assuming you have a non-borked python install, this should be installable with
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
pip install seqdiag
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Just call `make` to regenerate the diagrams.
|
||||
|
||||
## Building with Docker
|
||||
|
||||
If you are on a Mac or your pip install is messed up, you can easily build with docker.
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
make docker
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
The first run will be slow but things should be fast after that.
|
||||
|
||||
To clean up the docker containers that are created (and other cruft that is left around) you can run `make docker-clean`.
|
||||
|
||||
If you are using boot2docker and get warnings about clock skew (or if things aren't building for some reason) then you can fix that up with `make fix-clock-skew`.
|
||||
|
||||
## Automatically rebuild on file changes
|
||||
|
||||
If you have the fswatch utility installed, you can have it monitor the file system and automatically rebuild when files have changed. Just do a `make watch`.
|
||||
|
||||
|
||||
|
||||
---
|
||||
title: "Building with Docker"
|
||||
---
|
||||
|
||||
This directory contains diagrams for the clustering design doc.
|
||||
|
||||
This depends on the `seqdiag` [utility](http://blockdiag.com/en/seqdiag/index). Assuming you have a non-borked python install, this should be installable with
|
||||
|
||||
{% highlight sh %}
|
||||
pip install seqdiag
|
||||
{% endhighlight %}
|
||||
|
||||
Just call `make` to regenerate the diagrams.
|
||||
|
||||
## Building with Docker
|
||||
|
||||
If you are on a Mac or your pip install is messed up, you can easily build with docker.
|
||||
|
||||
{% highlight sh %}
|
||||
make docker
|
||||
{% endhighlight %}
|
||||
|
||||
The first run will be slow but things should be fast after that.
|
||||
|
||||
To clean up the docker containers that are created (and other cruft that is left around) you can run `make docker-clean`.
|
||||
|
||||
If you are using boot2docker and get warnings about clock skew (or if things aren't building for some reason) then you can fix that up with `make fix-clock-skew`.
|
||||
|
||||
## Automatically rebuild on file changes
|
||||
|
||||
If you have the fswatch utility installed, you can have it monitor the file system and automatically rebuild when files have changed. Just do a `make watch`.
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -1,38 +1,34 @@
|
|||
---
|
||||
title: "Building with Docker"
|
||||
---
|
||||
|
||||
This directory contains diagrams for the clustering design doc.
|
||||
|
||||
This depends on the `seqdiag` [utility](http://blockdiag.com/en/seqdiag/index.html). Assuming you have a non-borked python install, this should be installable with
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
pip install seqdiag
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
Just call `make` to regenerate the diagrams.
|
||||
|
||||
## Building with Docker
|
||||
|
||||
If you are on a Mac or your pip install is messed up, you can easily build with docker.
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
make docker
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
The first run will be slow but things should be fast after that.
|
||||
|
||||
To clean up the docker containers that are created (and other cruft that is left around) you can run `make docker-clean`.
|
||||
|
||||
If you are using boot2docker and get warnings about clock skew (or if things aren't building for some reason) then you can fix that up with `make fix-clock-skew`.
|
||||
|
||||
## Automatically rebuild on file changes
|
||||
|
||||
If you have the fswatch utility installed, you can have it monitor the file system and automatically rebuild when files have changed. Just do a `make watch`.
|
||||
|
||||
|
||||
|
||||
---
|
||||
title: "Building with Docker"
|
||||
---
|
||||
|
||||
This directory contains diagrams for the clustering design doc.
|
||||
|
||||
This depends on the `seqdiag` [utility](http://blockdiag.com/en/seqdiag/index). Assuming you have a non-borked python install, this should be installable with
|
||||
|
||||
{% highlight sh %}
|
||||
pip install seqdiag
|
||||
{% endhighlight %}
|
||||
|
||||
Just call `make` to regenerate the diagrams.
|
||||
|
||||
## Building with Docker
|
||||
|
||||
If you are on a Mac or your pip install is messed up, you can easily build with docker.
|
||||
|
||||
{% highlight sh %}
|
||||
make docker
|
||||
{% endhighlight %}
|
||||
|
||||
The first run will be slow but things should be fast after that.
|
||||
|
||||
To clean up the docker containers that are created (and other cruft that is left around) you can run `make docker-clean`.
|
||||
|
||||
If you are using boot2docker and get warnings about clock skew (or if things aren't building for some reason) then you can fix that up with `make fix-clock-skew`.
|
||||
|
||||
## Automatically rebuild on file changes
|
||||
|
||||
If you have the fswatch utility installed, you can have it monitor the file system and automatically rebuild when files have changed. Just do a `make watch`.
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Container Command Execution & Port Forwarding in Kubernetes"
|
||||
---
|
||||
|
||||
|
||||
# Container Command Execution & Port Forwarding in Kubernetes
|
||||
|
||||
## Abstract
|
||||
|
||||
This describes an approach for providing support for:
|
||||
|
|
|
@ -1,131 +1,125 @@
|
|||
---
|
||||
title: "DaemonSet in Kubernetes"
|
||||
---
|
||||
|
||||
|
||||
# DaemonSet in Kubernetes
|
||||
|
||||
**Author**: Ananya Kumar (@AnanyaKumar)
|
||||
|
||||
**Status**: Implemented.
|
||||
|
||||
This document presents the design of the Kubernetes DaemonSet, describes use cases, and gives an overview of the code.
|
||||
|
||||
## Motivation
|
||||
|
||||
Many users have requested a way to run a daemon on every node in a Kubernetes cluster, or on a certain set of nodes in a cluster. This is essential for use cases such as building a sharded datastore, or running a logger on every node. In comes the DaemonSet, a way to conveniently create and manage daemon-like workloads in Kubernetes.
|
||||
|
||||
## Use Cases
|
||||
|
||||
The DaemonSet can be used for user-specified system services, cluster-level applications with strong node ties, and Kubernetes node services. Below are example use cases in each category.
|
||||
|
||||
### User-Specified System Services:
|
||||
|
||||
Logging: Some users want a way to collect statistics about nodes in a cluster and send those logs to an external database. For example, system administrators might want to know if their machines are performing as expected, if they need to add more machines to the cluster, or if they should switch cloud providers. The DaemonSet can be used to run a data collection service (for example fluentd) on every node and send the data to a service like ElasticSearch for analysis.
|
||||
|
||||
### Cluster-Level Applications
|
||||
|
||||
Datastore: Users might want to implement a sharded datastore in their cluster. A few nodes in the cluster, labeled ‘app=datastore’, might be responsible for storing data shards, and pods running on these nodes might serve data. This architecture requires a way to bind pods to specific nodes, so it cannot be achieved using a Replication Controller. A DaemonSet is a convenient way to implement such a datastore.
|
||||
|
||||
For other uses, see the related [feature request](https://issues.k8s.io/1518)
|
||||
|
||||
## Functionality
|
||||
|
||||
The DaemonSet supports standard API features:
|
||||
- create
|
||||
- The spec for DaemonSets has a pod template field.
|
||||
- Using the pod’s nodeSelector field, DaemonSets can be restricted to operate over nodes that have a certain label. For example, suppose that in a cluster some nodes are labeled ‘app=database’. You can use a DaemonSet to launch a datastore pod on exactly those nodes labeled ‘app=database’.
|
||||
- Using the pod's nodeName field, DaemonSets can be restricted to operate on a specified node.
|
||||
- The PodTemplateSpec used by the DaemonSet is the same as the PodTemplateSpec used by the Replication Controller.
|
||||
- The initial implementation will not guarantee that DaemonSet pods are created on nodes before other pods.
|
||||
- The initial implementation of DaemonSet does not guarantee that DaemonSet pods show up on nodes (for example because of resource limitations of the node), but makes a best effort to launch DaemonSet pods (like Replication Controllers do with pods). Subsequent revisions might ensure that DaemonSet pods show up on nodes, preempting other pods if necessary.
|
||||
- The DaemonSet controller adds an annotation "kubernetes.io/created-by: \<json API object reference\>"
|
||||
- YAML example:
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
apiVersion: v1
|
||||
kind: DaemonSet
|
||||
metadata:
|
||||
labels:
|
||||
app: datastore
|
||||
name: datastore
|
||||
spec:
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: datastore-shard
|
||||
spec:
|
||||
nodeSelector:
|
||||
app: datastore-node
|
||||
containers:
|
||||
name: datastore-shard
|
||||
image: kubernetes/sharded
|
||||
ports:
|
||||
- containerPort: 9042
|
||||
name: main
|
||||
{% endraw %}
|
||||
{% endhighlight %}
|
||||
|
||||
- commands that get info
|
||||
- get (e.g. kubectl get daemonsets)
|
||||
- describe
|
||||
- Modifiers
|
||||
- delete (if --cascade=true, then first the client turns down all the pods controlled by the DaemonSet (by setting the nodeSelector to a uuid pair that is unlikely to be set on any node); then it deletes the DaemonSet; then it deletes the pods)
|
||||
- label
|
||||
- annotate
|
||||
- update operations like patch and replace (only allowed to selector and to nodeSelector and nodeName of pod template)
|
||||
- DaemonSets have labels, so you could, for example, list all DaemonSets with certain labels (the same way you would for a Replication Controller).
|
||||
- In general, for all the supported features like get, describe, update, etc, the DaemonSet works in a similar way to the Replication Controller. However, note that the DaemonSet and the Replication Controller are different constructs.
|
||||
|
||||
### Persisting Pods
|
||||
|
||||
- Ordinary liveness probes specified in the pod template work to keep pods created by a DaemonSet running.
|
||||
- If a daemon pod is killed or stopped, the DaemonSet will create a new replica of the daemon pod on the node.
|
||||
|
||||
### Cluster Mutations
|
||||
|
||||
- When a new node is added to the cluster, the DaemonSet controller starts daemon pods on the node for DaemonSets whose pod template nodeSelectors match the node’s labels.
|
||||
- Suppose the user launches a DaemonSet that runs a logging daemon on all nodes labeled “logger=fluentd”. If the user then adds the “logger=fluentd” label to a node (that did not initially have the label), the logging daemon will launch on the node. Additionally, if a user removes the label from a node, the logging daemon on that node will be killed.
|
||||
|
||||
## Alternatives Considered
|
||||
|
||||
We considered several alternatives that were deemed inferior to the approach of creating a new DaemonSet abstraction.
|
||||
|
||||
One alternative is to include the daemon in the machine image. In this case it would run outside of Kubernetes proper, and thus not be monitored, health checked, usable as a service endpoint, easily upgradable, etc.
|
||||
|
||||
A related alternative is to package daemons as static pods. This would address most of the problems described above, but they would still not be easily upgradable, and more generally could not be managed through the API server interface.
|
||||
|
||||
A third alternative is to generalize the Replication Controller. We would do something like: if you set the `replicas` field of the ReplicationControllerSpec to -1, then it means "run exactly one replica on every node matching the nodeSelector in the pod template." The ReplicationController would pretend `replicas` had been set to some large number -- larger than the largest number of nodes ever expected in the cluster -- and would use some anti-affinity mechanism to ensure that no more than one Pod from the ReplicationController runs on any given node. There are two downsides to this approach. First, there would always be a large number of Pending pods in the scheduler (these will be scheduled onto new machines when they are added to the cluster). The second downside is more philosophical: DaemonSet and the Replication Controller are very different concepts. We believe that having small, targeted controllers for distinct purposes makes Kubernetes easier to understand and use, compared to having larger multi-functional controllers (see ["Convert ReplicationController to a plugin"](http://issues.k8s.io/3058) for some discussion of this topic).
|
||||
|
||||
## Design
|
||||
|
||||
#### Client
|
||||
|
||||
- Add support for DaemonSet commands to kubectl and the client. Client code was added to client/unversioned. The main files in Kubectl that were modified are kubectl/describe.go and kubectl/stop.go, since for other calls like Get, Create, and Update, the client simply forwards the request to the backend via the REST API.
|
||||
|
||||
#### Apiserver
|
||||
|
||||
- Accept, parse, validate client commands
|
||||
- REST API calls are handled in registry/daemon
|
||||
- In particular, the api server will add the object to etcd
|
||||
- DaemonManager listens for updates to etcd (using Framework.informer)
|
||||
- API objects for DaemonSet were created in expapi/v1/types.go and expapi/v1/register.go
|
||||
- Validation code is in expapi/validation
|
||||
|
||||
#### Daemon Manager
|
||||
|
||||
- Creates new DaemonSets when requested. Launches the corresponding daemon pod on all nodes with labels matching the new DaemonSet’s selector.
|
||||
- Listens for addition of new nodes to the cluster, by setting up a framework.NewInformer that watches for the creation of Node API objects. When a new node is added, the daemon manager will loop through each DaemonSet. If the label of the node matches the selector of the DaemonSet, then the daemon manager will create the corresponding daemon pod in the new node.
|
||||
- The daemon manager creates a pod on a node by sending a command to the API server, requesting that a pod be bound to the node (the node will be specified via its hostname)
|
||||
|
||||
#### Kubelet
|
||||
|
||||
- Does not need to be modified. Health checking keeps the daemon pods running and revives them if they are killed (we set the pod restartPolicy to Always). We reject DaemonSet objects with pod templates that don’t have restartPolicy set to Always.
|
||||
|
||||
## Open Issues
|
||||
|
||||
- Should work similarly to [Deployment](http://issues.k8s.io/1743).
|
||||
|
||||
|
||||
|
||||
---
|
||||
title: "DaemonSet in Kubernetes"
|
||||
---
|
||||
**Author**: Ananya Kumar (@AnanyaKumar)
|
||||
|
||||
**Status**: Implemented.
|
||||
|
||||
This document presents the design of the Kubernetes DaemonSet, describes use cases, and gives an overview of the code.
|
||||
|
||||
## Motivation
|
||||
|
||||
Many users have requested a way to run a daemon on every node in a Kubernetes cluster, or on a certain set of nodes in a cluster. This is essential for use cases such as building a sharded datastore, or running a logger on every node. In comes the DaemonSet, a way to conveniently create and manage daemon-like workloads in Kubernetes.
|
||||
|
||||
## Use Cases
|
||||
|
||||
The DaemonSet can be used for user-specified system services, cluster-level applications with strong node ties, and Kubernetes node services. Below are example use cases in each category.
|
||||
|
||||
### User-Specified System Services:
|
||||
|
||||
Logging: Some users want a way to collect statistics about nodes in a cluster and send those logs to an external database. For example, system administrators might want to know if their machines are performing as expected, if they need to add more machines to the cluster, or if they should switch cloud providers. The DaemonSet can be used to run a data collection service (for example fluentd) on every node and send the data to a service like ElasticSearch for analysis.
|
||||
|
||||
### Cluster-Level Applications
|
||||
|
||||
Datastore: Users might want to implement a sharded datastore in their cluster. A few nodes in the cluster, labeled 'app=datastore', might be responsible for storing data shards, and pods running on these nodes might serve data. This architecture requires a way to bind pods to specific nodes, so it cannot be achieved using a Replication Controller. A DaemonSet is a convenient way to implement such a datastore.
|
||||
|
||||
For other uses, see the related [feature request](https://issues.k8s.io/1518)
|
||||
|
||||
## Functionality
|
||||
|
||||
The DaemonSet supports standard API features:
|
||||
- create
|
||||
- The spec for DaemonSets has a pod template field.
|
||||
- Using the pod's nodeSelector field, DaemonSets can be restricted to operate over nodes that have a certain label. For example, suppose that in a cluster some nodes are labeled 'app=database'. You can use a DaemonSet to launch a datastore pod on exactly those nodes labeled 'app=database'.
|
||||
- Using the pod's nodeName field, DaemonSets can be restricted to operate on a specified node.
|
||||
- The PodTemplateSpec used by the DaemonSet is the same as the PodTemplateSpec used by the Replication Controller.
|
||||
- The initial implementation will not guarantee that DaemonSet pods are created on nodes before other pods.
|
||||
- The initial implementation of DaemonSet does not guarantee that DaemonSet pods show up on nodes (for example because of resource limitations of the node), but makes a best effort to launch DaemonSet pods (like Replication Controllers do with pods). Subsequent revisions might ensure that DaemonSet pods show up on nodes, preempting other pods if necessary.
|
||||
- The DaemonSet controller adds an annotation "kubernetes.io/created-by: \<json API object reference\>"
|
||||
- YAML example:
|
||||
|
||||
{% highlight yaml %}
|
||||
apiVersion: v1
|
||||
kind: DaemonSet
|
||||
metadata:
|
||||
labels:
|
||||
app: datastore
|
||||
name: datastore
|
||||
spec:
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: datastore-shard
|
||||
spec:
|
||||
nodeSelector:
|
||||
app: datastore-node
|
||||
containers:
|
||||
name: datastore-shard
|
||||
image: kubernetes/sharded
|
||||
ports:
|
||||
- containerPort: 9042
|
||||
name: main
|
||||
{% endhighlight %}
|
||||
|
||||
- commands that get info
|
||||
- get (e.g. kubectl get daemonsets)
|
||||
- describe
|
||||
- Modifiers
|
||||
- delete (if --cascade=true, then first the client turns down all the pods controlled by the DaemonSet (by setting the nodeSelector to a uuid pair that is unlikely to be set on any node); then it deletes the DaemonSet; then it deletes the pods)
|
||||
- label
|
||||
- annotate
|
||||
- update operations like patch and replace (only allowed to selector and to nodeSelector and nodeName of pod template)
|
||||
- DaemonSets have labels, so you could, for example, list all DaemonSets with certain labels (the same way you would for a Replication Controller).
|
||||
- In general, for all the supported features like get, describe, update, etc, the DaemonSet works in a similar way to the Replication Controller. However, note that the DaemonSet and the Replication Controller are different constructs.
|
||||
|
||||
### Persisting Pods
|
||||
|
||||
- Ordinary liveness probes specified in the pod template work to keep pods created by a DaemonSet running.
|
||||
- If a daemon pod is killed or stopped, the DaemonSet will create a new replica of the daemon pod on the node.
|
||||
|
||||
### Cluster Mutations
|
||||
|
||||
- When a new node is added to the cluster, the DaemonSet controller starts daemon pods on the node for DaemonSets whose pod template nodeSelectors match the node's labels.
|
||||
- Suppose the user launches a DaemonSet that runs a logging daemon on all nodes labeled 'logger=fluentd'. If the user then adds the 'logger=fluentd' label to a node (that did not initially have the label), the logging daemon will launch on the node. Additionally, if a user removes the label from a node, the logging daemon on that node will be killed.
|
||||
|
||||
## Alternatives Considered
|
||||
|
||||
We considered several alternatives that were deemed inferior to the approach of creating a new DaemonSet abstraction.
|
||||
|
||||
One alternative is to include the daemon in the machine image. In this case it would run outside of Kubernetes proper, and thus not be monitored, health checked, usable as a service endpoint, easily upgradable, etc.
|
||||
|
||||
A related alternative is to package daemons as static pods. This would address most of the problems described above, but they would still not be easily upgradable, and more generally could not be managed through the API server interface.
|
||||
|
||||
A third alternative is to generalize the Replication Controller. We would do something like: if you set the `replicas` field of the ReplicationControllerSpec to -1, then it means "run exactly one replica on every node matching the nodeSelector in the pod template." The ReplicationController would pretend `replicas` had been set to some large number -- larger than the largest number of nodes ever expected in the cluster -- and would use some anti-affinity mechanism to ensure that no more than one Pod from the ReplicationController runs on any given node. There are two downsides to this approach. First, there would always be a large number of Pending pods in the scheduler (these will be scheduled onto new machines when they are added to the cluster). The second downside is more philosophical: DaemonSet and the Replication Controller are very different concepts. We believe that having small, targeted controllers for distinct purposes makes Kubernetes easier to understand and use, compared to having larger multi-functional controllers (see ["Convert ReplicationController to a plugin"](http://issues.k8s.io/3058) for some discussion of this topic).
|
||||
|
||||
## Design
|
||||
|
||||
#### Client
|
||||
|
||||
- Add support for DaemonSet commands to kubectl and the client. Client code was added to client/unversioned. The main files in Kubectl that were modified are kubectl/describe.go and kubectl/stop.go, since for other calls like Get, Create, and Update, the client simply forwards the request to the backend via the REST API.
|
||||
|
||||
#### Apiserver
|
||||
|
||||
- Accept, parse, validate client commands
|
||||
- REST API calls are handled in registry/daemon
|
||||
- In particular, the api server will add the object to etcd
|
||||
- DaemonManager listens for updates to etcd (using Framework.informer)
|
||||
- API objects for DaemonSet were created in expapi/v1/types.go and expapi/v1/register.go
|
||||
- Validation code is in expapi/validation
|
||||
|
||||
#### Daemon Manager
|
||||
|
||||
- Creates new DaemonSets when requested. Launches the corresponding daemon pod on all nodes with labels matching the new DaemonSet's selector.
|
||||
- Listens for addition of new nodes to the cluster, by setting up a framework.NewInformer that watches for the creation of Node API objects. When a new node is added, the daemon manager will loop through each DaemonSet. If the label of the node matches the selector of the DaemonSet, then the daemon manager will create the corresponding daemon pod in the new node.
|
||||
- The daemon manager creates a pod on a node by sending a command to the API server, requesting that a pod be bound to the node (the node will be specified via its hostname)
|
||||
|
||||
#### Kubelet
|
||||
|
||||
- Does not need to be modified. Health checking keeps the daemon pods running and revives them if they are killed (we set the pod restartPolicy to Always). We reject DaemonSet objects with pod templates that don't have restartPolicy set to Always.
|
||||
|
||||
## Open Issues
|
||||
|
||||
- Should work similarly to [Deployment](http://issues.k8s.io/1743).
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Kubernetes Event Compression"
|
||||
---
|
||||
|
||||
|
||||
# Kubernetes Event Compression
|
||||
|
||||
This document captures the design of event compression.
|
||||
|
||||
|
||||
|
@ -63,7 +59,7 @@ Each binary that generates events:
|
|||
Sample kubectl output
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
FIRSTSEEN LASTSEEN COUNT NAME KIND SUBOBJECT REASON SOURCE MESSAGE
|
||||
Thu, 12 Feb 2015 01:13:02 +0000 Thu, 12 Feb 2015 01:13:02 +0000 1 kubernetes-minion-4.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-minion-4.c.saad-dev-vms.internal} Starting kubelet.
|
||||
Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-minion-1.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-minion-1.c.saad-dev-vms.internal} Starting kubelet.
|
||||
|
@ -76,7 +72,7 @@ Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4
|
|||
Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 monitoring-heapster-controller-oh43e Pod failedScheduling {scheduler } Error scheduling: no nodes available to schedule pods
|
||||
Thu, 12 Feb 2015 01:13:20 +0000 Thu, 12 Feb 2015 01:13:20 +0000 1 kibana-logging-controller-gziey BoundPod implicitly required container POD pulled {kubelet kubernetes-minion-4.c.saad-dev-vms.internal} Successfully pulled image "kubernetes/pause:latest"
|
||||
Thu, 12 Feb 2015 01:13:20 +0000 Thu, 12 Feb 2015 01:13:20 +0000 1 kibana-logging-controller-gziey Pod scheduled {scheduler } Successfully assigned kibana-logging-controller-gziey to kubernetes-minion-4.c.saad-dev-vms.internal
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
This demonstrates what would have been 20 separate entries (indicating scheduling failure) collapsed/compressed down to 5 entries.
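A rough Go sketch of the idea behind this output: events that share an aggregation key only bump a counter and the last-seen timestamp instead of creating a new entry. The key fields below are illustrative and not necessarily the exact set used by the implementation.

{% highlight go %}
package eventcompression

import "time"

// eventKey approximates the fields used to decide that two events are "the same".
type eventKey struct {
	Kind, Name, Subobject, Reason, Source, Message string
}

// compressedEvent corresponds to one displayed row: FIRSTSEEN, LASTSEEN and COUNT.
type compressedEvent struct {
	FirstSeen, LastSeen time.Time
	Count               int
}

// record either creates a new entry or folds the event into an existing one.
func record(store map[eventKey]*compressedEvent, k eventKey, now time.Time) {
	if e, ok := store[k]; ok {
		e.Count++
		e.LastSeen = now
		return
	}
	store[k] = &compressedEvent{FirstSeen: now, LastSeen: now, Count: 1}
}
{% endhighlight %}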
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Variable expansion in pod command, args, and env"
|
||||
---
|
||||
|
||||
|
||||
# Variable expansion in pod command, args, and env
|
||||
|
||||
## Abstract
|
||||
|
||||
A proposal for the expansion of environment variables using a simple `$(var)` syntax.
|
||||
|
@ -170,7 +166,7 @@ which may not be bound because the service that provides them does not yet exist
|
|||
mapping function that uses a list of `map[string]string` like:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
func MakeMappingFunc(maps ...map[string]string) func(string) string {
|
||||
return func(input string) string {
|
||||
for _, context := range maps {
|
||||
|
@ -201,7 +197,7 @@ mapping := MakeMappingFunc(containerEnv)
|
|||
|
||||
// default variables not found in serviceEnv
|
||||
mappingWithDefaults := MakeMappingFunc(serviceEnv, containerEnv)
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Implementation changes
|
||||
|
@ -224,7 +220,7 @@ must be able to create an event. In order to facilitate this, we should create
|
|||
the `api/client/record` package which is similar to `EventRecorder`, but scoped to a single object:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
// ObjectEventRecorder knows how to record events about a single object.
|
||||
type ObjectEventRecorder interface {
|
||||
// Event constructs an event from the given information and puts it in the queue for sending.
|
||||
|
@ -242,14 +238,14 @@ type ObjectEventRecorder interface {
|
|||
// PastEventf is just like Eventf, but with an option to specify the event's 'timestamp' field.
|
||||
PastEventf(timestamp unversioned.Time, reason, messageFmt string, args ...interface{})
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
There should also be a function that can construct an `ObjectEventRecorder` from a `runtime.Object`
|
||||
and an `EventRecorder`:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
type objectRecorderImpl struct {
|
||||
object runtime.Object
|
||||
recorder EventRecorder
|
||||
|
@ -262,7 +258,7 @@ func (r *objectRecorderImpl) Event(reason, message string) {
|
|||
func ObjectEventRecorderFor(object runtime.Object, recorder EventRecorder) ObjectEventRecorder {
|
||||
return &objectRecorderImpl{object, recorder}
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
#### Expansion package
|
||||
|
@ -270,7 +266,7 @@ func ObjectEventRecorderFor(object runtime.Object, recorder EventRecorder) Objec
|
|||
The expansion package should provide two methods:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
// MappingFuncFor returns a mapping function for use with Expand that
|
||||
// implements the expansion semantics defined in the expansion spec; it
|
||||
// returns the input string wrapped in the expansion syntax if no mapping
|
||||
|
@ -286,7 +282,7 @@ func MappingFuncFor(recorder record.ObjectEventRecorder, context ...map[string]s
|
|||
func Expand(input string, mapping func(string) string) string {
|
||||
// ...
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
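As a sketch of the expansion itself (escaping and the other corner cases in the spec are deliberately ignored; `expandSketch` and `varRef` are illustrative names, not the real implementation):

{% highlight go %}
package expansion

import "regexp"

var varRef = regexp.MustCompile(`\$\(([^)]+)\)`)

// expandSketch replaces every $(NAME) occurrence with mapping(NAME). How
// unresolved variables behave is left entirely to the mapping function,
// which in the design above wraps unknown names back in the $() syntax.
func expandSketch(input string, mapping func(string) string) string {
	return varRef.ReplaceAllStringFunc(input, func(m string) string {
		return mapping(m[2 : len(m)-1]) // strip the leading "$(" and trailing ")"
	})
}
{% endhighlight %}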
|
||||
|
||||
#### Kubelet changes
|
||||
|
@ -361,7 +357,7 @@ No other variables are defined.
|
|||
Notice the `$(var)` syntax.
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
|
@ -375,13 +371,13 @@ spec:
|
|||
- name: PUBLIC_URL
|
||||
value: "http://$(GITSERVER_SERVICE_HOST):$(GITSERVER_SERVICE_PORT)"
|
||||
restartPolicy: Never
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
#### In a pod: building a URL using downward API
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
|
@ -399,7 +395,7 @@ spec:
|
|||
- name: PUBLIC_URL
|
||||
value: "http://gitserver.$(POD_NAMESPACE):$(SERVICE_PORT)"
|
||||
restartPolicy: Never
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Adding custom resources to the Kubernetes API server"
|
||||
---
|
||||
|
||||
|
||||
# Adding custom resources to the Kubernetes API server
|
||||
|
||||
This document describes the design for implementing the storage of custom API types in the Kubernetes API Server.
|
||||
|
||||
|
||||
|
@ -60,7 +56,7 @@ using '-' instead of capitalization ('camel-case'), with the first character bei
|
|||
capitalized. In pseudo code:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
var result string
|
||||
for ix := range kindName {
|
||||
if isCapital(kindName[ix]) {
|
||||
|
@ -68,7 +64,7 @@ for ix := range kindName {
|
|||
}
|
||||
result = append(result, toLowerCase(kindName[ix]))
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
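Spelled out as compilable Go, the rule above could look like this (a sketch; `kindToResourceName` is an illustrative helper name):

{% highlight go %}
package thirdparty

import "unicode"

// kindToResourceName turns a Kind such as "CamelCaseKind" into the
// dash-separated form "camel-case-kind".
func kindToResourceName(kindName string) string {
	var result []rune
	for ix, r := range kindName {
		if unicode.IsUpper(r) {
			if ix > 0 {
				result = append(result, '-')
			}
			result = append(result, unicode.ToLower(r))
		} else {
			result = append(result, r)
		}
	}
	return string(result)
}
{% endhighlight %}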
|
||||
|
||||
As a concrete example, the resource named `camel-case-kind.example.com` defines resources of Kind `CamelCaseKind`, in
|
||||
|
@ -86,7 +82,7 @@ deleting a namespace, deletes all third party resources in that namespace.
|
|||
For example, if a user creates:
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
|
||||
metadata:
|
||||
name: cron-tab.example.com
|
||||
apiVersion: extensions/v1beta1
|
||||
|
@ -95,7 +91,7 @@ description: "A specification of a Pod to run on a cron style schedule"
|
|||
versions:
|
||||
- name: stable/v1
|
||||
- name: experimental/v2
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Then the API server will program in two new RESTful resource paths:
|
||||
|
@ -106,7 +102,7 @@ Then the API server will program in two new RESTful resource paths:
|
|||
Now that this schema has been created, a user can `POST`:
|
||||
|
||||
{% highlight json %}
|
||||
{% raw %}
|
||||
|
||||
{
|
||||
"metadata": {
|
||||
"name": "my-new-cron-object"
|
||||
|
@ -116,7 +112,7 @@ Now that this schema has been created, a user can `POST`:
|
|||
"cronSpec": "* * * * /5",
|
||||
"image": "my-awesome-chron-image"
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
to: `/third-party/example.com/stable/v1/namespaces/default/crontabs/my-new-cron-object`
|
||||
|
@ -124,9 +120,9 @@ to: `/third-party/example.com/stable/v1/namespaces/default/crontabs/my-new-cron-
|
|||
and the corresponding data will be stored into etcd by the APIServer, so that when the user issues:
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
|
||||
GET /third-party/example.com/stable/v1/namespaces/default/crontabs/my-new-cron-object
|
||||
{% endraw %}
|
||||
|
||||
```
|
||||
|
||||
And when they do that, they will get back the same data, but with additional Kubernetes metadata
|
||||
|
@ -135,15 +131,15 @@ And when they do that, they will get back the same data, but with additional Kub
|
|||
Likewise, to list all resources, a user can issue:
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
|
||||
GET /third-party/example.com/stable/v1/namespaces/default/crontabs
|
||||
{% endraw %}
|
||||
|
||||
```
|
||||
|
||||
and get back:
|
||||
|
||||
{% highlight json %}
|
||||
{% raw %}
|
||||
|
||||
{
|
||||
"apiVersion": "example.com/stable/v1",
|
||||
"kind": "CronTabList",
|
||||
|
@ -159,7 +155,7 @@ and get back:
|
|||
}
|
||||
]
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Because all objects are expected to contain standard Kubernetes metadata fields, these
|
||||
|
@ -191,17 +187,17 @@ Each custom object stored by the API server needs a custom key in storage, this
|
|||
Given the definitions above, the key for a specific third-party object is:
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
|
||||
${standard-k8s-prefix}/third-party-resources/${third-party-resource-namespace}/${third-party-resource-name}/${resource-namespace}/${resource-name}
|
||||
{% endraw %}
|
||||
|
||||
```
|
||||
|
||||
Thus, listing a third-party resource can be achieved by listing the directory:
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
|
||||
${standard-k8s-prefix}/third-party-resources/${third-party-resource-namespace}/${third-party-resource-name}/${resource-namespace}/
|
||||
{% endraw %}
|
||||
|
||||
```
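As a sketch, both keys could be assembled with simple path joins; `objectKey` and `listKey` are illustrative helper names and `prefix` stands for the standard Kubernetes storage prefix:

{% highlight go %}
package thirdparty

import "path"

// objectKey builds the per-object storage key shown above.
func objectKey(prefix, tprNamespace, tprName, resourceNamespace, resourceName string) string {
	return path.Join(prefix, "third-party-resources",
		tprNamespace, tprName, resourceNamespace, resourceName)
}

// listKey builds the directory whose children are all objects of one
// third-party resource within one namespace.
func listKey(prefix, tprNamespace, tprName, resourceNamespace string) string {
	return path.Join(prefix, "third-party-resources",
		tprNamespace, tprName, resourceNamespace)
}
{% endhighlight %}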
|
||||
|
||||
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Horizontal Pod Autoscaling"
|
||||
---
|
||||
|
||||
|
||||
# Horizontal Pod Autoscaling
|
||||
|
||||
## Preface
|
||||
|
||||
This document briefly describes the design of the horizontal autoscaler for pods.
|
||||
|
@ -42,7 +38,7 @@ Scale subresource is in API for replication controller or deployment under the f
|
|||
It has the following structure:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
// represents a scaling request for a resource.
|
||||
type Scale struct {
|
||||
unversioned.TypeMeta
|
||||
|
@ -69,7 +65,7 @@ type ScaleStatus struct {
|
|||
// label query over pods that should match the replicas count.
|
||||
Selector map[string]string `json:"selector,omitempty"`
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Writing to `ScaleSpec.Replicas` resizes the replication controller/deployment associated with
|
||||
|
@ -86,7 +82,7 @@ In Kubernetes version 1.1, we are introducing HorizontalPodAutoscaler object. It
|
|||
It has the following structure:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
// configuration of a horizontal pod autoscaler.
|
||||
type HorizontalPodAutoscaler struct {
|
||||
unversioned.TypeMeta
|
||||
|
@ -139,7 +135,7 @@ type HorizontalPodAutoscalerStatus struct {
|
|||
// e.g. 70 means that an average pod is using now 70% of its requested CPU.
|
||||
CurrentCPUUtilizationPercentage *int
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
`ScaleRef` is a reference to the Scale subresource.
|
||||
|
@ -147,7 +143,7 @@ type HorizontalPodAutoscalerStatus struct {
|
|||
We are also introducing the HorizontalPodAutoscalerList object to enable listing all autoscalers in a namespace:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
// list of horizontal pod autoscaler objects.
|
||||
type HorizontalPodAutoscalerList struct {
|
||||
unversioned.TypeMeta
|
||||
|
@ -156,7 +152,7 @@ type HorizontalPodAutoscalerList struct {
|
|||
// list of horizontal pod autoscaler objects.
|
||||
Items []HorizontalPodAutoscaler
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
## Autoscaling Algorithm
|
||||
|
@ -178,9 +174,9 @@ In future, there will be API on master for this purpose
|
|||
The target number of pods is calculated from the following formula:
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
|
||||
TargetNumOfPods = ceil(sum(CurrentPodsCPUUtilization) / Target)
|
||||
{% endraw %}
|
||||
|
||||
```
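The same formula as a small Go helper (a sketch; utilization values are integer percentages of each pod's requested CPU and `target` is the desired average, e.g. 70):

{% highlight go %}
package autoscaler

import "math"

// targetNumOfPods implements TargetNumOfPods = ceil(sum(CurrentPodsCPUUtilization) / Target).
func targetNumOfPods(currentPodsCPUUtilization []int, target int) int {
	sum := 0
	for _, u := range currentPodsCPUUtilization {
		sum += u
	}
	return int(math.Ceil(float64(sum) / float64(target)))
}
{% endhighlight %}

For example, three pods currently at 90% utilization with a 70% target give ceil(270 / 70) = 4 pods.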
|
||||
|
||||
Starting and stopping pods may introduce noise to the metric (for instance, starting may temporarily increase CPU).
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Identifiers and Names in Kubernetes"
|
||||
---
|
||||
|
||||
|
||||
# Identifiers and Names in Kubernetes
|
||||
|
||||
A summary of the goals and recommendations for identifiers in Kubernetes. Described in [GitHub issue #199](http://issue.k8s.io/199).
|
||||
|
||||
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Kubernetes Design Overview"
|
||||
---
|
||||
|
||||
|
||||
# Kubernetes Design Overview
|
||||
|
||||
Kubernetes is a system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.
|
||||
|
||||
Kubernetes establishes robust declarative primitives for maintaining the desired state requested by the user. We see these primitives as the main value added by Kubernetes. Self-healing mechanisms, such as auto-restarting, re-scheduling, and replicating containers, require active controllers, not just imperative orchestration.
|
||||
|
@ -15,11 +11,11 @@ Kubernetes enables users to ask a cluster to run a set of containers. The system
|
|||
|
||||
Kubernetes is intended to run on a number of cloud providers, as well as on physical hosts.
|
||||
|
||||
A single Kubernetes cluster is not intended to span multiple availability zones. Instead, we recommend building a higher-level layer to replicate complete deployments of highly available applications across multiple zones (see [the multi-cluster doc](../admin/multi-cluster.html) and [cluster federation proposal](../proposals/federation.html) for more details).
|
||||
A single Kubernetes cluster is not intended to span multiple availability zones. Instead, we recommend building a higher-level layer to replicate complete deployments of highly available applications across multiple zones (see [the multi-cluster doc](../admin/multi-cluster) and [cluster federation proposal](../proposals/federation) for more details).
|
||||
|
||||
Finally, Kubernetes aspires to be an extensible, pluggable, building-block OSS platform and toolkit. Therefore, architecturally, we want Kubernetes to be built as a collection of pluggable components and layers, with the ability to use alternative schedulers, controllers, storage systems, and distribution mechanisms, and we're evolving its current code in that direction. Furthermore, we want others to be able to extend Kubernetes functionality, such as with higher-level PaaS functionality or multi-cluster layers, without modification of core Kubernetes source. Therefore, its API isn't just (or even necessarily mainly) targeted at end users, but at tool and extension developers. Its APIs are intended to serve as the foundation for an open ecosystem of tools, automation systems, and higher-level API layers. Consequently, there are no "internal" inter-component APIs. All APIs are visible and available, including the APIs used by the scheduler, the node controller, the replication-controller manager, Kubelet's API, etc. There's no glass to break -- in order to handle more complex use cases, one can just access the lower-level APIs in a fully transparent, composable manner.
|
||||
|
||||
For more about the Kubernetes architecture, see [architecture](architecture.html).
|
||||
For more about the Kubernetes architecture, see [architecture](architecture).
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Namespaces"
|
||||
---
|
||||
|
||||
|
||||
# Namespaces
|
||||
|
||||
## Abstract
|
||||
|
||||
A Namespace is a mechanism to partition resources created by users into
|
||||
|
@ -47,7 +43,7 @@ The Namespace provides a unique scope for:
|
|||
A *Namespace* defines a logically named group for multiple *Kind*s of resources.
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
type Namespace struct {
|
||||
TypeMeta `json:",inline"`
|
||||
ObjectMeta `json:"metadata,omitempty"`
|
||||
|
@ -55,7 +51,7 @@ type Namespace struct {
|
|||
Spec NamespaceSpec `json:"spec,omitempty"`
|
||||
Status NamespaceStatus `json:"status,omitempty"`
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
A *Namespace* name is a DNS compatible label.
|
||||
|
@ -79,7 +75,7 @@ distinguish distinct entities, and reference particular entities across operatio
|
|||
|
||||
A *Namespace* provides an authorization scope for accessing content associated with the *Namespace*.
|
||||
|
||||
See [Authorization plugins](../admin/authorization.html)
|
||||
See [Authorization plugins](../admin/authorization)
|
||||
|
||||
### Limit Resource Consumption
|
||||
|
||||
|
@ -88,19 +84,19 @@ A *Namespace* provides a scope to limit resource consumption.
|
|||
A *LimitRange* defines min/max constraints on the amount of resources a single entity can consume in
|
||||
a *Namespace*.
|
||||
|
||||
See [Admission control: Limit Range](admission_control_limit_range.html)
|
||||
See [Admission control: Limit Range](admission_control_limit_range)
|
||||
|
||||
A *ResourceQuota* tracks aggregate usage of resources in the *Namespace* and allows cluster operators
|
||||
to define *Hard* resource usage limits that a *Namespace* may consume.
|
||||
|
||||
See [Admission control: Resource Quota](admission_control_resource_quota.html)
|
||||
See [Admission control: Resource Quota](admission_control_resource_quota)
|
||||
|
||||
### Finalizers
|
||||
|
||||
Upon creation of a *Namespace*, the creator may provide a list of *Finalizer* objects.
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
type FinalizerName string
|
||||
|
||||
// These are internal finalizers to Kubernetes, must be qualified name unless defined here
|
||||
|
@ -113,7 +109,7 @@ type NamespaceSpec struct {
|
|||
// Finalizers is an opaque list of values that must be empty to permanently remove object from storage
|
||||
Finalizers []FinalizerName
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
A *FinalizerName* is a qualified name.
|
||||
|
@ -131,7 +127,7 @@ set by default.
|
|||
A *Namespace* may exist in the following phases.
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
type NamespacePhase string
|
||||
const(
|
||||
NamespaceActive NamespacePhase = "Active"
|
||||
|
@ -142,7 +138,7 @@ type NamespaceStatus struct {
|
|||
...
|
||||
Phase NamespacePhase
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
A *Namespace* is in the **Active** phase if it does not have a *ObjectMeta.DeletionTimestamp*.
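The following Go sketch only illustrates the rules in this section: phase follows *ObjectMeta.DeletionTimestamp*, and permanent removal additionally requires an empty finalizer list. The `namespace` struct is a pared-down stand-in, not the real API object:

{% highlight go %}
{% raw %}
package main

import (
    "fmt"
    "time"
)

type NamespacePhase string

const (
    NamespaceActive      NamespacePhase = "Active"
    NamespaceTerminating NamespacePhase = "Terminating"
)

// namespace is a simplified stand-in for the real API object.
type namespace struct {
    DeletionTimestamp *time.Time // set once deletion has been requested
    Finalizers        []string   // must be empty before removal from storage
}

// phase is Active while no deletion has been requested, Terminating afterwards.
func phase(ns namespace) NamespacePhase {
    if ns.DeletionTimestamp == nil {
        return NamespaceActive
    }
    return NamespaceTerminating
}

// canBeRemovedFromStorage reports whether the object may be permanently removed.
func canBeRemovedFromStorage(ns namespace) bool {
    return ns.DeletionTimestamp != nil && len(ns.Finalizers) == 0
}

func main() {
    now := time.Now()
    ns := namespace{DeletionTimestamp: &now, Finalizers: []string{"kubernetes"}}
    fmt.Println(phase(ns), canBeRemovedFromStorage(ns)) // Terminating false
}
{% endraw %}
{% endhighlight %}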
|
||||
|
@ -241,7 +237,7 @@ to take part in Namespace termination.
|
|||
OpenShift creates a Namespace in Kubernetes
|
||||
|
||||
{% highlight json %}
|
||||
{% raw %}
|
||||
|
||||
{
|
||||
"apiVersion":"v1",
|
||||
"kind": "Namespace",
|
||||
|
@ -258,7 +254,7 @@ OpenShift creates a Namespace in Kubernetes
|
|||
"phase": "Active"
|
||||
}
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
OpenShift then creates a set of resources (pods, services, etc.) associated
|
||||
|
@ -268,7 +264,7 @@ own storage associated with the "development" namespace unknown to Kubernetes.
|
|||
The user deletes the Namespace in Kubernetes, and the Namespace now has the following state:
|
||||
|
||||
{% highlight json %}
|
||||
{% raw %}
|
||||
|
||||
{
|
||||
"apiVersion":"v1",
|
||||
"kind": "Namespace",
|
||||
|
@ -286,7 +282,7 @@ User deletes the Namespace in Kubernetes, and Namespace now has following state:
|
|||
"phase": "Terminating"
|
||||
}
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
The Kubernetes *namespace controller* observes that the namespace has a *deletionTimestamp*
|
||||
|
@ -295,7 +291,7 @@ success, it executes a *finalize* action that modifies the *Namespace* by
|
|||
removing *kubernetes* from the list of finalizers:
|
||||
|
||||
{% highlight json %}
|
||||
{% raw %}
|
||||
|
||||
{
|
||||
"apiVersion":"v1",
|
||||
"kind": "Namespace",
|
||||
|
@ -313,7 +309,7 @@ removing *kubernetes* from the list of finalizers:
|
|||
"phase": "Terminating"
|
||||
}
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
OpenShift Origin has its own *namespace controller* that is observing cluster state, and
|
||||
|
@ -325,7 +321,7 @@ from the list of finalizers.
|
|||
This results in the following state:
|
||||
|
||||
{% highlight json %}
|
||||
{% raw %}
|
||||
|
||||
{
|
||||
"apiVersion":"v1",
|
||||
"kind": "Namespace",
|
||||
|
@ -343,7 +339,7 @@ This results in the following state:
|
|||
"phase": "Terminating"
|
||||
}
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
At this point, the Kubernetes *namespace controller* in its sync loop will see that the namespace
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Networking"
|
||||
---
|
||||
|
||||
|
||||
# Networking
|
||||
|
||||
There are 4 distinct networking problems to solve:
|
||||
|
||||
1. Highly-coupled container-to-container communications
|
||||
|
@ -40,7 +36,7 @@ among other problems.
|
|||
## Container to container
|
||||
|
||||
All containers within a pod behave as if they are on the same host with regard
|
||||
to networking. They can all reach each other’s ports on localhost. This offers
|
||||
to networking. They can all reach each other's ports on localhost. This offers
|
||||
simplicity (static ports known a priori), security (ports bound to localhost
|
||||
are visible within the pod but never outside it), and performance. This also
|
||||
reduces friction for applications moving from the world of uncontainerized apps
|
||||
|
@ -104,14 +100,14 @@ differentiate it from `docker0`) is set up outside of Docker proper.
|
|||
Example of GCE's advanced routing rules:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
gcloud compute routes add "${MINION_NAMES[$i]}" \
|
||||
--project "${PROJECT}" \
|
||||
--destination-range "${MINION_IP_RANGES[$i]}" \
|
||||
--network "${NETWORK}" \
|
||||
--next-hop-instance "${MINION_NAMES[$i]}" \
|
||||
--next-hop-instance-zone "${ZONE}" &
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
GCE itself does not know anything about these IPs, though. This means that when
|
||||
|
@ -122,7 +118,7 @@ a pod tries to egress beyond GCE's project the packets must be SNAT'ed
|
|||
|
||||
With the primary aim of providing IP-per-pod-model, other implementations exist
|
||||
to serve the purpose outside of GCE.
|
||||
- [OpenVSwitch with GRE/VxLAN](../admin/ovs-networking.html)
|
||||
- [OpenVSwitch with GRE/VxLAN](../admin/ovs-networking)
|
||||
- [Flannel](https://github.com/coreos/flannel#flannel)
|
||||
- [L2 networks](http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/)
|
||||
("With Linux Bridge devices" section)
|
||||
|
@ -133,7 +129,7 @@ to serve the purpose outside of GCE.
|
|||
|
||||
## Pod to service
|
||||
|
||||
The [service](../user-guide/services.html) abstraction provides a way to group pods under a
|
||||
The [service](../user-guide/services) abstraction provides a way to group pods under a
|
||||
common access policy (e.g. load-balanced). The implementation of this creates a
|
||||
virtual IP which clients can access and which is transparently proxied to the
|
||||
pods in a Service. Each node runs a kube-proxy process which programs
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Persistent Storage"
|
||||
---
|
||||
|
||||
|
||||
# Persistent Storage
|
||||
|
||||
This document proposes a model for managing persistent, cluster-scoped storage for applications requiring long-lived data.
|
||||
|
||||
### tl;dr
|
||||
|
@ -100,7 +96,7 @@ Events that communicate the state of a mounted volume are left to the volume plu
|
|||
An administrator provisions storage by posting PVs to the API. Various ways to automate this task can be scripted. Dynamic provisioning is a future feature that can maintain levels of PVs.
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
|
||||
POST:
|
||||
|
||||
kind: PersistentVolume
|
||||
|
@ -113,16 +109,16 @@ spec:
|
|||
persistentDisk:
|
||||
pdName: "abc123"
|
||||
fsType: "ext4"
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl get pv
|
||||
|
||||
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON
|
||||
pv0001 map[] 10737418240 RWO Pending
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
#### Users request storage
|
||||
|
@ -132,7 +128,7 @@ A user requests storage by posting a PVC to the API. Their request contains the
|
|||
The user must be within a namespace to create PVCs.
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
|
||||
POST:
|
||||
|
||||
kind: PersistentVolumeClaim
|
||||
|
@ -145,16 +141,16 @@ spec:
|
|||
resources:
|
||||
requests:
|
||||
storage: 3
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl get pvc
|
||||
|
||||
NAME LABELS STATUS VOLUME
|
||||
myclaim-1 map[] pending
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
|
||||
|
@ -163,7 +159,7 @@ myclaim-1 map[] pending
|
|||
The ```PersistentVolumeClaimBinder``` attempts to find an available volume that most closely matches the user's request. If a match exists, the PV and PVC are bound by putting a reference to the PVC on the PV. Requests can go unfulfilled if a suitable match is not found.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl get pv
|
||||
|
||||
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON
|
||||
|
@ -174,7 +170,7 @@ kubectl get pvc
|
|||
|
||||
NAME LABELS STATUS VOLUME
|
||||
myclaim-1 map[] Bound b16e91d6-c0ef-11e4-8be4-80e6500a981e
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
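For intuition only, here is a minimal Go sketch of one possible matching strategy (pick the smallest unbound volume with enough capacity). It assumes capacity is the sole criterion, which is a simplification of what the real binder considers:

{% highlight go %}
{% raw %}
package main

import "fmt"

type persistentVolume struct {
    Name     string
    Capacity int64  // bytes
    ClaimRef string // empty while the volume is unbound
}

// bestMatch returns the smallest unbound volume that satisfies the request,
// or nil if no suitable volume exists (the claim then stays Pending).
func bestMatch(pvs []*persistentVolume, requested int64) *persistentVolume {
    var best *persistentVolume
    for _, pv := range pvs {
        if pv.ClaimRef != "" || pv.Capacity < requested {
            continue
        }
        if best == nil || pv.Capacity < best.Capacity {
            best = pv
        }
    }
    return best
}

func main() {
    pvs := []*persistentVolume{
        {Name: "pv0001", Capacity: 10 << 30},
        {Name: "pv0002", Capacity: 5 << 30},
    }
    if pv := bestMatch(pvs, 3<<30); pv != nil {
        pv.ClaimRef = "myclaim-1" // binding: record the claim on the volume
        fmt.Println("bound", pv.Name)
    }
}
{% endraw %}
{% endhighlight %}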
|
||||
|
||||
#### Claim usage
|
||||
|
@ -184,7 +180,7 @@ The claim holder can use their claim as a volume. The ```PersistentVolumeClaimV
|
|||
The claim holder owns the claim and its data for as long as the claim exists. The pod using the claim can be deleted, but the claim remains in the user's namespace. It can be used again and again by many pods.
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
|
||||
POST:
|
||||
|
||||
kind: Pod
|
||||
|
@ -205,7 +201,7 @@ spec:
|
|||
accessMode: ReadWriteOnce
|
||||
claimRef:
|
||||
name: myclaim-1
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
#### Releasing a claim and Recycling a volume
|
||||
|
@ -213,9 +209,9 @@ spec:
|
|||
When a claim holder is finished with their data, they can delete their claim.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ kubectl delete pvc myclaim-1
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
The ```PersistentVolumeClaimBinder``` will reconcile this by removing the claim reference from the PV and changing the PV's status to 'Released'.
|
||||
|
|
|
@ -1,15 +1,11 @@
|
|||
---
|
||||
title: "Design Principles"
|
||||
---
|
||||
|
||||
|
||||
# Design Principles
|
||||
|
||||
Principles to follow when extending Kubernetes.
|
||||
|
||||
## API
|
||||
|
||||
See also the [API conventions](../devel/api-conventions.html).
|
||||
See also the [API conventions](../devel/api-conventions).
|
||||
|
||||
* All APIs should be declarative.
|
||||
* API objects should be complementary and composable, not opaque wrappers.
|
||||
|
|
|
@ -3,7 +3,7 @@ title: "The Kubernetes resource model"
|
|||
---
|
||||
|
||||
**Note: this is a design doc, which describes features that have not been completely implemented.
|
||||
User documentation of the current state is [here](../user-guide/compute-resources.html). The tracking issue for
|
||||
User documentation of the current state is [here](../user-guide/compute-resources). The tracking issue for
|
||||
implementation of this model is
|
||||
[#168](http://issue.k8s.io/168). Currently, both limits and requests of memory and
|
||||
cpu on containers (not pods) are supported. "memory" is in bytes and "cpu" is in
|
||||
|
@ -62,12 +62,12 @@ Both users and a number of system components, such as schedulers, (horizontal) a
|
|||
Resource requirements for a container or pod should have the following form:
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
|
||||
resourceRequirementSpec: [
|
||||
request: [ cpu: 2.5, memory: "40Mi" ],
|
||||
limit: [ cpu: 4.0, memory: "99Mi" ],
|
||||
]
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Where:
|
||||
|
@ -78,11 +78,11 @@ Where:
|
|||
Total capacity for a node should have a similar structure:
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
|
||||
resourceCapacitySpec: [
|
||||
total: [ cpu: 12, memory: "128Gi" ]
|
||||
]
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Where:
|
||||
|
@ -113,7 +113,7 @@ The following resource types are predefined ("reserved") by Kubernetes in the `k
|
|||
* [future] `schedulingLatency`: as per lmctfy
|
||||
* [future] `cpuConversionFactor`: property of a node: the speed of a CPU core on the node's processor divided by the speed of the canonical Kubernetes CPU (a floating point value; default = 1.0).
|
||||
|
||||
To reduce performance portability problems for pods, and to avoid worst-case provisioning behavior, the units of CPU will be normalized to a canonical "Kubernetes Compute Unit" (KCU, pronounced ˈko͞oko͞o), which will roughly be equivalent to a single CPU hyperthreaded core for some recent x86 processor. The normalization may be implementation-defined, although some reasonable defaults will be provided in the open-source Kubernetes code.
|
||||
To reduce performance portability problems for pods, and to avoid worst-case provisioning behavior, the units of CPU will be normalized to a canonical "Kubernetes Compute Unit" (KCU, pronounced ˈko͞oko͞o), which will roughly be equivalent to a single CPU hyperthreaded core for some recent x86 processor. The normalization may be implementation-defined, although some reasonable defaults will be provided in the open-source Kubernetes code.
|
||||
|
||||
Note that requesting 2 KCU won't guarantee that precisely 2 physical cores will be allocated — control of aspects like this will be handled by resource _qualities_ (a future feature).
|
||||
|
||||
|
@ -135,7 +135,7 @@ rather than decimal ones: "64MiB" rather than "64MB".
|
|||
A resource type may have an associated read-only ResourceType structure, that contains metadata about the type. For example:
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
|
||||
resourceTypes: [
|
||||
"kubernetes.io/memory": [
|
||||
isCompressible: false, ...
|
||||
|
@ -146,7 +146,7 @@ resourceTypes: [
|
|||
]
|
||||
"kubernetes.io/disk-space": [ ... ]
|
||||
]
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Kubernetes will provide ResourceType metadata for its predefined types. If no resource metadata can be found for a resource type, Kubernetes will assume that it is a quantified, incompressible resource that is not specified in milli-units, and has no default value.
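A small Go sketch of that fallback rule; the registry and type names below are invented for illustration and are not part of any API:

{% highlight go %}
{% raw %}
package main

import "fmt"

// resourceTypeMeta is an illustrative stand-in for the ResourceType metadata
// described above.
type resourceTypeMeta struct {
    IsCompressible bool
    InMilliUnits   bool
    HasDefault     bool
}

// known is a toy registry; only one predefined type is listed here.
var known = map[string]resourceTypeMeta{
    "kubernetes.io/memory": {IsCompressible: false},
}

// metadataFor returns registered metadata when present; otherwise it applies
// the documented fallback: quantified, incompressible, not milli-units, no default.
func metadataFor(resource string) resourceTypeMeta {
    if m, ok := known[resource]; ok {
        return m
    }
    return resourceTypeMeta{IsCompressible: false, InMilliUnits: false, HasDefault: false}
}

func main() {
    fmt.Printf("%+v\n", metadataFor("example.com/widgets"))
}
{% endraw %}
{% endhighlight %}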
|
||||
|
@ -169,24 +169,24 @@ The following are planned future extensions to the resource model, included here
|
|||
|
||||
## Usage data
|
||||
|
||||
Because resource usage and related metrics change continuously, need to be tracked over time (i.e., historically), can be characterized in a variety of ways, and are fairly voluminous, we will not include usage in core API objects, such as [Pods](../user-guide/pods.html) and Nodes, but will provide separate APIs for accessing and managing that data. See the Appendix for possible representations of usage data, but the representation we'll use is TBD.
|
||||
Because resource usage and related metrics change continuously, need to be tracked over time (i.e., historically), can be characterized in a variety of ways, and are fairly voluminous, we will not include usage in core API objects, such as [Pods](../user-guide/pods) and Nodes, but will provide separate APIs for accessing and managing that data. See the Appendix for possible representations of usage data, but the representation we'll use is TBD.
|
||||
|
||||
Singleton values for observed and predicted future usage will rapidly prove inadequate, so we will support the following structure for extended usage information:
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
|
||||
resourceStatus: [
|
||||
usage: [ cpu: <CPU-info>, memory: <memory-info> ],
|
||||
maxusage: [ cpu: <CPU-info>, memory: <memory-info> ],
|
||||
predicted: [ cpu: <CPU-info>, memory: <memory-info> ],
|
||||
]
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
where a `<CPU-info>` or `<memory-info>` structure looks like this:
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
|
||||
{
|
||||
mean: <value> # arithmetic mean
|
||||
max: <value> # maximum value
|
||||
|
@ -200,7 +200,7 @@ where a `<CPU-info>` or `<memory-info>` structure looks like this:
|
|||
...
|
||||
]
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
All parts of this structure are optional, although we strongly encourage including quantities for 50, 90, 95, 99, 99.5, and 99.9 percentiles. _[In practice, it will be important to include additional info such as the length of the time window over which the averages are calculated, the confidence level, and information-quality metrics such as the number of dropped or discarded data points.]_
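Purely to illustrate the shape of this data (abbreviated, and not an API definition), the `<CPU-info>` structure could be mirrored in Go roughly as:

{% highlight go %}
{% raw %}
package main

import "fmt"

// usageSample loosely mirrors the <CPU-info>/<memory-info> structure sketched above.
// Every field is optional in the proposal, hence the pointer and map types.
type usageSample struct {
    Mean        *float64            // arithmetic mean
    Max         *float64            // maximum observed value
    Percentiles map[float64]float64 // e.g. 50, 90, 95, 99, 99.5, 99.9
}

func main() {
    meanVal, maxVal := 0.42, 0.97
    s := usageSample{
        Mean: &meanVal,
        Max:  &maxVal,
        Percentiles: map[float64]float64{
            50: 0.40, 90: 0.80, 95: 0.88, 99: 0.95, 99.5: 0.96, 99.9: 0.97,
        },
    }
    fmt.Printf("p99 CPU usage: %.2f\n", s.Percentiles[99])
}
{% endraw %}
{% endhighlight %}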
|
||||
|
@ -243,5 +243,3 @@ This is the amount of time a container spends accessing disk, including actuator
|
|||
* Units: operations per second
|
||||
* Compressible? yes
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -1,11 +1,7 @@
|
|||
---
|
||||
title: "Abstract"
|
||||
---
|
||||
|
||||
|
||||
## Abstract
|
||||
|
||||
A proposal for the distribution of [secrets](../user-guide/secrets.html) (passwords, keys, etc) to the Kubelet and to
|
||||
A proposal for the distribution of [secrets](../user-guide/secrets) (passwords, keys, etc) to the Kubelet and to
|
||||
containers inside Kubernetes using a custom [volume](../user-guide/volumes.html#secrets) type. See the [secrets example](../user-guide/secrets/) for more information.
|
||||
|
||||
## Motivation
|
||||
|
@ -75,7 +71,7 @@ service would also consume the secrets associated with the MySQL service.
|
|||
|
||||
### Use-Case: Secrets associated with service accounts
|
||||
|
||||
[Service Accounts](service_accounts.html) are proposed as a
|
||||
[Service Accounts](service_accounts) are proposed as a
|
||||
mechanism to decouple capabilities and security contexts from individual human users. A
|
||||
`ServiceAccount` contains references to some number of secrets. A `Pod` can specify that it is
|
||||
associated with a `ServiceAccount`. Secrets should have a `Type` field to allow the Kubelet and
|
||||
|
@ -245,7 +241,7 @@ memory overcommit on the node.
|
|||
|
||||
#### Secret data on the node: isolation
|
||||
|
||||
Every pod will have a [security context](security_context.html).
|
||||
Every pod will have a [security context](security_context).
|
||||
Secret data on the node should be isolated according to the security context of the container. The
|
||||
Kubelet volume plugin API will be changed so that a volume plugin receives the security context of
|
||||
a volume along with the volume spec. This will allow volume plugins to implement setting the
|
||||
|
@ -257,7 +253,7 @@ Several proposals / upstream patches are notable as background for this proposal
|
|||
|
||||
1. [Docker vault proposal](https://github.com/docker/docker/issues/10310)
|
||||
2. [Specification for image/container standardization based on volumes](https://github.com/docker/docker/issues/9277)
|
||||
3. [Kubernetes service account proposal](service_accounts.html)
|
||||
3. [Kubernetes service account proposal](service_accounts)
|
||||
4. [Secrets proposal for docker (1)](https://github.com/docker/docker/pull/6075)
|
||||
5. [Secrets proposal for docker (2)](https://github.com/docker/docker/pull/6697)
|
||||
|
||||
|
@ -277,7 +273,7 @@ in the container specification.
|
|||
A new resource for secrets will be added to the API:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
type Secret struct {
|
||||
TypeMeta
|
||||
ObjectMeta
|
||||
|
@ -301,7 +297,7 @@ const (
|
|||
)
|
||||
|
||||
const MaxSecretSize = 1 * 1024 * 1024
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
A Secret can declare a type in order to provide type information to system components that work
|
||||
|
@ -327,7 +323,7 @@ A new `SecretSource` type of volume source will be added to the `VolumeSource` s
|
|||
API:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
type VolumeSource struct {
|
||||
// Other fields omitted
|
||||
|
||||
|
@ -338,7 +334,7 @@ type VolumeSource struct {
|
|||
type SecretSource struct {
|
||||
Target ObjectReference
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Secret volume sources are validated to ensure that the specified object reference actually points
|
||||
|
@ -356,14 +352,14 @@ require access to the API server to retrieve secret data and therefore the volum
|
|||
will have to change to expose a client interface:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
type Host interface {
|
||||
// Other methods omitted
|
||||
|
||||
// GetKubeClient returns a client interface
|
||||
GetKubeClient() client.Interface
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
The secret volume plugin will be responsible for:
|
||||
|
@ -403,7 +399,7 @@ suggested changes. All of these examples are assumed to be created in a namespa
|
|||
To create a pod that uses an ssh key stored as a secret, we first need to create a secret:
|
||||
|
||||
{% highlight json %}
|
||||
{% raw %}
|
||||
|
||||
{
|
||||
"kind": "Secret",
|
||||
"apiVersion": "v1",
|
||||
|
@ -415,7 +411,7 @@ To create a pod that uses an ssh key stored as a secret, we first need to create
|
|||
"id-rsa.pub": "dmFsdWUtMQ0K"
|
||||
}
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
**Note:** The serialized JSON and YAML values of secret data are encoded as
|
||||
|
@ -425,7 +421,7 @@ omitted.
|
|||
Now we can create a pod which references the secret with the ssh key and consumes it in a volume:
|
||||
|
||||
{% highlight json %}
|
||||
{% raw %}
|
||||
|
||||
{
|
||||
"kind": "Pod",
|
||||
"apiVersion": "v1",
|
||||
|
@ -459,7 +455,7 @@ Now we can create a pod which references the secret with the ssh key and consume
|
|||
]
|
||||
}
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
When the container's command runs, the pieces of the key will be available in:
|
||||
|
@ -478,7 +474,7 @@ credentials.
|
|||
The secrets:
|
||||
|
||||
{% highlight json %}
|
||||
{% raw %}
|
||||
|
||||
{
|
||||
"apiVersion": "v1",
|
||||
"kind": "List",
|
||||
|
@ -506,13 +502,13 @@ The secrets:
|
|||
}
|
||||
}]
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
The pods:
|
||||
|
||||
{% highlight json %}
|
||||
{% raw %}
|
||||
|
||||
{
|
||||
"apiVersion": "v1",
|
||||
"kind": "List",
|
||||
|
@ -584,7 +580,7 @@ The pods:
|
|||
}
|
||||
}]
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
The specs for the two pods differ only in the value of the object referred to by the secret volume
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Security in Kubernetes"
|
||||
---
|
||||
|
||||
|
||||
# Security in Kubernetes
|
||||
|
||||
Kubernetes should define a reasonable set of security best practices that allow processes to be isolated from each other and from the cluster infrastructure, and that preserve important boundaries between those who manage the cluster and those who use the cluster.
|
||||
|
||||
While Kubernetes today is not primarily a multi-tenant system, the long term evolution of Kubernetes will increasingly rely on proper boundaries between users and administrators. The code running on the cluster must be appropriately isolated and secured to prevent malicious parties from affecting the entire cluster.
|
||||
|
@ -68,14 +64,14 @@ Automated process users fall into the following categories:
|
|||
A pod runs in a *security context* under a *service account* that is defined by an administrator or project administrator, and the *secrets* a pod has access to are limited by that *service account*.
|
||||
|
||||
|
||||
1. The API should authenticate and authorize user actions [authn and authz](access.html)
|
||||
1. The API should authenticate and authorize user actions [authn and authz](access)
|
||||
2. All infrastructure components (kubelets, kube-proxies, controllers, scheduler) should have an infrastructure user that they can authenticate with and be authorized to perform only the functions they require against the API.
|
||||
3. Most infrastructure components should use the API as a way of exchanging data and changing the system, and only the API should have access to the underlying data store (etcd)
|
||||
4. When containers run on the cluster and need to talk to other containers or the API server, they should be identified and authorized clearly as an autonomous process via a [service account](service_accounts.html)
|
||||
4. When containers run on the cluster and need to talk to other containers or the API server, they should be identified and authorized clearly as an autonomous process via a [service account](service_accounts)
|
||||
1. If the user who started a long-lived process is removed from access to the cluster, the process should be able to continue without interruption
|
||||
2. If users who started processes are removed from the cluster, administrators may wish to terminate their processes in bulk
|
||||
3. When containers run with a service account, the user that created / triggered the service account behavior must be associated with the container's action
|
||||
5. When container processes run on the cluster, they should run in a [security context](security_context.html) that isolates those processes via Linux user security, user namespaces, and permissions.
|
||||
5. When container processes run on the cluster, they should run in a [security context](security_context) that isolates those processes via Linux user security, user namespaces, and permissions.
|
||||
1. Administrators should be able to configure the cluster to automatically confine all container processes as a non-root, randomly assigned UID
|
||||
2. Administrators should be able to ensure that container processes within the same namespace are all assigned the same unix user UID
|
||||
3. Administrators should be able to limit which developers and project administrators have access to higher privilege actions
|
||||
|
@ -84,7 +80,7 @@ A pod runs in a *security context* under a *service account* that is defined by
|
|||
6. Developers may need to ensure their images work within higher security requirements specified by administrators
|
||||
7. When available, Linux kernel user namespaces can be used to ensure 5.2 and 5.4 are met.
|
||||
8. When application developers want to share filesystem data via distributed filesystems, the Unix user ids on those filesystems must be consistent across different container processes
|
||||
6. Developers should be able to define [secrets](secrets.html) that are automatically added to the containers when pods are run
|
||||
6. Developers should be able to define [secrets](secrets) that are automatically added to the containers when pods are run
|
||||
1. Secrets are files injected into the container whose values should not be displayed within a pod. Examples:
|
||||
1. An SSH private key for git cloning remote data
|
||||
2. A client certificate for accessing a remote system
|
||||
|
@ -98,11 +94,11 @@ A pod runs in a *security context* under a *service account* that is defined by
|
|||
|
||||
### Related design discussion
|
||||
|
||||
* [Authorization and authentication](access.html)
|
||||
* [Authorization and authentication](access)
|
||||
* [Secret distribution via files](http://pr.k8s.io/2030)
|
||||
* [Docker secrets](https://github.com/docker/docker/pull/6697)
|
||||
* [Docker vault](https://github.com/docker/docker/issues/10310)
|
||||
* [Service Accounts:](service_accounts.html)
|
||||
* [Service Accounts:](service_accounts)
|
||||
* [Secret volumes](http://pr.k8s.io/4126)
|
||||
|
||||
## Specific Design Points
|
||||
|
|
|
@ -1,13 +1,9 @@
|
|||
---
|
||||
title: "Security Contexts"
|
||||
---
|
||||
|
||||
|
||||
# Security Contexts
|
||||
|
||||
## Abstract
|
||||
|
||||
A security context is a set of constraints that are applied to a container in order to achieve the following goals (from [security design](security.html)):
|
||||
A security context is a set of constraints that are applied to a container in order to achieve the following goals (from [security design](security)):
|
||||
|
||||
1. Ensure a clear isolation between container and the underlying host it runs on
|
||||
2. Limit the ability of the container to negatively impact the infrastructure or other containers
|
||||
|
@ -41,7 +37,7 @@ Processes in pods will need to have consistent UID/GID/SELinux category labels i
|
|||
* The concept of a security context should not be tied to a particular security mechanism or platform
|
||||
(ie. SELinux, AppArmor)
|
||||
* Applying a different security context to a scope (namespace or pod) requires a solution such as the one proposed for
|
||||
[service accounts](service_accounts.html).
|
||||
[service accounts](service_accounts).
|
||||
|
||||
## Use Cases
|
||||
|
||||
|
@ -92,7 +88,7 @@ It is recommended that this design be implemented in two phases:
|
|||
The Kubelet will have an interface that points to a `SecurityContextProvider`. The `SecurityContextProvider` is invoked before creating and running a given container:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
type SecurityContextProvider interface {
|
||||
// ModifyContainerConfig is called before the Docker createContainer call.
|
||||
// The security context provider can make changes to the Config with which
|
||||
|
@ -108,7 +104,7 @@ type SecurityContextProvider interface {
|
|||
// with a security context.
|
||||
ModifyHostConfig(pod *api.Pod, container *api.Container, hostConfig *docker.HostConfig)
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
If the value of the SecurityContextProvider field on the Kubelet is nil, the kubelet will create and run the container as it does today.
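The call pattern implied here might look roughly like the sketch below. The `dockerConfig` and `dockerHostConfig` stand-ins, the `createAndRunContainer` helper, and the `nonRootProvider` example are all invented for illustration:

{% highlight go %}
{% raw %}
package main

import "fmt"

// Minimal stand-ins for the API and Docker types referenced by the interface.
type pod struct{ Name string }
type container struct{ Name string }
type dockerConfig struct{ User string }
type dockerHostConfig struct{ Privileged bool }

// securityContextProvider mirrors the interface sketched above.
type securityContextProvider interface {
    ModifyContainerConfig(p *pod, c *container, cfg *dockerConfig)
    ModifyHostConfig(p *pod, c *container, hostCfg *dockerHostConfig)
}

// createAndRunContainer shows where a non-nil provider would be consulted:
// just before the container runtime is asked to create and start the container.
func createAndRunContainer(provider securityContextProvider, p *pod, c *container) {
    cfg := &dockerConfig{}
    hostCfg := &dockerHostConfig{}
    if provider != nil {
        provider.ModifyContainerConfig(p, c, cfg)
        provider.ModifyHostConfig(p, c, hostCfg)
    }
    fmt.Printf("creating %s/%s as user %q (privileged=%v)\n", p.Name, c.Name, cfg.User, hostCfg.Privileged)
}

// nonRootProvider is a toy provider that forces containers to run as a fixed UID.
type nonRootProvider struct{}

func (nonRootProvider) ModifyContainerConfig(_ *pod, _ *container, cfg *dockerConfig) {
    cfg.User = "1001"
}

func (nonRootProvider) ModifyHostConfig(_ *pod, _ *container, hostCfg *dockerHostConfig) {
    hostCfg.Privileged = false
}

func main() {
    createAndRunContainer(nonRootProvider{}, &pod{Name: "mypod"}, &container{Name: "app"})
    createAndRunContainer(nil, &pod{Name: "mypod"}, &container{Name: "app"}) // today's behavior
}
{% endraw %}
{% endhighlight %}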
|
||||
|
@ -119,7 +115,7 @@ A security context resides on the container and represents the runtime parameter
|
|||
be used to create and run the container via container APIs. Following is an example of an initial implementation:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
type Container struct {
|
||||
... other fields omitted ...
|
||||
// Optional: SecurityContext defines the security options the pod should be run with
|
||||
|
@ -159,7 +155,7 @@ type SELinuxOptions struct {
|
|||
// SELinux level label.
|
||||
Level string
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Admission
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Service Accounts"
|
||||
---
|
||||
|
||||
|
||||
# Service Accounts
|
||||
|
||||
## Motivation
|
||||
|
||||
Processes in Pods may need to call the Kubernetes API. For example:
|
||||
|
@ -26,10 +22,10 @@ They also may interact with services other than the Kubernetes API, such as:
|
|||
|
||||
A service account binds together several things:
|
||||
- a *name*, understood by users, and perhaps by peripheral systems, for an identity
|
||||
- a *principal* that can be authenticated and [authorized](../admin/authorization.html)
|
||||
- a [security context](security_context.html), which defines the Linux Capabilities, User IDs, Groups IDs, and other
|
||||
- a *principal* that can be authenticated and [authorized](../admin/authorization)
|
||||
- a [security context](security_context), which defines the Linux Capabilities, User IDs, Groups IDs, and other
|
||||
capabilities and controls on interaction with the file system and OS.
|
||||
- a set of [secrets](secrets.html), which a container may use to
|
||||
- a set of [secrets](secrets), which a container may use to
|
||||
access various networked resources.
|
||||
|
||||
## Design Discussion
|
||||
|
@ -37,7 +33,7 @@ A service account binds together several things:
|
|||
A new object Kind is added:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
type ServiceAccount struct {
|
||||
TypeMeta `json:",inline" yaml:",inline"`
|
||||
ObjectMeta `json:"metadata,omitempty" yaml:"metadata,omitempty"`
|
||||
|
@ -46,7 +42,7 @@ type ServiceAccount struct {
|
|||
securityContext ObjectReference // (reference to a securityContext object)
|
||||
secrets []ObjectReference // (references to secret objects
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
The name ServiceAccount is chosen because it is widely used already (e.g. by Kerberos and LDAP)
|
||||
|
|
|
@ -1,11 +1,7 @@
|
|||
---
|
||||
title: "Simple rolling update"
|
||||
---
|
||||
|
||||
|
||||
## Simple rolling update
|
||||
|
||||
This is a lightweight design document for simple [rolling update](../user-guide/kubectl/kubectl_rolling-update.html) in `kubectl`.
|
||||
This is a lightweight design document for simple [rolling update](../user-guide/kubectl/kubectl_rolling-update) in `kubectl`.
|
||||
|
||||
Complete execution flow can be found [here](#execution-details). See the [example of rolling update](../user-guide/update-demo/) for more information.
|
||||
|
||||
|
@ -35,9 +31,9 @@ To facilitate recovery in the case of a crash of the updating process itself, we
|
|||
Recovery is achieved by issuing the same command again:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
kubectl rolling-update foo [foo-v2] --image=myimage:v2
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Whenever the rolling update command executes, the kubectl client looks for replication controllers called `foo` and `foo-next`; if they exist, an attempt is
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Kubernetes API and Release Versioning"
|
||||
---
|
||||
|
||||
|
||||
# Kubernetes API and Release Versioning
|
||||
|
||||
Legend:
|
||||
|
||||
* **Kube <major>.<minor>.<patch>** refers to the version of Kubernetes that is released. This versions all components: apiserver, kubelet, kubectl, etc.
|
||||
|
|
|
@ -1,47 +1,43 @@
|
|||
---
|
||||
title: "Kubernetes Developer Guide"
|
||||
---
|
||||
|
||||
|
||||
# Kubernetes Developer Guide
|
||||
|
||||
The developer guide is for anyone wanting to either write code which directly accesses the
|
||||
Kubernetes API, or to contribute directly to the Kubernetes project.
|
||||
It assumes some familiarity with concepts in the [User Guide](../user-guide/README.html) and the [Cluster Admin
|
||||
Guide](../admin/README.html).
|
||||
It assumes some familiarity with concepts in the [User Guide](../user-guide/README) and the [Cluster Admin
|
||||
Guide](../admin/README).
|
||||
|
||||
|
||||
## The process of developing and contributing code to the Kubernetes project
|
||||
|
||||
* **On Collaborative Development** ([collab.md](collab.html)): Info on pull requests and code reviews.
|
||||
* **On Collaborative Development** ([collab.md](collab)): Info on pull requests and code reviews.
|
||||
|
||||
* **GitHub Issues** ([issues.md](issues.html)): How incoming issues are reviewed and prioritized.
|
||||
* **GitHub Issues** ([issues.md](issues)): How incoming issues are reviewed and prioritized.
|
||||
|
||||
* **Pull Request Process** ([pull-requests.md](pull-requests.html)): When and why pull requests are closed.
|
||||
* **Pull Request Process** ([pull-requests.md](pull-requests)): When and why pull requests are closed.
|
||||
|
||||
* **Faster PR reviews** ([faster_reviews.md](faster_reviews.html)): How to get faster PR reviews.
|
||||
* **Faster PR reviews** ([faster_reviews.md](faster_reviews)): How to get faster PR reviews.
|
||||
|
||||
* **Getting Recent Builds** ([getting-builds.md](getting-builds.html)): How to get recent builds including the latest builds that pass CI.
|
||||
* **Getting Recent Builds** ([getting-builds.md](getting-builds)): How to get recent builds including the latest builds that pass CI.
|
||||
|
||||
* **Automated Tools** ([automation.md](automation.html)): Descriptions of the automation that is running on our github repository.
|
||||
* **Automated Tools** ([automation.md](automation)): Descriptions of the automation that is running on our github repository.
|
||||
|
||||
|
||||
## Setting up your dev environment, coding, and debugging
|
||||
|
||||
* **Development Guide** ([development.md](development.html)): Setting up your development environment.
|
||||
* **Development Guide** ([development.md](development)): Setting up your development environment.
|
||||
|
||||
* **Hunting flaky tests** ([flaky-tests.md](flaky-tests.html)): We have a goal of 99.9% flake free tests.
|
||||
* **Hunting flaky tests** ([flaky-tests.md](flaky-tests)): We have a goal of 99.9% flake free tests.
|
||||
Here's how to run your tests many times.
|
||||
|
||||
* **Logging Conventions** ([logging.md](logging.html)]: Glog levels.
|
||||
* **Logging Conventions** ([logging.md](logging)): Glog levels.
|
||||
|
||||
* **Profiling Kubernetes** ([profiling.md](profiling.html)): How to plug in go pprof profiler to Kubernetes.
|
||||
* **Profiling Kubernetes** ([profiling.md](profiling)): How to plug in go pprof profiler to Kubernetes.
|
||||
|
||||
* **Instrumenting Kubernetes with a new metric**
|
||||
([instrumentation.md](instrumentation.html)): How to add a new metrics to the
|
||||
([instrumentation.md](instrumentation)): How to add a new metric to the
|
||||
Kubernetes code base.
|
||||
|
||||
* **Coding Conventions** ([coding-conventions.md](coding-conventions.html)):
|
||||
* **Coding Conventions** ([coding-conventions.md](coding-conventions)):
|
||||
Coding style advice for contributors.
|
||||
|
||||
|
||||
|
@ -49,33 +45,33 @@ Guide](../admin/README.html).
|
|||
|
||||
* API objects are explained at [http://kubernetes.io/third_party/swagger-ui/](http://kubernetes.io/third_party/swagger-ui/).
|
||||
|
||||
* **Annotations** ([docs/user-guide/annotations.md](../user-guide/annotations.html)): are for attaching arbitrary non-identifying metadata to objects.
|
||||
* **Annotations** ([docs/user-guide/annotations.md](../user-guide/annotations)): are for attaching arbitrary non-identifying metadata to objects.
|
||||
Programs that automate Kubernetes objects may use annotations to store small amounts of their state.
|
||||
|
||||
* **API Conventions** ([api-conventions.md](api-conventions.html)):
|
||||
* **API Conventions** ([api-conventions.md](api-conventions)):
|
||||
Defining the verbs and resources used in the Kubernetes API.
|
||||
|
||||
* **API Client Libraries** ([client-libraries.md](client-libraries.html)):
|
||||
* **API Client Libraries** ([client-libraries.md](client-libraries)):
|
||||
A list of existing client libraries, both supported and user-contributed.
|
||||
|
||||
|
||||
## Writing plugins
|
||||
|
||||
* **Authentication Plugins** ([docs/admin/authentication.md](../admin/authentication.html)):
|
||||
* **Authentication Plugins** ([docs/admin/authentication.md](../admin/authentication)):
|
||||
The current and planned states of authentication tokens.
|
||||
|
||||
* **Authorization Plugins** ([docs/admin/authorization.md](../admin/authorization.html)):
|
||||
* **Authorization Plugins** ([docs/admin/authorization.md](../admin/authorization)):
|
||||
Authorization applies to all HTTP requests on the main apiserver port.
|
||||
This doc explains the available authorization implementations.
|
||||
|
||||
* **Admission Control Plugins** ([admission_control](../design/admission_control.html))
|
||||
* **Admission Control Plugins** ([admission_control](../design/admission_control))
|
||||
|
||||
|
||||
## Building releases
|
||||
|
||||
* **Making release notes** ([making-release-notes.md](making-release-notes.html)): Generating release nodes for a new release.
|
||||
* **Making release notes** ([making-release-notes.md](making-release-notes)): Generating release notes for a new release.
|
||||
|
||||
* **Releasing Kubernetes** ([releasing.md](releasing.html)): How to create a Kubernetes release (as in version)
|
||||
* **Releasing Kubernetes** ([releasing.md](releasing)): How to create a Kubernetes release (as in version)
|
||||
and how the version information gets embedded into the built binaries.
|
||||
|
||||
|
||||
|
|
File diff suppressed because it is too large
|
@ -1,13 +1,9 @@
|
|||
---
|
||||
title: "So you want to change the API?"
|
||||
---
|
||||
|
||||
|
||||
# So you want to change the API?
|
||||
|
||||
Before attempting a change to the API, you should familiarize yourself
|
||||
with a number of existing API types and with the [API
|
||||
conventions](api-conventions.html). If creating a new API
|
||||
conventions](api-conventions). If creating a new API
|
||||
type/resource, we also recommend that you first send a PR containing
|
||||
just a proposal for the new API types, and that you initially target
|
||||
the extensions API (pkg/apis/extensions).
|
||||
|
@ -96,27 +92,27 @@ Let's consider some examples. In a hypothetical API (assume we're at version
|
|||
v6), the `Frobber` struct looks something like this:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
// API v6.
|
||||
type Frobber struct {
|
||||
Height int `json:"height"`
|
||||
Param string `json:"param"`
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
You want to add a new `Width` field. It is generally safe to add new fields
|
||||
without changing the API version, so you can simply change it to:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
// Still API v6.
|
||||
type Frobber struct {
|
||||
Height int `json:"height"`
|
||||
Width int `json:"width"`
|
||||
Param string `json:"param"`
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
The onus is on you to define a sane default value for `Width` such that rule #1
|
||||
|
@ -128,7 +124,7 @@ simply change `Param string` to `Params []string` (without creating a whole new
|
|||
API version) - that fails rules #1 and #2. You can instead do something like:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
// Still API v6, but kind of clumsy.
|
||||
type Frobber struct {
|
||||
Height int `json:"height"`
|
||||
|
@ -136,7 +132,7 @@ type Frobber struct {
|
|||
Param string `json:"param"` // the first param
|
||||
ExtraParams []string `json:"params"` // additional params
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Now you can satisfy the rules: API calls that provide the old style `Param`
|
||||
|
@ -148,14 +144,14 @@ distinct from any one version is to handle growth like this. The internal
|
|||
representation can be implemented as:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
// Internal, soon to be v7beta1.
|
||||
type Frobber struct {
|
||||
Height int
|
||||
Width int
|
||||
Params []string
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
The code that converts to/from versioned APIs can decode this into the somewhat
|
||||
|
@ -179,14 +175,14 @@ you add units to `height` and `width`. You implement this by adding duplicate
|
|||
fields:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
type Frobber struct {
|
||||
Height *int `json:"height"`
|
||||
Width *int `json:"width"`
|
||||
HeightInInches *int `json:"heightInInches"`
|
||||
WidthInInches *int `json:"widthInInches"`
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
You convert all of the fields to pointers in order to distinguish between unset and
|
||||
|
@ -202,38 +198,38 @@ in the case of an old client that was only aware of the old field (e.g., `height
|
|||
Say the client creates:
|
||||
|
||||
{% highlight json %}
|
||||
{% raw %}
|
||||
|
||||
{
|
||||
"height": 10,
|
||||
"width": 5
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
and GETs:
|
||||
|
||||
{% highlight json %}
|
||||
{% raw %}
|
||||
|
||||
{
|
||||
"height": 10,
|
||||
"heightInInches": 10,
|
||||
"width": 5,
|
||||
"widthInInches": 5
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
then PUTs back:
|
||||
|
||||
{% highlight json %}
|
||||
{% raw %}
|
||||
|
||||
{
|
||||
"height": 13,
|
||||
"heightInInches": 10,
|
||||
"width": 5,
|
||||
"widthInInches": 5
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
The update should not fail, because it would have worked before `heightInInches` was added.
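One way to picture the server-side handling is a hypothetical defaulting pass like the sketch below: when only one of the pair is set it fills in the other, and when both are set it must apply some resolution policy rather than reject the request (the policy used here is arbitrary and only for illustration; any unit conversion between the two fields is also omitted):

{% highlight go %}
{% raw %}
package main

import "fmt"

// frobber mirrors the duplicated-field example above; pointers distinguish
// "unset" from an explicit zero.
type frobber struct {
    Height         *int `json:"height"`
    HeightInInches *int `json:"heightInInches"`
}

// defaultHeight is a hypothetical defaulting pass. If only one of the pair is
// set, it copies the value to the other. If both are set and disagree, this
// sketch arbitrarily lets the original field win; the real resolution policy
// is the subject of the surrounding discussion, not something this code decides.
func defaultHeight(f *frobber) {
    switch {
    case f.Height == nil && f.HeightInInches != nil:
        v := *f.HeightInInches
        f.Height = &v
    case f.Height != nil && f.HeightInInches == nil:
        v := *f.Height
        f.HeightInInches = &v
    case f.Height != nil && f.HeightInInches != nil && *f.Height != *f.HeightInInches:
        v := *f.Height
        f.HeightInInches = &v
    }
}

func main() {
    // The PUT from the old client: height updated to 13, heightInInches stale at 10.
    h, hi := 13, 10
    f := &frobber{Height: &h, HeightInInches: &hi}
    defaultHeight(f)
    fmt.Println(*f.Height, *f.HeightInInches) // 13 13
}
{% endraw %}
{% endhighlight %}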
|
||||
|
@ -270,8 +266,8 @@ Breaking compatibility of a beta or stable API version, such as v1, is unaccepta
|
|||
Compatibility for experimental or alpha APIs is not strictly required, but
|
||||
breaking compatibility should not be done lightly, as it disrupts all users of the
|
||||
feature. Experimental APIs may be removed. Alpha and beta API versions may be deprecated
|
||||
and eventually removed wholesale, as described in the [versioning document](../design/versioning.html).
|
||||
Document incompatible changes across API versions under the [conversion tips](../api.html).
|
||||
and eventually removed wholesale, as described in the [versioning document](../design/versioning).
|
||||
Document incompatible changes across API versions under the [conversion tips](../api).
|
||||
|
||||
If your change is going to be backward incompatible or might be a breaking change for API
|
||||
consumers, please send an announcement to `kubernetes-dev@googlegroups.com` before
|
||||
|
@ -405,9 +401,9 @@ regenerate auto-generated ones. To regenerate them:
|
|||
- run
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
hack/update-generated-conversions.sh
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
If running the above script is impossible due to compile errors, the easiest
|
||||
|
@ -433,9 +429,9 @@ To regenerate them:
|
|||
- run
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
hack/update-generated-deep-copies.sh
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
## Edit json (un)marshaling code
|
||||
|
@ -451,9 +447,9 @@ To regenerate them:
|
|||
- run
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
hack/update-codecgen.sh
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
## Making a new API Group
|
||||
|
@ -518,7 +514,7 @@ doing!
|
|||
|
||||
## Write end-to-end tests
|
||||
|
||||
Check out the [E2E docs](e2e-tests.html) for detailed information about how to write end-to-end
|
||||
Check out the [E2E docs](e2e-tests) for detailed information about how to write end-to-end
|
||||
tests for your feature.
|
||||
|
||||
## Examples and docs
|
||||
|
@ -536,9 +532,9 @@ an example to illustrate your change.
|
|||
Make sure you update the swagger API spec by running:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
hack/update-swagger-spec.sh
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
The API spec changes should be in a commit separate from your other changes.
|
||||
|
@ -590,7 +586,7 @@ New feature development proceeds through a series of stages of increasing maturi
|
|||
upgrade may require downtime for anything relying on the new feature, and may require
|
||||
manual conversion of objects to the new version; when manual conversion is necessary, the
|
||||
project will provide documentation on the process (for an example, see [v1 conversion
|
||||
tips](../api.html))
|
||||
tips](../api))
|
||||
- Cluster Reliability: since the feature has e2e tests, enabling the feature via a flag should not
|
||||
create new bugs in unrelated features; because the feature is new, it may have minor bugs
|
||||
- Support: the project commits to complete the feature, in some form, in a subsequent Stable
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Kubernetes Development Automation"
|
||||
---
|
||||
|
||||
|
||||
# Kubernetes Development Automation
|
||||
|
||||
## Overview
|
||||
|
||||
Kubernetes uses a variety of automated tools in an attempt to relieve developers of repetitive, low
|
||||
|
@ -24,13 +20,13 @@ for kubernetes.
|
|||
The submit-queue does the following:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
for _, pr := range readyToMergePRs() {
|
||||
if testsAreStable() {
|
||||
mergePR(pr)
|
||||
}
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
The status of the submit-queue is [online](http://submit-queue.k8s.io/).
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Overview"
|
||||
---
|
||||
|
||||
|
||||
# Overview
|
||||
|
||||
This document explains how cherry picks are managed on release branches within the
|
||||
Kubernetes projects.
|
||||
|
||||
|
@ -13,9 +9,9 @@ Kubernetes projects.
|
|||
Any contributor can propose a cherry pick of any pull request, like so:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
hack/cherry_pick_pull.sh upstream/release-3.14 98765
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
This will walk you through the steps to propose an automated cherry pick of pull
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Kubernetes CLI/Configuration Roadmap"
|
||||
---
|
||||
|
||||
|
||||
# Kubernetes CLI/Configuration Roadmap
|
||||
|
||||
See github issues with the following labels:
|
||||
* [area/app-config-deployment](https://github.com/kubernetes/kubernetes/labels/area/app-config-deployment)
|
||||
* [component/kubectl](https://github.com/kubernetes/kubernetes/labels/component/kubectl)
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Kubernetes API client libraries"
|
||||
---
|
||||
|
||||
|
||||
## Kubernetes API client libraries
|
||||
|
||||
### Supported
|
||||
|
||||
* [Go](http://releases.k8s.io/release-1.1/pkg/client/)
|
||||
|
|
|
@ -9,7 +9,7 @@ Code conventions
|
|||
- Go
|
||||
- Ensure your code passes the [presubmit checks](development.html#hooks)
|
||||
- [Go Code Review Comments](https://github.com/golang/go/wiki/CodeReviewComments)
|
||||
- [Effective Go](https://golang.org/doc/effective_go.html)
|
||||
- [Effective Go](https://golang.org/doc/effective_go)
|
||||
- Comment your code.
|
||||
- [Go's commenting conventions](http://blog.golang.org/godoc-documenting-go-code)
|
||||
- If reviewers ask questions about why the code is the way it is, that's a sign that comments might be helpful.
|
||||
|
@ -24,10 +24,10 @@ Code conventions
|
|||
- Importers can use a different name if they need to disambiguate.
|
||||
- Locks should be called `lock` and should never be embedded (always `lock sync.Mutex`). When multiple locks are present, give each lock a distinct name following Go conventions - `stateLock`, `mapLock` etc.
|
||||
- API conventions
|
||||
- [API changes](api_changes.html)
|
||||
- [API conventions](api-conventions.html)
|
||||
- [Kubectl conventions](kubectl-conventions.html)
|
||||
- [Logging conventions](logging.html)
|
||||
- [API changes](api_changes)
|
||||
- [API conventions](api-conventions)
|
||||
- [Kubectl conventions](kubectl-conventions)
|
||||
- [Logging conventions](logging)
|
||||
|
||||
Testing conventions
|
||||
- All new packages and most new significant functionality must come with unit tests
|
||||
|
@ -44,7 +44,7 @@ Directory and file conventions
|
|||
- Package directories should generally avoid using separators as much as possible (when packages are multiple words, they usually should be in nested subdirectories).
|
||||
- Document directories and filenames should use dashes rather than underscores
|
||||
- Contrived examples that illustrate system features belong in /docs/user-guide or /docs/admin, depending on whether it is a feature primarily intended for users that deploy applications or cluster administrators, respectively. Actual application examples belong in /examples.
|
||||
- Examples should also illustrate [best practices for using the system](../user-guide/config-best-practices.html)
|
||||
- Examples should also illustrate [best practices for using the system](../user-guide/config-best-practices)
|
||||
- Third-party code
|
||||
- Third-party Go code is managed using Godeps
|
||||
- Other third-party code belongs in /third_party
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "On Collaborative Development"
|
||||
---
|
||||
|
||||
|
||||
# On Collaborative Development
|
||||
|
||||
Kubernetes is open source, but many of the people working on it do so as their day job. In order to avoid forcing people to be "at work" effectively 24/7, we want to establish some semi-formal protocols around development. Hopefully these rules make things go more smoothly. If you find that this is not the case, please complain loudly.
|
||||
|
||||
## Patches welcome
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Getting started with Vagrant"
|
||||
---
|
||||
|
||||
|
||||
## Getting started with Vagrant
|
||||
|
||||
Running Kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X).
|
||||
|
||||
### Prerequisites
|
||||
|
@ -15,31 +11,31 @@ Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve
|
|||
2. [VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware)
|
||||
3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware)
|
||||
4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/)
|
||||
3. Get or build a [binary release](../../../docs/getting-started-guides/binary_release.html)
|
||||
3. Get or build a [binary release](/{{page.version}}/docs/getting-started-guides/binary_release)
|
||||
|
||||
### Setup
|
||||
|
||||
By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-minion-1). Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
cd kubernetes
|
||||
|
||||
export KUBERNETES_PROVIDER=vagrant
|
||||
./cluster/kube-up.sh
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
|
||||
|
||||
If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable:
|
||||
If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default) environment variable:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
export VAGRANT_DEFAULT_PROVIDER=parallels
|
||||
export KUBERNETES_PROVIDER=vagrant
|
||||
./cluster/kube-up.sh
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine.
|
||||
|
@ -49,25 +45,25 @@ By default, each VM in the cluster is running Fedora, and all of the Kubernetes
|
|||
To access the master or any node:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
vagrant ssh master
|
||||
vagrant ssh minion-1
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
If you are running more than one node, you can access the others with:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
vagrant ssh minion-2
|
||||
vagrant ssh minion-3
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
To view the service status and/or logs on the kubernetes-master:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ vagrant ssh master
|
||||
[vagrant@kubernetes-master ~] $ sudo systemctl status kube-apiserver
|
||||
[vagrant@kubernetes-master ~] $ sudo journalctl -r -u kube-apiserver
|
||||
|
@ -77,19 +73,19 @@ $ vagrant ssh master
|
|||
|
||||
[vagrant@kubernetes-master ~] $ sudo systemctl status etcd
|
||||
[vagrant@kubernetes-master ~] $ sudo systemctl status nginx
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
To view the services on any of the nodes:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ vagrant ssh minion-1
|
||||
[vagrant@kubernetes-minion-1] $ sudo systemctl status docker
|
||||
[vagrant@kubernetes-minion-1] $ sudo journalctl -r -u docker
|
||||
[vagrant@kubernetes-minion-1] $ sudo systemctl status kubelet
|
||||
[vagrant@kubernetes-minion-1] $ sudo journalctl -r -u kubelet
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Interacting with your Kubernetes cluster with Vagrant
|
||||
|
@ -99,26 +95,26 @@ With your Kubernetes cluster up, you can manage the nodes in your cluster with t
|
|||
To push updates to new Kubernetes code after making source changes:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
./cluster/kube-push.sh
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
To stop and then restart the cluster:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
vagrant halt
|
||||
./cluster/kube-up.sh
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
To destroy the cluster:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
vagrant destroy
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script.
|
||||
|
@ -126,14 +122,14 @@ Once your Vagrant machines are up and provisioned, the first thing to do is to c
|
|||
You may need to build the binaries first; you can do this with `make`.
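A minimal sketch of that build step, assuming you are at the root of your Kubernetes checkout:

{% highlight sh %}
{% raw %}
# Build the client and server binaries used by cluster/kubectl.sh
cd kubernetes
make
{% endraw %}

{% endhighlight %}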
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ ./cluster/kubectl.sh get nodes
|
||||
|
||||
NAME LABELS STATUS
|
||||
kubernetes-minion-0whl kubernetes.io/hostname=kubernetes-minion-0whl Ready
|
||||
kubernetes-minion-4jdf kubernetes.io/hostname=kubernetes-minion-4jdf Ready
|
||||
kubernetes-minion-epbe kubernetes.io/hostname=kubernetes-minion-epbe Ready
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Interacting with your Kubernetes cluster with the `kube-*` scripts
|
||||
|
@ -143,41 +139,41 @@ Alternatively to using the vagrant commands, you can also use the `cluster/kube-
|
|||
All of these commands assume you have set `KUBERNETES_PROVIDER` appropriately:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
export KUBERNETES_PROVIDER=vagrant
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Bring up a vagrant cluster
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
./cluster/kube-up.sh
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Destroy the vagrant cluster
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
./cluster/kube-down.sh
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Update the vagrant cluster after you make changes (only works when building your own releases locally):
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
./cluster/kube-push.sh
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Interact with the cluster
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
./cluster/kubectl.sh
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Authenticating with your master
|
||||
|
@ -185,7 +181,7 @@ Interact with the cluster
|
|||
When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ cat ~/.kubernetes_vagrant_auth
|
||||
{ "User": "vagrant",
|
||||
"Password": "vagrant"
|
||||
|
@ -193,15 +189,15 @@ $ cat ~/.kubernetes_vagrant_auth
|
|||
"CertFile": "/home/k8s_user/.kubecfg.vagrant.crt",
|
||||
"KeyFile": "/home/k8s_user/.kubecfg.vagrant.key"
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
You should now be set to use the `cluster/kubectl.sh` script. For example try to list the nodes that you have started with:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
./cluster/kubectl.sh get nodes
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Running containers
|
||||
|
@ -209,14 +205,14 @@ You should now be set to use the `cluster/kubectl.sh` script. For example try to
|
|||
Your cluster is running; you can list its nodes:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ ./cluster/kubectl.sh get nodes
|
||||
|
||||
NAME LABELS STATUS
|
||||
kubernetes-minion-0whl kubernetes.io/hostname=kubernetes-minion-0whl Ready
|
||||
kubernetes-minion-4jdf kubernetes.io/hostname=kubernetes-minion-4jdf Ready
|
||||
kubernetes-minion-epbe kubernetes.io/hostname=kubernetes-minion-epbe Ready
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Now start running some containers!
|
||||
|
@ -225,7 +221,7 @@ You can now use any of the cluster/kube-*.sh commands to interact with your VM m
|
|||
Before you start a container, there will be no pods, services, or replication controllers.
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ cluster/kubectl.sh get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
|
||||
|
@ -234,59 +230,59 @@ NAME LABELS SELECTOR IP(S) PORT(S)
|
|||
|
||||
$ cluster/kubectl.sh get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Start a container running nginx with a replication controller and three replicas:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
my-nginx my-nginx nginx run=my-nginx 3
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
When you list the pods, you will see that three containers have been started and are in the Waiting state:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ cluster/kubectl.sh get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
my-nginx-389da 1/1 Waiting 0 33s
|
||||
my-nginx-kqdjk 1/1 Waiting 0 33s
|
||||
my-nginx-nyj3x 1/1 Waiting 0 33s
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
You need to wait for the provisioning to complete; you can monitor the minions by running:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ sudo salt '*minion-1' cmd.run 'docker images'
|
||||
kubernetes-minion-1:
|
||||
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
|
||||
<none> <none> 96864a7d2df3 26 hours ago 204.4 MB
|
||||
kubernetes/pause latest 6c4579af347b 8 weeks ago 239.8 kB
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Once the Docker image for nginx has been downloaded, the container will start and you can list it:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ sudo salt '*minion-1' cmd.run 'docker ps'
|
||||
kubernetes-minion-1:
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
dbe79bf6e25b nginx:latest "nginx" 21 seconds ago Up 19 seconds k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f
|
||||
fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Going back to listing the pods, services, and replication controllers, you now have:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ cluster/kubectl.sh get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
my-nginx-389da 1/1 Running 0 33s
|
||||
|
@ -299,21 +295,21 @@ NAME LABELS SELECTOR IP(S) PORT(S)
|
|||
$ cluster/kubectl.sh get rc
|
||||
NAME IMAGE(S) SELECTOR REPLICAS
|
||||
my-nginx nginx run=my-nginx 3
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
We did not start any services, hence there are none listed. But we see three replicas displayed properly.
|
||||
Check the [guestbook](../../../examples/guestbook/README.html) application to learn how to create a service.
|
||||
Check the [guestbook](../../../examples/guestbook/README) application to learn how to create a service.
|
||||
You can already play with scaling the replicas with:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2
|
||||
$ ./cluster/kubectl.sh get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
my-nginx-kqdjk 1/1 Running 0 13m
|
||||
my-nginx-nyj3x 1/1 Running 0 13m
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
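If you want a first taste of services before heading to the guestbook, one hedged option (assuming the `my-nginx` replication controller above is still running) is to expose it directly:

{% highlight console %}
{% raw %}
$ ./cluster/kubectl.sh expose rc my-nginx --port=80
$ ./cluster/kubectl.sh get services
{% endraw %}

{% endhighlight %}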
|
||||
|
||||
Congratulations!
|
||||
|
@ -323,9 +319,9 @@ Congratulations!
|
|||
The following will run all of the end-to-end testing scenarios assuming you set your environment in `cluster/kube-env.sh`:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
NUM_MINIONS=3 hack/e2e-test.sh
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Troubleshooting
|
||||
|
@ -335,12 +331,12 @@ NUM_MINIONS=3 hack/e2e-test.sh
|
|||
By default, the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh`:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box
|
||||
export KUBERNETES_BOX_URL=path_of_your_kuber_box
|
||||
export KUBERNETES_PROVIDER=vagrant
|
||||
./cluster/kube-up.sh
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
#### I just created the cluster, but I am getting authorization errors!
|
||||
|
@ -348,21 +344,21 @@ export KUBERNETES_PROVIDER=vagrant
|
|||
You probably have an incorrect `~/.kubernetes_vagrant_auth` file for the cluster you are attempting to contact.
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
rm ~/.kubernetes_vagrant_auth
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
After using `kubectl.sh`, make sure that the correct credentials are set:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ cat ~/.kubernetes_vagrant_auth
|
||||
{
|
||||
"User": "vagrant",
|
||||
"Password": "vagrant"
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
#### I just created the cluster, but I do not see my container running!
|
||||
|
@ -383,9 +379,9 @@ Are you sure you built a release first? Did you install `net-tools`? For more cl
|
|||
You can control the number of nodes that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this by setting `NUM_MINIONS` to 1, like so:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
export NUM_MINIONS=1
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
#### I want my VMs to have more memory!
|
||||
|
@ -394,18 +390,18 @@ You can control the memory allotted to virtual machines with the `KUBERNETES_MEM
|
|||
Just set it to the number of megabytes you would like the machines to have. For example:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
export KUBERNETES_MEMORY=2048
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
If you need more granular control, you can set the amount of memory for the master and nodes independently. For example:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
export KUBERNETES_MASTER_MEMORY=1536
|
||||
export KUBERNETES_MINION_MEMORY=2048
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
#### I ran vagrant suspend and nothing works!
|
||||
|
|
|
@ -1,17 +1,13 @@
|
|||
---
|
||||
title: "Development Guide"
|
||||
---
|
||||
|
||||
|
||||
# Development Guide
|
||||
|
||||
# Releases and Official Builds
|
||||
|
||||
Official releases are built in Docker containers. Details are [here](http://releases.k8s.io/release-1.1/build/README.md). You can do simple builds and development with just a local Docker installation. If you want to build Go locally outside of Docker, please continue below.
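As a hedged pointer, the containerized build is typically a one-liner of the form below; treat it as a sketch and check the linked build README for the authoritative commands:

{% highlight sh %}
{% raw %}
# Sketch: build the Go binaries inside the release container
build/run.sh hack/build-go.sh
{% endraw %}

{% endhighlight %}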
|
||||
|
||||
## Go development environment
|
||||
|
||||
Kubernetes is written in [Go](http://golang.org) programming language. If you haven't set up Go development environment, please follow [this instruction](http://golang.org/doc/code.html) to install go tool and set up GOPATH. Ensure your version of Go is at least 1.3.
|
||||
Kubernetes is written in the [Go](http://golang.org) programming language. If you haven't set up a Go development environment, please follow [these instructions](http://golang.org/doc/code.html) to install the go tool and set up your GOPATH. Ensure your version of Go is at least 1.3.
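A quick sanity check of the toolchain (a sketch):

{% highlight sh %}
{% raw %}
go version    # should report go1.3 or newer
go env GOPATH
{% endraw %}

{% endhighlight %}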
|
||||
|
||||
## Git Setup
|
||||
|
||||
|
@ -31,56 +27,56 @@ Below, we outline one of the more common git workflows that core developers use.
|
|||
The commands below require that you have $GOPATH set ([$GOPATH docs](https://golang.org/doc/code.html#GOPATH)). We highly recommend you put Kubernetes' code into your GOPATH. Note: the commands below will not work if there is more than one directory in your `$GOPATH`.
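A quick sanity check for the single-directory requirement (a sketch, not part of the official workflow):

{% highlight sh %}
{% raw %}
# Should print 1; a larger number means your GOPATH has multiple entries
echo "$GOPATH" | tr ':' '\n' | grep -c .
{% endraw %}

{% endhighlight %}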
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
mkdir -p $GOPATH/src/k8s.io
|
||||
cd $GOPATH/src/k8s.io
|
||||
# Replace "$YOUR_GITHUB_USERNAME" below with your github username
|
||||
git clone https://github.com/$YOUR_GITHUB_USERNAME/kubernetes.git
|
||||
cd kubernetes
|
||||
git remote add upstream 'https://github.com/kubernetes/kubernetes.git'
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Create a branch and make changes
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
git checkout -b myfeature
|
||||
# Make your code changes
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Keeping your development fork in sync
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
git fetch upstream
|
||||
git rebase upstream/master
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Note: If you have write access to the main repository at github.com/kubernetes/kubernetes, you should modify your git configuration so that you can't accidentally push to upstream:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
git remote set-url --push upstream no_push
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Committing changes to your fork
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
git commit
|
||||
git push -f origin myfeature
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Creating a pull request
|
||||
|
||||
1. Visit https://github.com/$YOUR_GITHUB_USERNAME/kubernetes
|
||||
2. Click the "Compare and pull request" button next to your "myfeature" branch.
|
||||
3. Check out the pull request [process](pull-requests.html) for more details
|
||||
3. Check out the pull request [process](pull-requests) for more details
|
||||
|
||||
### When to retain commits and when to squash
|
||||
|
||||
|
@ -94,7 +90,7 @@ fixups (e.g. automated doc formatting), use one or more commits for the
|
|||
changes to tooling and a final commit to apply the fixup en masse. This makes
|
||||
reviews much easier.
|
||||
|
||||
See [Faster Reviews](faster_reviews.html) for more details.
|
||||
See [Faster Reviews](faster_reviews) for more details.
|
||||
|
||||
## godep and dependency management
|
||||
|
||||
|
@ -111,20 +107,20 @@ directly from mercurial.
|
|||
2) Create a new GOPATH for your tools and install godep:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
export GOPATH=$HOME/go-tools
|
||||
mkdir -p $GOPATH
|
||||
go get github.com/tools/godep
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
3) Add $GOPATH/bin to your path. Typically you'd add this to your ~/.profile:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
export GOPATH=$HOME/go-tools
|
||||
export PATH=$PATH:$GOPATH/bin
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Using godep
|
||||
|
@ -136,40 +132,40 @@ Here's a quick walkthrough of one way to use godeps to add or update a Kubernete
|
|||
_Devoting a separate directory is not required, but it is helpful to separate dependency updates from other changes._
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
export KPATH=$HOME/code/kubernetes
|
||||
mkdir -p $KPATH/src/k8s.io/kubernetes
|
||||
cd $KPATH/src/k8s.io/kubernetes
|
||||
git clone https://path/to/your/fork .
|
||||
# Or copy your existing local repo here. IMPORTANT: making a symlink doesn't work.
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
2) Set up your GOPATH.
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
# Option A: this will let your builds see packages that exist elsewhere on your system.
|
||||
export GOPATH=$KPATH:$GOPATH
|
||||
# Option B: This will *not* let your local builds see packages that exist elsewhere on your system.
|
||||
export GOPATH=$KPATH
|
||||
# Option B is recommended if you're going to mess with the dependencies.
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
3) Populate your new GOPATH.
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
cd $KPATH/src/k8s.io/kubernetes
|
||||
godep restore
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
4) Next, you can either add a new dependency or update an existing one.
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
# To add a new dependency, do:
|
||||
cd $KPATH/src/k8s.io/kubernetes
|
||||
go get path/to/dependency
|
||||
|
@ -181,7 +177,7 @@ cd $KPATH/src/k8s.io/kubernetes
|
|||
go get -u path/to/dependency
|
||||
# Change code in Kubernetes accordingly if necessary.
|
||||
godep update path/to/dependency/...
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
_If `go get -u path/to/dependency` fails with compilation errors, instead try `go get -d -u path/to/dependency`
|
||||
|
@ -203,41 +199,41 @@ Before committing any changes, please link/copy these hooks into your .git
|
|||
directory. This will keep you from accidentally committing non-gofmt'd go code.
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
cd kubernetes/.git/hooks/
|
||||
ln -s ../../hooks/pre-commit .
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
## Unit tests
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
cd kubernetes
|
||||
hack/test-go.sh
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Alternatively, you could also run:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
cd kubernetes
|
||||
godep go test ./...
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
If you only want to run unit tests in one package, you could run ``godep go test`` under the package directory. For example, the following commands will run all unit tests in package kubelet:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ cd kubernetes # step into the kubernetes directory.
|
||||
$ cd pkg/kubelet
|
||||
$ godep go test
|
||||
# some output from unit tests
|
||||
PASS
|
||||
ok k8s.io/kubernetes/pkg/kubelet 0.317s
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
## Coverage
|
||||
|
@ -247,10 +243,10 @@ Currently, collecting coverage is only supported for the Go unit tests.
|
|||
To run all unit tests and generate an HTML coverage report, run the following:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
cd kubernetes
|
||||
KUBE_COVER=y hack/test-go.sh
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
At the end of the run, an HTML report will be generated and its path printed to stdout.
|
||||
|
@ -258,10 +254,10 @@ At the end of the run, an the HTML report will be generated with the path printe
|
|||
To run tests and collect coverage in only one package, pass its relative path under the `kubernetes` directory as an argument, for example:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
cd kubernetes
|
||||
KUBE_COVER=y hack/test-go.sh pkg/kubectl
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Multiple arguments can be passed, in which case the coverage results will be combined for all tests run.
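For example, extending the single-package invocation above to two packages (a sketch):

{% highlight sh %}
{% raw %}
cd kubernetes
# Coverage for both packages is collected and combined into one report
KUBE_COVER=y hack/test-go.sh pkg/kubectl pkg/kubelet
{% endraw %}

{% endhighlight %}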
|
||||
|
@ -273,10 +269,10 @@ Coverage results for the project can also be viewed on [Coveralls](https://cover
|
|||
You need [etcd](https://github.com/coreos/etcd/releases/tag/v2.0.0) installed and available in your ``$PATH``.
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
cd kubernetes
|
||||
hack/test-integration.sh
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
## End-to-End tests
|
||||
|
@ -284,18 +280,18 @@ hack/test-integration.sh
|
|||
You can run an end-to-end test, which will bring up a master and two nodes, perform some tests, and then tear everything down. Make sure you have followed the getting started steps for your chosen cloud platform (which might involve changing the `KUBERNETES_PROVIDER` environment variable to something other than "gce").
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
cd kubernetes
|
||||
hack/e2e-test.sh
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Pressing Control-C should result in an orderly shutdown, but if something goes wrong and you still have some VMs running, you can force a cleanup with this command:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
go run hack/e2e.go --down
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Flag options
|
||||
|
@ -303,7 +299,7 @@ go run hack/e2e.go --down
|
|||
See the flag definitions in `hack/e2e.go` for more options, such as reusing an existing cluster. Here is an overview:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
# Build binaries for testing
|
||||
go run hack/e2e.go --build
|
||||
|
||||
|
@ -330,13 +326,13 @@ go run hack/e2e.go -v -test --test_args="--ginkgo.focus=Pods.*env"
|
|||
|
||||
# Alternately, if you have the e2e cluster up and no desire to see the event stream, you can run ginkgo-e2e.sh directly:
|
||||
hack/ginkgo-e2e.sh --ginkgo.focus=Pods.*env
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### Combining flags
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
# Flags can be combined, and their actions will take place in this order:
|
||||
# -build, -push|-up|-pushup, -test|-tests=..., -down
|
||||
# e.g.:
|
||||
|
@ -350,14 +346,14 @@ go run hack/e2e.go -build -pushup -test -down
|
|||
# kubectl output.
|
||||
go run hack/e2e.go -v -ctl='get events'
|
||||
go run hack/e2e.go -v -ctl='delete pod foobar'
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
## Conformance testing
|
||||
|
||||
End-to-end testing, as described above, is for [development
|
||||
distributions](writing-a-getting-started-guide.html). A conformance test is used on
|
||||
a [versioned distro](writing-a-getting-started-guide.html).
|
||||
distributions](writing-a-getting-started-guide). A conformance test is used on
|
||||
a [versioned distro](writing-a-getting-started-guide).
|
||||
|
||||
The conformance test runs a subset of the e2e-tests against a manually-created cluster. It does not
|
||||
require support for up/push/down and other operations. To run a conformance test, you need to know the
|
||||
|
@ -367,14 +363,14 @@ See [conformance-test.sh](http://releases.k8s.io/release-1.1/hack/conformance-te
|
|||
|
||||
## Testing out flaky tests
|
||||
|
||||
[Instructions here](flaky-tests.html)
|
||||
[Instructions here](flaky-tests)
|
||||
|
||||
## Regenerating the CLI documentation
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
hack/update-generated-docs.sh
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "End-2-End Testing in Kubernetes"
|
||||
---
|
||||
|
||||
|
||||
# End-2-End Testing in Kubernetes
|
||||
|
||||
## Overview
|
||||
|
||||
The end-2-end tests for Kubernetes provide a mechanism to test the behavior of the system and to ensure that end-user operations match developer specifications. In distributed systems it is not uncommon that a minor change passes all unit tests but causes unforeseen changes at the system level. Thus, the primary objectives of the end-2-end tests are to ensure consistent and reliable behavior of the Kubernetes code base, and to catch bugs early.
|
||||
|
@ -32,7 +28,7 @@ The output for the end-2-end tests will be a single binary called `e2e.test` und
|
|||
For the purposes of brevity, we will look at a subset of the options, which are listed below:
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
|
||||
-ginkgo.dryRun=false: If set, ginkgo will walk the test hierarchy without actually running anything. Best paired with -v.
|
||||
-ginkgo.failFast=false: If set, ginkgo will stop running a test suite after a failure occurs.
|
||||
-ginkgo.failOnPending=false: If set, ginkgo will mark the test suite as failed if any specs are pending.
|
||||
|
@ -45,18 +41,18 @@ For the purposes of brevity, we will look at a subset of the options, which are
|
|||
-prom-push-gateway="": The URL to prometheus gateway, so that metrics can be pushed during e2es and scraped by prometheus. Typically something like 127.0.0.1:9091.
|
||||
-provider="": The name of the Kubernetes provider (gce, gke, local, vagrant, etc.)
|
||||
-repo-root="../../": Root directory of kubernetes repository, for finding test files.
|
||||
{% endraw %}
|
||||
|
||||
```
|
||||
|
||||
Prior to running the tests, it is recommended that you first create a simple auth file in your home directory, e.g. `$HOME/.kubernetes_auth`, with the following:
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
|
||||
{
|
||||
"User": "root",
|
||||
"Password": ""
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
```
|
||||
|
||||
Next, you will need a cluster that you can test against. As mentioned earlier, you will want to execute `sudo ./hack/local-up-cluster.sh`. To get a sense of what tests exist, you may want to run:
|
||||
|
@ -89,14 +85,14 @@ If a behavior does not currently have coverage and a developer wishes to add a n
|
|||
|
||||
Another benefit of the end-2-end tests is the ability to create reproducible loads on the system, which can then be used to determine responsiveness or to analyze other characteristics of the system. For example, the density tests load the system to 30, 50, or 100 pods per node and measure characteristics such as throughput, API latency, etc.
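If you want to try the density tests yourself, one hedged invocation, mirroring the ginkgo focus pattern shown in the development guide and assuming the density specs have "Density" in their names, is:

```
{% raw %}
# Run only the density specs; the exact focus regex depends on the spec names in your tree
hack/ginkgo-e2e.sh --ginkgo.focus=Density
{% endraw %}
```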
|
||||
|
||||
For a good overview of how we analyze performance data, please read the following [post](http://blog.kubernetes.io/2015/09/kubernetes-performance-measurements-and.html)
|
||||
For a good overview of how we analyze performance data, please read the following [post](http://blog.kubernetes.io/2015/09/kubernetes-performance-measurements-and.html)
|
||||
|
||||
For developers who are interested in doing their own performance analysis, we recommend setting up [prometheus](http://prometheus.io/) for data collection, and using [promdash](http://prometheus.io/docs/visualization/promdash/) to visualize the data. There also exists the option of pushing your own metrics in from the tests using a [prom-push-gateway](http://prometheus.io/docs/instrumenting/pushing/). Containers for all of these components can be found [here](https://hub.docker.com/u/prom/).
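If you go the push-gateway route, the hook on the test side is the `-prom-push-gateway` flag listed earlier; a hedged sketch (run from wherever your `e2e.test` binary lives, with the gateway address and flags adjusted to your setup):

```
{% raw %}
# Push metrics from the e2e binary to a local prometheus push gateway
./e2e.test -prom-push-gateway=127.0.0.1:9091 -provider=local -repo-root=../../
{% endraw %}
```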
|
||||
|
||||
For more accurate measurements, you may wish to set up prometheus external to Kubernetes in an environment where it can access the major system components (api-server, controller-manager, scheduler). This is especially useful when attempting to gather metrics in a load-balanced api-server environment, because all api-servers can be analyzed independently as well as collectively. On startup, a configuration file is passed to prometheus that specifies the endpoints to scrape, as well as the sampling interval.
|
||||
|
||||
```
|
||||
{% raw %}
|
||||
|
||||
#prometheus.conf
|
||||
job: {
|
||||
name: "kubernetes"
|
||||
|
@ -109,7 +105,7 @@ job: {
|
|||
# controller-manager
|
||||
target: "http://localhost:10252/metrics"
|
||||
}
|
||||
{% endraw %}
|
||||
|
||||
```
|
||||
|
||||
Once prometheus is scraping the kubernetes endpoints, that data can then be plotted using promdash, and alerts can be created against the assortment of metrics that kubernetes provides.
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "How to get faster PR reviews"
|
||||
---
|
||||
|
||||
|
||||
# How to get faster PR reviews
|
||||
|
||||
Most of what is written here is not at all specific to Kubernetes, but it bears
|
||||
being written down in the hope that it will occasionally remind people of "best
|
||||
practices" around code reviews.
|
||||
|
@ -27,10 +23,10 @@ Let's talk about how to avoid this.
|
|||
|
||||
## 0. Familiarize yourself with project conventions
|
||||
|
||||
* [Development guide](development.html)
|
||||
* [Coding conventions](coding-conventions.html)
|
||||
* [API conventions](api-conventions.html)
|
||||
* [Kubectl conventions](kubectl-conventions.html)
|
||||
* [Development guide](development)
|
||||
* [Coding conventions](coding-conventions)
|
||||
* [API conventions](api-conventions)
|
||||
* [Kubectl conventions](kubectl-conventions)
|
||||
|
||||
## 1. Don't build a cathedral in one PR
|
||||
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Hunting flaky tests in Kubernetes"
|
||||
---
|
||||
|
||||
|
||||
# Hunting flaky tests in Kubernetes
|
||||
|
||||
Sometimes unit tests are flaky. This means that due to (usually) race conditions, they will occasionally fail, even though most of the time they pass.
|
||||
|
||||
We have a goal of 99.9% flake free tests. This means that there is only one flake in one thousand runs of a test.
|
||||
|
@ -18,7 +14,7 @@ There is a testing image `brendanburns/flake` up on the docker hub. We will use
|
|||
Create a replication controller with the following config:
|
||||
|
||||
{% highlight yaml %}
|
||||
{% raw %}
|
||||
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
|
@ -38,15 +34,15 @@ spec:
|
|||
value: pkg/tools
|
||||
- name: REPO_SPEC
|
||||
value: https://github.com/kubernetes/kubernetes
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Note that we omit the labels and the selector fields of the replication controller, because they will be populated from the labels field of the pod template by default.
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
kubectl create -f ./controller.yaml
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
This will spin up 24 instances of the test. They will run to completion, then exit, and the kubelet will restart them, accumulating more and more runs of the test.
|
||||
|
@ -54,7 +50,7 @@ You can examine the recent runs of the test by calling `docker ps -a` and lookin
|
|||
You can use this script to automate checking for failures, assuming your cluster is running on GCE and has four nodes:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
echo "" > output.txt
|
||||
for i in {1..4}; do
|
||||
echo "Checking kubernetes-minion-${i}"
|
||||
|
@ -62,15 +58,15 @@ for i in {1..4}; do
|
|||
gcloud compute ssh "kubernetes-minion-${i}" --command="sudo docker ps -a" >> output.txt
|
||||
done
|
||||
grep "Exited ([^0])" output.txt
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Eventually you will have sufficient runs for your purposes. At that point you can stop and delete the replication controller by running:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
kubectl stop replicationcontroller flakecontroller
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
If you do a final check for flakes with `docker ps -a`, ignore tasks that exited -1, since that's what happens when you stop the replication controller.
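For that final pass, a hedged sketch run on a node, dropping both clean exits and the -1 exits caused by stopping the controller:

{% highlight sh %}
{% raw %}
# List non-zero exits, but ignore the -1 exits left behind by stopping the controller
sudo docker ps -a | grep "Exited (" | grep -v "Exited (0)" | grep -v "Exited (-1)"
{% endraw %}

{% endhighlight %}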
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Getting Kubernetes Builds"
|
||||
---
|
||||
|
||||
|
||||
# Getting Kubernetes Builds
|
||||
|
||||
You can use [hack/get-build.sh](http://releases.k8s.io/release-1.1/hack/get-build.sh) directly, or use it as a reference for how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our CI and GCE e2e tests (essentially a nightly build).
|
||||
|
||||
Run `./hack/get-build.sh -h` for its usage.
|
||||
|
@ -12,36 +8,36 @@ Run `./hack/get-build.sh -h` for its usage.
|
|||
For example, to get a build at a specific version (v1.0.2):
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
./hack/get-build.sh v1.0.2
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Alternatively, to get the latest stable release:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
./hack/get-build.sh release/stable
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Finally, you can just print the latest or stable version:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
./hack/get-build.sh -v ci/latest
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
You can also use the gsutil tool to explore the Google Cloud Storage release buckets. Here are some examples:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
gsutil cat gs://kubernetes-release/ci/latest.txt # output the latest ci version number
|
||||
gsutil cat gs://kubernetes-release/ci/latest-green.txt # output the latest ci version number that passed gce e2e
|
||||
gsutil ls gs://kubernetes-release/ci/v0.20.0-29-g29a55cc/ # list the contents of a ci release
|
||||
gsutil ls gs://kubernetes-release/release # list all official releases and rcs
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
|
||||
|
|
|
@ -1,47 +1,43 @@
|
|||
---
|
||||
title: "Kubernetes Developer Guide"
|
||||
---
|
||||
|
||||
|
||||
# Kubernetes Developer Guide
|
||||
|
||||
The developer guide is for anyone wanting to either write code which directly accesses the
|
||||
Kubernetes API, or to contribute directly to the Kubernetes project.
|
||||
It assumes some familiarity with concepts in the [User Guide](../user-guide/README.html) and the [Cluster Admin
|
||||
Guide](../admin/README.html).
|
||||
It assumes some familiarity with concepts in the [User Guide](../user-guide/README) and the [Cluster Admin
|
||||
Guide](../admin/README).
|
||||
|
||||
|
||||
## The process of developing and contributing code to the Kubernetes project
|
||||
|
||||
* **On Collaborative Development** ([collab.md](collab.html)): Info on pull requests and code reviews.
|
||||
* **On Collaborative Development** ([collab.md](collab)): Info on pull requests and code reviews.
|
||||
|
||||
* **GitHub Issues** ([issues.md](issues.html)): How incoming issues are reviewed and prioritized.
|
||||
* **GitHub Issues** ([issues.md](issues)): How incoming issues are reviewed and prioritized.
|
||||
|
||||
* **Pull Request Process** ([pull-requests.md](pull-requests.html)): When and why pull requests are closed.
|
||||
* **Pull Request Process** ([pull-requests.md](pull-requests)): When and why pull requests are closed.
|
||||
|
||||
* **Faster PR reviews** ([faster_reviews.md](faster_reviews.html)): How to get faster PR reviews.
|
||||
* **Faster PR reviews** ([faster_reviews.md](faster_reviews)): How to get faster PR reviews.
|
||||
|
||||
* **Getting Recent Builds** ([getting-builds.md](getting-builds.html)): How to get recent builds including the latest builds that pass CI.
|
||||
* **Getting Recent Builds** ([getting-builds.md](getting-builds)): How to get recent builds including the latest builds that pass CI.
|
||||
|
||||
* **Automated Tools** ([automation.md](automation.html)): Descriptions of the automation that is running on our github repository.
|
||||
* **Automated Tools** ([automation.md](automation)): Descriptions of the automation that is running on our github repository.
|
||||
|
||||
|
||||
## Setting up your dev environment, coding, and debugging
|
||||
|
||||
* **Development Guide** ([development.md](development.html)): Setting up your development environment.
|
||||
* **Development Guide** ([development.md](development)): Setting up your development environment.
|
||||
|
||||
* **Hunting flaky tests** ([flaky-tests.md](flaky-tests.html)): We have a goal of 99.9% flake free tests.
|
||||
* **Hunting flaky tests** ([flaky-tests.md](flaky-tests)): We have a goal of 99.9% flake free tests.
|
||||
Here's how to run your tests many times.
|
||||
|
||||
* **Logging Conventions** ([logging.md](logging.html)]: Glog levels.
|
||||
* **Logging Conventions** ([logging.md](logging)): Glog levels.
|
||||
|
||||
* **Profiling Kubernetes** ([profiling.md](profiling.html)): How to plug in go pprof profiler to Kubernetes.
|
||||
* **Profiling Kubernetes** ([profiling.md](profiling)): How to plug in go pprof profiler to Kubernetes.
|
||||
|
||||
* **Instrumenting Kubernetes with a new metric**
|
||||
([instrumentation.md](instrumentation.html)): How to add a new metrics to the
|
||||
([instrumentation.md](instrumentation)): How to add a new metric to the
|
||||
Kubernetes code base.
|
||||
|
||||
* **Coding Conventions** ([coding-conventions.md](coding-conventions.html)):
|
||||
* **Coding Conventions** ([coding-conventions.md](coding-conventions)):
|
||||
Coding style advice for contributors.
|
||||
|
||||
|
||||
|
@ -49,33 +45,33 @@ Guide](../admin/README.html).
|
|||
|
||||
* API objects are explained at [http://kubernetes.io/third_party/swagger-ui/](http://kubernetes.io/third_party/swagger-ui/).
|
||||
|
||||
* **Annotations** ([docs/user-guide/annotations.md](../user-guide/annotations.html)): are for attaching arbitrary non-identifying metadata to objects.
|
||||
* **Annotations** ([docs/user-guide/annotations.md](../user-guide/annotations)): are for attaching arbitrary non-identifying metadata to objects.
|
||||
Programs that automate Kubernetes objects may use annotations to store small amounts of their state.
|
||||
|
||||
* **API Conventions** ([api-conventions.md](api-conventions.html)):
|
||||
* **API Conventions** ([api-conventions.md](api-conventions)):
|
||||
Defining the verbs and resources used in the Kubernetes API.
|
||||
|
||||
* **API Client Libraries** ([client-libraries.md](client-libraries.html)):
|
||||
* **API Client Libraries** ([client-libraries.md](client-libraries)):
|
||||
A list of existing client libraries, both supported and user-contributed.
|
||||
|
||||
|
||||
## Writing plugins
|
||||
|
||||
* **Authentication Plugins** ([docs/admin/authentication.md](../admin/authentication.html)):
|
||||
* **Authentication Plugins** ([docs/admin/authentication.md](../admin/authentication)):
|
||||
The current and planned states of authentication tokens.
|
||||
|
||||
* **Authorization Plugins** ([docs/admin/authorization.md](../admin/authorization.html)):
|
||||
* **Authorization Plugins** ([docs/admin/authorization.md](../admin/authorization)):
|
||||
Authorization applies to all HTTP requests on the main apiserver port.
|
||||
This doc explains the available authorization implementations.
|
||||
|
||||
* **Admission Control Plugins** ([admission_control](../design/admission_control.html))
|
||||
* **Admission Control Plugins** ([admission_control](../design/admission_control))
|
||||
|
||||
|
||||
## Building releases
|
||||
|
||||
* **Making release notes** ([making-release-notes.md](making-release-notes.html)): Generating release nodes for a new release.
|
||||
* **Making release notes** ([making-release-notes.md](making-release-notes)): Generating release notes for a new release.
|
||||
|
||||
* **Releasing Kubernetes** ([releasing.md](releasing.html)): How to create a Kubernetes release (as in version)
|
||||
* **Releasing Kubernetes** ([releasing.md](releasing)): How to create a Kubernetes release (as in version)
|
||||
and how the version information gets embedded into the built binaries.
|
||||
|
||||
|
||||
|
|
|
@ -8,16 +8,8 @@ Kubectl Conventions
|
|||
|
||||
Updated: 8/27/2015
|
||||
|
||||
**Table of Contents**
|
||||
<!-- BEGIN MUNGE: GENERATED_TOC -->
|
||||
{% include pagetoc.html %}
|
||||
|
||||
- [Principles](#principles)
|
||||
- [Command conventions](#command-conventions)
|
||||
- [Flag conventions](#flag-conventions)
|
||||
- [Output conventions](#output-conventions)
|
||||
- [Documentation conventions](#documentation-conventions)
|
||||
|
||||
<!-- END MUNGE: GENERATED_TOC -->
|
||||
|
||||
## Principles
|
||||
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Making release notes"
|
||||
---
|
||||
|
||||
|
||||
## Making release notes
|
||||
|
||||
This documents the process for making release notes for a release.
|
||||
|
||||
### 1) Note the PR number of the previous release
|
||||
|
@ -17,9 +13,9 @@ Find the most-recent PR that was merged with the current .0 release. Remember t
|
|||
### 2) Run the release-notes tool
|
||||
|
||||
{% highlight bash %}
|
||||
{% raw %}
|
||||
|
||||
${KUBERNETES_ROOT}/build/make-release-notes.sh $LASTPR $CURRENTPR
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
### 3) Trim the release notes
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Profiling Kubernetes"
|
||||
---
|
||||
|
||||
|
||||
# Profiling Kubernetes
|
||||
|
||||
This document explains how to plug in the profiler and how to profile Kubernetes services.
|
||||
|
||||
## Profiling library
|
||||
|
@ -16,11 +12,11 @@ Go comes with inbuilt 'net/http/pprof' profiling library and profiling web servi
|
|||
TL;DR: Add lines:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
m.mux.HandleFunc("/debug/pprof/", pprof.Index)
|
||||
m.mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
|
||||
m.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
to the init(c *Config) method in 'pkg/master/master.go' and import the 'net/http/pprof' package.
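The import side of that change is just the standard library package; a sketch (the real import block in 'pkg/master/master.go' contains many more entries):

{% highlight go %}
{% raw %}
import (
	// Imported by name (not blank) because the handlers above are registered explicitly.
	"net/http/pprof"
)
{% endraw %}

{% endhighlight %}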
|
||||
|
@ -32,17 +28,17 @@ In most use cases to use profiler service it's enough to do 'import _ net/http/p
|
|||
Even when the profiler is running, it is not entirely straightforward to use 'go tool pprof' with it. The problem is that, at least for dev purposes, the certificates generated for the APIserver are not signed by anyone trusted, and because secureServer serves only secure traffic it isn't straightforward to connect to the service. The best workaround I found is to create an ssh tunnel from the open unsecured port on kubernetes_master to some external server, and use this server as a proxy. To save everyone the hunt for the correct ssh flags, it is done by running:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
ssh kubernetes_master -L<local_port>:localhost:8080
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
or an analogous one for your cloud provider. Afterwards you can, for example, run
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
go tool pprof http://localhost:<local_port>/debug/pprof/profile
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
to get a 30-second CPU profile.
|
||||
|
|
|
@ -37,7 +37,7 @@ Automation
|
|||
----------
|
||||
|
||||
We use a variety of automation to manage pull requests. This automation is described in detail
|
||||
[elsewhere.](automation.html)
|
||||
[elsewhere.](automation)
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -1,10 +1,6 @@
|
|||
---
|
||||
title: "Releasing Kubernetes"
|
||||
---
|
||||
|
||||
|
||||
# Releasing Kubernetes
|
||||
|
||||
This document explains how to cut a release, and the theory behind it. If you
|
||||
just want to cut a release and move on with your life, you can stop reading
|
||||
after the first section.
|
||||
|
@ -38,9 +34,9 @@ can find the Git hash for a build by looking at the "Console Log", then look for
|
|||
`githash=`. You should see a line like:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
+ githash=v0.20.2-322-g974377b
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Because Jenkins builds frequently, if you're looking between jobs
|
||||
|
@ -55,9 +51,9 @@ oncall.
|
|||
Before proceeding to the next step:
|
||||
|
||||
{% highlight sh %}
|
||||
{% raw %}
|
||||
|
||||
export BRANCHPOINT=v0.20.2-322-g974377b
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
Where `v0.20.2-322-g974377b` is the git hash you decided on. This will become
|
||||
|
@ -95,7 +91,7 @@ In your git repo (you still have `${VER}` set from above right?):
|
|||
|
||||
#### Writing Release Notes
|
||||
|
||||
[This helpful guide](making-release-notes.html) describes how to write release
|
||||
[This helpful guide](making-release-notes) describes how to write release
|
||||
notes for a major/minor release. In the release template on GitHub, leave the
|
||||
last PR number that the tool finds for the `.0` release, so the next releaser
|
||||
doesn't have to hunt.
|
||||
|
@ -107,7 +103,7 @@ doesn't have to hunt.
|
|||
We cut `vX.Y.Z` releases from the `release-vX.Y` branch after all cherry picks
|
||||
to the branch have been resolved. You should ensure all outstanding cherry picks
|
||||
have been reviewed and merged and the branch validated on Jenkins (validation
|
||||
TBD). See the [Cherry Picks](cherry-picks.html) for more information on how to
|
||||
TBD). See the [Cherry Picks](cherry-picks) for more information on how to
|
||||
manage cherry picks prior to cutting the release.
|
||||
|
||||
#### Tagging and Merging
|
||||
|
@ -207,12 +203,12 @@ We are using `pkg/version/base.go` as the source of versioning in absence of
|
|||
information from git. Here is a sample of that file's contents:
|
||||
|
||||
{% highlight go %}
|
||||
{% raw %}
|
||||
|
||||
var (
|
||||
gitVersion string = "v0.4-dev" // version from git, output of $(git describe)
|
||||
gitCommit string = "" // sha1 from git, output of $(git rev-parse HEAD)
|
||||
)
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
This means a build with `go install` or `go get` or a build from a tarball will
|
||||
|
@ -292,7 +288,7 @@ As an example, Docker commit a327d9b91edf has a `v1.1.1-N-gXXX` label but it is
|
|||
not present in Docker `v1.2.0`:
|
||||
|
||||
{% highlight console %}
|
||||
{% raw %}
|
||||
|
||||
$ git describe a327d9b91edf
|
||||
v1.1.1-822-ga327d9b91edf
|
||||
|
||||
|
@ -300,7 +296,7 @@ $ git log --oneline v1.2.0..a327d9b91edf
|
|||
a327d9b91edf Fix data space reporting from Kb/Mb to KB/MB
|
||||
|
||||
(Non-empty output here means the commit is not present on v1.2.0.)
|
||||
{% endraw %}
|
||||
|
||||
{% endhighlight %}
|
||||
|
||||
## Release Notes
|
||||
|
|