Re-work networking section to be more clear

reviewable/pr1174/r3
Casey Davenport 2016-09-16 15:57:58 -07:00
parent 2ee8a639fb
commit b0ce198615
1 changed file with 28 additions and 21 deletions

...the node is added. A process in one pod should be able to communicate with
another pod using the IP of the second pod. This connectivity can be
accomplished in two ways:
- **Using an overlay network**
  - An overlay network obscures the underlying network architecture from the
    pod network through traffic encapsulation (e.g., vxlan).
  - Encapsulation reduces performance, though exactly how much depends on your solution.
- **Without an overlay network**
  - Configure the underlying network fabric (switches, routers, etc.) to be aware of pod IP addresses.
  - This does not require the encapsulation provided by an overlay, and so can achieve
    better performance.

Which method you choose depends on your environment and requirements. There are various ways
to implement one of the above options:

- **Use a network plugin which is called by Kubernetes**
  - Kubernetes supports the [CNI](https://github.com/containernetworking/cni) network plugin interface
    (a minimal example configuration is sketched after this list).
  - There are a number of solutions which provide plugins for Kubernetes:
    - [Flannel](https://github.com/coreos/flannel)
    - [Calico](https://github.com/projectcalico/calico-containers)
    - [Weave](http://weave.works/)
    - [Open vSwitch (OVS)](http://openvswitch.org/)
    - More information on network plugins can be found [here](/docs/admin/networking#how-to-achieve-this).
    - You can also write your own.
- **Compile support directly into Kubernetes**
  - This can be done by implementing the "Routes" interface of a Cloud Provider module.
  - The Google Compute Engine ([GCE](/docs/getting-started-guides/gce)) and [AWS](/docs/getting-started-guides/aws) guides use this approach.
- **Configure the network external to Kubernetes**
  - This can be done by manually running commands, or through a set of externally maintained scripts.
  - You have to implement this yourself, but it can give you an extra degree of flexibility.
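
For the network-plugin option, the sketch below shows what a minimal CNI configuration might look like. This is an illustrative assumption, not a prescription: it uses the CNI project's reference `bridge` and `host-local` plugins, an example file name, bridge name, and per-node subnet, and assumes the kubelet is started with `--network-plugin=cni` (the kubelet reads configurations from `/etc/cni/net.d` by default).

```shell
# Sketch only: one possible CNI configuration using the reference bridge and
# host-local IPAM plugins. File name, bridge name, and subnet are examples;
# each node would use its own pod subnet.
mkdir -p /etc/cni/net.d
cat > /etc/cni/net.d/10-bridge.conf <<'EOF'
{
  "name": "k8s-pod-network",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24",
    "routes": [ { "dst": "0.0.0.0/0" } ]
  }
}
EOF
```

For the "external to Kubernetes" option, the equivalent manual step is to program a route for each node's pod CIDR on your routers or nodes, for example with `ip route add 10.244.1.0/24 via <node-ip>`.
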
You will need to select an address range for the Pod IPs. Note that IPv6 is not yet supported for Pod IPs.

- Various approaches:
  - GCE: each project has its own `10.0.0.0/8`. Carve off a `/16` for each
    cluster. Each node gets a further subdivision of this space.
  - AWS: use one VPC for the whole organization and carve off a chunk for each
    cluster, or use a different VPC for each cluster.
- Allocate one CIDR subnet for each node's PodIPs, or a single large CIDR
  from which smaller CIDRs are automatically allocated to each node.
- You need max-pods-per-node * max-number-of-nodes IPs in total. A `/24` per
node supports 254 pods per machine and is a common choice. If IPs are
scarce, a `/26` (62 pods per machine) or even a `/27` (30 pods) may be sufficient.
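
To make the arithmetic concrete, here is a rough sizing sketch using assumed example values (a `/16` cluster CIDR split into per-node `/24`s); adjust the numbers for your own cluster. The controller-manager flags in the comment are the ones that enable automatic per-node CIDR allocation.

```shell
# Rough capacity check for the pod address range (example numbers only).
# A /16 cluster CIDR split into per-node /24s gives 256 node subnets,
# each with 254 usable pod IPs.
MAX_PODS_PER_NODE=254
MAX_NODES=256
echo "Pod IPs needed: $(( MAX_PODS_PER_NODE * MAX_NODES ))"   # 65024, fits in a /16

# To have Kubernetes carve the per-node CIDRs out of one large range
# automatically, run the controller manager with, e.g.:
#   kube-controller-manager --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16 ...
```
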
Also, you need to pick a static IP for the master node.

Kubernetes enables the definition of fine-grained network policy between Pods
using the [NetworkPolicy](/docs/user-guide/networkpolicy) resource.

Not all networking providers support the Kubernetes NetworkPolicy features.
For clusters which choose to enable NetworkPolicy, the
[Calico policy controller addon](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/calico-policy-controller)
can enforce the NetworkPolicy API on top of native cloud-provider networking,
Flannel, or Calico networking.
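
As a sketch of what a NetworkPolicy object looks like (the labels, namespace, and port below are example values, and `extensions/v1beta1` is the beta API group current at the time of writing), the following policy only allows pods labelled `role: frontend` to reach pods labelled `role: db` on TCP port 6379:

```shell
# Sketch only: restrict ingress to "role: db" pods so that only
# "role: frontend" pods in the same namespace may connect on TCP port 6379.
kubectl create -f - <<EOF
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
EOF
```

Note that with this API version, ingress isolation must also be switched on per namespace (via a namespace annotation described in the NetworkPolicy user guide linked above) before the policy has any effect.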