Merge branch 'master' into fix-cm-link
commit 101414df5e

@@ -31,7 +31,7 @@ to talk to the Kubernetes API.
 API requests are tied to either a normal user or a service account, or are treated
 as anonymous requests. This means every process inside or outside the cluster, from
 a human user typing `kubectl` on a workstation, to `kubelets` on nodes, to members
-of the control plane, must authenticate when making requests to the the API server,
+of the control plane, must authenticate when making requests to the API server,
 or be treated as an anonymous user.
 
 ## Authentication strategies

@@ -99,7 +99,7 @@ Some possible patterns for communicating with pods in a DaemonSet are:
 - **Push**: Pods in the Daemon Set are configured to send updates to another service, such
   as a stats database. They do not have clients.
 - **NodeIP and Known Port**: Pods in the Daemon Set use a `hostPort`, so that the pods are reachable
-  via the node IPs. Clients knows the the list of nodes ips somehow, and know the port by convention.
+  via the node IPs. Clients knows the list of nodes ips somehow, and know the port by convention.
 - **DNS**: Create a [headless service](/docs/user-guide/services/#headless-services) with the same pod selector,
   and then discover DaemonSets using the `endpoints` resource or retrieve multiple A records from
   DNS.

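Note: the **DNS** pattern in the hunk above relies on a headless service, i.e. one with `clusterIP: None`. A minimal sketch, with hypothetical names (`daemon-svc` and the `app: my-daemon` selector are illustrative, not taken from the diff):

```yaml
# Sketch of a headless service for discovering DaemonSet pods.
# "daemon-svc" and the "app: my-daemon" label are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: daemon-svc
spec:
  clusterIP: None      # headless: DNS returns pod A records, not a virtual IP
  selector:
    app: my-daemon     # must match the DaemonSet's pod template labels
  ports:
  - port: 80
```

Because `clusterIP` is `None`, a DNS lookup of `daemon-svc` returns the A records of the matching pods directly, which is what lets clients discover every DaemonSet pod.
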
@@ -17,7 +17,7 @@ kubernetes manages lifecycle of all images through imageManager, with the cooperation
 of cadvisor.
 
 The policy for garbage collecting images takes two factors into consideration:
-`HighThresholdPercent` and `LowThresholdPercent`. Disk usage above the the high threshold
+`HighThresholdPercent` and `LowThresholdPercent`. Disk usage above the high threshold
 will trigger garbage collection. The garbage collection will delete least recently used images until the low
 threshold has been met.
 

@@ -45,7 +45,7 @@ kube-controller-manager
       --concurrent_rc_syncs int32 The number of replication controllers that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load (default 5)
       --configure-cloud-routes Should CIDRs allocated by allocate-node-cidrs be configured on the cloud provider. (default true)
       --controller-start-interval duration Interval between starting controller managers.
-      --daemonset-lookup-cache-size int32 The the size of lookup cache for daemonsets. Larger number = more responsive daemonsets, but more MEM load. (default 1024)
+      --daemonset-lookup-cache-size int32 The size of lookup cache for daemonsets. Larger number = more responsive daemonsets, but more MEM load. (default 1024)
       --deployment-controller-sync-period duration Period for syncing the deployments. (default 30s)
       --enable-dynamic-provisioning Enable dynamic provisioning for environments that support it. (default true)
       --enable-garbage-collector Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-apiserver. (default true)

@@ -89,8 +89,8 @@ StreamingProxyRedirects=true|false (ALPHA - default=false)
       --pv-recycler-pod-template-filepath-nfs string The file path to a pod definition used as a template for NFS persistent volume recycling
       --pv-recycler-timeout-increment-hostpath int32 the increment of time added per Gi to ActiveDeadlineSeconds for a HostPath scrubber pod. This is for development and testing only and will not work in a multi-node cluster. (default 30)
       --pvclaimbinder-sync-period duration The period for syncing persistent volumes and persistent volume claims (default 15s)
-      --replicaset-lookup-cache-size int32 The the size of lookup cache for replicatsets. Larger number = more responsive replica management, but more MEM load. (default 4096)
-      --replication-controller-lookup-cache-size int32 The the size of lookup cache for replication controllers. Larger number = more responsive replica management, but more MEM load. (default 4096)
+      --replicaset-lookup-cache-size int32 The size of lookup cache for replicatsets. Larger number = more responsive replica management, but more MEM load. (default 4096)
+      --replication-controller-lookup-cache-size int32 The size of lookup cache for replication controllers. Larger number = more responsive replica management, but more MEM load. (default 4096)
       --resource-quota-sync-period duration The period for syncing quota usage status in the system (default 5m0s)
       --root-ca-file string If set, this root certificate authority will be included in service account's token secret. This must be a valid PEM-encoded CA bundle.
       --route-reconciliation-period duration The period for reconciling routes created for Nodes by cloud provider. (default 10s)

@@ -17,7 +17,7 @@ various mechanisms (primarily through the apiserver) and ensures that the containers
 described in those PodSpecs are running and healthy. The kubelet doesn't manage
 containers which were not created by Kubernetes.
 
-Other than from an PodSpec from the apiserver, there are three ways that a container
+Other than from a PodSpec from the apiserver, there are three ways that a container
 manifest can be provided to the Kubelet.
 
 File: Path passed as a flag on the command line. This file is rechecked every 20

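Note: the `File` source in the hunk above reads pod manifests from disk. A minimal sketch of such a manifest, assuming the kubelet's `--pod-manifest-path` flag and hypothetical names:

```yaml
# Sketch of a pod manifest the kubelet could read from a file on disk,
# e.g. via --pod-manifest-path ("static-web" and "nginx" are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
```
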
@@ -36,7 +36,7 @@ Each critical add-on has to tolerate it,
 the other pods shouldn't tolerate the taint. The tain is removed once the add-on is successfully scheduled.
 
 *Warning:* currently there is no guarantee which node is chosen and which pods are being killed
-in order to schedule crical pod, so if rescheduler is enabled you pods might be occasionally
+in order to schedule critical pods, so if rescheduler is enabled you pods might be occasionally
 killed for this purpose.
 
 ## Config

@@ -19,7 +19,7 @@ The installation uses a tool called `kubeadm` which is part of Kubernetes.
 This process works with local VMs, physical servers and/or cloud servers.
 It is simple enough that you can easily integrate its use into your own automation (Terraform, Chef, Puppet, etc).
 
-See the full [`kubeadm` reference](/docs/admin/kubeadm) for information on all `kubeadm` command-line flags and for advice on automating `kubeadm` itself.
+See the full `kubeadm` [reference](/docs/admin/kubeadm) for information on all `kubeadm` command-line flags and for advice on automating `kubeadm` itself.
 
 **The `kubeadm` tool is currently in alpha but please try it out and give us [feedback](/docs/getting-started-guides/kubeadm/#feedback)!
 Be sure to read the [limitations](#limitations); in particular note that kubeadm doesn't have great support for

@@ -12,7 +12,7 @@ In Kubernetes version 1.5, Windows Server Containers for Kubernetes is supported
 1. Kubernetes control plane running on existing Linux infrastructure (version 1.5 or later)
 2. Kubenet network plugin setup on the Linux nodes
 3. Windows Server 2016 (RTM version 10.0.14393 or later)
-4. Docker Version 1.12.2-cs2-ws-beta or later
+4. Docker Version 1.12.2-cs2-ws-beta or later for Windows Server nodes (Linux nodes and Kubernetes control plane can run any Kubernetes supported Docker Version)
 
 ## Networking
 Network is achieved using L3 routing. Because third-party networking plugins (e.g. flannel, calico, etc) don’t natively work on Windows Server, existing technology that is built into the Windows and Linux operating systems is relied on. In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node will be connected to the /24 subnet. This allows pods on the same node to communicate with each other. In order to enable networking between pods running on different nodes, routing features that are built into Windows Server 2016 and Linux are used.

@@ -40,6 +40,7 @@ To run Windows Server Containers on Kubernetes, you'll need to set up both your
 2. DNS support for Windows recently got merged to docker master and is currently not supported in a stable docker release. To use DNS build docker from master or download the binary from [Docker master](https://master.dockerproject.org/)
 3. Pull the `apprenda/pause` image from `https://hub.docker.com/r/apprenda/pause`
 4. RRAS (Routing) Windows feature enabled
+5. Install a VMSwitch of type `Internal`, by running `New-VMSwitch -Name KubeProxySwitch -SwitchType Internal` command in *PowerShell* window. This will create a new Network Interface with name `vEthernet (KubeProxySwitch)`. This interface will be used by kube-proxy to add Service IPs.
 
 **Linux Host Setup**
 

@@ -127,8 +128,8 @@ To start kube-proxy on your Windows node:
 
 Run the following in a PowerShell window with administrative privileges. Be aware that if the node reboots or the process exits, you will have to rerun the commands below to restart the kube-proxy.
 
-1. Set environment variable *INTERFACE_TO_ADD_SERVICE_IP* value to a node only network interface. The interface created when docker is installed should work
-`$env:INTERFACE_TO_ADD_SERVICE_IP = "vEthernet (HNS Internal NIC)"`
+1. Set environment variable *INTERFACE_TO_ADD_SERVICE_IP* value to `vEthernet (KubeProxySwitch)` which we created in **_Windows Host Setup_** above
+`$env:INTERFACE_TO_ADD_SERVICE_IP = "vEthernet (KubeProxySwitch)"`
 
 2. Run *kube-proxy* executable using the below command
 `.\proxy.exe --v=3 --proxy-mode=userspace --hostname-override=<ip address/hostname of the windows node> --master=<api server location> --bind-address=<ip address of the windows node>`

@@ -12,7 +12,15 @@ external IP address.
-* Install [kubectl](http://kubernetes.io/docs/user-guide/prereqs).
+
+{% capture prerequisites %}
+
+{% include task-tutorial-prereqs.md %}
+
 * Use a cloud provider like Google Container Engine or Amazon Web Services to
 create a Kubernetes cluster. This tutorial creates an
 [external load balancer](/docs/user-guide/load-balancer/),
 which requires a cloud provider.
+
 * Configure `kubectl` to communicate with your Kubernetes API server. For
 instructions, see the documentation for your cloud provider.
+
+{% endcapture %}
+

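Note: the external load balancer that this tutorial builds is requested with `type: LoadBalancer` on a Service. A minimal sketch, with hypothetical names and ports (`example-service`, `app: example`, and 8080 are illustrative assumptions):

```yaml
# Sketch of a Service that requests an external load balancer from the
# cloud provider ("example-service" and the selector are hypothetical).
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: LoadBalancer   # cloud provider provisions the external IP
  selector:
    app: example
  ports:
  - port: 8080         # port exposed on the load balancer
    targetPort: 8080   # port the selected pods listen on
```

Once the cloud provider finishes provisioning, the assigned address appears in the Service's `status.loadBalancer` field.
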
@@ -328,7 +328,7 @@ Host: k8s-master:8080
 ```
 
 To consume opaque resources in pods, include the name of the opaque
-resource as a key in the the `spec.containers[].resources.requests` map.
+resource as a key in the `spec.containers[].resources.requests` map.
 
 The pod will be scheduled only if all of the resource requests are
 satisfied (including cpu, memory and any opaque resources.) The pod will

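Note: to illustrate the sentence fixed above, a pod requests an opaque resource alongside cpu and memory under `spec.containers[].resources.requests`. A minimal sketch, assuming the v1.5 alpha key prefix for opaque integer resources (`pod.alpha.kubernetes.io/opaque-int-resource-*`); the pod and resource names are hypothetical:

```yaml
# Sketch of a pod requesting an opaque resource; "opaque-int-resource-foo"
# uses the assumed alpha key prefix and is a hypothetical resource name.
apiVersion: v1
kind: Pod
metadata:
  name: oir-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
        pod.alpha.kubernetes.io/opaque-int-resource-foo: 1
```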