Fix a couple typos. (#5709)
parent 22860da3c1
commit 3060ec3dea

@@ -40,8 +40,8 @@ Successfully running cloud-controller-manager requires some changes to your cluster

Keep in mind that setting up your cluster to use cloud controller manager will change your cluster behaviour in a few ways:

-* kubelets specifying `--cloud-provider=external` will add a taint `node.cloudprovider.kubernetes.io/uninitialized` with an effect `NoSchedule` during initialization. This marks the node as needing a second initialization from an external controller before it can be scheduled work. Note that in the event that cloud controller manager is not available, new nodes in the cluster will be left unscheduable. The taint is important since the scheduler may require cloud specific information about nodes such as it's region or type (high cpu, gpu, high memory, spot instance, etc).
-* cloud information about nodes in the cluster will no longer be retrieved using local metadata, but instead all API calls to retreive node information will go through cloud controller manager. This may mean you can restrict access to your cloud API on the kubelets for better security. For larger clusters you may want to consider if cloud controller manager will hit rate limits since it is now responsible for almost all API calls to your cloud from within the cluster.
+* kubelets specifying `--cloud-provider=external` will add a taint `node.cloudprovider.kubernetes.io/uninitialized` with an effect `NoSchedule` during initialization. This marks the node as needing a second initialization from an external controller before it can be scheduled work. Note that in the event that cloud controller manager is not available, new nodes in the cluster will be left unschedulable. The taint is important since the scheduler may require cloud specific information about nodes such as their region or type (high cpu, gpu, high memory, spot instance, etc).
+* cloud information about nodes in the cluster will no longer be retrieved using local metadata, but instead all API calls to retrieve node information will go through cloud controller manager. This may mean you can restrict access to your cloud API on the kubelets for better security. For larger clusters you may want to consider if cloud controller manager will hit rate limits since it is now responsible for almost all API calls to your cloud from within the cluster.

As of v1.8, cloud controller manager can implement:
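
As a concrete illustration of the taint described in the hunk above (a minimal sketch; the node name is hypothetical), a node registered by a kubelet running with `--cloud-provider=external` looks roughly like this until cloud controller manager initializes it:

```yaml
# Sketch only: the taint a kubelet started with --cloud-provider=external
# places on its Node; cloud controller manager removes it after initialization.
apiVersion: v1
kind: Node
metadata:
  name: node-1            # hypothetical node name
spec:
  taints:
  - key: node.cloudprovider.kubernetes.io/uninitialized
    value: "true"
    effect: NoSchedule
```

Pods that must schedule before this initialization completes, such as the cloud controller manager's own deployment, carry a matching toleration for this taint key.
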

@@ -78,13 +78,13 @@ Cloud controller manager does not implement any of the volume controllers found

### Scalability

-In the previous architecture for cloud providers, we relied on kubelets using a local metadata service to retreive node information about itself. With this new architecture, we now fully rely on the cloud controller managers to retrieve information for all nodes. For very larger clusters, you should consider possible bottle necks such as resource requirements and API rate limiting.
+In the previous architecture for cloud providers, we relied on kubelets using a local metadata service to retrieve node information about itself. With this new architecture, we now fully rely on the cloud controller managers to retrieve information for all nodes. For very larger clusters, you should consider possible bottle necks such as resource requirements and API rate limiting.
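
As a hedged sketch of one knob this implies (the flag values, image, and binary path below are assumptions, not recommendations): cloud controller manager accepts the generic controller-manager flags `--kube-api-qps` and `--kube-api-burst`, which bound its request rate to the Kubernetes apiserver. Rate limits on the cloud provider's own API are provider-specific and are not governed by these flags.

```yaml
# Fragment of a cloud-controller-manager pod spec tuned for a larger cluster.
containers:
- name: cloud-controller-manager
  image: registry.example.com/cloud-controller-manager:v1.8.0  # hypothetical image
  command:
  - /usr/local/bin/cloud-controller-manager    # assumed binary location
  - --cloud-provider=<your-provider>           # your cloud provider's name
  - --kube-api-qps=50                          # sustained requests/sec to the apiserver
  - --kube-api-burst=100                       # short bursts allowed above that rate
```
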
### Chicken and Egg

-The goal of the cloud controller manager project is to decouple development of cloud features from the core Kubernetes project. Unforunately, many aspects of the Kubernetes project has assumptions that cloud provider features are tightly integrated into the project. As a result, adopting this new architecture can create several situations where a request is being made for information from a cloud provider, but the cloud controller manager may not be able to return that information without the original request being complete.
+The goal of the cloud controller manager project is to decouple development of cloud features from the core Kubernetes project. Unfortunately, many aspects of the Kubernetes project has assumptions that cloud provider features are tightly integrated into the project. As a result, adopting this new architecture can create several situations where a request is being made for information from a cloud provider, but the cloud controller manager may not be able to return that information without the original request being complete.

-A good example of this is the TLS bootstrapping feature in the Kubelet. Currently, TLS bootstraping assumes that the Kubelet has the ability to ask the cloud provider (or a local metadata service) for all its address types (private, public, etc) but cloud controller manager cannot set a node's address types without being initialzed in the first place which requires that the kubelet has TLS certificates to communicate with the apiserver.
+A good example of this is the TLS bootstrapping feature in the Kubelet. Currently, TLS bootstrapping assumes that the Kubelet has the ability to ask the cloud provider (or a local metadata service) for all its address types (private, public, etc) but cloud controller manager cannot set a node's address types without being initialized in the first place which requires that the kubelet has TLS certificates to communicate with the apiserver.
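
To make the address types in question concrete (a sketch with made-up values): they live in the Node object's status, which cloud controller manager populates during node initialization and which therefore cannot be filled in before the kubelet can talk to the apiserver:

```yaml
# Fragment of a Node's status after cloud controller manager has
# initialized the node; the addresses are made-up examples.
status:
  addresses:
  - type: Hostname
    address: node-1          # hypothetical
  - type: InternalIP
    address: 10.0.0.4        # example private address
  - type: ExternalIP
    address: 203.0.113.10    # example public address
```
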
As this initiative evolves, changes will be made to address these issues in upcoming releases.