Merge pull request #22558 from tengqm/fix-links-concepts-1

Fix links in concepts section (2)

commit 776ce402b5
@@ -34,14 +34,14 @@ Before choosing a guide, here are some considerations:

 - Do you **just want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? If the
   latter, choose an actively-developed distro. Some distros only use binary releases, but
   offer a greater variety of choices.
-- Familiarize yourself with the [components](/docs/admin/cluster-components/) needed to run a cluster.
+- Familiarize yourself with the [components](/docs/concepts/overview/components/) needed to run a cluster.

 ## Managing a cluster

 * [Managing a cluster](/docs/tasks/administer-cluster/cluster-management/) describes several topics related to the lifecycle of a cluster: creating a new cluster, upgrading your cluster’s master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a running cluster.

-* Learn how to [manage nodes](/docs/concepts/nodes/node/).
+* Learn how to [manage nodes](/docs/concepts/architecture/nodes/).

 * Learn how to set up and manage the [resource quota](/docs/concepts/policy/resource-quotas/) for shared clusters.
@@ -59,14 +59,14 @@ Before choosing a guide, here are some considerations:

 * [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/) explains plug-ins which intercept requests to the Kubernetes API server after authentication and authorization.

-* [Using Sysctls in a Kubernetes Cluster](/docs/concepts/cluster-administration/sysctl-cluster/) describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters .
+* [Using Sysctls in a Kubernetes Cluster](/docs/tasks/administer-cluster/sysctl-cluster/) describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters.

 * [Auditing](/docs/tasks/debug-application-cluster/audit/) describes how to interact with Kubernetes' audit logs.

 ### Securing the kubelet
 * [Control Plane-Node communication](/docs/concepts/architecture/control-plane-node-communication/)
 * [TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
-* [Kubelet authentication/authorization](/docs/admin/kubelet-authentication-authorization/)
+* [Kubelet authentication/authorization](/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/)

 ## Optional Cluster Services
@@ -5,35 +5,30 @@ content_type: concept

 <!-- overview -->

 Add-ons extend the functionality of Kubernetes.

 This page lists some of the available add-ons and links to their respective installation instructions.

 Add-ons in each section are sorted alphabetically - the ordering does not imply any preferential status.
-
-
-
-
-

 <!-- body -->

 ## Networking and Network Policy

 * [ACI](https://www.github.com/noironetworks/aci-containers) provides integrated container networking and network security with Cisco ACI.
 * [Calico](https://docs.projectcalico.org/latest/introduction/) is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer.
 * [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico, providing networking and network policy.
 * [Cilium](https://github.com/cilium/cilium) is an L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation mode are supported, and it can work on top of other CNI plugins.
 * [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave.
-* [Contiv](http://contiv.github.io) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](http://github.com/contiv). The [installer](http://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
-* [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads.
+* [Contiv](https://contiv.github.io) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](https://github.com/contiv). The [installer](https://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
+* [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads.
 * [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md) is an overlay network provider that can be used with Kubernetes.
 * [Knitter](https://github.com/ZTE/Knitter/) is a plugin to support multiple network interfaces in a Kubernetes pod.
 * [Multus](https://github.com/Intel-Corp/multus-cni) is a Multi plugin for multiple network support in Kubernetes to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes.
 * [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is an OVN-based CNI controller plugin that provides cloud-native service function chaining (SFC), multiple OVN overlay networks, dynamic subnet creation, dynamic creation of virtual networks, VLAN provider networks, and direct provider networks. It is pluggable with other multi-network plugins, and is ideal for edge-based, cloud-native workloads in multi-cluster networking.
 * [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
 * [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring.
-* [Romana](http://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize).
+* [Romana](https://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize).
 * [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.

 ## Service Discovery
@@ -8,8 +8,6 @@ weight: 30

 This page explains how to manage Kubernetes running on a specific
 cloud provider.
-
-
 <!-- body -->
 ### kubeadm
 [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) is a popular option for creating Kubernetes clusters.
@@ -46,8 +44,10 @@ controllerManager:
 ```

 The in-tree cloud providers typically need both `--cloud-provider` and `--cloud-config` specified in the command lines
-for the [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) and the
-[kubelet](/docs/admin/kubelet/). The contents of the file specified in `--cloud-config` for each provider is documented below as well.
+for the [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/),
+[kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) and the
+[kubelet](/docs/reference/command-line-tools-reference/kubelet/).
+The contents of the file specified in `--cloud-config` for each provider is documented below as well.

 For all external cloud providers, please follow the instructions on the individual repositories,
 which are listed under their headings below, or one may view [the list of all repositories](https://github.com/kubernetes?q=cloud-provider-&type=&language=)
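For illustration only, the two flags above can be wired into a kubeadm-managed control plane through `extraArgs`. This is a minimal sketch, not taken from the page being patched; the provider name and config path are placeholders:

```yaml
# Hypothetical kubeadm ClusterConfiguration: passes --cloud-provider and
# --cloud-config to the control-plane components that kubeadm starts.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: "openstack"                 # placeholder provider name
    cloud-config: "/etc/kubernetes/cloud.conf"  # placeholder config path
controllerManager:
  extraArgs:
    cloud-provider: "openstack"
    cloud-config: "/etc/kubernetes/cloud.conf"
```

With an in-tree provider, the kubelet takes the same two flags on its own command line.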
@@ -94,7 +94,7 @@ Different settings can be applied to a load balancer service in AWS using _annotations_

 * `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix`: Used to specify access log s3 bucket prefix.
 * `service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags`: Used on the service to specify a comma-separated list of key-value pairs which will be recorded as additional tags in the ELB. For example: `"Key1=Val1,Key2=Val2,KeyNoVal1=,KeyNoVal2"`.
 * `service.beta.kubernetes.io/aws-load-balancer-backend-protocol`: Used on the service to specify the protocol spoken by the backend (pod) behind a listener. If `http` (default) or `https`, an HTTPS listener that terminates the connection and parses headers is created. If set to `ssl` or `tcp`, a "raw" SSL listener is used. If set to `http` and `aws-load-balancer-ssl-cert` is not used then an HTTP listener is used.
-* `service.beta.kubernetes.io/aws-load-balancer-ssl-cert`: Used on the service to request a secure listener. Value is a valid certificate ARN. For more, see [ELB Listener Config](http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html) CertARN is an IAM or CM certificate ARN, for example `arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012`.
+* `service.beta.kubernetes.io/aws-load-balancer-ssl-cert`: Used on the service to request a secure listener. Value is a valid certificate ARN. For more, see [ELB Listener Config](https://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html). CertARN is an IAM or CM certificate ARN, for example `arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012`.
 * `service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled`: Used on the service to enable or disable connection draining.
 * `service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout`: Used on the service to specify a connection draining timeout.
 * `service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout`: Used on the service to specify the idle connection timeout.
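As a hedged sketch of how the annotations in this hunk attach to a `Service` of type `LoadBalancer` (the name and selector are invented; the annotation keys and the example ARN come from the list above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-web            # placeholder name
  annotations:
    # Terminate TLS at the ELB and speak plain HTTP to the pods.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
spec:
  type: LoadBalancer
  selector:
    app: example-web           # placeholder selector
  ports:
  - port: 443
    targetPort: 8080
```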
@@ -358,13 +358,10 @@ Kubernetes network plugin and should appear in the `[Route]` section of the

 the `extraroutes` extension then use `router-id` to specify a router to add
 routes to. The router chosen must span the private networks containing your
 cluster nodes (typically there is only one node network, and this value should be
-the default router for the node network). This value is required to use [kubenet]
+the default router for the node network). This value is required to use
+[kubenet](/docs/concepts/cluster-administration/network-plugins/#kubenet)
 on OpenStack.

-[kubenet]: /docs/concepts/cluster-administration/network-plugins/#kubenet
-
-

 ## OVirt

 ### Node Name
@@ -13,9 +13,6 @@ Application and systems logs can help you understand what is happening inside yo

 However, the native functionality provided by a container engine or runtime is usually not enough for a complete logging solution. For example, if a container crashes, a pod is evicted, or a node dies, you'll usually still want to access your application's logs. As such, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called _cluster-level logging_. Cluster-level logging requires a separate backend to store, analyze, and query logs. Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster.
-
-
-
 <!-- body -->

 Cluster-level logging architectures are described under the assumption that
@@ -135,7 +132,7 @@ Because the logging agent must run on every node, it's common to implement it as

 Using a node-level logging agent is the most common and encouraged approach for a Kubernetes cluster, because it creates only one agent per node, and it doesn't require any changes to the applications running on the node. However, node-level logging _only works for applications' standard output and standard error_.

-Kubernetes doesn't specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: [Stackdriver Logging](/docs/user-guide/logging/stackdriver) for use with Google Cloud Platform, and [Elasticsearch](/docs/user-guide/logging/elasticsearch). You can find more information and instructions in the dedicated documents. Both use [fluentd](http://www.fluentd.org/) with custom configuration as an agent on the node.
+Kubernetes doesn't specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: [Stackdriver Logging](/docs/tasks/debug-application-cluster/logging-stackdriver/) for use with Google Cloud Platform, and [Elasticsearch](/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/). You can find more information and instructions in the dedicated documents. Both use [fluentd](https://www.fluentd.org/) with custom configuration as an agent on the node.

 ### Using a sidecar container with the logging agent
@@ -242,7 +239,7 @@ a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to c

 {{< note >}}
 The configuration of fluentd is beyond the scope of this article. For
 information about configuring fluentd, see the
-[official fluentd documentation](http://docs.fluentd.org/).
+[official fluentd documentation](https://docs.fluentd.org/).
 {{< /note >}}

 The second file describes a pod that has a sidecar container running fluentd.
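As a rough sketch of the pattern that pod implements — an application container writing to a file on a shared volume, and a fluentd sidecar picking it up — something like the following; the image, paths, and ConfigMap name are placeholders, not the actual files referenced on that page:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter-with-agent     # placeholder name
spec:
  containers:
  - name: count
    image: busybox
    # Write one log line per second to a file on the shared volume.
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)" >> /var/log/count.log; i=$((i+1)); sleep 1; done']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: fluentd-agent
    image: fluent/fluentd      # placeholder agent image
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    - name: config-volume
      mountPath: /fluentd/etc  # sidecar reads its fluentd config from here
  volumes:
  - name: varlog
    emptyDir: {}
  - name: config-volume
    configMap:
      name: fluentd-config     # placeholder ConfigMap holding fluent.conf
```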
@@ -10,9 +10,6 @@ weight: 40

 You've deployed your application and exposed it via a service. Now what? Kubernetes provides a number of tools to help you manage your application deployment, including scaling and updating. Among the features that we will discuss in more depth are [configuration files](/docs/concepts/configuration/overview/) and [labels](/docs/concepts/overview/working-with-objects/labels/).
-
-
-
 <!-- body -->

 ## Organizing resource configurations
@@ -356,7 +353,8 @@ Sometimes it's necessary to make narrow, non-disruptive updates to resources you

 ### kubectl apply

-It is suggested to maintain a set of configuration files in source control (see [configuration as code](http://martinfowler.com/bliki/InfrastructureAsCode.html)),
+It is suggested to maintain a set of configuration files in source control
+(see [configuration as code](https://martinfowler.com/bliki/InfrastructureAsCode.html)),
 so that they can be maintained and versioned along with the code for the resources they configure.
 Then, you can use [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) to push your configuration changes to the cluster.
@@ -17,9 +17,6 @@ problems to address:

 3. Pod-to-Service communications: this is covered by [services](/docs/concepts/services-networking/service/).
 4. External-to-Service communications: this is covered by [services](/docs/concepts/services-networking/service/).
-
-
-
 <!-- body -->

 Kubernetes is all about sharing machines between applications. Typically,
@@ -93,7 +90,7 @@ Thanks to the "programmable" characteristic of Open vSwitch, Antrea is able to i

 ### AOS from Apstra

-[AOS](http://www.apstra.com/products/aos/) is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform. AOS leverages a highly scalable distributed design to eliminate network outages while minimizing costs.
+[AOS](https://www.apstra.com/products/aos/) is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform. AOS leverages a highly scalable distributed design to eliminate network outages while minimizing costs.

 The AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems. These Layer-3 hosts can be Linux servers (Debian, Ubuntu, CentOS) that create BGP neighbor relationships directly with the top of rack switches (TORs). AOS automates the routing adjacencies and then provides fine grained control over the route health injections (RHI) that are common in a Kubernetes deployment.
@@ -101,7 +98,7 @@ AOS has a rich set of REST API endpoints that enable Kubernetes to quickly chang

 AOS supports the use of common vendor equipment from manufacturers including Cisco, Arista, Dell, Mellanox, HPE, and a large number of white-box systems and open network operating systems like Microsoft SONiC, Dell OPX, and Cumulus Linux.

-Details on how the AOS system works can be accessed here: http://www.apstra.com/products/how-it-works/
+Details on how the AOS system works can be accessed here: https://www.apstra.com/products/how-it-works/

 ### AWS VPC CNI for Kubernetes
@@ -123,7 +120,7 @@ Azure CNI is available natively in the [Azure Kubernetes Service (AKS)] (https:/

 With the help of the Big Cloud Fabric's virtual pod multi-tenant architecture, container orchestration systems such as Kubernetes, RedHat OpenShift, Mesosphere DC/OS & Docker Swarm will be natively integrated alongside with VM orchestration systems such as VMware, OpenStack & Nutanix. Customers will be able to securely inter-connect any number of these clusters and enable inter-tenant communication between them if needed.

-BCF was recognized by Gartner as a visionary in the latest [Magic Quadrant](http://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html). One of the BCF Kubernetes on-premises deployments (which includes Kubernetes, DC/OS & VMware running on multiple DCs across different geographic regions) is also referenced [here](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/).
+BCF was recognized by Gartner as a visionary in the latest [Magic Quadrant](https://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html). One of the BCF Kubernetes on-premises deployments (which includes Kubernetes, DC/OS & VMware running on multiple DCs across different geographic regions) is also referenced [here](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/).

 ### Cilium
@@ -135,7 +132,7 @@ addressing, and it can be used in combination with other CNI plugins.

 ### CNI-Genie from Huawei

-[CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](https://github.com/kubernetes/website/blob/master/content/en/docs/concepts/cluster-administration/networking.md#the-kubernetes-network-model) in runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/coreos/flannel#flannel), [Calico](http://docs.projectcalico.org/), [Romana](http://romana.io), [Weave-net](https://www.weave.works/products/weave-net/).
+[CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model) in runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/coreos/flannel#flannel), [Calico](https://docs.projectcalico.org/), [Romana](https://romana.io), [Weave-net](https://www.weave.works/products/weave-net/).

 CNI-Genie also supports [assigning multiple IP addresses to a pod](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-ips/README.md#feature-2-extension-cni-genie-multiple-ip-addresses-per-pod), each from a different CNI plugin.
@@ -157,11 +154,11 @@ network complexity required to deploy Kubernetes at scale within AWS.

 ### Contiv

-[Contiv](https://github.com/contiv/netplugin) provides configurable networking (native l3 using BGP, overlay using vxlan, classic l2, or Cisco-SDN/ACI) for various use cases. [Contiv](http://contiv.io) is all open sourced.
+[Contiv](https://github.com/contiv/netplugin) provides configurable networking (native l3 using BGP, overlay using vxlan, classic l2, or Cisco-SDN/ACI) for various use cases. [Contiv](https://contiv.io) is all open sourced.

 ### Contrail / Tungsten Fabric

-[Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is a truly open, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with various orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide different isolation modes for virtual machines, containers/pods and bare metal workloads.
+[Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is a truly open, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with various orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide different isolation modes for virtual machines, containers/pods and bare metal workloads.

 ### DANM
@@ -242,7 +239,7 @@ traffic to the internet.

 ### Kube-router

-[Kube-router](https://github.com/cloudnativelabs/kube-router) is a purpose-built networking solution for Kubernetes that aims to provide high performance and operational simplicity. Kube-router provides a Linux [LVS/IPVS](http://www.linuxvirtualserver.org/software/ipvs.html)-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and iptables/ipset-based network policy enforcer.
+[Kube-router](https://github.com/cloudnativelabs/kube-router) is a purpose-built networking solution for Kubernetes that aims to provide high performance and operational simplicity. Kube-router provides a Linux [LVS/IPVS](https://www.linuxvirtualserver.org/software/ipvs.html)-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and an iptables/ipset-based network policy enforcer.

 ### L2 networks and linux bridging
@@ -252,8 +249,8 @@ Note that these instructions have only been tried very casually - it seems to

 work, but has not been thoroughly tested. If you use this technique and
 perfect the process, please let us know.

-Follow the "With Linux Bridge devices" section of [this very nice
-tutorial](http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/) from
+Follow the "With Linux Bridge devices" section of
+[this very nice tutorial](https://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/) from
 Lars Kellogg-Stedman.

 ### Multus (a Multi Network plugin)
@@ -274,7 +271,7 @@ Multus supports all [reference plugins](https://github.com/containernetworking/p

 ### Nuage Networks VCS (Virtualized Cloud Services)

-[Nuage](http://www.nuagenetworks.net) provides a highly scalable policy-based Software-Defined Networking (SDN) platform. Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards.
+[Nuage](https://www.nuagenetworks.net) provides a highly scalable policy-based Software-Defined Networking (SDN) platform. Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards.

 The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Nuage's policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications. The platform's real-time analytics engine enables visibility and security monitoring for Kubernetes applications.
@@ -294,7 +291,7 @@ at [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes).

 ### Project Calico

-[Project Calico](http://docs.projectcalico.org/) is an open source container networking provider and network policy engine.
+[Project Calico](https://docs.projectcalico.org/) is an open source container networking provider and network policy engine.

 Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet, for both Linux (open source) and Windows (proprietary - available from [Tigera](https://www.tigera.io/essentials/)). Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking. Calico also provides fine-grained, intent based network security policy for Kubernetes pods via its distributed firewall.
@@ -302,7 +299,7 @@ Calico can also be run in policy enforcement mode in conjunction with other netw

 ### Romana

-[Romana](http://romana.io) is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network. Romana supports Kubernetes [Network Policy](/docs/concepts/services-networking/network-policies/) to provide isolation across network namespaces.
+[Romana](https://romana.io) is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network. Romana supports Kubernetes [Network Policy](/docs/concepts/services-networking/network-policies/) to provide isolation across network namespaces.

 ### Weave Net from Weaveworks
@@ -312,13 +309,9 @@ Weave Net runs as a [CNI plug-in](https://www.weave.works/docs/net/latest/cni-pl

 or stand-alone. In either version, it doesn't require any configuration or extra code
 to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes.
-
-
-

 ## {{% heading "whatsnext" %}}

 The early design of the networking model and its rationale, and some future
-plans are described in more detail in the [networking design
-document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md).
+plans are described in more detail in the
+[networking design document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md).
-
@@ -28,7 +28,7 @@ The Kubernetes Container environment provides several important resources to Con

 The *hostname* of a Container is the name of the Pod in which the Container is running.
 It is available through the `hostname` command or the
-[`gethostname`](http://man7.org/linux/man-pages/man2/gethostname.2.html)
+[`gethostname`](https://man7.org/linux/man-pages/man2/gethostname.2.html)
 function call in libc.

 The Pod name and namespace are available as environment variables through the
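The hunk cuts off mid-sentence; in the source page that sentence continues into the downward API. As a hedged sketch of what exposing the Pod name and namespace as environment variables typically looks like (pod and variable names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-demo               # placeholder name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo running as $MY_POD_NAME in $MY_POD_NAMESPACE; sleep 3600"]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name       # the Pod's own name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace  # the Pod's namespace
```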
@@ -51,7 +51,7 @@ FOO_SERVICE_PORT=<the port the service is running on>

 ```

 Services have dedicated IP addresses and are available to the Container via DNS,
-if [DNS addon](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/) is enabled.
+if [DNS addon](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/) is enabled.
@@ -19,8 +19,6 @@ before referring to it in a

 This page provides an outline of the container image concept.
-
-
 <!-- body -->

 ## Image names
@@ -261,7 +259,7 @@ EOF

 This needs to be done for each pod that is using a private registry.

 However, setting of this field can be automated by setting the imagePullSecrets
-in a [ServiceAccount](/docs/user-guide/service-accounts) resource.
+in a [ServiceAccount](/docs/tasks/configure-pod-container/configure-service-accounts/) resource.

 Check [Add ImagePullSecrets to a Service Account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) for detailed instructions.
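The automation mentioned here amounts to listing the secret on the ServiceAccount itself; a minimal sketch, where the secret name is a placeholder for a pre-created docker-registry secret:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default                # the namespace's default ServiceAccount
imagePullSecrets:
- name: my-registry-key        # placeholder: an existing docker-registry secret
```

Pods that run under this ServiceAccount then get the `imagePullSecrets` field filled in automatically.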
@@ -24,9 +24,6 @@ their work environment. Developers who are prospective {{< glossary_tooltip text

 useful as an introduction to what extension points and patterns
 exist, and their trade-offs and limitations.
-
-
-
 <!-- body -->

 ## Overview
@@ -37,14 +34,14 @@ Customization approaches can be broadly divided into *configuration*, which only

 *Configuration files* and *flags* are documented in the Reference section of the online documentation, under each binary:

-* [kubelet](/docs/admin/kubelet/)
-* [kube-apiserver](/docs/admin/kube-apiserver/)
-* [kube-controller-manager](/docs/admin/kube-controller-manager/)
-* [kube-scheduler](/docs/admin/kube-scheduler/).
+* [kubelet](/docs/reference/command-line-tools-reference/kubelet/)
+* [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/)
+* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/)
+* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/).

 Flags and configuration files may not always be changeable in a hosted Kubernetes service or a distribution with managed installation. When they are changeable, they are usually only changeable by the cluster administrator. Also, they are subject to change in future Kubernetes versions, and setting them may require restarting processes. For those reasons, they should be used only when there are no other options.

-*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/policy/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable.
+*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/policy/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/using-api/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable.

 ## Extensions
@@ -75,10 +72,9 @@ failure.

 In the webhook model, Kubernetes makes a network request to a remote service.
 In the *Binary Plugin* model, Kubernetes executes a binary (program).
-Binary plugins are used by the kubelet (e.g. [Flex Volume
-Plugins](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md)
-and [Network
-Plugins](/docs/concepts/cluster-administration/network-plugins/))
+Binary plugins are used by the kubelet (e.g.
+[Flex Volume Plugins](/docs/concepts/storage/volumes/#flexVolume)
+and [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/))
 and by kubectl.

 Below is a diagram showing how the extension points interact with the
@@ -98,12 +94,12 @@ This diagram shows the extension points in a Kubernetes system.

 <!-- image source diagrams: https://docs.google.com/drawings/d/1k2YdJgNTtNfW7_A8moIIkij-DmVgEhNrn3y2OODwqQQ/view -->

 1. Users often interact with the Kubernetes API using `kubectl`. [Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary. They only affect the individual user's local environment, and so cannot enforce site-wide policies.
-2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](/docs/concepts/overview/extending#api-access-extensions) section.
-3. The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](/docs/concepts/overview/extending#user-defined-types) section. Custom Resources are often used with API Access Extensions.
-4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](/docs/concepts/overview/extending#scheduler-extensions) section.
+2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](#api-access-extensions) section.
+3. The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](#user-defined-types) section. Custom Resources are often used with API Access Extensions.
+4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](#scheduler-extensions) section.
 5. Much of the behavior of Kubernetes is implemented by programs called Controllers which are clients of the API-Server. Controllers are often used in conjunction with Custom Resources.
-6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](/docs/concepts/overview/extending#network-plugins) allow for different implementations of pod networking.
-7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](/docs/concepts/overview/extending#storage-plugins).
+6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](#network-plugins) allow for different implementations of pod networking.
+7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](#storage-plugins).

 If you are unsure where to start, this flowchart can help. Note that some solutions may involve several types of extensions.
@@ -119,7 +115,7 @@ Consider adding a Custom Resource to Kubernetes if you want to define new contro

 Do not use a Custom Resource as data storage for application, user, or monitoring data.

-For more about Custom Resources, see the [Custom Resources concept guide](/docs/concepts/api-extension/custom-resources/).
+For more about Custom Resources, see the [Custom Resources concept guide](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).

 ### Combining New APIs with Automation
@@ -172,20 +168,21 @@ Kubelet call a Binary Plugin to mount the volume.

 ### Device Plugins

 Device plugins allow a node to discover new Node resources (in addition to the
-builtin ones like cpu and memory) via a [Device
-Plugin](/docs/concepts/cluster-administration/device-plugins/).
+builtin ones like cpu and memory) via a
+[Device Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/).

 ### Network Plugins

-Different networking fabrics can be supported via node-level [Network Plugins](/docs/admin/network-plugins/).
+Different networking fabrics can be supported via node-level
+[Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/).

 ### Scheduler Extensions

 The scheduler is a special type of controller that watches pods, and assigns
 pods to nodes. The default scheduler can be replaced entirely, while
-continuing to use other Kubernetes components, or [multiple
-schedulers](/docs/tasks/administer-cluster/configure-multiple-schedulers/)
+continuing to use other Kubernetes components, or
+[multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)
 can run at the same time.

 This is a significant undertaking, and almost all Kubernetes users find they
@@ -196,18 +193,14 @@ The scheduler also supports a

 that permits a webhook backend (scheduler extension) to filter and prioritize
 the nodes chosen for a pod.
-
-
-

 ## {{% heading "whatsnext" %}}

-* Learn more about [Custom Resources](/docs/concepts/api-extension/custom-resources/)
+* Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
 * Learn about [Dynamic admission control](/docs/reference/access-authn-authz/extensible-admission-controllers/)
 * Learn more about Infrastructure extensions
-  * [Network Plugins](/docs/concepts/cluster-administration/network-plugins/)
-  * [Device Plugins](/docs/concepts/cluster-administration/device-plugins/)
+  * [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
+  * [Device Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
 * Learn about [kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/)
 * Learn about the [Operator pattern](/docs/concepts/extend-kubernetes/operator/)
-
@@ -15,8 +15,6 @@ The additional APIs can either be ready-made solutions such as [service-catalog]

 The aggregation layer is different from [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/), which are a way to make the {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}} recognise new kinds of object.
-
-
 <!-- body -->

 ## Aggregation layer
@@ -34,11 +32,8 @@ If your extension API server cannot achieve that latency requirement, consider m

 `EnableAggregatedDiscoveryTimeout=false` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on the kube-apiserver
 to disable the timeout restriction. This deprecated feature gate will be removed in a future release.
-
-
-

 ## {{% heading "whatsnext" %}}

 * To get the aggregator working in your environment, [configure the aggregation layer](/docs/tasks/extend-kubernetes/configure-aggregation-layer/).
 * Then, [setup an extension api-server](/docs/tasks/extend-kubernetes/setup-extension-api-server/) to work with the aggregation layer.
 * Also, learn how to [extend the Kubernetes API using Custom Resource Definitions](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/).
@@ -13,8 +13,6 @@ weight: 10

 resource to your Kubernetes cluster and when to use a standalone service. It describes the two
 methods for adding custom resources and how to choose between them.
-
-
 <!-- body -->
 ## Custom resources
@@ -28,7 +26,7 @@ many core Kubernetes functions are now built using custom resources, making Kube

 Custom resources can appear and disappear in a running cluster through dynamic registration,
 and cluster admins can update custom resources independently of the cluster itself.
 Once a custom resource is installed, users can create and access its objects using
-[kubectl](/docs/user-guide/kubectl-overview/), just as they do for built-in resources like
+[kubectl](/docs/reference/kubectl/overview/), just as they do for built-in resources like
 *Pods*.

 ## Custom controllers
@@ -52,7 +50,9 @@ for specific applications into an extension of the Kubernetes API.

 ## Should I add a custom resource to my Kubernetes Cluster?

-When creating a new API, consider whether to [aggregate your API with the Kubernetes cluster APIs](/docs/concepts/api-extension/apiserver-aggregation/) or let your API stand alone.
+When creating a new API, consider whether to
+[aggregate your API with the Kubernetes cluster APIs](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
+or let your API stand alone.

 | Consider API aggregation if: | Prefer a stand-alone API if: |
 | ---------------------------- | ---------------------------- |
@@ -19,8 +19,6 @@ The targeted devices include GPUs, high-performance NICs, FPGAs, InfiniBand adap

 and other similar computing resources that may require vendor specific initialization
 and setup.
-
-
 <!-- body -->

 ## Device plugin registration
@@ -39,7 +37,7 @@ During the registration, the device plugin needs to send:

 * The name of its Unix socket.
 * The Device Plugin API version against which it was built.
 * The `ResourceName` it wants to advertise. Here `ResourceName` needs to follow the
-  [extended resource naming scheme](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources)
+  [extended resource naming scheme](/docs/concepts/configuration/manage-resources-container/#extended-resources)
   as `vendor-domain/resourcetype`.
   (For example, an NVIDIA GPU is advertised as `nvidia.com/gpu`.)
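To show how an advertised `ResourceName` is consumed, a pod requests it like any other resource. A minimal sketch reusing the `nvidia.com/gpu` example from the text (pod and image names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-demo               # placeholder name
spec:
  containers:
  - name: cuda-app
    image: example.com/cuda-app:latest  # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1      # extended resource advertised by a device plugin
```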
@@ -6,14 +6,11 @@ weight: 30

 <!-- overview -->

-Operators are software extensions to Kubernetes that make use of [custom
-resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
+Operators are software extensions to Kubernetes that make use of
+[custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
 to manage applications and their components. Operators follow
 Kubernetes principles, notably the [control loop](/docs/concepts/#kubernetes-control-plane).
-
-
-
 <!-- body -->

 ## Motivation
@@ -24,7 +24,6 @@ The Kubernetes API lets you query and manipulate the state of objects in the Kub

 API endpoints, resource types and samples are described in the [API Reference](/docs/reference/kubernetes-api/).
-
 <!-- body -->

 ## API changes
@@ -135,7 +134,7 @@ There are several API groups in a cluster:

 (e.g. `apiVersion: batch/v1`). The Kubernetes [API reference](/docs/reference/kubernetes-api/) has a
 full list of available API groups.

-There are two paths to extending the API with [custom resources](/docs/concepts/api-extension/custom-resources/):
+There are two paths to extending the API with [custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/):

 1. [CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)
 lets you declaratively define how the API server should provide your chosen resource API.
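As a sketch of path 1, a minimal CustomResourceDefinition might look like the following; the group, kind, and field are invented for illustration:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com   # must be <plural>.<group>; hypothetical values
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:        # hypothetical field
                type: string
```

Once this is applied, `kubectl get crontabs` behaves like any built-in resource.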
@@ -22,10 +22,9 @@ Each object can have a set of key/value labels defined. Each Key must be unique

 }
 ```

-Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using [annotations](/docs/concepts/overview/working-with-objects/annotations/).
-
-
-
+Labels allow for efficient queries and watches and are ideal for use in UIs
+and CLIs. Non-identifying information should be recorded using
+[annotations](/docs/concepts/overview/working-with-objects/annotations/).

 <!-- body -->
@@ -77,7 +76,7 @@ spec:

 ## Label selectors

-Unlike [names and UIDs](/docs/user-guide/identifiers), labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).
+Unlike [names and UIDs](/docs/concepts/overview/working-with-objects/names/), labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).

 Via a _label selector_, the client/user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes.
@@ -186,7 +185,10 @@ kubectl get pods -l 'environment,environment notin (frontend)'

 ### Set references in API objects

-Some Kubernetes objects, such as [`services`](/docs/user-guide/services) and [`replicationcontrollers`](/docs/user-guide/replication-controller), also use label selectors to specify sets of other resources, such as [pods](/docs/user-guide/pods).
+Some Kubernetes objects, such as [`services`](/docs/concepts/services-networking/service/)
+and [`replicationcontrollers`](/docs/concepts/workloads/controllers/replicationcontroller/),
+also use label selectors to specify sets of other resources, such as
+[pods](/docs/concepts/workloads/pods/pod/).

 #### Service and ReplicationController
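A small sketch of the equality-based form a Service uses to pick out pods (all names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend               # placeholder name
spec:
  selector:
    app: frontend              # selects every pod labeled app=frontend
  ports:
  - port: 80
    targetPort: 8080
```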
@@ -210,7 +212,11 @@ this selector (respectively in `json` or `yaml` format) is equivalent to `compon

 #### Resources that support set-based requirements

-Newer resources, such as [`Job`](/docs/concepts/workloads/controllers/jobs-run-to-completion/), [`Deployment`](/docs/concepts/workloads/controllers/deployment/), [`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/), and [`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/), support _set-based_ requirements as well.
+Newer resources, such as [`Job`](/docs/concepts/workloads/controllers/job/),
+[`Deployment`](/docs/concepts/workloads/controllers/deployment/),
+[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/), and
+[`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/),
+support _set-based_ requirements as well.

 ```yaml
 selector:
@@ -228,4 +234,3 @@ selector:

 One use case for selecting over labels is to constrain the set of nodes onto which a pod can schedule.
 See the documentation on [node selection](/docs/concepts/scheduling-eviction/assign-pod-node/) for more information.
-
@@ -13,9 +13,6 @@ weight: 30

 Kubernetes supports multiple virtual clusters backed by the same physical cluster.
 These virtual clusters are called namespaces.
-
-
-
 <!-- body -->

 ## When to Use Multiple Namespaces
@@ -35,13 +32,14 @@ In future versions of Kubernetes, objects in the same namespace will have the sa

 access control policies by default.

 It is not necessary to use multiple namespaces just to separate slightly different
-resources, such as different versions of the same software: use [labels](/docs/user-guide/labels) to distinguish
+resources, such as different versions of the same software: use
+[labels](/docs/concepts/overview/working-with-objects/labels) to distinguish
 resources within the same namespace.

 ## Working with Namespaces

-Creation and deletion of namespaces are described in the [Admin Guide documentation
-for namespaces](/docs/admin/namespaces).
+Creation and deletion of namespaces are described in the
+[Admin Guide documentation for namespaces](/docs/tasks/administer-cluster/namespaces).

 {{< note >}}
 Avoid creating namespaces with the prefix `kube-`, since it is reserved for Kubernetes system namespaces.
@@ -93,7 +91,8 @@ kubectl config view --minify | grep namespace:

 ## Namespaces and DNS

-When you create a [Service](/docs/user-guide/services), it creates a corresponding [DNS entry](/docs/concepts/services-networking/dns-pod-service/).
+When you create a [Service](/docs/concepts/services-networking/service/),
+it creates a corresponding [DNS entry](/docs/concepts/services-networking/dns-pod-service/).
 This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means
 that if a container just uses `<service-name>`, it will resolve to the service which
 is local to a namespace. This is useful for using the same configuration across
@@ -104,7 +103,8 @@ across namespaces, you need to use the fully qualified domain name (FQDN).

 Most Kubernetes resources (e.g. pods, services, replication controllers, and others) are
 in some namespaces. However, namespace resources are not themselves in a namespace.
-And low-level resources, such as [nodes](/docs/admin/node) and
+And low-level resources, such as
+[nodes](/docs/concepts/architecture/nodes/) and
 persistentVolumes, are not in any namespace.

 To see which Kubernetes resources are and aren't in a namespace:
@@ -117,12 +117,8 @@ kubectl api-resources --namespaced=true

 kubectl api-resources --namespaced=false
 ```
-
-
-

 ## {{% heading "whatsnext" %}}

 * Learn more about [creating a new namespace](/docs/tasks/administer-cluster/namespaces/#creating-a-new-namespace).
 * Learn more about [deleting a namespace](/docs/tasks/administer-cluster/namespaces/#deleting-a-namespace).
-
@@ -10,7 +10,6 @@ Kubernetes objects. This document provides an overview of the different

 approaches. Read the [Kubectl book](https://kubectl.docs.kubernetes.io) for
 details of managing objects by Kubectl.
-
 <!-- body -->

 ## Management techniques
@@ -167,11 +166,8 @@ Disadvantages compared to imperative object configuration:

 - Declarative object configuration is harder to debug and understand results when they are unexpected.
 - Partial updates using diffs create complex merge and patch operations.
-
-
-

 ## {{% heading "whatsnext" %}}

 - [Managing Kubernetes Objects Using Imperative Commands](/docs/tasks/manage-kubernetes-objects/imperative-command/)
 - [Managing Kubernetes Objects Using Object Configuration (Imperative)](/docs/tasks/manage-kubernetes-objects/imperative-config/)
 - [Managing Kubernetes Objects Using Object Configuration (Declarative)](/docs/tasks/manage-kubernetes-objects/declarative-config/)
@@ -180,4 +176,3 @@ Disadvantages compared to imperative object configuration:

 - [Kubectl Book](https://kubectl.docs.kubernetes.io)
 - [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
-
@@ -8,13 +8,10 @@ weight: 10

 <!-- overview -->

-By default, containers run with unbounded [compute resources](/docs/user-guide/compute-resources) on a Kubernetes cluster.
+By default, containers run with unbounded [compute resources](/docs/concepts/configuration/manage-resources-containers/) on a Kubernetes cluster.
 With resource quotas, cluster administrators can restrict resource consumption and creation on a {{< glossary_tooltip text="namespace" term_id="namespace" >}} basis.
 Within a namespace, a Pod or Container can consume as much CPU and memory as defined by the namespace's resource quota. There is a concern that one Pod or Container could monopolize all available resources. A LimitRange is a policy to constrain resource allocations (to Pods or Containers) in a namespace.
-
-
-
 <!-- body -->

 A _LimitRange_ provides constraints that can:
@@ -54,11 +51,8 @@ there may be contention for resources. In this case, the Containers or Pods will

 Neither contention nor changes to a LimitRange will affect already created resources.
-
-
-

 ## {{% heading "whatsnext" %}}

 Refer to the [LimitRanger design document](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md) for more information.

 For examples on using limits, see:
@@ -70,5 +64,3 @@ For examples on using limits, see:

 - [how to configure minimum and maximum Storage consumption per namespace](/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage).
 - a [detailed example on configuring quota per namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/).
-
-
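Alongside those task pages, this is a minimal `LimitRange` sketch of the per-container constraint the page describes (all values are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-mem-limit-range    # placeholder name
spec:
  limits:
  - type: Container
    default:                   # limit applied when a container sets none
      cpu: 500m
      memory: 256Mi
    defaultRequest:            # request applied when a container sets none
      cpu: 250m
      memory: 128Mi
    max:                       # hard per-container ceiling
      cpu: "1"
      memory: 512Mi
```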
@@ -14,9 +14,6 @@ weight: 20

 Pod Security Policies enable fine-grained authorization of pod creation and
 updates.
-
-
-
 <!-- body -->

 ## What is a Pod Security Policy?
@@ -143,13 +140,13 @@ For a complete example of authorizing a PodSecurityPolicy, see

 ### Troubleshooting

-- The [Controller Manager](/docs/admin/kube-controller-manager/) must be run
+- The [Controller Manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) must be run
   against [the secured API port](/docs/reference/access-authn-authz/controlling-access/),
   and must not have superuser permissions. Otherwise requests would bypass
   authentication and authorization modules, all PodSecurityPolicy objects would be
   allowed, and users would be able to create privileged containers. For more details
-  on configuring Controller Manager authorization, see [Controller
-  Roles](/docs/reference/access-authn-authz/rbac/#controller-roles).
+  on configuring Controller Manager authorization, see
+  [Controller Roles](/docs/reference/access-authn-authz/rbac/#controller-roles).

 ## Policy Order
@@ -629,15 +626,12 @@ By default, all safe sysctls are allowed.

 - `allowedUnsafeSysctls` - allows specific sysctls that had been disallowed by the default list, so long as these are not listed in `forbiddenSysctls`.

 Refer to the [Sysctl documentation](
-/docs/concepts/cluster-administration/sysctl-cluster/#podsecuritypolicy).
-
-
+/docs/tasks/administer-cluster/sysctl-cluster/#podsecuritypolicy).

 ## {{% heading "whatsnext" %}}

-See [Pod Security Standards](/docs/concepts/security/pod-security-standards/) for policy recommendations.
-
-Refer to [Pod Security Policy Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) for the api details.
+- See [Pod Security Standards](/docs/concepts/security/pod-security-standards/) for policy recommendations.
+- Refer to [Pod Security Policy Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) for the api details.
@@ -13,9 +13,6 @@ there is a concern that one team could use more than its fair share of resources

 Resource quotas are a tool for administrators to address this concern.
-
-
-
 <!-- body -->

 A resource quota, defined by a `ResourceQuota` object, provides constraints that limit
@@ -27,15 +24,21 @@ Resource quotas work like this:

 - Different teams work in different namespaces. Currently this is voluntary, but
   support for making this mandatory via ACLs is planned.
+
 - The administrator creates one `ResourceQuota` for each namespace.
+
 - Users create resources (pods, services, etc.) in the namespace, and the quota system
   tracks usage to ensure it does not exceed hard resource limits defined in a `ResourceQuota`.
+
 - If creating or updating a resource violates a quota constraint, the request will fail with HTTP
   status code `403 FORBIDDEN` with a message explaining the constraint that would have been violated.
+
 - If quota is enabled in a namespace for compute resources like `cpu` and `memory`, users must specify
   requests or limits for those values; otherwise, the quota system may reject pod creation. Hint: Use
   the `LimitRanger` admission controller to force defaults for pods that make no compute resource requirements.
-  See the [walkthrough](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/) for an example of how to avoid this problem.
+
+  See the [walkthrough](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
+  for an example of how to avoid this problem.

 The name of a `ResourceQuota` object must be a valid
 [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
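To make the flow above concrete, here is a minimal `ResourceQuota` sketch of the object an administrator creates per namespace (the namespace and values are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota          # placeholder name
  namespace: team-a            # placeholder namespace
spec:
  hard:
    requests.cpu: "4"          # total CPU all pods may request
    requests.memory: 8Gi
    limits.cpu: "8"            # total CPU limit across all pods
    limits.memory: 16Gi
    pods: "20"                 # cap on pod count in the namespace
```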
@@ -63,7 +66,7 @@ A resource quota is enforced in a particular namespace when there is a

 ## Compute Resource Quota

-You can limit the total sum of [compute resources](/docs/user-guide/compute-resources) that can be requested in a given namespace.
+You can limit the total sum of [compute resources](/docs/concepts/configuration/manage-resources-containers/) that can be requested in a given namespace.

 The following resource types are supported:
@@ -77,7 +80,7 @@ The following resource types are supported:

 ### Resource Quota For Extended Resources

 In addition to the resources mentioned above, in release 1.10, quota support for
-[extended resources](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources) is added.
+[extended resources](/docs/concepts/configuration/manage-resources-containers/#extended-resources) is added.

 As overcommit is not allowed for extended resources, it makes no sense to specify both `requests`
 and `limits` for the same extended resource in a quota. So for extended resources, only quota items
@@ -596,11 +599,7 @@ See [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765) and

 See a [detailed example for how to use resource quota](/docs/tasks/administer-cluster/quota-api-object/).
-
-
-

 ## {{% heading "whatsnext" %}}

-See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information.
+- See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information.
-