---
title: Configure the Aggregation Layer
reviewers:
- lavalamp
- cheftako
- chenopis
content_template: templates/task
weight: 10
---
{{% capture overview %}}
Configuring the [aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) allows the Kubernetes apiserver to be extended with additional APIs, which are not part of the core Kubernetes APIs.
{{% /capture %}}
{{% capture prerequisites %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
{{< note >}}
There are a few setup requirements for getting the aggregation layer working in your environment to support mutual TLS auth between the proxy and extension apiservers. Kubernetes and the kube-apiserver have multiple CAs, so make sure that the proxy is signed by the aggregation layer CA and not by something else, like the master CA.
{{< /note >}}

{{< caution >}}
Reusing the same CA for different client types can negatively impact the cluster's ability to function. For more information, see [CA Reusage and Conflicts](#ca-reusage-and-conflicts).
{{< /caution >}}
{{% /capture %}}
{{% capture steps %}}
## Authentication Flow
Unlike Custom Resource Definitions (CRDs), the Aggregation API involves another server - your Extension apiserver - in addition to the standard Kubernetes apiserver. The Kubernetes apiserver will need to communicate with your extension apiserver, and your extension apiserver will need to communicate with the Kubernetes apiserver. In order for this communication to be secured, the Kubernetes apiserver uses x509 certificates to authenticate itself to the extension apiserver.
This section describes how the authentication and authorization flows work, and how to configure them.
The high-level flow is as follows:
1. Kubernetes apiserver: authenticate the requesting user and authorize their rights to the requested API path.
2. Kubernetes apiserver: proxy the request to the extension apiserver
3. Extension apiserver: authenticate the request from the Kubernetes apiserver
4. Extension apiserver: authorize the request from the original user
5. Extension apiserver: execute
The rest of this section describes these steps in detail.
The flow can be seen in the following diagram.
![aggregation auth flows](/images/docs/aggregation-api-auth-flow.png)
The source for the above swimlanes can be found in the source of this document.
<!--
Swimlanes generated at https://swimlanes.io with the source as follows:
-----BEGIN-----
title: Welcome to swimlanes.io
User -> kube-apiserver / aggregator:
note:
1. The user makes a request to the Kube API server using any recognized credential (e.g. OIDC or client certs)
kube-apiserver / aggregator -> kube-apiserver / aggregator: authentication
note:
2. The Kube API server authenticates the incoming request using any configured authentication methods (e.g. OIDC or client certs)
kube-apiserver / aggregator -> kube-apiserver / aggregator: authorization
note:
3. The Kube API server authorizes the requested URL using any configured authorization method (e.g. RBAC)
kube-apiserver / aggregator -> aggregated apiserver:
note:
4. The aggregator opens a connection to the aggregated API server using `--proxy-client-cert-file`/`--proxy-client-key-file` client certificate/key to secure the channel
5. The aggregator sends the user info from step 1 to the aggregated API server as http headers, as defined by the following flags:
* `--requestheader-username-headers`
* `--requestheader-group-headers`
* `--requestheader-extra-headers-prefix`
aggregated apiserver -> aggregated apiserver: authentication
note:
6. The aggregated apiserver authenticates the incoming request using the auth proxy authentication method:
* verifies the request has a recognized auth proxy client certificate
* pulls user info from the incoming request's http headers
By default, it pulls the configuration information for this from a configmap in the kube-system namespace that is published by the kube-apiserver, containing the info from the `--requestheader-...` flags provided to the kube-apiserver (CA bundle to use, auth proxy client certificate names to allow, http header names to use, etc)
aggregated apiserver -> kube-apiserver / aggregator: authorization
note:
7. The aggregated apiserver authorizes the incoming request by making a SubjectAccessReview call to the kube-apiserver
aggregated apiserver -> aggregated apiserver: admission
note:
8. For mutating requests, the aggregated apiserver runs admission checks. by default, the namespace lifecycle admission plugin ensures namespaced resources are created in a namespace that exists in the kube-apiserver
-----END-----
-->
### Kubernetes Apiserver Authentication and Authorization
A request to an API path that is served by an extension apiserver begins the same way as all API requests: communication to the Kubernetes apiserver. This path has already been registered with the Kubernetes apiserver by the extension apiserver.
The user communicates with the Kubernetes apiserver, requesting access to the path. The Kubernetes apiserver uses its configured standard authentication and authorization to authenticate the user and authorize access to the specific path.
For an overview of authenticating to a Kubernetes cluster, see ["Authenticating to a Cluster"](/docs/reference/access-authn-authz/authentication/). For an overview of authorization of access to Kubernetes cluster resources, see ["Authorization Overview"](/docs/reference/access-authn-authz/authorization/).
Everything up to this point has been standard Kubernetes API request handling, authentication and authorization.
The Kubernetes apiserver is now prepared to send the request to the extension apiserver.
### Kubernetes Apiserver Proxies the Request
The Kubernetes apiserver will now send, or proxy, the request to the extension apiserver that registered to handle it. In order to do so, it needs to know several things:
1. How should the Kubernetes apiserver authenticate to the extension apiserver, informing the extension apiserver that the request, which comes over the network, is coming from a valid Kubernetes apiserver?
2. How should the Kubernetes apiserver inform the extension apiserver of the username and group for which the original request was authenticated?
In order to provide for these two, you must configure the Kubernetes apiserver using several flags.
#### Kubernetes Apiserver Client Authentication
The Kubernetes apiserver connects to the extension apiserver over TLS, authenticating itself using a client certificate. You must provide the following to the Kubernetes apiserver upon startup, using the provided flags:
* private key file via `--proxy-client-key-file`
* signed client certificate file via `--proxy-client-cert-file`
* certificate of the CA that signed the client certificate file via `--requestheader-client-ca-file`
* valid Common Name values (CNs) in the signed client certificate via `--requestheader-allowed-names`
The Kubernetes apiserver will use the files indicated by `--proxy-client-*-file` to authenticate to the extension apiserver. In order for the request to be considered valid by a compliant extension apiserver, the following conditions must be met:
1. The connection must be made using a client certificate that is signed by the CA whose certificate is in `--requestheader-client-ca-file`.
2. The connection must be made using a client certificate whose CN is one of those listed in `--requestheader-allowed-names`. **Note:** You can set this option to blank as `--requestheader-allowed-names=""`. This will indicate to an extension apiserver that _any_ CN is acceptable.
When started with these options, the Kubernetes apiserver will:
1. Use them to authenticate to the extension apiserver.
2. Create a configmap in the `kube-system` namespace called `extension-apiserver-authentication`, in which it will place the CA certificate and the allowed CNs. These in turn can be retrieved by extension apiservers to validate requests.
Note that the same client certificate is used by the Kubernetes apiserver to authenticate against _all_ extension apiservers. It does not create a client certificate per extension apiserver, but rather a single one to authenticate as the Kubernetes apiserver. This same one is reused for all extension apiserver requests.
#### Original Request Username and Group
When the Kubernetes apiserver proxies the request to the extension apiserver, it informs the extension apiserver of the username and group with which the original request successfully authenticated. It provides these in http headers of its proxied request. You must inform the Kubernetes apiserver of the names of the headers to be used.
* the header in which to store the username via `--requestheader-username-headers`
* the header in which to store the group via `--requestheader-group-headers`
* the prefix to append to all extra headers via `--requestheader-extra-headers-prefix`
These header names are also placed in the `extension-apiserver-authentication` configmap, so they can be retrieved and used by extension apiservers.
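For reference, here is a sketch of what this configmap typically looks like on a cluster configured with the example flags shown later in this page; the certificate data is elided, and the exact values mirror whatever `--requestheader-...` flags were passed to your kube-apiserver:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: extension-apiserver-authentication
  namespace: kube-system
data:
  # CA bundle from --client-ca-file (used for regular client certificate auth)
  client-ca-file: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  # CA bundle from --requestheader-client-ca-file (used to verify the authenticating proxy)
  requestheader-client-ca-file: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  # Allowed CNs and header names, mirroring the --requestheader-... flags
  requestheader-allowed-names: '["front-proxy-client"]'
  requestheader-username-headers: '["X-Remote-User"]'
  requestheader-group-headers: '["X-Remote-Group"]'
  requestheader-extra-headers-prefix: '["X-Remote-Extra-"]'
```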
### Extension Apiserver Authenticates the Request
The extension apiserver, upon receiving a proxied request from the Kubernetes apiserver, must validate that the request actually came from a valid authenticating proxy, a role that the Kubernetes apiserver fills. To validate the request, the extension apiserver must:
1. Retrieve the following from the configmap in `kube-system`, as described above:
    * Client CA certificate
    * List of allowed names (CNs)
    * Header names for username, group and extra info
2. Check that the TLS connection was authenticated using a client certificate which:
    * Was signed by the CA whose certificate matches the retrieved CA certificate.
    * Has a CN in the list of allowed CNs, unless the list is blank, in which case all CNs are allowed.
3. Extract the username and group from the appropriate headers.
If the above passes, then the request is a valid proxied request from a legitimate authenticating proxy, in this case the Kubernetes apiserver.
Note that it is the responsibility of the extension apiserver implementation to provide the above. Many do it by default, leveraging the `k8s.io/apiserver/` package. Others may allow this behavior to be overridden using command-line options.
In order to have permission to retrieve the configmap, an extension apiserver requires the appropriate role. There is a default role named `extension-apiserver-authentication-reader` in the `kube-system` namespace which can be assigned.
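As an illustration, a RoleBinding along the following lines grants that role to the extension apiserver's service account; the service account name and namespace here are placeholders for your own:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-extension-apiserver-authentication-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: my-extension-apiserver   # placeholder: your extension apiserver's service account
  namespace: my-namespace        # placeholder: the namespace it runs in
```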
### Extension Apiserver Authorizes the Request
The extension apiserver can now validate that the user/group retrieved from the headers are authorized to execute the given request. It does so by sending a standard [SubjectAccessReview](/docs/reference/access-authn-authz/authorization/) request to the Kubernetes apiserver.
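Conceptually, the check looks roughly like the following SubjectAccessReview; the user, group, and resource attributes shown here are illustrative only:

```yaml
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  # Identity taken from the --requestheader-... headers of the proxied request
  user: jane
  groups:
  - developers
  # The operation the original request is attempting (illustrative values)
  resourceAttributes:
    group: example.extensions.k8s.io
    resource: flunders
    verb: list
    namespace: default
# The kube-apiserver responds with status.allowed set to true or false.
```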
In order for the extension apiserver to be authorized itself to submit the `SubjectAccessReview` request to the Kubernetes apiserver, it needs the correct permissions. Kubernetes includes a default `ClusterRole` named `system:auth-delegator` that has the appropriate permissions. It can be granted to the extension apiserver's service account.
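A minimal sketch of such a binding, again using placeholder names for the extension apiserver's service account:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-extension-apiserver:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: my-extension-apiserver   # placeholder: your extension apiserver's service account
  namespace: my-namespace        # placeholder: the namespace it runs in
```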
### Extension Apiserver Executes
If the `SubjectAccessReview` passes, the extension apiserver executes the request.
## Enable Kubernetes Apiserver flags
Enable the aggregation layer via the following `kube-apiserver` flags. They may have already been taken care of by your provider.
```
--requestheader-client-ca-file=<path to aggregator CA cert>
--requestheader-allowed-names=front-proxy-client
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=<path to aggregator proxy cert>
--proxy-client-key-file=<path to aggregator proxy key>
```
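On clusters where the kube-apiserver runs as a static Pod (for example, clusters created with kubeadm), these flags typically appear in the API server manifest. A sketch, assuming kubeadm's default front-proxy certificate locations:

```yaml
# Excerpt from a kube-apiserver static Pod manifest,
# e.g. /etc/kubernetes/manifests/kube-apiserver.yaml on a kubeadm cluster
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # ... other flags ...
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
```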
### CA Reusage and Conflicts
The Kubernetes apiserver has two client CA options:
* `--client-ca-file`
* `--requestheader-client-ca-file`
Each of these functions independently and can conflict with each other, if not used correctly.
* `--client-ca-file`: When a request arrives at the Kubernetes apiserver, if this option is enabled, the Kubernetes apiserver checks the certificate of the request. If it is signed by one of the CA certificates in the file referenced by `--client-ca-file`, then the request is treated as a legitimate request, and the user is the value of the common name `CN=`, while the group is the organization `O=`. See the [documentation on TLS authentication](/docs/reference/access-authn-authz/authentication/#x509-client-certs).
* `--requestheader-client-ca-file`: When a request arrives at the Kubernetes apiserver, if this option is enabled, the Kubernetes apiserver checks the certificate of the request. If it is signed by one of the CA certificates in the file referenced by `--requestheader-client-ca-file`, then the request is treated as a potentially legitimate request. The Kubernetes apiserver then checks if the common name `CN=` is one of the names in the list provided by `--requestheader-allowed-names`. If the name is allowed, the request is approved; if it is not, the request is rejected.
If _both_ `--client-ca-file` and `--requestheader-client-ca-file` are provided, then the request's client certificate is checked first against the `--requestheader-client-ca-file` CA and then against the `--client-ca-file` CA. Normally, different CAs, either root CAs or intermediate CAs, are used for each of these options; regular client requests match against `--client-ca-file`, while aggregation requests match against `--requestheader-client-ca-file`. However, if both use the _same_ CA, then client requests that normally would pass via `--client-ca-file` will fail, because the CA will match the CA in `--requestheader-client-ca-file`, but the common name `CN=` will **not** match one of the acceptable common names in `--requestheader-allowed-names`. This can cause your kubelets and other control plane components, as well as end-users, to be unable to authenticate to the Kubernetes apiserver.
For this reason, use different CA certs for the `--client-ca-file` option - to authorize control plane components and end-users - and the `--requestheader-client-ca-file` option - to authorize aggregation apiserver requests.
{{< warning >}}
Do **not** reuse a CA that is used in a different context unless you understand the risks and the mechanisms to protect the CA's usage.
{{< /warning >}}
If you are not running kube-proxy on a host running the API server, then you must make sure that the system is enabled with the following `kube-apiserver` flag:
```
--enable-aggregator-routing=true
```
### Register APIService objects
You can dynamically configure which client requests are proxied to an extension
apiserver. The following is an example registration:
```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: <name of the registration object>
spec:
  group: <API group name this extension apiserver hosts>
  version: <API version this extension apiserver hosts>
  groupPriorityMinimum: <priority this APIService for this group, see API documentation>
  versionPriority: <prioritizes ordering of this version within a group, see API documentation>
  service:
    namespace: <namespace of the extension apiserver service>
    name: <name of the extension apiserver service>
  caBundle: <pem encoded ca cert that signs the server cert used by the extension apiserver>
```
#### Contacting the extension apiserver
Once the Kubernetes apiserver has determined that a request should be sent to an extension apiserver,
it needs to know how to contact it.
The `service` stanza is a reference to the service for an extension apiserver.
The service namespace and name are required. The port is optional and defaults to 443.
Here is an example of an extension apiserver that is configured to be called on port "1234",
and to verify the TLS connection against the ServerName
`my-service-name.my-service-namespace.svc` using a custom CA bundle.
```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
...
spec:
  ...
  service:
    namespace: my-service-namespace
    name: my-service-name
    port: 1234
  caBundle: "Ci0tLS0tQk...<base64-encoded PEM bundle>...tLS0K"
...
```
{{% /capture %}}

{{% capture whatsnext %}}
* [Set up an extension api-server](/docs/tasks/access-kubernetes-api/setup-extension-api-server/) to work with the aggregation layer.
* For a high level overview, see [Extending the Kubernetes API with the aggregation layer](/docs/concepts/api-extension/apiserver-aggregation/).
* Learn how to [Extend the Kubernetes API Using Custom Resource Definitions](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/).
{{% /capture %}}