Merge pull request #37863 from krol3/merged-main-dev-1.26-02

Merge main branch into dev-1.26

commit b5aafaaac4

@@ -32,7 +32,7 @@ Kubernetes releases now generate provenance attestation files describing the sta

### HorizontalPodAutoscaler v2 graduates to GA

The HorizontalPodAutoscaler `autoscaling/v2` stable API moved to GA in 1.23. The HorizontalPodAutoscaler `autoscaling/v2beta2` API has been deprecated.

### Generic Ephemeral Volume feature graduates to GA
@@ -11,7 +11,9 @@ weight: 80

System component logs record events happening in a cluster, which can be very useful for debugging.
You can configure log verbosity to see more or less detail.
Logs can be as coarse-grained as showing errors within a component, or as fine-grained as showing
step-by-step traces of events (like HTTP access logs, pod state changes, controller actions, or
scheduler decisions).

<!-- body -->
@@ -22,9 +24,9 @@ generates log messages for the Kubernetes system components.

For more information about klog configuration, see the [Command line tool reference](/docs/reference/command-line-tools-reference/).

Kubernetes is in the process of simplifying logging in its components.
The following klog command line flags
[are deprecated](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
starting with Kubernetes 1.23 and will be removed in a future release:

- `--add-dir-header`
@@ -39,13 +41,12 @@ starting with Kubernetes 1.23 and will be removed in a future release:

- `--skip-log-headers`
- `--stderrthreshold`

Output will always be written to stderr, regardless of the output format. Output redirection is
expected to be handled by the component which invokes a Kubernetes component. This can be a POSIX
shell or a tool like systemd.

In some cases, for example a distroless container or a Windows system service, those options are
not available. Then the
[`kube-log-runner`](https://github.com/kubernetes/kubernetes/blob/d2a8a81639fcff8d1221b900f66d28361a170654/staging/src/k8s.io/component-base/logs/kube-log-runner/README.md)
binary can be used as a wrapper around a Kubernetes component to redirect
output. A prebuilt binary is included in several Kubernetes base images under
@@ -64,33 +65,36 @@ This table shows how `kube-log-runner` invocations correspond to shell redirecti

### Klog output

An example of the traditional klog native format:

```
I1025 00:15:15.525108       1 httplog.go:79] GET /api/v1/namespaces/kube-system/pods/metrics-server-v0.3.1-57c75779f-9p8wg: (1.512ms) 200 [pod_nanny/v0.0.0 (linux/amd64) kubernetes/$Format 10.56.1.19:51756]
```

The message string may contain line breaks:

```
I1025 00:15:15.525108       1 example.go:79] This is a message
which has a line break.
```
### Structured Logging

{{< feature-state for_k8s_version="v1.23" state="beta" >}}

{{< warning >}}
Migration to structured log messages is an ongoing process. Not all log messages are structured in
this version. When parsing log files, you must also handle unstructured log messages.

Log formatting and value serialization are subject to change.
{{< /warning >}}

Structured logging introduces a uniform structure in log messages allowing for programmatic
extraction of information. You can store and process structured logs with less effort and cost.
The code which generates a log message determines whether it uses the traditional unstructured
klog output or structured logging.

The default formatting of structured log messages is as text, with a format that is backward
compatible with traditional klog:

```ini
<klog header> "<message>" <key1>="<value1>" <key2>="<value2>" ...
```
@@ -105,6 +109,7 @@ I1025 00:15:15.525108       1 controller_utils.go:116] "Pod status updated" pod=

Strings are quoted. Other values are formatted with
[`%+v`](https://pkg.go.dev/fmt#hdr-Printing), which may cause log messages to
continue on the next line [depending on the data](https://github.com/kubernetes/kubernetes/issues/106428).

```
I1025 00:15:15.525108       1 example.go:116] "Example" data="This is text with a line break\nand \"quotation marks\"." someInt=1 someFloat=0.1 someStruct={StringField: First line,
second line.}
```
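Because the text format quotes strings and uses `key=value` pairs, single-line entries can be
pulled apart mechanically. A minimal illustrative sketch (not the parser Kubernetes ships), which
deliberately ignores the multi-line values discussed above:

```python
import re

# Naive parser for single-line structured klog text output:
#   <klog header>] "<message>" key="value" key=value ...
# Illustration only: multi-line values are not handled.
LINE = re.compile(r'^(?P<header>[IWEF]\d{4} [\d:.]+\s+\d+ [\w.]+:\d+)\] (?P<rest>.*)$')
KV = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_klog_line(line):
    m = LINE.match(line)
    if not m:
        return None  # unstructured or continuation line
    rest = m.group("rest")
    msg_match = re.match(r'"((?:[^"\\]|\\.)*)"\s*', rest)
    if not msg_match:
        return None  # traditional unstructured klog message
    fields = {}
    for key, value in KV.findall(rest[msg_match.end():]):
        fields[key] = value.strip('"')
    return {"message": msg_match.group(1), "fields": fields}

line = 'I1025 00:15:15.525108       1 example.go:116] "Example" someInt=1 someFloat=0.1'
print(parse_klog_line(line))
```

A real consumer would also need to buffer continuation lines, which is exactly why the warning
above says unstructured messages must be handled as well.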
@@ -164,15 +169,18 @@ I0404 18:03:31.171962  452150 logger.go:95] "another runtime" duration="1m0s"

{{< feature-state for_k8s_version="v1.19" state="alpha" >}}

{{< warning >}}
JSON output does not support many standard klog flags. For a list of unsupported klog flags, see the
[Command line tool reference](/docs/reference/command-line-tools-reference/).

Not all logs are guaranteed to be written in JSON format (for example, during process start).
If you intend to parse logs, make sure you can handle log lines that are not JSON as well.

Field names and JSON serialization are subject to change.
{{< /warning >}}

The `--logging-format=json` flag changes the format of logs from klog native format to JSON format.
Example of JSON log format (pretty printed):

```json
{
  "ts": 1580306777.04728,
@@ -187,13 +195,14 @@ Example of JSON log format (pretty printed):

```

Keys with special meaning:

* `ts` - timestamp as Unix time (required, float)
* `v` - verbosity (only for info and not for error messages, int)
* `err` - error string (optional, string)
* `msg` - message (required, string)

List of components currently supporting JSON format:

* {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}
* {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}}
* {{< glossary_tooltip term_id="kube-scheduler" text="kube-scheduler" >}}
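A hedged sketch of consuming such JSON logs while tolerating the non-JSON lines the warning above
mentions (illustrative only, not code from Kubernetes):

```python
import json

# Consume component logs written with --logging-format=json, passing
# through anything unparseable (for example, early startup output).
def parse_log_line(line):
    try:
        entry = json.loads(line)
    except json.JSONDecodeError:
        entry = None
    if not isinstance(entry, dict):
        return {"msg": line, "structured": False}
    return {
        "ts": entry.get("ts"),    # Unix time (required, float)
        "msg": entry.get("msg"),  # message (required, string)
        "v": entry.get("v"),      # verbosity (info messages only)
        "err": entry.get("err"),  # error string (optional)
        "structured": True,
    }

print(parse_log_line('{"ts": 1580306777.04728, "v": 4, "msg": "Pod status updated"}'))
print(parse_log_line("some plain text during startup"))
```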
@@ -201,8 +210,9 @@ List of components currently supporting JSON format:

### Log verbosity level

The `-v` flag controls log verbosity. Increasing the value increases the number of logged events.
Decreasing the value decreases the number of logged events. Increasing verbosity settings logs
increasingly less severe events. A verbosity setting of 0 logs only critical events.

### Log location
@@ -228,3 +238,4 @@ The `logrotate` tool rotates logs daily, or once the log size is greater than 10

* Read about [Contextual Logging](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging)
* Read about [deprecation of klog flags](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
* Read about the [Conventions for logging severity](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md)
@@ -10,7 +10,8 @@ weight: 70

<!-- overview -->

System component metrics can give a better look into what is happening inside them. Metrics are
particularly useful for building dashboards and alerts.

Kubernetes components emit metrics in [Prometheus format](https://prometheus.io/docs/instrumenting/exposition_formats/).
This format is structured plain text, designed so that people and machines can both read it.
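For illustration, a metric exposed in this format looks like the following (the metric name and
label values here are hypothetical):

```none
# HELP some_request_total Number of requests handled, a hypothetical example.
# TYPE some_request_total counter
some_request_total{code="200"} 1027
some_request_total{code="500"} 3
```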
@@ -19,7 +20,8 @@ This format is structured plain text, designed so that people and machines can b

## Metrics in Kubernetes

In most cases, metrics are available on the `/metrics` endpoint of the HTTP server. For components
that don't expose an endpoint by default, it can be enabled using the `--bind-address` flag.

Examples of those components:
@@ -29,13 +31,18 @@ Examples of those components:

* {{< glossary_tooltip term_id="kube-scheduler" text="kube-scheduler" >}}
* {{< glossary_tooltip term_id="kubelet" text="kubelet" >}}

In a production environment you may want to configure [Prometheus Server](https://prometheus.io/)
or some other metrics scraper to periodically gather these metrics and make them available in some
kind of time series database.

Note that {{< glossary_tooltip term_id="kubelet" text="kubelet" >}} also exposes metrics in
`/metrics/cadvisor`, `/metrics/resource` and `/metrics/probes` endpoints. Those metrics do not
have the same lifecycle.

If your cluster uses {{< glossary_tooltip term_id="rbac" text="RBAC" >}}, reading metrics requires
authorization via a user, group or ServiceAccount with a ClusterRole that allows accessing
`/metrics`. For example:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
@@ -55,6 +62,7 @@ Alpha metric → Stable metric → Deprecated metric → Hidden metric → De

Alpha metrics have no stability guarantees. These metrics can be modified or deleted at any time.

Stable metrics are guaranteed to not change. This means:

* A stable metric without a deprecated signature will not be deleted or renamed
* A stable metric's type will not be modified
@@ -79,45 +87,64 @@ For example:

some_counter 0
```

Hidden metrics are no longer published for scraping, but are still available for use. To use a
hidden metric, please refer to the [Show hidden metrics](#show-hidden-metrics) section.

Deleted metrics are no longer published and cannot be used.
## Show hidden metrics

As described above, admins can enable hidden metrics through a command-line flag on a specific
binary. This is intended to be used as an escape hatch for admins if they missed the migration of
the metrics deprecated in the last release.

The flag `show-hidden-metrics-for-version` takes a version for which you want to show metrics
deprecated in that release. The version is expressed as x.y, where x is the major version, y is
the minor version. The patch version is not needed even though a metric can be deprecated in a
patch release, because the metrics deprecation policy runs against the minor release.

The flag can only take the previous minor version as its value. All metrics hidden in the previous
minor release will be emitted if admins set the previous version to `show-hidden-metrics-for-version`.
Older versions are not allowed because this violates the metrics deprecation policy.
Take metric `A` as an example. Assume that `A` is deprecated in release `1.n`. According to the
metrics deprecation policy, we can reach the following conclusion:

* In release `1.n`, the metric is deprecated, and it can be emitted by default.
* In release `1.n+1`, the metric is hidden by default and it can be emitted by command line
  `show-hidden-metrics-for-version=1.n`.
* In release `1.n+2`, the metric should be removed from the codebase. No escape hatch anymore.

If you're upgrading from release `1.12` to `1.13`, but still depend on a metric `A` deprecated in
`1.12`, you should set hidden metrics via command line: `--show-hidden-metrics=1.12` and remember
to remove this metric dependency before upgrading to `1.14`.
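The release arithmetic described above can be sketched as follows (an illustrative model, not code
from Kubernetes; versions are `(major, minor)` tuples):

```python
# `dep` is the (major, minor) release in which the metric was deprecated,
# `cur` is the running release, and `flag` is the value given to
# --show-hidden-metrics-for-version (None when unset).
def metric_emitted(dep, cur, flag=None):
    if cur == dep:
        return True                # deprecated but still emitted by default
    if cur == (dep[0], dep[1] + 1):
        return flag == dep         # hidden; only the previous minor is accepted
    return False                   # removed from the codebase, no escape hatch

assert metric_emitted((1, 12), (1, 12))
assert not metric_emitted((1, 12), (1, 13))
assert metric_emitted((1, 12), (1, 13), flag=(1, 12))
assert not metric_emitted((1, 12), (1, 14), flag=(1, 12))
```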
## Disable accelerator metrics

The kubelet collects accelerator metrics through cAdvisor. To collect these metrics, for
accelerators like NVIDIA GPUs, kubelet held an open handle on the driver. This meant that in order
to perform infrastructure changes (for example, updating the driver), a cluster administrator
needed to stop the kubelet agent.

The responsibility for collecting accelerator metrics now belongs to the vendor rather than the
kubelet. Vendors must provide a container that collects metrics and exposes them to the metrics
service (for example, Prometheus).

The [`DisableAcceleratorUsageMetrics` feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
disables metrics collected by the kubelet, with a
[timeline for enabling this feature by default](https://github.com/kubernetes/enhancements/tree/411e51027db842355bd489691af897afc1a41a5e/keps/sig-node/1867-disable-accelerator-usage-metrics#graduation-criteria).
## Component metrics

### kube-controller-manager metrics

Controller manager metrics provide important insight into the performance and health of the
controller manager. These metrics include common Go language runtime metrics such as go_routine
count and controller-specific metrics such as etcd request latencies or Cloudprovider (AWS, GCE,
OpenStack) API latencies that can be used to gauge the health of a cluster.

Starting from Kubernetes 1.7, detailed Cloudprovider metrics are available for storage operations
for GCE, AWS, Vsphere and OpenStack.
These metrics can be used to monitor the health of persistent volume operations.

For example, for GCE these metrics are called:
@@ -136,9 +163,15 @@ cloudprovider_gce_api_request_duration_seconds { request = "list_disk"}

{{< feature-state for_k8s_version="v1.21" state="beta" >}}

The scheduler exposes optional metrics that report the requested resources and the desired limits
of all running pods. These metrics can be used to build capacity planning dashboards, assess
current or historical scheduling limits, quickly identify workloads that cannot schedule due to
lack of resources, and compare actual usage to the pod's request.

The kube-scheduler identifies the resource [requests and limits](/docs/concepts/configuration/manage-resources-containers/)
configured for each Pod; when either a request or limit is non-zero, the kube-scheduler reports a
metrics timeseries. The time series is labelled by:

- namespace
- pod name
- the node where the pod is scheduled or an empty string if not yet scheduled
@@ -147,32 +180,47 @@ The kube-scheduler identifies the resource [requests and limits](/docs/concepts/

- the name of the resource (for example, `cpu`)
- the unit of the resource if known (for example, `cores`)

Once a pod reaches completion (has a `restartPolicy` of `Never` or `OnFailure` and is in the
`Succeeded` or `Failed` pod phase, or has been deleted and all containers have a terminated state)
the series is no longer reported since the scheduler is now free to schedule other pods to run.
The two metrics are called `kube_pod_resource_request` and `kube_pod_resource_limit`.

The metrics are exposed at the HTTP endpoint `/metrics/resources` and require the same
authorization as the `/metrics` endpoint on the scheduler. You must use the
`--show-hidden-metrics-for-version=1.20` flag to expose these alpha stability metrics.
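For illustration, a `kube_pod_resource_request` sample might look like the following (the pod and
node names are hypothetical, and only the labels described above are shown):

```none
kube_pod_resource_request{namespace="default",pod="example-pod",node="node-1",resource="cpu",unit="cores"} 0.5
```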
## Disabling metrics

You can explicitly turn off metrics via the command line flag `--disabled-metrics`. This may be
desired if, for example, a metric is causing a performance problem. The input is a list of
disabled metrics (for example, `--disabled-metrics=metric1,metric2`).
## Metric cardinality enforcement

Metrics with unbounded dimensions could cause memory issues in the components they instrument. To
limit resource use, you can use the `--allow-label-value` command line option to dynamically
configure an allow-list of label values for a metric.

In alpha stage, the flag can only take in a series of mappings as metric label allow-list.
Each mapping is of the format `<metric_name>,<label_name>=<allowed_labels>` where
`<allowed_labels>` is a comma-separated list of acceptable label values.

The overall format looks like:

```none
--allow-label-value <metric_name>,<label_name>='<allow_value1>, <allow_value2>...', <metric_name2>,<label_name>='<allow_value1>, <allow_value2>...', ...
```

Here is an example:

```none
--allow-label-value number_count_metric,odd_number='1,3,5', number_count_metric,even_number='2,4,6', date_gauge_metric,weekend='Saturday,Sunday'
```
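A hedged sketch of how such a mapping string could be parsed (illustrative only; not the parser
used by Kubernetes components):

```python
import re

# Parse mappings of the form
#   <metric_name>,<label_name>='<allow_value1>, <allow_value2>...'
# into {(metric, label): {allowed values}}.
def parse_allow_list(spec):
    allow = {}
    for metric, label, values in re.findall(r"(\w+),(\w+)='([^']*)'", spec):
        allow[(metric, label)] = {v.strip() for v in values.split(",")}
    return allow

spec = ("number_count_metric,odd_number='1,3,5', "
        "number_count_metric,even_number='2,4,6', "
        "date_gauge_metric,weekend='Saturday,Sunday'")
print(parse_allow_list(spec))
```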
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* Read about the [Prometheus text format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format) for metrics
|
||||
* Read about the [Prometheus text format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format)
|
||||
for metrics
|
||||
* See the list of [stable Kubernetes metrics](https://github.com/kubernetes/kubernetes/blob/master/test/instrumentation/testdata/stable-metrics-list.yaml)
|
||||
* Read about the [Kubernetes deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior)
|
||||
|
||||
|
|
|
@@ -7,79 +7,132 @@ weight: 10

---

<!-- overview -->

This document highlights and consolidates configuration best practices that are introduced
throughout the user guide, Getting Started documentation, and examples.

This is a living document. If you think of something that is not on this list but might be useful
to others, please don't hesitate to file an issue or submit a PR.

<!-- body -->
## General Configuration Tips

- When defining configurations, specify the latest stable API version.

- Configuration files should be stored in version control before being pushed to the cluster. This
  allows you to quickly roll back a configuration change if necessary. It also aids cluster
  re-creation and restoration.

- Write your configuration files using YAML rather than JSON. Though these formats can be used
  interchangeably in almost all scenarios, YAML tends to be more user-friendly.

- Group related objects into a single file whenever it makes sense. One file is often easier to
  manage than several. See the
  [guestbook-all-in-one.yaml](https://github.com/kubernetes/examples/tree/master/guestbook/all-in-one/guestbook-all-in-one.yaml)
  file as an example of this syntax.

- Note also that many `kubectl` commands can be called on a directory. For example, you can call
  `kubectl apply` on a directory of config files.

- Don't specify default values unnecessarily: simple, minimal configuration will make errors less likely.

- Put object descriptions in annotations, to allow better introspection.
## "Naked" Pods versus ReplicaSets, Deployments, and Jobs {#naked-pods-vs-replicasets-deployments-and-jobs}
|
||||
|
||||
- Don't use naked Pods (that is, Pods not bound to a [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) or [Deployment](/docs/concepts/workloads/controllers/deployment/)) if you can avoid it. Naked Pods will not be rescheduled in the event of a node failure.
|
||||
|
||||
A Deployment, which both creates a ReplicaSet to ensure that the desired number of Pods is always available, and specifies a strategy to replace Pods (such as [RollingUpdate](/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment)), is almost always preferable to creating Pods directly, except for some explicit [`restartPolicy: Never`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) scenarios. A [Job](/docs/concepts/workloads/controllers/job/) may also be appropriate.
|
||||
- Don't use naked Pods (that is, Pods not bound to a [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) or
|
||||
[Deployment](/docs/concepts/workloads/controllers/deployment/)) if you can avoid it. Naked Pods
|
||||
will not be rescheduled in the event of a node failure.
|
||||
|
||||
A Deployment, which both creates a ReplicaSet to ensure that the desired number of Pods is
|
||||
always available, and specifies a strategy to replace Pods (such as
|
||||
[RollingUpdate](/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment)), is
|
||||
almost always preferable to creating Pods directly, except for some explicit
|
||||
[`restartPolicy: Never`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) scenarios.
|
||||
A [Job](/docs/concepts/workloads/controllers/job/) may also be appropriate.
|
||||
|
||||
## Services

- Create a [Service](/docs/concepts/services-networking/service/) before its corresponding backend
  workloads (Deployments or ReplicaSets), and before any workloads that need to access it.
  When Kubernetes starts a container, it provides environment variables pointing to all the Services
  which were running when the container was started. For example, if a Service named `foo` exists,
  all containers will get the following variables in their initial environment:

  ```shell
  FOO_SERVICE_HOST=<the host the Service is running on>
  FOO_SERVICE_PORT=<the port the Service is running on>
  ```

  *This does imply an ordering requirement* - any `Service` that a `Pod` wants to access must be
  created before the `Pod` itself, or else the environment variables will not be populated.
  DNS does not have this restriction.
||||
- An optional (though strongly recommended) [cluster add-on](/docs/concepts/cluster-administration/addons/) is a DNS server. The
|
||||
DNS server watches the Kubernetes API for new `Services` and creates a set of DNS records for each. If DNS has been enabled throughout the cluster then all `Pods` should be able to do name resolution of `Services` automatically.
|
||||
- An optional (though strongly recommended) [cluster add-on](/docs/concepts/cluster-administration/addons/)
|
||||
is a DNS server. The DNS server watches the Kubernetes API for new `Services` and creates a set
|
||||
of DNS records for each. If DNS has been enabled throughout the cluster then all `Pods` should be
|
||||
able to do name resolution of `Services` automatically.

- Don't specify a `hostPort` for a Pod unless it is absolutely necessary. When you bind a Pod to a
  `hostPort`, it limits the number of places the Pod can be scheduled, because each <`hostIP`,
  `hostPort`, `protocol`> combination must be unique. If you don't specify the `hostIP` and
  `protocol` explicitly, Kubernetes will use `0.0.0.0` as the default `hostIP` and `TCP` as the
  default `protocol`.

  If you only need access to the port for debugging purposes, you can use the
  [apiserver proxy](/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls)
  or [`kubectl port-forward`](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).

  If you explicitly need to expose a Pod's port on the node, consider using a
  [NodePort](/docs/concepts/services-networking/service/#type-nodeport) Service before resorting to
  `hostPort`.
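
  A minimal NodePort sketch (the names and ports here are illustrative, not from this page):

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: my-app               # hypothetical Service name
  spec:
    type: NodePort
    selector:
      app: my-app              # matches the Pods to expose
    ports:
    - port: 80                 # Service port inside the cluster
      targetPort: 8080         # container port on the selected Pods
      # nodePort: 30080        # optional; otherwise allocated from 30000-32767
  ```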

- Avoid using `hostNetwork`, for the same reasons as `hostPort`.

- Use [headless Services](/docs/concepts/services-networking/service/#headless-services)
  (which have a `ClusterIP` of `None`) for service discovery when you don't need `kube-proxy`
  load balancing.
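
  For example, a headless Service is simply a Service with `clusterIP: None` (a sketch with
  illustrative names):

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: my-db                # hypothetical name
  spec:
    clusterIP: None            # headless: DNS returns the selected Pod IPs directly
    selector:
      app: my-db
    ports:
    - port: 5432
  ```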

## Using Labels

- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify
  __semantic attributes__ of your application or Deployment, such as
  `{ app.kubernetes.io/name: MyApp, tier: frontend, phase: test, deployment: v3 }`. You can use
  these labels to select the appropriate Pods for other resources; for example, a Service that
  selects all `tier: frontend` Pods, or all `phase: test` components of
  `app.kubernetes.io/name: MyApp`. See the
  [guestbook](https://github.com/kubernetes/examples/tree/master/guestbook/) app for examples of
  this approach.
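
  As a sketch of this pattern (label values as above, Service name illustrative), a Service
  selecting the frontend tier of MyApp might look like:

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: frontend             # hypothetical name
  spec:
    selector:                  # only Pods carrying both labels are selected
      app.kubernetes.io/name: MyApp
      tier: frontend
    ports:
    - port: 80
  ```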

  A Service can be made to span multiple Deployments by omitting release-specific labels from its
  selector. When you need to update a running service without downtime, use a
  [Deployment](/docs/concepts/workloads/controllers/deployment/).

  A desired state of an object is described by a Deployment, and if changes to that spec are
  _applied_, the deployment controller changes the actual state to the desired state at a
  controlled rate.

- Use the [Kubernetes common labels](/docs/concepts/overview/working-with-objects/common-labels/)
  for common use cases. These standardized labels enrich the metadata in a way that allows tools,
  including `kubectl` and [dashboard](/docs/tasks/access-application-cluster/web-ui-dashboard), to
  work in an interoperable way.

- You can manipulate labels for debugging. Because Kubernetes controllers (such as ReplicaSet) and
  Services match to Pods using selector labels, removing the relevant labels from a Pod will stop
  it from being considered by a controller or from being served traffic by a Service. If you remove
  the labels of an existing Pod, its controller will create a new Pod to take its place. This is a
  useful way to debug a previously "live" Pod in a "quarantine" environment. To interactively
  remove or add labels, use [`kubectl label`](/docs/reference/generated/kubectl/kubectl-commands#label).

## Using kubectl

- Use `kubectl apply -f <directory>`. This looks for Kubernetes configuration in all `.yaml`,
  `.yml`, and `.json` files in `<directory>` and passes it to `apply`.

- Use label selectors for `get` and `delete` operations instead of specific object names. See the
  sections on [label selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors)
  and [using labels effectively](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively).

- Use `kubectl create deployment` and `kubectl expose` to quickly create single-container
  Deployments and Services. See
  [Use a Service to Access an Application in a Cluster](/docs/tasks/access-application-cluster/service-access-application-cluster/)
  for an example.

@@ -16,8 +16,9 @@ methods for adding custom resources and how to choose between them.
<!-- body -->

## Custom resources

A *resource* is an endpoint in the [Kubernetes API](/docs/concepts/overview/kubernetes-api/) that
stores a collection of [API objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/)
of a certain kind; for example, the built-in *pods* resource contains a collection of Pod objects.

A *custom resource* is an extension of the Kubernetes API that is not necessarily available in a default
Kubernetes installation. It represents a customization of a particular Kubernetes installation. However,

@@ -68,63 +69,77 @@ or let your API stand alone.

In a Declarative API, typically:

- Your API consists of a relatively small number of relatively small objects (resources).
- The objects define configuration of applications or infrastructure.
- The objects are updated relatively infrequently.
- Humans often need to read and write the objects.
- The main operations on the objects are CRUD-y (creating, reading, updating and deleting).
- Transactions across objects are not required: the API represents a desired state, not an exact state.

Imperative APIs are not declarative.
Signs that your API might not be declarative include:

- The client says "do this", and then gets a synchronous response back when it is done.
- The client says "do this", and then gets an operation ID back, and has to check a separate
  Operation object to determine completion of the request.
- You talk about Remote Procedure Calls (RPCs).
- Directly storing large amounts of data; for example, > a few kB per object, or > 1000s of objects.
- High bandwidth access (10s of requests per second sustained) needed.
- Store end-user data (such as images, PII, etc.) or other large-scale data processed by applications.
- The natural operations on the objects are not CRUD-y.
- The API is not easily modeled as objects.
- You chose to represent pending operations with an operation ID or an operation object.

## Should I use a ConfigMap or a custom resource?

Use a ConfigMap if any of the following apply:

* There is an existing, well-documented configuration file format, such as a `mysql.cnf` or
  `pom.xml`.
* You want to put the entire configuration into one key of a ConfigMap.
* The main use of the configuration file is for a program running in a Pod on your cluster to
  consume the file to configure itself.
* Consumers of the file prefer to consume via file in a Pod or environment variable in a pod,
  rather than the Kubernetes API.
* You want to perform rolling updates via Deployment, etc., when the file is updated.
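
For example, a ConfigMap that stores an entire configuration file under a single key might look
like this (the name and file contents are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config           # hypothetical name
data:
  mysql.cnf: |                 # whole file stored under one key
    [mysqld]
    max_connections=250
```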

{{< note >}}
Use a {{< glossary_tooltip text="Secret" term_id="secret" >}} for sensitive data, which is similar
to a ConfigMap but more secure.
{{< /note >}}

Use a custom resource (CRD or Aggregated API) if most of the following apply:

* You want to use Kubernetes client libraries and CLIs to create and update the new resource.
* You want top-level support from `kubectl`; for example, `kubectl get my-object object-name`.
* You want to build new automation that watches for updates on the new object, and then CRUD other
  objects, or vice versa.
* You want to write automation that handles updates to the object.
* You want to use Kubernetes API conventions like `.spec`, `.status`, and `.metadata`.
* You want the object to be an abstraction over a collection of controlled resources, or a
  summarization of other resources.

## Adding custom resources

Kubernetes provides two ways to add custom resources to your cluster:

- CRDs are simple and can be created without any programming.
- [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
  requires programming, but allows more control over API behaviors like how data is stored and
  conversion between API versions.
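
As a minimal sketch of the first option, a CustomResourceDefinition is declared entirely in YAML
(the group, kind, and field names here are illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com   # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
```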

Kubernetes provides these two options to meet the needs of different users, so that neither ease
of use nor flexibility is compromised.

Aggregated APIs are subordinate API servers that sit behind the primary API server, which acts as
a proxy. This arrangement is called
[API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA).
To users, the Kubernetes API appears extended.

CRDs allow users to create new types of resources without adding another API server. You do not
need to understand API Aggregation to use CRDs.

Regardless of how they are installed, the new resources are referred to as Custom Resources to
distinguish them from built-in Kubernetes resources (like pods).

{{< note >}}
Avoid using a Custom Resource as data storage for application, end user, or monitoring data:

@@ -156,10 +171,14 @@ and use a controller to handle events.

## API server aggregation

Usually, each resource in the Kubernetes API requires code that handles REST requests and manages
persistent storage of objects. The main Kubernetes API server handles built-in resources like
*pods* and *services*, and can also generically handle custom resources through
[CRDs](#customresourcedefinitions).

The [aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
allows you to provide specialized implementations for your custom resources by writing and
deploying your own API server.
The main API server delegates requests to your API server for the custom resources that you handle,
making them available to all of its clients.

@@ -170,7 +189,8 @@ CRDs are easier to use. Aggregated APIs are more flexible. Choose the method tha
Typically, CRDs are a good fit if:

* You have a handful of fields
* You are using the resource within your company, or as part of a small open-source project (as
  opposed to a commercial product)

### Comparing ease of use

@@ -203,7 +223,8 @@ Aggregated APIs offer more advanced API features and customization of other feat

### Common Features

When you create a custom resource, either via a CRD or an AA, you get many features for your API,
compared to implementing it outside the Kubernetes platform:

| Feature | What it does |
| ------- | ------------ |

@@ -228,42 +249,51 @@ There are several points to be aware of before adding a custom resource to your

### Third party code and new points of failure

While creating a CRD does not automatically add any new points of failure (for example, by causing
third party code to run on your API server), packages (for example, Charts) or other installation
bundles often include CRDs as well as a Deployment of third-party code that implements the
business logic for a new custom resource.

Installing an Aggregated API server always involves running a new Deployment.

### Storage

Custom resources consume storage space in the same way that ConfigMaps do. Creating too many
custom resources may overload your API server's storage space.

Aggregated API servers may use the same storage as the main API server, in which case the same
warning applies.

### Authentication, authorization, and auditing

CRDs always use the same authentication, authorization, and audit logging as the built-in
resources of your API server.

If you use RBAC for authorization, most RBAC roles will not grant access to the new resources
(except the cluster-admin role or any role created with wildcard rules). You'll need to explicitly
grant access to the new resources. CRDs and Aggregated APIs often come bundled with new role
definitions for the types they add.

Aggregated API servers may or may not use the same authentication, authorization, and auditing as
the primary API server.

## Accessing a custom resource

Kubernetes [client libraries](/docs/reference/using-api/client-libraries/) can be used to access
custom resources. Not all client libraries support custom resources. The _Go_ and _Python_ client
libraries do.

When you add a custom resource, you can access it using:

- `kubectl`
- The Kubernetes dynamic client.
- A REST client that you write.
- A client generated using [Kubernetes client generation tools](https://github.com/kubernetes/code-generator)
  (generating one is an advanced undertaking, but some projects may provide a client along with
  the CRD or AA).

## {{% heading "whatsnext" %}}

* Learn how to [Extend the Kubernetes API with the aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).
* Learn how to [Extend the Kubernetes API with CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/).

@@ -1,6 +1,8 @@
---
title: Device Plugins
description: >
  Device plugins let you configure your cluster with support for devices or resources that require
  vendor-specific setup, such as GPUs, NICs, FPGAs, or non-volatile main memory.
content_type: concept
weight: 20
---

@@ -33,12 +35,12 @@ service Registration {
A device plugin can register itself with the kubelet through this gRPC service.
During the registration, the device plugin needs to send:

* The name of its Unix socket.
* The Device Plugin API version against which it was built.
* The `ResourceName` it wants to advertise. Here `ResourceName` needs to follow the
  [extended resource naming scheme](/docs/concepts/configuration/manage-resources-containers/#extended-resources)
  as `vendor-domain/resourcetype`.
  (For example, an NVIDIA GPU is advertised as `nvidia.com/gpu`.)
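
Once a plugin advertises such a resource, Pods can request it by that name in their resource
limits; a sketch (the image is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-demo
spec:
  containers:
  - name: demo
    image: example.com/cuda-demo:latest   # hypothetical image
    resources:
      limits:
        nvidia.com/gpu: 1    # extended resource advertised by the device plugin
```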

Following a successful registration, the device plugin sends the kubelet the
list of devices it manages, and the kubelet is then in charge of advertising those

@@ -133,12 +135,12 @@ The general workflow of a device plugin includes the following steps:
path `/var/lib/kubelet/device-plugins/kubelet.sock`.

* After successfully registering itself, the device plugin runs in serving mode, during which it keeps
  monitoring device health and reports back to the kubelet upon any device state changes.
  It is also responsible for serving `Allocate` gRPC requests. During `Allocate`, the device plugin may
  do device-specific preparation; for example, GPU cleanup or QRNG initialization.
  If the operations succeed, the device plugin returns an `AllocateResponse` that contains container
  runtime configurations for accessing the allocated devices. The kubelet passes this information
  to the container runtime.

### Handling kubelet restarts

@@ -156,8 +158,7 @@ The canonical directory `/var/lib/kubelet/device-plugins` requires privileged ac
so a device plugin must run in a privileged security context.

If you're deploying a device plugin as a DaemonSet, `/var/lib/kubelet/device-plugins`
must be mounted as a {{< glossary_tooltip term_id="volume" >}}
in the plugin's [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core).

If you choose the DaemonSet approach you can rely on Kubernetes to: place the device plugin's
Pod onto Nodes, to restart the daemon Pod after failure, and to help automate upgrades.
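
A minimal DaemonSet sketch for such a plugin (the name and image are illustrative) runs privileged
and mounts the device-plugin directory from the host:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-device-plugin       # hypothetical plugin name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: my-device-plugin
  template:
    metadata:
      labels:
        name: my-device-plugin
    spec:
      containers:
      - name: my-device-plugin
        image: example.com/my-device-plugin:1.0   # hypothetical image
        securityContext:
          privileged: true     # needed to access the kubelet socket directory
        volumeMounts:
        - name: device-plugin
          mountPath: /var/lib/kubelet/device-plugins
      volumes:
      - name: device-plugin
        hostPath:
          path: /var/lib/kubelet/device-plugins
```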

@@ -214,7 +215,8 @@ service PodResourcesLister {

The `List` endpoint provides information on resources of running pods, with details such as the
id of exclusively allocated CPUs, device id as it was reported by device plugins and id of
the NUMA node where these devices are allocated. Also, for NUMA-based machines, it contains the
information about memory and hugepages reserved for a container.

```gRPC
// ListPodResourcesResponse is the response returned by List function

@@ -285,6 +287,7 @@ conjunction with the List() endpoint. The result obtained by `GetAllocatableReso
the same unless the underlying resources exposed to kubelet change. This happens rarely but when
it does (for example: hotplug/hotunplug, device health changes), client is expected to call
`GetAllocatableResources` endpoint.

However, calling `GetAllocatableResources` endpoint is not sufficient in case of cpu and/or memory
update and Kubelet needs to be restarted to reflect the correct resource capacity and allocatable.
{{< /note >}}

@@ -297,20 +300,22 @@ message AllocatableResourcesResponse {
    repeated int64 cpu_ids = 2;
    repeated ContainerMemory memory = 3;
}

```

Starting from Kubernetes v1.23, `GetAllocatableResources` is enabled by default.
You can disable it by turning off the `KubeletPodResourcesGetAllocatable`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/).

Preceding Kubernetes v1.23, to enable this feature `kubelet` must be started with the following flag:

```
--feature-gates=KubeletPodResourcesGetAllocatable=true
```

`ContainerDevices` do expose the topology information declaring to which NUMA cells the device is
affine. The NUMA cells are identified using an opaque integer ID, whose value is consistent with
what device plugins report
[when they register themselves to the kubelet](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#device-plugin-integration-with-the-topology-manager).
|
||||
|
||||
The gRPC service is served over a unix socket at `/var/lib/kubelet/pod-resources/kubelet.sock`.
Monitoring agents for device plugin resources can be deployed as a daemon, or as a DaemonSet.

@@ -320,15 +325,17 @@ DaemonSet, `/var/lib/kubelet/pod-resources` must be mounted as a
{{< glossary_tooltip term_id="volume" >}} in the device monitoring agent's
[PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core).
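For illustration, a minimal sketch of such a monitoring-agent DaemonSet follows; the name, labels, and image are hypothetical, and only the `hostPath` mount of the pod-resources socket directory reflects the requirement described above.

```yaml
# Hypothetical device monitoring agent; mounts the kubelet pod-resources
# directory so the agent can reach the gRPC socket at
# /var/lib/kubelet/pod-resources/kubelet.sock.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: device-monitoring-agent   # hypothetical name
spec:
  selector:
    matchLabels:
      app: device-monitoring-agent
  template:
    metadata:
      labels:
        app: device-monitoring-agent
    spec:
      containers:
      - name: agent
        image: example.com/monitoring-agent:latest   # hypothetical image
        volumeMounts:
        - name: pod-resources
          mountPath: /var/lib/kubelet/pod-resources
      volumes:
      - name: pod-resources
        hostPath:
          path: /var/lib/kubelet/pod-resources
```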
Support for the `PodResourcesLister` service requires the `KubeletPodResources`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled.
It is enabled by default starting with Kubernetes 1.15 and is v1 since Kubernetes 1.20.
## Device plugin integration with the Topology Manager

{{< feature-state for_k8s_version="v1.18" state="beta" >}}
The Topology Manager is a Kubelet component that allows resources to be coordinated in a
topology-aligned manner. In order to do this, the Device Plugin API was extended to include a
`TopologyInfo` struct.

```gRPC
message TopologyInfo {

@@ -339,11 +346,17 @@ message NUMANode {
  int64 ID = 1;
}
```
Device Plugins that wish to leverage the Topology Manager can send back a populated `TopologyInfo`
struct as part of the device registration, along with the device IDs and the health of the device.
The device manager will then use this information to consult with the Topology Manager and make
resource assignment decisions.

`TopologyInfo` supports setting a `nodes` field to either `nil` or a list of NUMA nodes. This
allows the Device Plugin to advertise a device that spans multiple NUMA nodes.

Setting `TopologyInfo` to `nil` or providing an empty list of NUMA nodes for a given device
indicates that the Device Plugin does not have a NUMA affinity preference for that device.
An example `TopologyInfo` struct populated for a device by a Device Plugin:

@@ -358,8 +371,10 @@ pluginapi.Device{ID: "25102017", Health: pluginapi.Healthy, Topology:&pluginapi.

Here are some examples of device plugin implementations:

* The [AMD GPU device plugin](https://github.com/RadeonOpenCompute/k8s-device-plugin)
* The [Intel device plugins](https://github.com/intel/intel-device-plugins-for-kubernetes) for
  Intel GPU, FPGA, QAT, VPU, SGX, DSA, DLB and IAA devices
* The [KubeVirt device plugins](https://github.com/kubevirt/kubernetes-device-plugins) for
  hardware-assisted virtualization
* The [NVIDIA GPU device plugin for Container-Optimized OS](https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/cmd/nvidia_gpu)
* The [RDMA device plugin](https://github.com/hustcat/k8s-rdma-device-plugin)
* The [SocketCAN device plugin](https://github.com/collabora/k8s-socketcan)
@@ -367,11 +382,13 @@ Here are some examples of device plugin implementations:

* The [SR-IOV Network device plugin](https://github.com/intel/sriov-network-device-plugin)
* The [Xilinx FPGA device plugins](https://github.com/Xilinx/FPGA_as_a_Service/tree/master/k8s-device-plugin) for Xilinx FPGA devices
## {{% heading "whatsnext" %}}

* Learn about [scheduling GPU resources](/docs/tasks/manage-gpus/scheduling-gpus/) using device
  plugins
* Learn about [advertising extended resources](/docs/tasks/administer-cluster/extended-resource-node/)
  on a node
* Learn about the [Topology Manager](/docs/tasks/administer-cluster/topology-manager/)
* Read about using [hardware acceleration for TLS ingress](/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/)
  with Kubernetes
@@ -12,9 +12,12 @@ weight: 10

<!-- overview -->

Kubernetes {{< skew currentVersion >}} supports [Container Network Interface](https://github.com/containernetworking/cni)
(CNI) plugins for cluster networking. You must use a CNI plugin that is compatible with your
cluster and that suits your needs. Different plugins are available (both open- and closed-source)
in the wider Kubernetes ecosystem.
A CNI plugin is required to implement the
[Kubernetes network model](/docs/concepts/services-networking/#the-kubernetes-network-model).

You must use a CNI plugin that is compatible with the
[v0.4.0](https://github.com/containernetworking/cni/blob/spec-v0.4.0/SPEC.md) or later
@@ -26,43 +29,62 @@ CNI specification (plugins can be compatible with multiple spec versions).

## Installation

A Container Runtime, in the networking context, is a daemon on a node configured to provide CRI
Services for kubelet. In particular, the Container Runtime must be configured to load the CNI
plugins required to implement the Kubernetes network model.
{{< note >}}
Prior to Kubernetes 1.24, the CNI plugins could also be managed by the kubelet using the
`cni-bin-dir` and `network-plugin` command-line parameters.
These command-line parameters were removed in Kubernetes 1.24, with management of the CNI no
longer in scope for kubelet.

See [Troubleshooting CNI plugin-related errors](/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/)
if you are facing issues following the removal of dockershim.
{{< /note >}}
For specific information about how a Container Runtime manages the CNI plugins, see the
documentation for that Container Runtime, for example:

- [containerd](https://github.com/containerd/containerd/blob/main/script/setup/install-cni)
- [CRI-O](https://github.com/cri-o/cri-o/blob/main/contrib/cni/README.md)

For specific information about how to install and manage a CNI plugin, see the documentation for
that plugin or [networking provider](/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model).
## Network Plugin Requirements

For plugin developers and users who regularly build or deploy Kubernetes, the plugin may also need
specific configuration to support kube-proxy. The iptables proxy depends on iptables, and the
plugin may need to ensure that container traffic is made available to iptables. For example, if
the plugin connects containers to a Linux bridge, the plugin must set the
`net/bridge/bridge-nf-call-iptables` sysctl to `1` to ensure that the iptables proxy functions
correctly. If the plugin does not use a Linux bridge, but uses something like Open vSwitch or
some other mechanism instead, it should ensure container traffic is appropriately routed for the
proxy.

By default, if no kubelet network plugin is specified, the `noop` plugin is used, which sets
`net/bridge/bridge-nf-call-iptables=1` to ensure simple configurations (like Docker with a bridge)
work correctly with the iptables proxy.
### Loopback CNI

In addition to the CNI plugin installed on the nodes for implementing the Kubernetes network
model, Kubernetes also requires the container runtimes to provide a loopback interface `lo`, which
is used for each sandbox (pod sandboxes, vm sandboxes, ...).
Implementing the loopback interface can be accomplished by re-using the
[CNI loopback plugin](https://github.com/containernetworking/plugins/blob/master/plugins/main/loopback/loopback.go)
or by developing your own code to achieve this (see
[this example from CRI-O](https://github.com/cri-o/ocicni/blob/release-1.24/pkg/ocicni/util_linux.go#L91)).
### Support hostPort

The CNI networking plugin supports `hostPort`. You can use the official
[portmap](https://github.com/containernetworking/plugins/tree/master/plugins/meta/portmap)
plugin offered by the CNI plugin team or use your own plugin with portMapping functionality.

If you want to enable `hostPort` support, you must specify `portMappings capability` in your
`cni-conf-dir`. For example:

```json
{

@@ -97,11 +119,13 @@ For example:
**Experimental Feature**

The CNI networking plugin also supports pod ingress and egress traffic shaping. You can use the
official [bandwidth](https://github.com/containernetworking/plugins/tree/master/plugins/meta/bandwidth)
plugin offered by the CNI plugin team or use your own plugin with bandwidth control functionality.

If you want to enable traffic shaping support, you must add the `bandwidth` plugin to your CNI
configuration file (default `/etc/cni/net.d`) and ensure that the binary is included in your CNI
bin dir (default `/opt/cni/bin`).
```json
{

@@ -132,8 +156,8 @@ If you want to enable traffic shaping support, you must add the `bandwidth` plug
}
```

Now you can add the `kubernetes.io/ingress-bandwidth` and `kubernetes.io/egress-bandwidth`
annotations to your Pod. For example:
```yaml
apiVersion: v1

@@ -146,3 +170,4 @@ metadata:
```
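The Pod example above is truncated by the diff; for illustration, a complete manifest using the bandwidth annotations might look like the following (the Pod name and image are placeholders):

```yaml
# Hypothetical Pod limited to 1 megabit per second of ingress and egress traffic
# via the bandwidth CNI plugin annotations.
apiVersion: v1
kind: Pod
metadata:
  name: bandwidth-limited-pod        # hypothetical name
  annotations:
    kubernetes.io/ingress-bandwidth: 1M
    kubernetes.io/egress-bandwidth: 1M
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.8   # placeholder image
```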
## {{% heading "whatsnext" %}}
@@ -115,7 +115,10 @@ detail the structure of that `.status` field, and its content for each different

## {{% heading "whatsnext" %}}

Learn more about the following:
* [Pods](https://kubernetes.io/docs/concepts/workloads/pods/) which are the most important basic Kubernetes objects.
* [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) objects.
* [Controllers](https://kubernetes.io/docs/concepts/architecture/controller/) in Kubernetes.
* [Kubernetes API overview](https://kubernetes.io/docs/reference/using-api/) which explains some more API concepts.
* [kubectl](https://kubernetes.io/docs/reference/kubectl/) and [kubectl commands](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands).
@@ -49,13 +49,15 @@ blind to the existence or non-existence of host ports.

Kubernetes networking addresses four concerns:
- Containers within a Pod [use networking to communicate](/docs/concepts/services-networking/dns-pod-service/) via loopback.
- Cluster networking provides communication between different Pods.
- The [Service](/docs/concepts/services-networking/service/) API lets you
  [expose an application running in Pods](/docs/tutorials/services/connect-applications-service/)
  to be reachable from outside your cluster.
- [Ingress](/docs/concepts/services-networking/ingress/) provides extra functionality
  specifically for exposing HTTP applications, websites and APIs.
- You can also use Services to
  [publish services only for consumption inside your cluster](/docs/concepts/services-networking/service-traffic-policy/).

The [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/) tutorial lets you learn about Services and Kubernetes networking with a hands-on example.

[Cluster Networking](/docs/concepts/cluster-administration/networking/) explains how to set
up networking for your cluster, and also provides an overview of the technologies involved.
@@ -273,4 +273,4 @@ networking and topology-aware routing.

## {{% heading "whatsnext" %}}

* Follow the [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/) tutorial
@@ -65,4 +65,4 @@ Kubernetes considers all endpoints.

* Read about [Topology Aware Hints](/docs/concepts/services-networking/topology-aware-hints)
* Read about [Service External Traffic Policy](/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip)
* Follow the [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/) tutorial
@@ -145,7 +145,6 @@ spec:
      targetPort: http-web-svc
```

This works even if there is a mixture of Pods in the Service using a single
configured name, with the same network protocol available via different
port numbers. This offers a lot of flexibility for deploying and evolving
@@ -353,7 +352,7 @@ thus is only available to use as-is.

Note that the kube-proxy starts up in different modes, which are determined by its configuration.
- The kube-proxy's configuration is done via a ConfigMap, and the ConfigMap for kube-proxy
  effectively deprecates the behavior for almost all of the flags for the kube-proxy.
- The ConfigMap for the kube-proxy does not support live reloading of configuration.
- The ConfigMap parameters for the kube-proxy cannot all be validated and verified on startup.
  For example, if your operating system doesn't allow you to run iptables commands,
@@ -420,7 +419,7 @@ The IPVS proxy mode is based on netfilter hook function that is similar to
iptables mode, but uses a hash table as the underlying data structure and works
in the kernel space.
That means kube-proxy in IPVS mode redirects traffic with lower latency than
kube-proxy in iptables mode, with much better performance when synchronizing
proxy rules. Compared to the other proxy modes, IPVS mode also supports a
higher throughput of network traffic.
@@ -662,7 +661,8 @@ Kubernetes `ServiceTypes` allow you to specify what kind of Service you want.

* [`ExternalName`](#externalname): Maps the Service to the contents of the
  `externalName` field (e.g. `foo.bar.example.com`), by returning a `CNAME` record
  with its value. No proxying of any kind is set up.

  {{< note >}}
  You need either `kube-dns` version 1.7 or CoreDNS version 0.0.8 or higher
  to use the `ExternalName` type.
  {{< /note >}}
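For illustration, a minimal `ExternalName` Service might look like this (the Service name and namespace are hypothetical):

```yaml
# Hypothetical ExternalName Service; resolving my-service in the prod
# namespace returns a CNAME record for foo.bar.example.com.
apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical name
  namespace: prod         # hypothetical namespace
spec:
  type: ExternalName
  externalName: foo.bar.example.com
```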
@@ -740,11 +740,11 @@ kube-proxy only selects the loopback interface for NodePort Services.

The default for `--nodeport-addresses` is an empty list.
This means that kube-proxy should consider all available network interfaces for NodePort.
(That's also compatible with earlier Kubernetes releases.)

{{< note >}}
This Service is visible as `<NodeIP>:spec.ports[*].nodePort` and `.spec.clusterIP:spec.ports[*].port`.
If the `--nodeport-addresses` flag for kube-proxy or the equivalent field
in the kube-proxy configuration file is set, `<NodeIP>` would be a filtered node IP address (or possibly IP addresses).
{{< /note >}}
### Type LoadBalancer {#loadbalancer}

@@ -793,7 +793,6 @@ _As an alpha feature_, you can configure a load balanced Service to
[omit](#load-balancer-nodeport-allocation) assigning a node port, provided that the
cloud provider implementation supports this.

{{< note >}}
On **Azure**, if you want to use a user-specified public type `loadBalancerIP`, you first need
@@ -1395,7 +1394,7 @@ fail with a message indicating an IP address could not be allocated.

In the control plane, a background controller is responsible for creating that
map (needed to support migrating from older versions of Kubernetes that used
in-memory locking). Kubernetes also uses controllers to check for invalid
assignments (e.g. due to administrator intervention) and for cleaning up allocated
IP addresses that are no longer used by any Services.

#### IP address ranges for `type: ClusterIP` Services {#service-ip-static-sub-range}
@@ -1471,7 +1470,7 @@ through a load-balancer, though in those cases the client IP does get altered.

#### IPVS

iptables operations slow down dramatically in a large-scale cluster, e.g. one with 10,000 Services.
IPVS is designed for load balancing and based on in-kernel hash tables.
So you can achieve performance consistency with a large number of Services from IPVS-based kube-proxy.
Meanwhile, IPVS-based kube-proxy has more sophisticated load balancing algorithms
@@ -1553,6 +1552,6 @@ followed by the data from the client.

## {{% heading "whatsnext" %}}

* Follow the [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/) tutorial
* Read about [Ingress](/docs/concepts/services-networking/ingress/)
* Read about [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/)
@@ -159,4 +159,4 @@ zone.

## {{% heading "whatsnext" %}}

* Follow the [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/) tutorial
@@ -13,65 +13,107 @@ weight: 60

<!-- overview -->

In Kubernetes, a _VolumeSnapshot_ represents a snapshot of a volume on a storage
system. This document assumes that you are already familiar with Kubernetes
[persistent volumes](/docs/concepts/storage/persistent-volumes/).
<!-- body -->

## Introduction

Similar to how API resources `PersistentVolume` and `PersistentVolumeClaim` are
used to provision volumes for users and administrators, `VolumeSnapshotContent`
and `VolumeSnapshot` API resources are provided to create volume snapshots for
users and administrators.
A `VolumeSnapshotContent` is a snapshot taken from a volume in the cluster that
has been provisioned by an administrator. It is a resource in the cluster just
like a PersistentVolume is a cluster resource.

A `VolumeSnapshot` is a request for a snapshot of a volume by a user. It is similar
to a PersistentVolumeClaim.
`VolumeSnapshotClass` allows you to specify different attributes belonging to a
`VolumeSnapshot`. These attributes may differ among snapshots taken from the same
volume on the storage system and therefore cannot be expressed by using the same
`StorageClass` of a `PersistentVolumeClaim`.
Volume snapshots provide Kubernetes users with a standardized way to copy a volume's
contents at a particular point in time without creating an entirely new volume. This
functionality enables, for example, database administrators to back up databases before
performing edit or delete modifications.
Users need to be aware of the following when using this feature:

- API Objects `VolumeSnapshot`, `VolumeSnapshotContent`, and `VolumeSnapshotClass`
  are {{< glossary_tooltip term_id="CustomResourceDefinition" text="CRDs" >}}, not
  part of the core API.
- `VolumeSnapshot` support is only available for CSI drivers.
- As part of the deployment process of `VolumeSnapshot`, the Kubernetes team provides
  a snapshot controller to be deployed into the control plane, and a sidecar helper
  container called csi-snapshotter to be deployed together with the CSI driver.
  The snapshot controller watches `VolumeSnapshot` and `VolumeSnapshotContent` objects
  and is responsible for the creation and deletion of `VolumeSnapshotContent` objects.
  The sidecar csi-snapshotter watches `VolumeSnapshotContent` objects and triggers
  `CreateSnapshot` and `DeleteSnapshot` operations against a CSI endpoint.
- There is also a validating webhook server which provides tightened validation on
  snapshot objects. This should be installed by the Kubernetes distros along with
  the snapshot controller and CRDs, not CSI drivers. It should be installed in all
  Kubernetes clusters that have the snapshot feature enabled.
- CSI drivers may or may not have implemented the volume snapshot functionality.
  The CSI drivers that have provided support for volume snapshot will likely use
  the csi-snapshotter. See [CSI Driver documentation](https://kubernetes-csi.github.io/docs/) for details.
- The CRDs and snapshot controller installations are the responsibility of the Kubernetes distribution.
## Lifecycle of a volume snapshot and volume snapshot content

`VolumeSnapshotContents` are resources in the cluster. `VolumeSnapshots` are requests
for those resources. The interaction between `VolumeSnapshotContents` and `VolumeSnapshots`
follows this lifecycle:
### Provisioning Volume Snapshot

There are two ways snapshots may be provisioned: pre-provisioned or dynamically provisioned.

#### Pre-provisioned {#static}

A cluster administrator creates a number of `VolumeSnapshotContents`. They carry the details
of the real volume snapshot on the storage system which is available for use by cluster users.
They exist in the Kubernetes API and are available for consumption.
#### Dynamic

Instead of using a pre-existing snapshot, you can request that a snapshot be dynamically
taken from a PersistentVolumeClaim. The [VolumeSnapshotClass](/docs/concepts/storage/volume-snapshot-classes/)
specifies storage provider-specific parameters to use when taking a snapshot.

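As an illustrative sketch (the snapshot, class, and claim names below are placeholders, not objects defined elsewhere on this page), a dynamically provisioned snapshot request looks like:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-dynamic-snapshot
spec:
  volumeSnapshotClassName: example-snapclass
  source:
    persistentVolumeClaimName: example-pvc
```

The snapshot controller notices the new object and, through the CSI driver named in the class, triggers creation of a matching `VolumeSnapshotContent`.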
### Binding

The snapshot controller handles the binding of a `VolumeSnapshot` object with an appropriate
`VolumeSnapshotContent` object, in both pre-provisioned and dynamically provisioned scenarios.
The binding is a one-to-one mapping.

In the case of pre-provisioned binding, the VolumeSnapshot will remain unbound until the
requested VolumeSnapshotContent object is created.

### Persistent Volume Claim as Snapshot Source Protection

The purpose of this protection is to ensure that in-use
{{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}}
API objects are not removed from the system while a snapshot is being taken from it
(as this may result in data loss).

While a snapshot is being taken of a PersistentVolumeClaim, that PersistentVolumeClaim
is in-use. If you delete a PersistentVolumeClaim API object in active use as a snapshot
source, the PersistentVolumeClaim object is not removed immediately. Instead, removal of
the PersistentVolumeClaim object is postponed until the snapshot is readyToUse or aborted.

### Delete

Deletion is triggered by deleting the `VolumeSnapshot` object, and the `DeletionPolicy`
will be followed. If the `DeletionPolicy` is `Delete`, then the underlying storage snapshot
will be deleted along with the `VolumeSnapshotContent` object. If the `DeletionPolicy` is
`Retain`, then both the underlying snapshot and `VolumeSnapshotContent` remain.

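The `DeletionPolicy` is typically set through the [VolumeSnapshotClass](/docs/concepts/storage/volume-snapshot-classes/) used for dynamic provisioning. A minimal sketch (the class and driver names here are placeholders):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass
driver: example.csi.k8s.io
deletionPolicy: Retain
```

With `Retain`, deleting a `VolumeSnapshot` leaves the `VolumeSnapshotContent` and the on-disk snapshot in place for an administrator to clean up or re-bind.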
## VolumeSnapshots

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-test
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: pvc-test
```

`persistentVolumeClaimName` is the name of the PersistentVolumeClaim data source
for the snapshot. This field is required for dynamically provisioning a snapshot.


A volume snapshot can request a particular class by specifying the name of a
[VolumeSnapshotClass](/docs/concepts/storage/volume-snapshot-classes/)
using the attribute `volumeSnapshotClassName`. If nothing is set, then the
default class is used if available.

For pre-provisioned snapshots, you need to specify a `volumeSnapshotContentName`
as the source for the snapshot as shown in the following example. The
`volumeSnapshotContentName` source field is required for pre-provisioned snapshots.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-snapshot
spec:
  source:
    volumeSnapshotContentName: test-content
```

## Volume Snapshot Contents

Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning,
the snapshot common controller creates `VolumeSnapshotContent` objects. Here is an example:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455
spec:
  deletionPolicy: Delete
  driver: hostpath.csi.k8s.io
  source:
    volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002
  sourceVolumeMode: Filesystem
  volumeSnapshotClassName: csi-hostpath-snapclass
  volumeSnapshotRef:
    name: new-snapshot-test
    namespace: default
    uid: 72d9a349-aacd-42d2-a240-d775650d2455
```

`volumeHandle` is the unique identifier of the volume created on the storage
backend and returned by the CSI driver during the volume creation. This field
is required for dynamically provisioning a snapshot.
It specifies the volume source of the snapshot.

For pre-provisioned snapshots, you (as cluster administrator) are responsible
for creating the `VolumeSnapshotContent` object as follows.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: new-snapshot-content-test
spec:
  deletionPolicy: Delete
  driver: hostpath.csi.k8s.io
  source:
    snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002
  sourceVolumeMode: Filesystem
  volumeSnapshotRef:
    name: new-snapshot-test
    namespace: default
```

`snapshotHandle` is the unique identifier of the volume snapshot created on
the storage backend. This field is required for the pre-provisioned snapshots.
It specifies the CSI snapshot id on the storage system that this
`VolumeSnapshotContent` represents.

`sourceVolumeMode` is the mode of the volume whose snapshot is taken. The value
of the `sourceVolumeMode` field can be either `Filesystem` or `Block`. If the
source volume mode is not specified, Kubernetes treats the snapshot as if the
source volume's mode is unknown.

`volumeSnapshotRef` is the reference of the corresponding `VolumeSnapshot`. Note that
when the `VolumeSnapshotContent` is being created as a pre-provisioned snapshot, the
`VolumeSnapshot` referenced in `volumeSnapshotRef` might not exist yet.

## Converting the volume mode of a Snapshot {#convert-volume-mode}

If the `VolumeSnapshots` API installed on your cluster supports the `sourceVolumeMode`
field, then the API has the capability to prevent unauthorized users from converting
the mode of a volume.

To check if your cluster has capability for this feature, run the following command:

```shell
kubectl get crd volumesnapshotcontent -o yaml
```

If you want to allow users to create a `PersistentVolumeClaim` from an existing
`VolumeSnapshot`, but with a different volume mode than the source, the annotation
`snapshot.storage.kubernetes.io/allowVolumeModeChange: "true"` needs to be added to
the `VolumeSnapshotContent` that corresponds to the `VolumeSnapshot`.


For pre-provisioned snapshots, `Spec.SourceVolumeMode` needs to be populated
by the cluster administrator.

An example `VolumeSnapshotContent` resource with this feature enabled would look like:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: new-snapshot-content-test
  annotations:
    snapshot.storage.kubernetes.io/allowVolumeModeChange: "true"
spec:
  deletionPolicy: Delete
  driver: hostpath.csi.k8s.io
  source:
    snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002
  sourceVolumeMode: Filesystem
  volumeSnapshotRef:
    name: new-snapshot-test
    namespace: default
```

## Provisioning Volumes from Snapshots

You can provision a new volume, pre-populated with data from a snapshot, by using
the _dataSource_ field in the `PersistentVolumeClaim` object.

For more details, see
[Volume Snapshot and Restore Volume from Snapshot](/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support).

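As a sketch of that restore flow (the claim, storage class, and snapshot names here are illustrative, not objects defined elsewhere on this page), a `PersistentVolumeClaim` that restores from a snapshot might look like:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  storageClassName: example-sc
  dataSource:
    name: example-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```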

A size limit can be specified for the default medium, which limits the capacity
of the `emptyDir` volume. The storage is allocated from [node ephemeral
storage](/docs/concepts/configuration/manage-resources-containers/#setting-requests-and-limits-for-local-ephemeral-storage).
If that is filled up from another source (for example, log files or image
overlays), the `emptyDir` may run out of capacity before this limit.

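A minimal sketch of setting that limit (the Pod name, mount path, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-limit-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      sizeLimit: 500Mi
```

If writes into `/cache` exceed `sizeLimit`, the kubelet evicts the Pod.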

---
title: Updating Reference Documentation
main_menu: true
weight: 80
---

---
title: Documenting a feature for a release
linktitle: Documenting for a release
content_type: concept
main_menu: true
weight: 20
card:
  name: contribute
  weight: 45
  title: Documenting a feature for a release
---

<!-- overview -->

make sure you add it to [Alpha/Beta Feature gates](/docs/reference/command-line-
table as part of your pull request. With new feature gates, a description of
the feature gate is also required. If your feature is GA'ed or deprecated,
make sure to move it from the
[Feature gates for Alpha/Feature](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features) table
to [Feature gates for graduated or deprecated features](/docs/reference/command-line-tools-reference/feature-gates-removed/#feature-gates-that-are-removed)
table with Alpha and Beta history intact.

After submitting at least 5 substantial pull requests and meeting the other

## Reviewers

Reviewers are responsible for reviewing open pull requests. Unlike member
feedback, the PR author must address reviewer feedback. Reviewers are members of the
[@kubernetes/sig-docs-{language}-reviews](https://github.com/orgs/kubernetes/teams?query=sig-docs)
GitHub team.

separately for reviewer status in SIG Docs.

To apply:

1. Open a pull request that adds your GitHub username to a section of the
   [OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS_ALIASES) file
   in the `kubernetes/website` repository.

If you aren't sure where to add yourself, add yourself to `sig-docs-en-reviews`.
{{< /note >}}

1. Assign the PR to one or more SIG-Docs approvers (usernames listed under
   `sig-docs-{language}-owners`).

If approved, a SIG Docs lead adds you to the appropriate GitHub team. Once added,

into the website repository. This comes with certain responsibilities.
{{< /warning >}}

- Make sure that proposed changes meet the
  [documentation content guide](/docs/contribute/style/content-guide/).

  If you ever have a question, or you're not sure about something, feel free
  to call for additional review.


<!-- overview -->

A _ServiceAccount_ provides an identity for processes that run in a Pod.

A process inside a Pod can use the identity of its associated service account to
authenticate to the cluster's API server.

For an introduction to service accounts, read [configure service accounts](/docs/tasks/configure-pod-container/configure-service-account/).

This task guide explains some of the concepts behind ServiceAccounts. The
guide also explains how to obtain or revoke tokens that represent
ServiceAccounts.

<!-- body -->

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}

To be able to follow these steps exactly, ensure you have a namespace named
`examplens`.
If you don't, create one by running:

```shell
kubectl create namespace examplens
```

## User accounts versus service accounts

Kubernetes distinguishes between the concept of a user account and a service account
for a number of reasons:

- User accounts are for humans. Service accounts are for application processes,
  which (for Kubernetes) run in containers that are part of pods.
- User accounts are intended to be global: names must be unique across all
  namespaces of a cluster. No matter what namespace you look at, a particular
  username that represents a user represents the same user.
  In Kubernetes, service accounts are namespaced: two different namespaces can
  contain ServiceAccounts that have identical names.
- Typically, a cluster's user accounts might be synchronised from a corporate
  database, where new user account creation requires special privileges and is
  tied to complex business processes. By contrast, service account creation is
  intended to be more lightweight, allowing cluster users to create service accounts
  for specific tasks on demand. Separating ServiceAccount creation from the steps to
  onboard human users makes it easier for workloads to follow the principle of
  least privilege.
- Auditing considerations for humans and service accounts may differ; the separation
  makes that easier to achieve.
- A configuration bundle for a complex system may include definition of various service
  accounts for components of that system. Because service accounts can be created
  without many constraints and have namespaced names, such configuration is
  usually portable.


## Bound service account token volume mechanism {#bound-service-account-token-volume}

{{< feature-state for_k8s_version="v1.22" state="stable" >}}

By default, the Kubernetes control plane (specifically, the
[ServiceAccount admission controller](#service-account-admission-controller))
adds a [projected volume](/docs/concepts/storage/projected-volumes/) to Pods,
and this volume includes a token for Kubernetes API access.

Here's an example of how that looks for a launched Pod:

```yaml
- name: kube-api-access-<random-suffix>
  projected:
    sources:
      - serviceAccountToken:
          path: token # must match the path the app expects
      - configMap:
          items:
            - key: ca.crt
              path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
```


That manifest snippet defines a projected volume that consists of three sources. In this case,
each source also represents a single path within that volume. The three sources are:

1. A `serviceAccountToken` source, that contains a token that the kubelet acquires from kube-apiserver.
   The kubelet fetches time-bound tokens using the TokenRequest API. A token served for a TokenRequest expires
   either when the pod is deleted or after a defined lifespan (by default, that is 1 hour).
   The token is bound to the specific Pod and has the kube-apiserver as its audience.
   This mechanism superseded an earlier mechanism that added a volume based on a Secret,
   where the Secret represented the ServiceAccount for the Pod, but did not expire.
1. A `configMap` source. The ConfigMap contains a bundle of certificate authority data. Pods can use these
   certificates to make sure that they are connecting to your cluster's kube-apiserver (and not to a middlebox
   or an accidentally misconfigured peer).
1. A `downwardAPI` source that looks up the name of the namespace containing the Pod, and makes
   that name information available to application code running inside the Pod.

Any container within the Pod that mounts this particular volume can access the above information.


{{< note >}}
There is no specific mechanism to invalidate a token issued via TokenRequest. If you no longer
trust a bound service account token for a Pod, you can delete that Pod. Deleting a Pod expires
its bound service account tokens.
{{< /note >}}

## Manual Secret management for ServiceAccounts

Versions of Kubernetes before v1.22 automatically created credentials for accessing
the Kubernetes API. This older mechanism was based on creating token Secrets that
could then be mounted into running Pods.

In more recent versions, including Kubernetes v{{< skew currentVersion >}}, API credentials
are [obtained directly](#bound-service-account-token-volume) using the
[TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) API,
and are mounted into Pods using a projected volume.
The tokens obtained using this method have bounded lifetimes, and are automatically
invalidated when the Pod they are mounted into is deleted.

You can still [manually create](/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount) a Secret to hold a service account token; for example, if you need a token that never expires.

Once you manually create a Secret and link it to a ServiceAccount, the Kubernetes control plane automatically populates the token into that Secret.

{{< note >}}
Although the manual mechanism for creating a long-lived ServiceAccount token exists,
using [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)
to obtain short-lived API access tokens is recommended instead.
{{< /note >}}

## Control plane details

### Token controller

The service account token controller runs as part of `kube-controller-manager`.
This controller acts asynchronously. It:

- watches for ServiceAccount deletion and deletes all corresponding ServiceAccount
  token Secrets.
- watches for ServiceAccount token Secret addition, and ensures the referenced
  ServiceAccount exists, and adds a token to the Secret if needed.
- watches for Secret deletion and removes a reference from the corresponding
  ServiceAccount if needed.

You must pass a service account private key file to the token controller in
the `kube-controller-manager` using the `--service-account-private-key-file`
flag. The private key is used to sign generated service account tokens.
Similarly, you must pass the corresponding public key to the `kube-apiserver`
using the `--service-account-key-file` flag. The public key will be used to
verify the tokens during authentication.

### ServiceAccount admission controller

The modification of pods is implemented via a plugin
called an [Admission Controller](/docs/reference/access-authn-authz/admission-controllers/).
It is part of the API server.
This admission controller acts synchronously to modify pods as they are created.
When this plugin is active (and it is by default on most distributions), then
it does the following when a Pod is created:

1. If the pod does not have a `.spec.serviceAccountName` set, the admission controller sets the name of the
   ServiceAccount for this incoming Pod to `default`.
1. The admission controller ensures that the ServiceAccount referenced by the incoming Pod exists. If there
   is no ServiceAccount with a matching name, the admission controller rejects the incoming Pod. That check
   applies even for the `default` ServiceAccount.
1. Provided that neither the ServiceAccount's `automountServiceAccountToken` field nor the
   Pod's `automountServiceAccountToken` field is set to `false`:
   - the admission controller mutates the incoming Pod, adding an extra
     {{< glossary_tooltip text="volume" term_id="volume" >}} that contains
     a token for API access.
   - the admission controller adds a `volumeMount` to each container in the Pod,
     skipping any containers that already have a volume mount defined for the path
     `/var/run/secrets/kubernetes.io/serviceaccount`.
     For Linux containers, that volume is mounted at `/var/run/secrets/kubernetes.io/serviceaccount`;
     on Windows nodes, the mount is at the equivalent path.
1. If the spec of the incoming Pod does not already contain any `imagePullSecrets`, then the
   admission controller adds `imagePullSecrets`, copying them from the `ServiceAccount`.

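The opt-out mentioned in step 3 can be set on either the ServiceAccount or the Pod. A minimal sketch at the Pod level (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-without-token
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```

With `automountServiceAccountToken: false`, the admission controller skips adding the token volume and the per-container `volumeMount`.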
### TokenRequest API

{{< feature-state for_k8s_version="v1.22" state="stable" >}}

You use the [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)
subresource of a ServiceAccount to obtain a time-bound token for that ServiceAccount.
You don't need to call this to obtain an API token for use within a container, since
the kubelet sets this up for you using a _projected volume_.

If you want to use the TokenRequest API from `kubectl`, see
[Manually create an API token for a ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount).

The Kubernetes control plane (specifically, the ServiceAccount admission controller)
adds a projected volume to Pods, and the kubelet ensures that this volume contains a token
that lets containers authenticate as the right ServiceAccount.

(This mechanism superseded an earlier mechanism that added a volume based on a Secret,
where the Secret represented the ServiceAccount for the Pod but did not expire.)

Here's an example of how that looks for a launched Pod:

```yaml
...
  - name: kube-api-access-<random-suffix>
    projected:
      defaultMode: 420 # decimal equivalent of octal 0644
      sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
              - key: ca.crt
                path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
              - fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
                path: namespace
```

That manifest snippet defines a projected volume that combines information from three sources:
|
||||
|
||||
1. A `serviceAccountToken` source, that contains a token that the kubelet acquires from kube-apiserver
|
||||
The kubelet fetches time-bound tokens using the TokenRequest API. A token served for a TokenRequest expires
|
||||
either when the pod is deleted or after a defined lifespan (by default, that is 1 hour).
|
||||
The token is bound to the specific Pod and has the kube-apiserver as its audience.
|
||||
1. A `configMap` source. The ConfigMap contains a bundle of certificate authority data. Pods can use these
|
||||
certificates to make sure that they are connecting to your cluster's kube-apiserver (and not to middlebox
|
||||
or an accidentally misconfigured peer).
|
||||
1. A `downwardAPI` source. This `downwardAPI` volume makes the name of the namespace container the Pod available
|
||||
to application code running inside the Pod.
|
||||
|
||||
Any container within the Pod that mounts this volume can access the above information.
|
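The projected token is a JWT, so an application can inspect its (unverified) claims, such as the expiry and audience. A minimal sketch; the token built below is fabricated for illustration and is not a real credential:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT, such as a
    projected service account token. For inspection only -- this does
    NOT verify the signature."""
    payload_b64 = token.split(".")[1]
    # JWTs use unpadded base64url; restore padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# A toy token with a fabricated payload (not a real credential):
claims = {"aud": ["https://kubernetes.default.svc"], "exp": 1700003607}
fake_token = ".".join([
    base64.urlsafe_b64encode(b'{"alg":"RS256"}').decode().rstrip("="),
    base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("="),
    "signature",
])
print(jwt_payload(fake_token)["aud"])
```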

## Create additional API tokens {#create-token}

{{< caution >}}
Only create long-lived API tokens if the [token request](#tokenrequest-api) mechanism
is not suitable. The token request mechanism provides time-limited tokens; because these
expire, they represent a lower risk to information security.
{{< /caution >}}

To create a non-expiring, persisted API token for a ServiceAccount, create a
Secret of type `kubernetes.io/service-account-token` with an annotation
referencing the ServiceAccount. The control plane then generates a long-lived token and
updates that Secret with that generated token data.

Here is a sample manifest for such a Secret:

{{< codenew file="secret/serviceaccount/mysecretname.yaml" >}}

To create a Secret based on this example, run:

```shell
kubectl -n examplens create -f https://k8s.io/examples/secret/serviceaccount/mysecretname.yaml
```

To see the details for that Secret, run:

```shell
kubectl -n examplens describe secret mysecretname
```

The output is similar to:

```
Name:           mysecretname
Namespace:      examplens
Labels:         <none>
Annotations:    kubernetes.io/service-account.name=myserviceaccount
                kubernetes.io/service-account.uid=8a85c4c4-8483-11e9-bc42-526af7764f64

Type:           kubernetes.io/service-account-token

Data
====
ca.crt:     1362 bytes
namespace:  9 bytes
token:      ...
```

If you launch a new Pod into the `examplens` namespace, it can use the `myserviceaccount`
service-account-token Secret that you just created.
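The essential fields of such a Secret can be sketched as a plain data structure. This is an illustration only; the control plane fills in the `data` field after the object is created:

```python
# Sketch of the fields that matter for a long-lived token Secret.
# The annotation is what tells the control plane which ServiceAccount
# this Secret belongs to; `data` is populated server-side.
secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {
        "name": "mysecretname",
        "namespace": "examplens",
        "annotations": {
            "kubernetes.io/service-account.name": "myserviceaccount",
        },
    },
    "type": "kubernetes.io/service-account-token",
}
print(secret["type"])
```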
## Delete/invalidate a ServiceAccount token {#delete-token}

If you know the name of the Secret that contains the token you want to remove:

```shell
kubectl delete secret name-of-secret
```

Otherwise, first find the Secret for the ServiceAccount.

```shell
# This assumes that you already have a namespace named 'examplens'
kubectl -n examplens get serviceaccount/example-automated-thing -o yaml
```

The output is similar to:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"example-automated-thing","namespace":"examplens"}}
  creationTimestamp: "2019-07-21T07:07:07Z"
  name: example-automated-thing
  namespace: examplens
  resourceVersion: "777"
  selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing
  uid: f23fd170-66f2-4697-b049-e1e266b7f835
secrets:
  - name: example-automated-thing-token-zyxwv
```

Then, delete the Secret you now know the name of:

```shell
kubectl -n examplens delete secret/example-automated-thing-token-zyxwv
```

The control plane spots that the ServiceAccount is missing its Secret,
and creates a replacement:

```shell
kubectl -n examplens get serviceaccount/example-automated-thing -o yaml
```

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"example-automated-thing","namespace":"examplens"}}
  creationTimestamp: "2019-07-21T07:07:07Z"
  name: example-automated-thing
  namespace: examplens
  resourceVersion: "1026"
  selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing
  uid: f23fd170-66f2-4697-b049-e1e266b7f835
secrets:
  - name: example-automated-thing-token-4rdrh
```

## Clean up

If you created a namespace `examplens` to experiment with, you can remove it:

```shell
kubectl delete namespace examplens
```

## Control plane details

### ServiceAccount controller

A ServiceAccount controller manages the ServiceAccounts inside namespaces, and
ensures a ServiceAccount named "default" exists in every active namespace.

### Token controller

The service account token controller runs as part of `kube-controller-manager`.
This controller acts asynchronously. It:

- watches for ServiceAccount creation and creates a corresponding
  ServiceAccount token Secret to allow API access.
- watches for ServiceAccount deletion and deletes all corresponding ServiceAccount
  token Secrets.
- watches for ServiceAccount token Secret addition, and ensures the referenced
  ServiceAccount exists, and adds a token to the Secret if needed.
- watches for Secret deletion and removes a reference from the corresponding
  ServiceAccount if needed.
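The bullet points above can be sketched as a toy reconciliation function. This is an illustration only; the real TokensController in `kube-controller-manager` watches the API server and manages real Secret objects:

```python
def reconcile_serviceaccount(event, name, secrets):
    """Toy sketch of the token controller's bookkeeping.

    `secrets` maps ServiceAccount name -> list of token Secret names.
    This simplification mirrors the watch behaviors described above.
    """
    if event == "ADDED":
        # Ensure a token Secret exists for the new ServiceAccount.
        secrets.setdefault(name, []).append(f"{name}-token")
    elif event == "DELETED":
        # Remove every token Secret for the deleted ServiceAccount.
        secrets.pop(name, None)
    return secrets

state = {}
reconcile_serviceaccount("ADDED", "build-robot", state)
print(state)
reconcile_serviceaccount("DELETED", "build-robot", state)
print(state)
```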

You must pass a service account private key file to the token controller in
the `kube-controller-manager` using the `--service-account-private-key-file`
flag. The private key is used to sign generated service account tokens.
Similarly, you must pass the corresponding public key to the `kube-apiserver`
using the `--service-account-key-file` flag. The public key will be used to
verify the tokens during authentication.

## {{% heading "whatsnext" %}}

- Read more details about [projected volumes](/docs/concepts/storage/projected-volumes/).
@@ -306,6 +306,14 @@ sets this label on that Pod. The value of the label is the name of the Pod being

See [Pod Name Label](/docs/concepts/workloads/controllers/statefulset/#pod-name-label) in the
StatefulSet topic for more details.

### scheduler.alpha.kubernetes.io/node-selector {#schedulerkubernetesnode-selector}

Example: `scheduler.alpha.kubernetes.io/node-selector: "name-of-node-selector"`

Used on: Namespace

The [PodNodeSelector](/docs/reference/access-authn-authz/admission-controllers/#podnodeselector)
admission controller uses this annotation key to assign node selectors to pods in namespaces.

### topology.kubernetes.io/region {#topologykubernetesioregion}

Example:
@@ -198,6 +198,6 @@ balancer, the control plane looks up that external IP address and populates it i

## {{% heading "whatsnext" %}}

* Follow the [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/) tutorial
* Read about [Service](/docs/concepts/services-networking/service/)
* Read about [Ingress](/docs/concepts/services-networking/ingress/)
@@ -153,7 +153,6 @@ the Hello World application, enter this command:

## {{% heading "whatsnext" %}}

Follow the
[Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/)
tutorial.
@@ -50,6 +50,5 @@ To enable service topology, enable the `ServiceTopology`

* Read about [Topology Aware Hints](/docs/concepts/services-networking/topology-aware-hints/), the replacement for the `topologyKeys` field.
* Read about [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/)
* Read about the [Service Topology](/docs/concepts/services-networking/service-topology/) concept
* Read [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/)
@@ -7,6 +7,10 @@ description: Creating Secret objects using kubectl command line.

<!-- overview -->

This page shows you how to create, edit, manage, and delete Kubernetes
{{<glossary_tooltip text="Secrets" term_id="secret">}} using the `kubectl`
command-line tool.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}

@@ -15,64 +19,64 @@ description: Creating Secret objects using kubectl command line.

## Create a Secret

A `Secret` object stores sensitive data such as credentials
used by Pods to access services. For example, you might need a Secret to store
the username and password needed to access a database.

You can create the Secret by passing the raw data in the command, or by storing
the credentials in files that you pass in the command. The following commands
create a Secret that stores the username `admin` and the password `S!B\*d$zDsb=`.

### Use raw data

Run the following command:

```shell
kubectl create secret generic db-user-pass \
    --from-literal=username=admin \
    --from-literal=password='S!B\*d$zDsb='
```

You must use single quotes `''` to escape special characters such as `$`, `\`,
`*`, `=`, and `!` in your strings. If you don't, your shell will interpret these
characters.
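If you script Secret creation, Python's standard library offers `shlex.quote`, which applies the same single-quoting rule described above. A side note for illustration, not part of the `kubectl` workflow itself:

```python
import shlex

# The example password from this page, written as a raw string so the
# backslash is literal.
password = r"S!B\*d$zDsb="

# shlex.quote wraps the string in single quotes so a POSIX shell passes
# it through verbatim.
print(shlex.quote(password))
```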

### Use source files

1. Store the credentials in files:

   ```shell
   echo -n 'admin' > ./username.txt
   echo -n 'S!B\*d$zDsb=' > ./password.txt
   ```

   The `-n` flag ensures that the generated files do not have an extra newline
   character at the end of the text. This is important because when `kubectl`
   reads a file and encodes the content into a base64 string, the extra
   newline character gets encoded too. You do not need to escape special
   characters in strings that you include in a file.

1. Pass the file paths in the `kubectl` command:

   ```shell
   kubectl create secret generic db-user-pass \
       --from-file=./username.txt \
       --from-file=./password.txt
   ```

   The default key name is the file name. You can optionally set the key name
   using `--from-file=[key=]source`. For example:

   ```shell
   kubectl create secret generic db-user-pass \
       --from-file=username=./username.txt \
       --from-file=password=./password.txt
   ```

With either method, the output is similar to:

```
secret/db-user-pass created
```

### Verify the Secret {#verify-the-secret}

Check that the Secret was created:
@@ -87,10 +91,10 @@ NAME TYPE DATA AGE
db-user-pass   Opaque   2     51s
```

View the details of the Secret:

```shell
kubectl describe secret db-user-pass
```

The output is similar to:
@@ -113,52 +117,77 @@ The commands `kubectl get` and `kubectl describe` avoid showing the contents
of a `Secret` by default. This is to protect the `Secret` from being exposed
accidentally, or from being stored in a terminal log.

### Decode the Secret {#decoding-secret}

1. View the contents of the Secret you created:

   ```shell
   kubectl get secret db-user-pass -o jsonpath='{.data}'
   ```

   The output is similar to:

   ```json
   {"password":"UyFCXCpkJHpEc2I9","username":"YWRtaW4="}
   ```

1. Decode the `password` data:

   ```shell
   echo 'UyFCXCpkJHpEc2I9' | base64 --decode
   ```

   The output is similar to:

   ```
   S!B\*d$zDsb=
   ```

   {{<caution>}}This is an example for documentation purposes. In practice,
   this method could cause the command with the encoded data to be stored in
   your shell history. Anyone with access to your computer could find the
   command and decode the secret. A better approach is to combine the view and
   decode commands.{{</caution>}}

   ```shell
   kubectl get secret db-user-pass -o jsonpath='{.data.password}' | base64 --decode
   ```
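The same decoding can be done programmatically. A small sketch using only the Python standard library, applied to the example values from this page:

```python
import base64

# The `.data` map of the example Secret, as returned by
# `kubectl get secret db-user-pass -o jsonpath='{.data}'`.
data = {"password": "UyFCXCpkJHpEc2I9", "username": "YWRtaW4="}

# Secret data values are base64-encoded; decode each one.
decoded = {k: base64.b64decode(v).decode() for k, v in data.items()}
print(decoded["username"])
print(decoded["password"])
```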
## Edit a Secret {#edit-secret}

You can edit an existing `Secret` object unless it is
[immutable](/docs/concepts/configuration/secret/#secret-immutable). To edit a
Secret, run the following command:

```shell
kubectl edit secrets <secret-name>
```

This opens your default editor and allows you to update the base64 encoded
Secret values in the `data` field, such as in the following example:

```yaml
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file, it will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  password: UyFCXCpkJHpEc2I9
  username: YWRtaW4=
kind: Secret
metadata:
  creationTimestamp: "2022-06-28T17:44:13Z"
  name: db-user-pass
  namespace: default
  resourceVersion: "12708504"
  uid: 91becd59-78fa-4c85-823f-6d44436242ac
type: Opaque
```

## Clean up

To delete a Secret, run the following command:

```shell
kubectl delete secret db-user-pass
```
@@ -170,4 +199,4 @@ kubectl delete secret db-user-pass

- Read more about the [Secret concept](/docs/concepts/configuration/secret/)
- Learn how to [manage Secrets using config files](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- Learn how to [manage Secrets using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
@@ -8,42 +8,63 @@ content_type: task
weight: 90
---

<!-- overview -->

Kubernetes offers two distinct ways for clients that run within your
cluster, or that otherwise have a relationship to your cluster's
{{< glossary_tooltip text="control plane" term_id="control-plane" >}}
to authenticate to the
{{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}.

{{< note >}}
This document is a user introduction to Service Accounts and describes how service accounts behave in a cluster set up
as recommended by the Kubernetes project. Your cluster administrator may have
customized the behavior in your cluster, in which case this documentation may
not apply.
{{< /note >}}

A _service account_ provides an identity for processes that run in a Pod,
and maps to a ServiceAccount object. When you authenticate to the API
server, you identify yourself as a particular _user_. Kubernetes recognises
the concept of a user, however, Kubernetes itself does **not** have a User
API.

This task guide is about ServiceAccounts, which do exist in the Kubernetes
API. The guide shows you some ways to configure ServiceAccounts for Pods.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}

<!-- steps -->

## Use the default service account to access the API server

When Pods contact the API server, Pods authenticate as a particular
ServiceAccount (for example, `default`). There is always at least one
ServiceAccount in each {{< glossary_tooltip text="namespace" term_id="namespace" >}}.

Every Kubernetes namespace contains at least one ServiceAccount: the default
ServiceAccount for that namespace, named `default`.
If you do not specify a ServiceAccount when you create a Pod, Kubernetes
automatically assigns the ServiceAccount named `default` in that namespace.

You can fetch the details for a Pod you have created. For example:

```shell
kubectl get pods/<podname> -o yaml
```

In the output, you see a field `spec.serviceAccountName`.
Kubernetes [automatically](/docs/user-guide/working-with-resources/#resources-are-automatically-modified)
sets that value if you don't specify it when you create a Pod.

An application running inside a Pod can access the Kubernetes API using
automatically mounted service account credentials. See
[accessing the Cluster](/docs/user-guide/accessing-the-cluster/#accessing-the-api-from-a-pod) to learn more.

When a Pod authenticates as a ServiceAccount, its level of access depends on the
[authorization plugin and policy](/docs/reference/access-authn-authz/authorization/#authorization-modules)
in use.

### Opt out of API credential automounting

If you don't want the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}
to automatically mount a ServiceAccount's API credentials, you can opt out of
the default behavior.
You can opt out of automounting API credentials on
`/var/run/secrets/kubernetes.io/serviceaccount/token` for a service account
by setting `automountServiceAccountToken: false` on the ServiceAccount.

For example:

```yaml
apiVersion: v1
kind: ServiceAccount
@@ -53,8 +74,7 @@ automountServiceAccountToken: false
...
```

You can also opt out of automounting API credentials for a particular Pod:

```yaml
apiVersion: v1
kind: Pod
@@ -66,12 +86,16 @@ spec:
...
```

If both the ServiceAccount and the Pod's `.spec` specify a value for
`automountServiceAccountToken`, the Pod spec takes precedence.
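That precedence rule can be sketched as a small function. An illustration only, not the actual kubelet code:

```python
def automount_enabled(pod_value, sa_value):
    """Sketch of the precedence rule for automountServiceAccountToken.

    The Pod spec wins when both the Pod and its ServiceAccount set a
    value; otherwise the ServiceAccount's value applies; otherwise the
    default is to automount.
    """
    if pod_value is not None:
        return pod_value
    if sa_value is not None:
        return sa_value
    return True  # default behavior: credentials are automounted

print(automount_enabled(None, False))   # ServiceAccount opts out
print(automount_enabled(True, False))   # Pod spec takes precedence
```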

## Use more than one ServiceAccount {#use-multiple-service-accounts}

Every namespace has at least one ServiceAccount: the default ServiceAccount
resource, called `default`.
You can list all ServiceAccount resources in your
[current namespace](/docs/concepts/overview/working-with-objects/namespaces/#setting-the-namespace-preference)
with:

```shell
kubectl get serviceaccounts
@@ -110,38 +134,73 @@ The output is similar to this:
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2019-06-16T00:12:34Z
  name: build-robot
  namespace: default
  resourceVersion: "272500"
  uid: 721ab723-13bc-11e5-aec2-42010af0021e
```

You can use authorization plugins to
[set permissions on service accounts](/docs/reference/access-authn-authz/rbac/#service-account-permissions).

To use a non-default service account, set the `spec.serviceAccountName`
field of a Pod to the name of the ServiceAccount you wish to use.

You can only set the `serviceAccountName` field when creating a Pod, or in a
template for a new Pod. You cannot update the `.spec.serviceAccountName` field
of a Pod that already exists.

{{< note >}}
The `.spec.serviceAccount` field is a deprecated alias for `.spec.serviceAccountName`.
If you want to remove the fields from a workload resource, set both fields to empty explicitly
on the [pod template](/docs/concepts/workloads/pods#pod-templates).
{{< /note >}}

### Cleanup {#cleanup-use-multiple-service-accounts}

If you tried creating `build-robot` ServiceAccount from the example above,
you can clean it up by running:

```shell
kubectl delete serviceaccount/build-robot
```

## Manually create an API token for a ServiceAccount

Suppose you have an existing service account named "build-robot" as mentioned earlier.

You can get a time-limited API token for that ServiceAccount using `kubectl`:

```shell
kubectl create token build-robot
```

The output from that command is a token that you can use to authenticate as that
ServiceAccount. You can request a specific token duration using the `--duration`
command line argument to `kubectl create token` (the actual duration of the issued
token might be shorter, or could even be longer).
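A client presents that token to the API server as a bearer credential in the `Authorization` header. A hypothetical sketch with a placeholder token; the request is constructed but deliberately not sent:

```python
import urllib.request

# Placeholder value -- substitute the output of `kubectl create token`.
token = "<token-from-kubectl-create-token>"

# `https://kubernetes.default.svc` is the in-cluster API server address;
# from outside the cluster you would use your cluster's endpoint instead.
req = urllib.request.Request(
    "https://kubernetes.default.svc/api",
    headers={"Authorization": f"Bearer {token}"},
)
print(req.get_header("Authorization"))
```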

{{< note >}}
Versions of Kubernetes before v1.22 automatically created long term credentials for
accessing the Kubernetes API. This older mechanism was based on creating token Secrets
that could then be mounted into running Pods.
In more recent versions, including Kubernetes v{{< skew currentVersion >}}, API credentials
are obtained directly by using the [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) API,
and are mounted into Pods using a [projected volume](/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume).
The tokens obtained using this method have bounded lifetimes, and are automatically
invalidated when the Pod they are mounted into is deleted.

You can still manually create a service account token Secret; for example, if you need a token that never expires.
However, using the [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)
subresource to obtain a token to access the API is recommended instead.
{{< /note >}}

### Manually create a long-lived API token for a ServiceAccount

If you want to obtain an API token for a ServiceAccount, you create a new Secret
with a special annotation, `kubernetes.io/service-account.name`.

```shell
kubectl apply -f - <<EOF
@@ -155,9 +214,16 @@ type: kubernetes.io/service-account-token
EOF
```

If you view the Secret using:

```shell
kubectl get secret/build-robot-secret -o yaml
```

you can see that the Secret now contains an API token for the "build-robot" ServiceAccount.

Because of the annotation you set, the control plane automatically generates a token for that
ServiceAccount, and stores it into the associated Secret. The control plane also cleans up
tokens for deleted ServiceAccounts.

```shell
kubectl describe secrets/build-robot-secret
@@ -183,11 +249,19 @@ token: ...

{{< note >}}
The content of `token` is elided here.

Take care not to display the contents of a `kubernetes.io/service-account-token`
Secret somewhere that your terminal / computer screen could be seen by an
onlooker.
{{< /note >}}

When you delete a ServiceAccount that has an associated Secret, the Kubernetes
control plane automatically cleans up the long-lived token from that Secret.

## Add ImagePullSecrets to a service account

### Create an imagePullSecret

- Create an imagePullSecret, as described in [Specifying ImagePullSecrets on a Pod](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod).
@ -211,41 +285,44 @@ The content of `token` is elided here.
|
|||
|
||||
### Add image pull secret to service account

Next, modify the default service account for the namespace to use this secret as an imagePullSecret.
Next, modify the default service account for the namespace to use this Secret as an imagePullSecret.

```shell
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
```

You can instead use `kubectl edit`, or manually edit the YAML manifests as shown below:
You can achieve the same outcome by editing the object manually:

```shell
kubectl get serviceaccounts default -o yaml > ./sa.yaml
kubectl edit serviceaccount/default
```

The output of the `sa.yaml` file is similar to this:
Your selected text editor will open with a configuration looking something like this:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2015-08-07T22:02:39Z
  creationTimestamp: 2021-07-07T22:02:39Z
  name: default
  namespace: default
  resourceVersion: "243024"
  uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6
```
Using your editor of choice (for example `vi`), open the `sa.yaml` file, delete the line with key `resourceVersion`, add lines with `imagePullSecrets:` and save.
Using your editor, delete the line with key `resourceVersion`, add lines for `imagePullSecrets:` and save it.
Leave the `uid` value set the same as you found it.

The output of the `sa.yaml` file is similar to this:
After you made those changes, the edited ServiceAccount looks something like this:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2015-08-07T22:02:39Z
  creationTimestamp: 2021-07-07T22:02:39Z
  name: default
  namespace: default
  uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6

@ -253,13 +330,7 @@ imagePullSecrets:
- name: myregistrykey
```

Finally replace the serviceaccount with the new updated `sa.yaml` file

```shell
kubectl replace serviceaccount default -f ./sa.yaml
```

### Verify imagePullSecrets was added to pod spec
### Verify that imagePullSecrets are set for new Pods

Now, when a new Pod is created in the current namespace and using the default ServiceAccount, the new Pod has its `spec.imagePullSecrets` field set automatically:

@ -274,12 +345,7 @@ The output is:

myregistrykey
```
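For illustration only (this manifest is not part of the original page, and the Pod name and image are invented), a Pod created under that default ServiceAccount ends up with a spec similar to:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                        # hypothetical Pod name
spec:
  serviceAccountName: default
  containers:
  - name: app
    image: registry.example.com/app:1.0    # hypothetical private image
  imagePullSecrets:                        # injected from the ServiceAccount
  - name: myregistrykey
```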
<!--## Adding Secrets to a service account.

TODO: Test and explain how to use additional non-K8s secrets with an existing service account.
-->

## Service Account Token Volume Projection
## ServiceAccount token volume projection

{{< feature-state for_k8s_version="v1.20" state="stable" >}}

@ -287,31 +353,31 @@ TODO: Test and explain how to use additional non-K8s secrets with an existing se
To enable and use token request projection, you must specify each of the following
command line arguments to `kube-apiserver`:

* `--service-account-issuer`

  It can be used as the Identifier of the service account token issuer. You can specify the `--service-account-issuer` argument multiple times; this can be useful to enable a non-disruptive change of the issuer. When this flag is specified multiple times, the first is used to generate tokens and all are used to determine which issuers are accepted. You must be running Kubernetes v1.22 or later to be able to specify `--service-account-issuer` multiple times.

* `--service-account-key-file`

  File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If specified multiple times, tokens signed by any of the specified keys are considered valid by the Kubernetes API server.

* `--service-account-signing-key-file`

  Path to the file that contains the current private key of the service account token issuer. The issuer signs issued ID tokens with this private key.

* `--api-audiences` (can be omitted)

  The service account token authenticator validates that tokens used against the API are bound to at least one of these audiences. If `api-audiences` is specified multiple times, tokens for any of the specified audiences are considered valid by the Kubernetes API server. If the `--service-account-issuer` flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL.

`--service-account-issuer`
: defines the Identifier of the service account token issuer. You can specify the `--service-account-issuer` argument multiple times; this can be useful to enable a non-disruptive change of the issuer. When this flag is specified multiple times, the first is used to generate tokens and all are used to determine which issuers are accepted. You must be running Kubernetes v1.22 or later to be able to specify `--service-account-issuer` multiple times.

`--service-account-key-file`
: specifies the path to a file containing PEM-encoded X.509 private or public keys (RSA or ECDSA), used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If specified multiple times, tokens signed by any of the specified keys are considered valid by the Kubernetes API server.

`--service-account-signing-key-file`
: specifies the path to a file that contains the current private key of the service account token issuer. The issuer signs issued ID tokens with this private key.

`--api-audiences` (can be omitted)
: defines audiences for ServiceAccount tokens. The service account token authenticator validates that tokens used against the API are bound to at least one of these audiences. If `api-audiences` is specified multiple times, tokens for any of the specified audiences are considered valid by the Kubernetes API server. If you specify the `--service-account-issuer` command line argument but you don't set `--api-audiences`, the control plane defaults to a single element audience list that contains only the issuer URL.

{{< /note >}}
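As a hedged sketch (not part of the original page; the issuer URL and audience here are invented), the following shows how the `iss` and `aud` claims governed by those flags appear inside a service account token, by decoding a JWT payload without any signature verification:

```python
import base64
import json


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, the way JWT segments are encoded."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def decode_jwt_payload(token: str) -> dict:
    """Decode the middle (payload) segment of a JWT. No signature check!"""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))


# Build a toy, unsigned token carrying claims shaped like the ones the
# kube-apiserver issues; issuer URL and audience are hypothetical values.
claims = {
    "iss": "https://issuer.example",
    "aud": ["vault"],
    "sub": "system:serviceaccount:default:default",
}
toy_token = ".".join(
    [b64url(b'{"alg":"none"}'), b64url(json.dumps(claims).encode()), ""]
)

decoded = decode_jwt_payload(toy_token)
print(decoded["iss"], decoded["aud"])
```

A real validator would additionally verify the signature against the keys configured with `--service-account-key-file`.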
The kubelet can also project a service account token into a Pod. You can
The kubelet can also project a ServiceAccount token into a Pod. You can
specify desired properties of the token, such as the audience and the validity
duration. These properties are not configurable on the default service account
token. The service account token will also become invalid against the API when
the Pod or the ServiceAccount is deleted.
duration. These properties are _not_ configurable on the default ServiceAccount
token. The token will also become invalid against the API when either the Pod
or the ServiceAccount is deleted.

This behavior is configured on a PodSpec using a ProjectedVolume type called
[ServiceAccountToken](/docs/concepts/storage/volumes/#projected). To provide a
pod with a token with an audience of "vault" and a validity duration of two
hours, you would configure the following in your PodSpec:
You can configure this behavior for the `spec` of a Pod using a
[projected volume](/docs/concepts/storage/volumes/#projected) type called
`ServiceAccountToken`.

### Launch a Pod using service account token projection

To provide a Pod with a token with an audience of `vault` and a validity duration
of two hours, you could define a Pod manifest that is similar to:

{{< codenew file="pods/pod-projected-svc-token.yaml" >}}

@ -321,19 +387,24 @@ Create the Pod:
kubectl create -f https://k8s.io/examples/pods/pod-projected-svc-token.yaml
```

The kubelet will request and store the token on behalf of the pod, make the
token available to the pod at a configurable file path, and refresh the token as it approaches expiration.
The kubelet proactively rotates the token if it is older than 80% of its total TTL, or if the token is older than 24 hours.

The kubelet will: request and store the token on behalf of the Pod; make
the token available to the Pod at a configurable file path; and refresh
the token as it approaches expiration. The kubelet proactively requests rotation
for the token if it is older than 80% of its total time-to-live (TTL),
or if the token is older than 24 hours.

The application is responsible for reloading the token when it rotates. Periodic reloading (e.g. once every 5 minutes) is sufficient for most use cases.
The application is responsible for reloading the token when it rotates. It's
often good enough for the application to load the token on a schedule
(for example: once every 5 minutes), without tracking the actual expiry time.
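That reload-on-a-schedule advice can be sketched as follows (illustration only, not part of the original page; a throwaway temp file stands in for the projected volume path, and the `TokenFile` helper is hypothetical):

```python
import tempfile
import time


class TokenFile:
    """Re-read a projected token from disk at most every `interval` seconds."""

    def __init__(self, path: str, interval: float = 300.0):
        self.path = path
        self.interval = interval
        self._token = None
        self._loaded_at = None

    def token(self) -> str:
        now = time.monotonic()
        if self._token is None or now - self._loaded_at >= self.interval:
            with open(self.path) as f:
                self._token = f.read().strip()
            self._loaded_at = now
        return self._token


# Demo with a throwaway file; interval=0 forces a re-read on every call
# so a rotation becomes visible immediately.
tmp = tempfile.NamedTemporaryFile("w", suffix=".token", delete=False)
tmp.write("token-v1")
tmp.close()

reader = TokenFile(tmp.name, interval=0.0)
print(reader.token())
```

In a real Pod the path would be the projected file (such as the `ServiceAccountToken` volume mount), and a five-minute interval is in line with the guidance above.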
## Service Account Issuer Discovery
### Service account issuer discovery

{{< feature-state for_k8s_version="v1.21" state="stable" >}}

The Service Account Issuer Discovery feature is enabled when the Service Account
Token Projection feature is enabled, as described
[above](#service-account-token-volume-projection).
If you have enabled [token projection](#service-account-token-volume-projection)
for ServiceAccounts in your cluster, then you can also make use of the discovery
feature. Kubernetes provides a way for clients to federate as an _identity provider_,
so that one or more external systems can act as a _relying party_.

{{< note >}}
The issuer URL must comply with the

@ -341,27 +412,16 @@ The issuer URL must comply with the
practice, this means it must use the `https` scheme, and should serve an OpenID
provider configuration at `{service-account-issuer}/.well-known/openid-configuration`.

If the URL does not comply, the `ServiceAccountIssuerDiscovery` endpoints will
not be registered, even if the feature is enabled.
If the URL does not comply, ServiceAccount issuer discovery endpoints are not
registered or accessible.
{{< /note >}}

The Service Account Issuer Discovery feature enables federation of Kubernetes
service account tokens issued by a cluster (the _identity provider_) with
external systems (_relying parties_).

When enabled, the Kubernetes API server provides an OpenID Provider
Configuration document at `/.well-known/openid-configuration` and the associated
JSON Web Key Set (JWKS) at `/openid/v1/jwks`. The OpenID Provider Configuration
is sometimes referred to as the _discovery document_.

Clusters include a default RBAC ClusterRole called
`system:service-account-issuer-discovery`. A default RBAC ClusterRoleBinding
assigns this role to the `system:serviceaccounts` group, which all service
accounts implicitly belong to. This allows pods running on the cluster to access
the service account discovery document via their mounted service account token.
Administrators may, additionally, choose to bind the role to
`system:authenticated` or `system:unauthenticated` depending on their security
requirements and which external systems they intend to federate with.

When enabled, the Kubernetes API server publishes an OpenID Provider
Configuration document via HTTP. The configuration document is published at
`/.well-known/openid-configuration`.
The OpenID Provider Configuration is sometimes referred to as the _discovery document_.
The Kubernetes API server publishes the related
JSON Web Key Set (JWKS), also via HTTP, at `/openid/v1/jwks`.
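To make the discovery flow concrete, here is a small sketch (not from the original page; the issuer URL and the document's field values are invented) of how a relying party reads `jwks_uri` out of a discovery document:

```python
import json

# A minimal discovery document of the shape served at
# /.well-known/openid-configuration (all values here are made up).
discovery_doc = json.loads("""
{
  "issuer": "https://issuer.example",
  "jwks_uri": "https://issuer.example/openid/v1/jwks",
  "response_types_supported": ["id_token"],
  "subject_types_supported": ["public"],
  "id_token_signing_alg_values_supported": ["RS256"]
}
""")

# A relying party follows jwks_uri to fetch the key set it uses to
# validate service account token signatures.
jwks_uri = discovery_doc["jwks_uri"]
print(jwks_uri)
```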
{{< note >}}
The responses served at `/.well-known/openid-configuration` and

@ -370,6 +430,15 @@ compliant. Those documents contain only the parameters necessary to perform

validation of Kubernetes service account tokens.
{{< /note >}}

Clusters that use {{< glossary_tooltip text="RBAC" term_id="rbac">}} include a
default ClusterRole called `system:service-account-issuer-discovery`.
A default ClusterRoleBinding assigns this role to the `system:serviceaccounts` group,
which all ServiceAccounts implicitly belong to.
This allows pods running on the cluster to access the service account discovery document
via their mounted service account token. Administrators may, additionally, choose to
bind the role to `system:authenticated` or `system:unauthenticated` depending on their
security requirements and which external systems they intend to federate with.

The JWKS response contains public keys that a relying party can use to validate
the Kubernetes service account tokens. Relying parties first query for the
OpenID Provider Configuration, and use the `jwks_uri` field in the response to

@ -377,7 +446,7 @@ find the JWKS.
In many cases, Kubernetes API servers are not available on the public internet,
but public endpoints that serve cached responses from the API server can be made
available by users or service providers. In these cases, it is possible to
available by users or by service providers. In these cases, it is possible to
override the `jwks_uri` in the OpenID Provider Configuration so that it points
to the public endpoint, rather than the API server's address, by passing the
`--service-account-jwks-uri` flag to the API server. Like the issuer URL, the

@ -388,6 +457,13 @@ JWKS URI is required to use the `https` scheme.

See also:

- [Cluster Admin Guide to Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/)
- [Service Account Signing Key Retrieval KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1393-oidc-discovery)
- [OIDC Discovery Spec](https://openid.net/specs/openid-connect-discovery-1_0.html)

* Read the [Cluster Admin Guide to Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/)
* Read about [Authorization in Kubernetes](/docs/reference/access-authn-authz/authorization/)
* Read about [Secrets](/docs/concepts/configuration/secret/)
  * or learn to [distribute credentials securely using Secrets](/docs/tasks/inject-data-application/distribute-credentials-secure/)
  * but also bear in mind that using Secrets for authenticating as a ServiceAccount
    is deprecated. The recommended alternative is
    [ServiceAccount token volume projection](#service-account-token-volume-projection).
* Read about [projected volumes](/docs/tasks/configure-pod-container/configure-projected-volume-storage/).
* For background on OIDC discovery, read the [ServiceAccount signing key retrieval](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1393-oidc-discovery) Kubernetes Enhancement Proposal
* Read the [OIDC Discovery Spec](https://openid.net/specs/openid-connect-discovery-1_0.html)

@ -30,6 +30,7 @@ The following methods exist for installing kubectl on Windows:
```powershell
curl.exe -LO "https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe"
```

{{< note >}}
To find out the latest stable version (for example, for scripting), take a look at [https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt).

@ -98,7 +99,6 @@ If you have installed Docker Desktop before, you may need to place your `PATH` e
{{% /tab %}}
{{< /tabs >}}

1. Test to ensure the version you installed is up-to-date:

   ```powershell

@ -158,7 +158,7 @@ Below are the procedures to set up autocompletion for PowerShell.

   curl.exe -LO "https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl-convert.exe"
   ```
1. Validate the binary (optional)
1. Validate the binary (optional).

   Download the `kubectl-convert` checksum file:

@ -183,7 +183,7 @@ Below are the procedures to set up autocompletion for PowerShell.

1. Append or prepend the `kubectl-convert` binary folder to your `PATH` environment variable.

1. Verify plugin is successfully installed
1. Verify the plugin is successfully installed.

   ```shell
   kubectl convert --help
## Services

* [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/)
* [Using Source IP](/docs/tutorials/services/source-ip/)

## Security

@ -37,7 +37,7 @@ weight: 10
<li><i>LoadBalancer</i> - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.</li>
<li><i>ExternalName</i> - Maps the Service to the contents of the <code>externalName</code> field (e.g. <code>foo.bar.example.com</code>), by returning a <code>CNAME</code> record with its value. No proxying of any kind is set up. This type requires v1.7 or higher of <code>kube-dns</code>, or CoreDNS version 0.0.8 or higher.</li>
</ul>
<p>More information about the different types of Services can be found in the <a href="/docs/tutorials/services/source-ip/">Using Source IP</a> tutorial. Also see <a href="/docs/concepts/services-networking/connect-applications-service">Connecting Applications with Services</a>.</p>
<p>More information about the different types of Services can be found in the <a href="/docs/tutorials/services/source-ip/">Using Source IP</a> tutorial. Also see <a href="/docs/tutorials/services/connect-applications-service/">Connecting Applications with Services</a>.</p>
<p>Additionally, note that there are some use cases with Services that involve not defining a <code>selector</code> in the spec. A Service created without <code>selector</code> will also not create the corresponding Endpoints object. This allows users to manually map a Service to specific endpoints. Another possibility why there may be no selector is you are strictly using <code>type: ExternalName</code>.</p>
</div>
<div class="col-md-4">

@ -4,8 +4,8 @@ reviewers:
- lavalamp
- thockin
title: Connecting Applications with Services
content_type: concept
weight: 40
content_type: tutorial
weight: 20
---

@ -17,7 +17,7 @@ Now that you have a continuously running, replicated application you can expose
Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document elaborates on how you can run reliable services on such a networking model.

This guide uses a simple nginx server to demonstrate proof of concept.
This tutorial uses a simple nginx web server to demonstrate the concept.

<!-- body -->

@ -2,7 +2,7 @@
title: Using Source IP
content_type: tutorial
min-kubernetes-server-version: v1.5
weight: 10
weight: 40
---

<!-- overview -->

@ -416,5 +416,5 @@ kubectl delete deployment source-ip-app
## {{% heading "whatsnext" %}}

* Learn more about [connecting applications via services](/docs/concepts/services-networking/connect-applications-service/)
* Learn more about [connecting applications via services](/docs/tutorials/services/connect-applications-service/)
* Read how to [Create an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/)

@ -175,4 +175,4 @@ kubectl delete deployment hello-world

## {{% heading "whatsnext" %}}
Learn more about
[connecting applications with services](/docs/concepts/services-networking/connect-applications-service/).
[connecting applications with services](/docs/tutorials/services/connect-applications-service/).

@ -418,5 +418,5 @@ labels to delete multiple resources with one command.
* Complete the [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) Interactive Tutorials
* Use Kubernetes to create a blog using [Persistent Volumes for MySQL and Wordpress](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog)
* Read more about [connecting applications](/docs/concepts/services-networking/connect-applications-service/)
* Read more about [connecting applications with services](/docs/tutorials/services/connect-applications-service/)
* Read more about [Managing Resources](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)

@ -0,0 +1,7 @@
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: mysecretname
  annotations:
    kubernetes.io/service-account.name: myserviceaccount

@ -13,7 +13,7 @@ weight: 50
Kubernetes `secret` objects let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys.
Putting this information in a `secret` is safer and more flexible than putting it verbatim in a {{< glossary_tooltip term_id="pod" >}} definition or in a {{< glossary_tooltip text="container image" term_id="image" >}}.
See the [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) for more information.
See the [Secrets design document](https://github.com/kubernetes/design-proposals-archive/blob/main/auth/secrets.md) for more information.

@ -5,58 +5,174 @@ description: Contribution documentation Kubernetes
linktitle: Contribute
main_menu: true
weight: 80
no_list: true
card:
  name: contribute
  weight: 10
  title: Start contributing to K8s
---
---

<!-- overview -->

If you would like to contribute to the documentation or the Kubernetes website, we are happy to help!
Anyone can contribute, whether you are new to the project or have been working on it for a long time, and whether you identify as a developer, an end user, or someone who simply cannot stand typos.
*Kubernetes welcomes improvements from all contributors, new and experienced!*

{{< note >}}
To learn more about contributing to Kubernetes in general, see the
[contributor documentation](https://www.kubernetes.dev/docs/).

You can also read the {{< glossary_tooltip text="CNCF" term_id="cncf" >}}
[page](https://contribute.cncf.io/contributors/projects/#kubernetes)
about contributing to Kubernetes.
{{< /note >}}

---

This website is maintained by [Kubernetes SIG Docs](/docs/contribute/#get-involved-with-sig-docs).

Kubernetes documentation contributors:

- Improve existing content
- Create new content
- Translate the documentation
- Manage and publish the documentation as part of the Kubernetes release cycle

For the many ways to get involved in the Kubernetes community, or to learn more about us, visit the [Kubernetes community site](/community/).
For information on the Kubernetes documentation style guide, see the [style guide](/docs/contribute/style/style-guide/).

<!-- body -->

## Types of contributors
## Getting started

- A _member_ of the Kubernetes organization who has [signed the CLA](/docs/contribute/start#sign-the-cla) and contributed some time and effort to the project.
  See [Community membership](https://github.com/kubernetes/community/blob/master/community-membership.md) for specific membership criteria.
- A SIG Docs _reviewer_ is a member of the Kubernetes organization who
  has expressed interest in reviewing documentation pull requests, and who has been added to the appropriate GitHub group and to the `OWNERS` files in the GitHub repository by a SIG Docs approver.
- A SIG Docs _approver_ is a member in good standing who has shown a continued commitment to the project.
  An approver can merge pull requests and publish content on behalf of the Kubernetes organization.
  Approvers can also represent SIG Docs in the wider Kubernetes community.
  Some of the duties of a SIG Docs approver, such as coordinating a release, require a significant time commitment.
Anyone can open an issue about the documentation, or contribute a change with a pull request (PR) to the
[`kubernetes/website` GitHub repository](https://github.com/kubernetes/website).
You need to be comfortable with [git](https://git-scm.com/) and [GitHub](https://lab.github.com/) to work effectively in the Kubernetes community.

## Ways to contribute
To get involved with documentation:

This list is divided into tasks that anyone can do, tasks that only Kubernetes organization members can do, and tasks that require a higher level of access and familiarity with SIG Docs processes.
Contributing consistently over time can help you understand some of the tooling and organizational decisions that have already been made.
1. Sign the CNCF [Contributor License Agreement (CLA)](https://github.com/kubernetes/community/blob/master/CLA.md).
2. Familiarize yourself with the [documentation repository](https://github.com/kubernetes/website) and the website's
   [static site generator](https://gohugo.io).
3. Make sure you understand the basic processes for
   [opening a pull request](/docs/contribute/new-content/open-a-pr/) and
   [reviewing changes](/docs/contribute/review/reviewing-prs/).

This is not an exhaustive list of ways you can contribute to the Kubernetes documentation, but it should help you get started.
<!-- See https://github.com/kubernetes/website/issues/28808 for live-editor URL to this figure -->
<!-- You can also cut/paste the mermaid code into the live editor at https://mermaid-js.github.io/mermaid-live-editor to play around with it -->

- [Anyone](/docs/contribute/start/)
  - File reproducible bug reports
- [Member](/docs/contribute/start/)
  - Improve existing documentation
  - Propose improvement ideas on [Slack](http://slack.k8s.io/) or the [SIG docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
  - Improve documentation accessibility
  - Provide non-binding feedback on PRs
  - Write a blog post or case study
- [Reviewer](/docs/contribute/intermediate/)
  - Document new features
  - Triage and categorize issues
  - Review PRs
  - Create diagrams, graphic assets, and embeddable videos / screencasts
  - Localization
  - Contribute to other repos as a docs representative
  - Edit user-facing strings in code
  - Improve code comments, Godoc
- [Approver](/docs/contribute/advanced/)
  - Publish contributor content by approving and merging PRs
  - Participate in a Kubernetes release team as a docs representative
  - Propose improvements to the style guide
  - Propose improvements to the docs tests
  - Propose improvements to the Kubernetes website or other tooling
{{< mermaid >}}
flowchart TB
subgraph third[Open PR]
direction TB
U[ ] -.-
Q[Improve content] --- N[Create content]
N --- O[Translate docs]
O --- P[Manage/publish docs as part<br>of the K8s release cycle]

end

subgraph second[Review]
direction TB
T[ ] -.-
D[Check out the<br>K8s website and<br>repository] --- E[Look at the<br>Hugo static<br>site generator]
E --- F[Learn basic<br>GitHub commands]
F --- G[Review open PRs<br>and change review<br>processes]
end

subgraph first[Sign up]
direction TB
S[ ] -.-
B[Sign the CNCF<br>Contributor License<br>Agreement] ---
C[Join the sig-docs<br>Slack channel] --- M[Take part in the weekly<br>video meetings<br>or Slack meetings]
end

A([fa:fa-user New<br>Contributor]) --> first
A --> second
A --> third
A --> H[Ask questions!!!]


classDef grey fill:#dddddd,stroke:#ffffff,stroke-width:px,color:#000000, font-size:15px;
classDef white fill:#ffffff,stroke:#000,stroke-width:px,color:#000,font-weight:bold
classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000
class A,B,C,D,E,F,G,H,M,Q,N,O,P grey
class S,T,U spacewhite
class first,second,third white
{{</ mermaid >}}
Figure 1. Getting started for a new contributor.

Figure 1 outlines a roadmap for new contributors. You can follow some or all of the steps under `Sign up` and `Review`. You are then ready to open PRs that achieve your contribution goals, some of which are listed under `Open PR`. Again, questions are always welcome!

Some tasks require more trust and more access in the Kubernetes organization.
Visit [Participating in SIG Docs](/docs/contribute/participate/) for more details about roles and permissions.

## Your first contribution

You can prepare for your first contribution by reviewing several steps beforehand. Figure 2 outlines the steps, and the details follow.

<!-- See https://github.com/kubernetes/website/issues/28808 for live-editor URL to this figure -->
<!-- You can also cut/paste the mermaid code into the live editor at https://mermaid-js.github.io/mermaid-live-editor to play around with it -->

{{< mermaid >}}
flowchart LR
subgraph second[First Contribution]
direction TB
S[ ] -.-
G[Review PRs from<br>other K8s members] -->
A[Check the K8s/website<br>issues list for<br>good first PRs] --> B[Open a PR!!]
end
subgraph first[Suggested Prep]
direction TB
T[ ] -.-
D[Read the contribution overview] -->E[Read the K8s content<br>and style guides]
E --> F[Learn about Hugo<br>page content types<br>and shortcodes]
end


first ----> second


classDef grey fill:#dddddd,stroke:#ffffff,stroke-width:px,color:#000000, font-size:15px;
classDef white fill:#ffffff,stroke:#000,stroke-width:px,color:#000,font-weight:bold
classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000
class A,B,D,E,F,G grey
class S,T spacewhite
class first,second white
{{</ mermaid >}}
Figure 2. Preparing for your first contribution.

- Read the [contribution overview](/docs/contribute/new-content/) to learn about the different ways you can contribute.
- Check the [`kubernetes/website` issues list](https://github.com/kubernetes/website/issues/) for issues that make good entry points.
- [Open a pull request using GitHub](/docs/contribute/new-content/open-a-pr/#changes-using-github) to existing documentation, and learn more about filing issues in GitHub.
- [Review pull requests](/docs/contribute/review/reviewing-prs/) from other Kubernetes community members for accuracy and language.
- Read the Kubernetes [content](/docs/contribute/style/content-guide/) and [style guides](/docs/contribute/style/style-guide/) so you can leave informed comments.
- Learn about [page content types](/docs/contribute/style/page-content-types/)
  and [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/).

## Next steps

- Learn to [work from a local clone](/docs/contribute/new-content/open-a-pr/#fork-the-repo) of the repository.
- Document [features in a release](/docs/contribute/new-content/new-features/).
- Participate in [SIG Docs](/docs/contribute/participate/), and become a
  [member or reviewer](/docs/contribute/participate/roles-and-responsibilities/).

- Start or help with a [localization](/docs/contribute/localization/).

## Get involved with SIG Docs

[SIG Docs](/docs/contribute/participate/) is the group of contributors who publish and maintain the Kubernetes documentation and the website. Getting involved with SIG Docs is a great way for Kubernetes contributors (feature development or otherwise) to have a large impact on the Kubernetes project.
|
||||
|
||||
SIG Docs communique avec différentes méthodes:
|
||||
|
||||
- [Joignez `#sig-docs` à l'instance Kubernetes sur Slack](https://slack.k8s.io/). Assurez-vous de vous présenter!
|
||||
- [Joignez la liste de diffusion `kubernetes-sig-docs`](https://groups.google.com/forum/#!forum/kubernetes-sig-docs), où des discussions plus larges ont lieu et les décisions officielles sont enregistrées.
|
||||
- Joignez la [réunion vidéo SIG Docs](https://github.com/kubernetes/community/tree/master/sig-docs) qui a lieu toutes les deux semaines. Les réunions sont toujours annoncées sur `#sig-docs` et ajoutées au [calendrier des réunions de la communauté Kubernetes](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles). Vous devrez télécharger [Zoom](https://zoom.us/download) ou vous connecter à l'aide d'un téléphone.
|
||||
- Joignez la réunion stand-up de SIG Docs sur Slack (async) les semaines où la réunion vidéo Zoom en personne n'a pas lieu. Les rendez-vous sont toujours annoncés sur `#sig-docs`. Vous pouvez contribuer à l'un des fils de discussion jusqu'à 24 heures après l'annonce de la réunion.
|
||||
|
||||
|
||||
## Autres façons de contribuer
|
||||
|
||||
- Visitez le site de la [communauté Kubernetes](/community/). Participez sur Twitter ou Stack Overflow, découvrez les meetups et événements Kubernetes locaux, et davantage encore.
|
||||
- Lisez le [cheatsheet de contributor](https://www.kubernetes.dev/docs/contributor-cheatsheet/) pour vous impliquer dans le développement des fonctionnalités de Kubernetes.
|
||||
- Visitez le site des contributeurs pour en savoir plus sur [les contributeurs Kubernetes](https://www.kubernetes.dev/) et des [ressources supplémentaires pour les contributeurs](https://www.kubernetes.dev/resources/).
|
||||
- Soumettez un article de [blog ou une étude de cas](/docs/contribute/new-content/blogs-case-studies/).
|
||||
|
|
|
@@ -135,7 +135,7 @@ baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl
{{< /tab >}}

@@ -73,7 +73,7 @@ baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl
{{< /tab >}}
@@ -499,7 +499,7 @@ a way to extend Kubernetes with supports for new kinds of volumes. The volumes c
durable external storage, or provide ephemeral storage, or they might offer a read-only interface
to information using a filesystem paradigm.

Kubernetes also includes support for [FlexVolume](/docs/concepts/storage/volumes/#flexvolume) plugins,
Kubernetes also includes support for [FlexVolume](/docs/concepts/storage/volumes/#flexvolume-deprecated) plugins,
which are deprecated since Kubernetes v1.23 (in favour of CSI).
-->
### 存储插件 {#storage-plugins}

@@ -508,7 +508,7 @@ which are deprecated since Kubernetes v1.23 (in favour of CSI).
Kubernetes 的方式使其支持新类别的卷。
这些卷可以由持久的外部存储提供支持,可以提供临时存储,还可以使用文件系统范型为信息提供只读接口。

Kubernetes 还包括对 [FlexVolume](/zh-cn/docs/concepts/storage/volumes/#flexvolume)
Kubernetes 还包括对 [FlexVolume](/zh-cn/docs/concepts/storage/volumes/#flexvolume-deprecated)
插件的支持,该插件自 Kubernetes v1.23 起被弃用(被 CSI 替代)。

<!--
@@ -1,13 +1,13 @@
---
title: 注解
content_type: concept
weight: 50
weight: 60
---

<!--
title: Annotations
content_type: concept
weight: 50
weight: 60
-->

<!-- overview -->

@@ -124,7 +124,7 @@ If the prefix is omitted, the annotation Key is presumed to be private to the us
-->
## 语法和字符集

_注解(Annotations)_ 存储的形式是键/值对。有效的注解键分为两部分:
**注解(Annotations)** 存储的形式是键/值对。有效的注解键分为两部分:
可选的前缀和名称,以斜杠(`/`)分隔。
名称段是必需项,并且必须在 63 个字符以内,以字母数字字符(`[a-z0-9A-Z]`)开头和结尾,
并允许使用破折号(`-`),下划线(`_`),点(`.`)和字母数字。
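As a rough illustration of the name-segment rules described above (at most 63 characters, alphanumeric at both ends, with `-`, `_`, and `.` allowed in between), a regular-expression check might look like the following sketch. This is only an approximation for illustration, not the validation code Kubernetes itself uses:

```python
import re

# Name segment of an annotation key: <= 63 chars, must begin and end with an
# alphanumeric character ([a-z0-9A-Z]); dashes, underscores, and dots are
# allowed between them.
NAME_RE = re.compile(r'^[A-Za-z0-9]([A-Za-z0-9._-]{0,61}[A-Za-z0-9])?$')

def is_valid_annotation_name(name: str) -> bool:
    return bool(NAME_RE.match(name))

print(is_valid_annotation_name("imageregistry"))  # True
print(is_valid_annotation_name("-bad-start"))     # False: must start alphanumeric
print(is_valid_annotation_name("a" * 64))         # False: longer than 63 chars
```

The optional prefix, when present, has its own DNS-subdomain rules and is checked separately from the name segment sketched here.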
@@ -1,13 +1,13 @@
---
title: 推荐使用的标签
content_type: concept
weight: 100
---
<!--
---
title: Recommended Labels
content_type: concept
weight: 100
---
-->
@@ -1,10 +1,12 @@
---
title: 字段选择器
weight: 60
content_type: concept
weight: 70
---
<!--
title: Field Selectors
weight: 60
content_type: concept
weight: 70
-->

<!--
@@ -1,7 +1,7 @@
---
title: Finalizers
content_type: concept
weight: 60
weight: 80
---

<!-- overview -->
@@ -1,7 +1,7 @@
---
title: 对象名称和 IDs
title: 对象名称和 ID
content_type: concept
weight: 20
weight: 30
---
<!--
reviewers:

@@ -9,7 +9,7 @@ reviewers:
- thockin
title: Object Names and IDs
content_type: concept
weight: 20
weight: 30
-->

<!-- overview -->

@@ -164,7 +164,7 @@ Some resource types have additional restrictions on their names.
某些资源类型可能具有额外的命名约束。
{{< /note >}}

## UIDs
## UID

{{< glossary_definition term_id="uid" length="all" >}}
@@ -1,7 +1,7 @@
---
title: 名字空间
content_type: concept
weight: 30
weight: 45
---
<!--
reviewers:

@@ -10,7 +10,7 @@ reviewers:
- thockin
title: Namespaces
content_type: concept
weight: 30
weight: 45
-->

<!-- overview -->
@@ -1,7 +1,7 @@
---
title: Kubernetes 对象管理
content_type: concept
weight: 15
weight: 20
---

<!-- overview -->
@@ -1,7 +1,7 @@
---
title: 属主与附属
content_type: concept
weight: 60
weight: 90
---
<!--
title: Owners and Dependents
@@ -3,11 +3,7 @@ title: Pod 安全策略
content_type: concept
weight: 30
---

<!--
reviewers:
- liggitt
- tallclair
title: Pod Security Policies
content_type: concept
weight: 30

@@ -15,18 +11,20 @@ weight: 30

<!-- overview -->

{{< feature-state for_k8s_version="v1.21" state="deprecated" >}}

{{< note >}}
{{% alert title="被移除的特性" color="warning" %}}
<!--
PodSecurityPolicy was [deprecated](/blog/2021/04/08/kubernetes-1-21-release-announcement/#podsecuritypolicy-deprecation)
in Kubernetes v1.21, and removed from Kubernetes in v1.25.
Instead of using PodSecurityPolicy, you can enforce similar restrictions on Pods using
either or both:
-->
PodSecurityPolicy 在 Kubernetes v1.21
中[被弃用](/blog/2021/04/08/kubernetes-1-21-release-announcement/#podsecuritypolicy-deprecation),
在 Kubernetes v1.25 中被移除。
{{% /alert %}}

<!--
Instead of using PodSecurityPolicy, you can enforce similar restrictions on Pods using
either or both:
-->
作为替代,你可以使用下面任一方式执行类似的限制,或者同时使用下面这两种方式。

<!--

@@ -44,10 +42,10 @@ see [PodSecurityPolicy Deprecation: Past, Present, and Future](/blog/2021/04/06/
有关如何迁移,
参阅[从 PodSecurityPolicy 迁移到内置的 PodSecurity 准入控制器](/zh-cn/docs/tasks/configure-pod-container/migrate-from-psp/)。
有关移除此 API 的更多信息,参阅
[PodSecurityPolicy 弃用: 过去、现在和未来](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/)。
[弃用 PodSecurityPolicy:过去、现在、未来](/zh-cn/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/)。

<!--
If you are not running Kubernetes v{{< skew currentVersion >}}, check the documentation for
your version of Kubernetes.
-->
如果所运行的 Kubernetes 不是 v{{< skew currentVersion >}} 版本,则需要查看你所使用的 Kubernetes 版本的对应文档。
{{< /note >}}
@@ -577,18 +577,33 @@ Some uses for an `emptyDir` are:
* 在 Web 服务器容器服务数据时,保存内容管理器容器获取的文件。

<!--
Depending on your environment, `emptyDir` volumes are stored on whatever medium that backs the
node such as disk or SSD, or network storage. However, if you set the `emptyDir.medium` field
to `"Memory"`, Kubernetes mounts a tmpfs (RAM-backed filesystem) for you instead.
While tmpfs is very fast, be aware that unlike disks, tmpfs is cleared on
node reboot and any files you write count against your container's
memory limit.
The `emptyDir.medium` field controls where `emptyDir` volumes are stored. By
default `emptyDir` volumes are stored on whatever medium that backs the node
such as disk, SSD, or network storage, depending on your environment. If you set
the `emptyDir.medium` field to `"Memory"`, Kubernetes mounts a tmpfs (RAM-backed
filesystem) for you instead. While tmpfs is very fast, be aware that unlike
disks, tmpfs is cleared on node reboot and any files you write count against
your container's memory limit.
-->
取决于你的环境,`emptyDir` 卷存储在该节点所使用的介质上;这里的介质可以是磁盘或 SSD
或网络存储。但是,你可以将 `emptyDir.medium` 字段设置为 `"Memory"`,以告诉 Kubernetes
为你挂载 tmpfs(基于 RAM 的文件系统)。
虽然 tmpfs 速度非常快,但是要注意它与磁盘不同。
tmpfs 在节点重启时会被清除,并且你所写入的所有文件都会计入容器的内存消耗,受容器内存限制约束。
`emptyDir.medium` 字段用来控制 `emptyDir` 卷的存储位置。
默认情况下,`emptyDir` 卷存储在该节点所使用的介质上;
此处的介质可以是磁盘、SSD 或网络存储,这取决于你的环境。
你可以将 `emptyDir.medium` 字段设置为 `"Memory"`,
以告诉 Kubernetes 为你挂载 tmpfs(基于 RAM 的文件系统)。
虽然 tmpfs 速度非常快,但是要注意它与磁盘不同:tmpfs 在节点重启时会被清除,
并且你所写入的所有文件都会计入容器的内存消耗,受容器内存限制约束。

<!--
A size limit can be specified for the default medium, which limits the capacity
of the `emptyDir` volume. The storage is allocated from [node ephemeral
storage](docs/concepts/configuration/manage-resources-containers/#setting-requests-and-limits-for-local-ephemeral-storage).
If that is filled up from another source (for example, log files or image
overlays), the `emptyDir` may run out of capacity before this limit.
-->
你可以通过为默认介质指定大小限制,来限制 `emptyDir` 卷的存储容量。
此存储是从[节点临时存储](/zh-cn/docs/concepts/configuration/manage-resources-containers/#setting-requests-and-limits-for-local-ephemeral-storage)中分配的。
如果来自其他来源(如日志文件或镜像分层数据)的数据占满了存储,`emptyDir`
可能会在达到此限制之前发生存储容量不足的问题。

{{< note >}}
<!--

@@ -620,7 +635,8 @@ spec:
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
    emptyDir:
      sizeLimit: 500Mi
```
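The two `emptyDir` fields discussed in this hunk can also be sketched programmatically. Below is a minimal Python sketch that builds an equivalent Pod manifest as a plain dictionary (the pod, container, and volume names are just placeholders taken from the example above):

```python
import json

# Minimal sketch of a Pod manifest using an emptyDir volume with both fields
# discussed above: medium ("Memory" mounts a tmpfs) and sizeLimit.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "test-pd"},
    "spec": {
        "containers": [{
            "name": "test-container",
            "image": "registry.k8s.io/test-webserver",
            "volumeMounts": [{"mountPath": "/cache", "name": "cache-volume"}],
        }],
        "volumes": [{
            "name": "cache-volume",
            # medium "" (the default) uses node-backed storage; "Memory" mounts
            # a tmpfs whose contents count against the container's memory limit.
            "emptyDir": {"medium": "Memory", "sizeLimit": "500Mi"},
        }],
    },
}

print(json.dumps(pod["spec"]["volumes"][0], indent=2))
```

Serializing `pod` to YAML and feeding it to `kubectl apply -f -` would create the same object as the manifest shown in the diff above.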
<!--

@@ -886,7 +902,7 @@ spec:
```

<!--
### glusterfs (deprecated)
### glusterfs (deprecated) {#glusterfs}
-->
### glusterfs(已弃用) {#glusterfs}

@@ -897,7 +913,7 @@ A `glusterfs` volume allows a [Glusterfs](https://www.gluster.org) (an open
source networked filesystem) volume to be mounted into your Pod. Unlike
`emptyDir`, which is erased when a Pod is removed, the contents of a
`glusterfs` volume are preserved and the volume is merely unmounted. This
means that a glusterfs volume can be pre-populated with data, and that data can
means that a `glusterfs` volume can be pre-populated with data, and that data can
be shared between pods. GlusterFS can be mounted by multiple writers
simultaneously.
-->

@@ -1270,7 +1286,7 @@ spec:
  nfs:
    server: my-nfs-server.example.com
    path: /my-nfs-volume
    readonly: true
    readOnly: true
```

{{< note >}}

@@ -2093,7 +2109,7 @@ The following in-tree plugins support persistent storage on Windows nodes:
* [`vsphereVolume`](#vspherevolume)

<!--
### flexVolume (deprecated)
### flexVolume (deprecated) {#flexvolume}
-->
### flexVolume(已弃用) {#flexvolume}
@@ -239,6 +239,8 @@ Take the previous frontend ReplicaSet example, and the Pods specified in the fol
ReplicaSet 的选择算符相匹配的标签。原因在于 ReplicaSet 并不仅限于拥有在其模板中设置的
Pod,它还可以像前面小节中所描述的那样获得其他 Pod。

以前面的 frontend ReplicaSet 为例,并在以下清单中指定这些 Pod:

{{< codenew file="pods/pod-rs.yaml" >}}

<!--
@@ -2,13 +2,13 @@
title: 进阶贡献
slug: advanced
content_type: concept
weight: 98
weight: 100
---
<!--
title: Advanced contributing
slug: advanced
content_type: concept
weight: 98
weight: 100
-->

<!-- overview -->
@@ -1,7 +1,7 @@
---
title: 查看站点分析
content_type: concept
weight: 100
weight: 120
card:
  name: contribute
  weight: 100

@@ -10,7 +10,7 @@ card:
<!--
title: Viewing Site Analytics
content_type: concept
weight: 100
weight: 120
card:
  name: contribute
  weight: 100
@@ -10,7 +10,7 @@ card:
  title: 为发行版本撰写功能特性文档
---
<!--
title: Documenting a feature for a release
linktitle: Documenting for a release
content_type: concept
main_menu: true

@@ -18,7 +18,7 @@ weight: 20
card:
  name: contribute
  weight: 45
  title: Documenting a feature for a release
-->

<!-- overview -->

@@ -248,7 +248,7 @@ make sure you add it to [Alpha/Beta Feature gates](/docs/reference/command-line-
table as part of your pull request. With new feature gates, a description of
the feature gate is also required. If your feature is GA'ed or deprecated,
make sure to move it from the
[Feature gates for Alpha/Feature](docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features) table
[Feature gates for Alpha/Feature](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features) table
to [Feature gates for graduated or deprecated features](/docs/reference/command-line-tools-reference/feature-gates-removed/#feature-gates-that-are-removed)
table with Alpha and Beta history intact.
-->
@@ -1,12 +1,12 @@
---
title: 内容组织
content_type: concept
weight: 40
weight: 90
---
<!--
title: Content organization
content_type: concept
weight: 40
weight: 90
-->

<!-- overview -->
@@ -2,13 +2,13 @@
title: 图表指南
linktitle: 图表指南
content_type: concept
weight: 15
weight: 60
---
<!--
title: Diagram Guide
linktitle: Diagram guide
content_type: concept
weight: 15
weight: 60
-->

<!--Overview-->
@@ -1,10 +1,12 @@
---
title: 定制 Hugo 短代码
content_type: concept
weight: 120
---
<!--
title: Custom Hugo Shortcodes
content_type: concept
weight: 120
-->

<!-- overview -->
@@ -1,7 +1,7 @@
---
title: 页面内容类型
content_type: concept
weight: 30
weight: 80
card:
  name: contribute
  weight: 30

@@ -9,7 +9,7 @@ card:
<!--
title: Page content types
content_type: concept
weight: 30
weight: 80
card:
  name: contribute
  weight: 30
@@ -2,13 +2,13 @@
title: 文档样式指南
linktitle: 样式指南
content_type: concept
weight: 10
weight: 40
---
<!--
title: Documentation Style Guide
linktitle: Style guide
content_type: concept
weight: 10
weight: 40
-->

<!-- overview -->
@@ -1,12 +1,12 @@
---
title: 撰写新主题
content_type: task
weight: 20
weight: 70
---
<!--
title: Writing a new topic
content_type: task
weight: 20
weight: 70
-->

<!-- overview -->
@@ -11,7 +11,7 @@ reviewers:
- enj
title: Certificate Signing Requests
content_type: concept
weight: 20
weight: 25
-->

<!-- overview -->
@@ -1,10 +1,12 @@
---
title: Kubelet 认证/鉴权
weight: 110
---
<!--
reviewers:
- liggitt
title: Kubelet authentication/authorization
weight: 110
-->

<!--
@@ -1,6 +1,7 @@
---
title: TLS 启动引导
content_type: concept
weight: 120
---
<!--
reviewers:

@@ -10,6 +11,7 @@ reviewers:
- awly
title: TLS bootstrapping
content_type: concept
weight: 120
-->

<!-- overview -->
@@ -1,7 +1,7 @@
---
title: Webhook 模式
content_type: concept
weight: 95
weight: 100
---
<!--
reviewers:

@@ -11,7 +11,7 @@ reviewers:
- liggitt
title: Webhook Mode
content_type: concept
weight: 95
weight: 100
-->

<!-- overview -->
@@ -308,7 +308,7 @@ https://godoc.org/k8s.io/kubelet/config/v1beta1#KubeletConfiguration。</p>
</span><span style="color:#bbb"></span><span style="color:#000;font-weight:bold">etcd</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#998;font-style:italic"># one of local or external</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">local</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">imageRepository</span>:<span style="color:#bbb"> </span><span style="color:#d14">"k8s.gcr.io"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">imageRepository</span>:<span style="color:#bbb"> </span><span style="color:#d14">"registry.k8s.io"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">imageTag</span>:<span style="color:#bbb"> </span><span style="color:#d14">"3.2.24"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">dataDir</span>:<span style="color:#bbb"> </span><span style="color:#d14">"/var/lib/etcd"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">extraArgs</span>:<span style="color:#bbb">

@@ -362,7 +362,7 @@ https://godoc.org/k8s.io/kubelet/config/v1beta1#KubeletConfiguration。</p>
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">readOnly</span>:<span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">false</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">pathType</span>:<span style="color:#bbb"> </span>File<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#000;font-weight:bold">certificatesDir</span>:<span style="color:#bbb"> </span><span style="color:#d14">"/etc/kubernetes/pki"</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#000;font-weight:bold">imageRepository</span>:<span style="color:#bbb"> </span><span style="color:#d14">"k8s.gcr.io"</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#000;font-weight:bold">imageRepository</span>:<span style="color:#bbb"> </span><span style="color:#d14">"registry.k8s.io"</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#000;font-weight:bold">useHyperKubeImage</span>:<span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">false</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#000;font-weight:bold">clusterName</span>:<span style="color:#bbb"> </span><span style="color:#d14">"example-cluster"</span><span style="color:#bbb">
</span><span style="color:#bbb"></span>---<span style="color:#bbb">

@@ -563,16 +563,17 @@ be used for assigning a stable DNS to the control plane.</li>
<td>
<!--
<p><code>imageRepository</code> sets the container registry to pull images from.
If empty, <code>k8s.gcr.io</code> will be used by default; in case of kubernetes version is
If empty, <code>registry.k8s.io</code> will be used by default;
in case of kubernetes version is
a CI build (kubernetes version starts with <code>ci/</code>) <code>gcr.io/k8s-staging-ci-images</code>
is used as a default for control plane components and for kube-proxy, while
<code>k8s.gcr.io</code> will be used for all the other images.</p>
<code>registry.k8s.io</code> will be used for all the other images.</p>
-->
<p><code>imageRepository</code> 设置用来拉取镜像的容器仓库。
如果此字段为空,默认使用 <code>k8s.gcr.io</code>;
如果此字段为空,默认使用 <code>registry.k8s.io</code>;
当 Kubernetes 用来执行 CI 构造时(Kubernetes 版本以 <code>ci/</code> 开头),
将默认使用 <code>gcr.io/k8s-staging-ci-images</code> 来拉取控制面组件镜像,
而使用 <code>k8s.gcr.io</code> 来拉取所有其他镜像。</p>
而使用 <code>registry.k8s.io</code> 来拉取所有其他镜像。</p>
</td>
</tr>
<tr><td><code>useHyperKubeImage</code> <B><!--[Required]-->[必需]</B><br/>
@@ -60,7 +60,7 @@ The following client libraries are officially maintained by
| dotnet | [github.com/kubernetes-client/csharp](https://github.com/kubernetes-client/csharp) | [browse](https://github.com/kubernetes-client/csharp/tree/master/examples/simple)
| Go | [github.com/kubernetes/client-go/](https://github.com/kubernetes/client-go/) | [browse](https://github.com/kubernetes/client-go/tree/master/examples)
| Haskell | [github.com/kubernetes-client/haskell](https://github.com/kubernetes-client/haskell) | [browse](https://github.com/kubernetes-client/haskell/tree/master/kubernetes-client/example)
| Java | [github.com/kubernetes-client/java](https://github.com/kubernetes-client/java/) | [browse](https://github.com/kubernetes-client/java#installation)
| Java | [github.com/kubernetes-client/java](https://github.com/kubernetes-client/java/) | [browse](https://github.com/kubernetes-client/java/tree/master/examples)
| JavaScript | [github.com/kubernetes-client/javascript](https://github.com/kubernetes-client/javascript) | [browse](https://github.com/kubernetes-client/javascript/tree/master/examples)
| Perl | [github.com/kubernetes-client/perl/](https://github.com/kubernetes-client/perl/) | [browse](https://github.com/kubernetes-client/perl/tree/master/examples)
| Python | [github.com/kubernetes-client/python/](https://github.com/kubernetes-client/python/) | [browse](https://github.com/kubernetes-client/python/tree/master/examples)

@@ -72,7 +72,7 @@ The following client libraries are officially maintained by
| dotnet | [github.com/kubernetes-client/csharp](https://github.com/kubernetes-client/csharp) | [浏览](https://github.com/kubernetes-client/csharp/tree/master/examples/simple)
| Go | [github.com/kubernetes/client-go/](https://github.com/kubernetes/client-go/) | [浏览](https://github.com/kubernetes/client-go/tree/master/examples)
| Haskell | [github.com/kubernetes-client/haskell](https://github.com/kubernetes-client/haskell) | [浏览](https://github.com/kubernetes-client/haskell/tree/master/kubernetes-client/example)
| Java | [github.com/kubernetes-client/java](https://github.com/kubernetes-client/java/) | [浏览](https://github.com/kubernetes-client/java#installation)
| Java | [github.com/kubernetes-client/java](https://github.com/kubernetes-client/java/) | [浏览](https://github.com/kubernetes-client/java/tree/master/examples)
| JavaScript | [github.com/kubernetes-client/javascript](https://github.com/kubernetes-client/javascript) | [浏览](https://github.com/kubernetes-client/javascript/tree/master/examples)
| Perl | [github.com/kubernetes-client/perl/](https://github.com/kubernetes-client/perl/) | [浏览](https://github.com/kubernetes-client/perl/tree/master/examples)
| Python | [github.com/kubernetes-client/python/](https://github.com/kubernetes-client/python/) | [浏览](https://github.com/kubernetes-client/python/tree/master/examples)
@@ -1,14 +1,14 @@
---
title: 使用 kubeadm 支持双协议栈
content_type: task
weight: 110
weight: 100
min-kubernetes-server-version: 1.21
---

<!--
title: Dual-stack support with kubeadm
content_type: task
weight: 110
weight: 100
min-kubernetes-server-version: 1.21
-->
@@ -1,13 +1,13 @@
---
title: Turnkey 云解决方案
content_type: concept
weight: 30
weight: 40
---
<!--
---
title: Turnkey Cloud Solutions
content_type: concept
weight: 30
weight: 40
---
-->
<!-- overview -->
@@ -29,9 +29,13 @@ admission controller. This can be done effectively using a combination of dry-ru
{{% version-check %}}

<!--
- Ensure the `PodSecurity` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features) is enabled.
If you are currently running a version of Kubernetes other than
{{ skew currentVersion }}, you may want to switch to viewing this
page in the documentation for the version of Kubernetes that you
are actually running.
-->
- 确保 `PodSecurity` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)被启用。
如果你目前运行的 Kubernetes 版本不是 {{ skew currentVersion }},
你可能要切换本页面以查阅你实际所运行的 Kubernetes 版本文档。

<!--
This page assumes you are already familiar with the basic [Pod Security Admission](/docs/concepts/security/pod-security-admission/)

@@ -141,7 +145,7 @@ to other policy enforcement mechanisms, and can provide a useful fallback runnin
admission webhooks.
-->
即便 Pod 安全性准入无法满足你的所有需求,该机制也是设计用作其他策略实施机制的
_补充_,因此可以和其他准入 Webhook 一起运行,进而提供一种有用的兜底机制。
**补充**,因此可以和其他准入 Webhook 一起运行,进而提供一种有用的兜底机制。

<!--
## 1. Review namespace permissions {#review-namespace-permissions}

@@ -164,8 +168,7 @@ Pod 安全性准入是通过[名字空间上的标签](/zh-cn/docs/concepts/secu
名字空间的人都可以更改该名字空间的 Pod 安全性级别,而这可能会被利用来绕过约束性更强的策略。
在继续执行迁移操作之前,请确保只有被信任的、有特权的用户具有这类名字空间访问权限。
不建议将这类强大的访问权限授予不应获得权限提升的用户,不过如果你必须这样做,
你需要使用一个
[准入 Webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/)
你需要使用一个[准入 Webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/)
来针对为 Namespace 对象设置 Pod 安全性级别设置额外的约束。

<!--

@@ -185,8 +188,8 @@ policies](#psp-update-rollout) section below.
针对要修改的、已存在的 PodSecurityPolicy,你应该将这里所建议的更改写入到其离线副本中。
所克隆的 PSP 应该与原来的副本名字不同,并且按字母序要排到原副本之前
(例如,可以向 PSP 名字前加一个 `0`)。
先不要在 Kubernetes 中创建新的策略 - 这类操作会在后文的[推出更新的策略](#psp-update-rollout)
部分讨论。
先不要在 Kubernetes 中创建新的策略 -
这类操作会在后文的[推出更新的策略](#psp-update-rollout)部分讨论。

<!--
### 2.a. Eliminate purely mutating fields {#eliminate-mutating-fields}

@@ -504,7 +507,7 @@ kubectl label --dry-run=server --overwrite ns $NAMESPACE pod-security.kubernetes
This command will return a warning for any _existing_ pods that are not valid under the proposed
level.
-->
此命令会针对在所提议的级别下不再合法的所有 _现存_ Pod 返回警告信息。
此命令会针对在所提议的级别下不再合法的所有 **现存** Pod 返回警告信息。

<!--
The second option is better for catching workloads that are not currently running: audit mode. When

@@ -612,9 +615,8 @@ audit, and/or warn level for unlabeled namespaces. See
for more information.
-->
你也可以静态配置 Pod 安全性准入控制器,为尚未打标签的名字空间设置默认的
enforce、audit 与/或 warn 级别。详细信息可参阅
[配置准入控制器](/zh-cn/docs/tasks/configure-pod-container/enforce-standards-admission-controller/#configure-the-admission-controller)
页面。
enforce、audit 与/或 warn 级别。
详细信息可参阅[配置准入控制器](/zh-cn/docs/tasks/configure-pod-container/enforce-standards-admission-controller/#configure-the-admission-controller)页面。

<!--
## 5. Disable PodSecurityPolicy {#disable-psp}
@ -103,7 +103,7 @@ ConfigMap above as `/redis-master/redis.conf` inside the Pod.
|
|||
|
||||
* A volume named `config` is created by `spec.volumes[1]`.
-* The `key` and `path` under `spec.volumes[1].items[0]` expose the `redis-config` key from the `example-redis-config` ConfigMap as a file named `redis-config` on the `config` volume.
+* The `key` and `path` under `spec.volumes[1].items[0]` expose the `redis-config` key from the `example-redis-config` ConfigMap as a file named `redis.conf` on the `config` volume.
* The `config` volume is then mounted at `/redis-master` by `spec.containers[0].volumeMounts[1]`.

The net effect of this is to expose the `data.redis-config` data from the `example-redis-config` configuration above
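The fields referred to in the list above fit together roughly as follows; everything not named in the text (the container name, image, and first volume/mount) is a placeholder:

```yaml
spec:
  containers:
  - name: redis            # hypothetical container name
    image: redis           # hypothetical image
    volumeMounts:
    - name: data           # hypothetical first mount
      mountPath: /redis-master-data
    - name: config         # spec.containers[0].volumeMounts[1]
      mountPath: /redis-master
  volumes:
  - name: data             # hypothetical first volume
    emptyDir: {}
  - name: config           # spec.volumes[1]
    configMap:
      name: example-redis-config
      items:
      - key: redis-config  # key in the ConfigMap data
        path: redis.conf   # file name exposed on the volume
```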
@@ -3,12 +3,13 @@ kind: ClusterRole
metadata:
  annotations:
    kubernetes.io/description: |-
      Add endpoints write permissions to the edit and admin roles. This was
      removed by default in 1.22 because of CVE-2021-25740. See
      https://issue.k8s.io/103675. This can allow writers to direct LoadBalancer
      or Ingress implementations to expose backend IPs that would not otherwise
      be accessible, and can circumvent network policies or security controls
      intended to prevent/isolate access to those backends.
      EndpointSlice was never included in the edit and admin roles, so there
      is nothing to restore for the EndpointSlice API.
  labels:
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
  name: custom:aggregate-to-edit:endpoints # you can change this name if you wish
@@ -1,11 +1,12 @@
{{- $isBlogPost := eq .Section "blog" }}
{{- $ogType := cond (.IsHome) "website" "article" }}
<!-- per-page robot indexing controls -->
-{{- if hugo.IsProduction -}}
-<meta name="ROBOTS" content="INDEX, FOLLOW">
-{{- else -}}
-<meta name="ROBOTS" content="NOINDEX, NOFOLLOW">
-{{- end -}}
+{{ $outputFormat := partial "outputformat.html" . -}}
+{{ if and hugo.IsProduction (ne $outputFormat "print") -}}
+<meta name="robots" content="index, follow">
+{{ else -}}
+<meta name="robots" content="noindex, nofollow">
+{{ end -}}

<!-- alternative translations -->
{{ range .Translations -}}
@@ -120,6 +120,7 @@
/docs/concepts/jobs/run-to-completion-finite-workloads/ /docs/concepts/workloads/controllers/job/ 301
/id/docs/concepts/jobs/run-to-completion-finite-workloads/ /id/docs/concepts/workloads/controllers/job/ 301
/docs/concepts/nodes/node/ /docs/concepts/architecture/nodes/ 301
+/docs/concepts/services-networking/connect-applications-service/ /docs/tutorials/services/connect-applications-service/ 301
/docs/concepts/object-metadata/annotations/ /docs/concepts/overview/working-with-objects/annotations/ 301
/docs/concepts/overview/ /docs/concepts/overview/what-is-kubernetes/ 301
/docs/concepts/overview/extending/ /docs/concepts/extend-kubernetes/ 301
@@ -393,7 +394,7 @@
/docs/user-guide/configmap/ /docs/tasks/configure-pod-container/configure-pod-configmap/ 301
/docs/user-guide/configmap/README/ /docs/tasks/configure-pod-container/configure-pod-configmap/ 301
/docs/user-guide/configuring-containers/ /docs/tasks/configure-pod-container/configure-pod-configmap/ 301
-/docs/user-guide/connecting-applications/ /docs/concepts/services-networking/connect-applications-service/ 301
+/docs/user-guide/connecting-applications/ /docs/tutorials/services/connect-applications-service/ 301
/docs/user-guide/connecting-to-applications-port-forward/ /docs/tasks/access-application-cluster/port-forward-access-application-cluster/ 301
/docs/user-guide/connecting-to-applications-proxy/ /docs/tasks/access-kubernetes-api/http-proxy-access-api/ 301
/docs/user-guide/container-environment/ /docs/concepts/containers/container-lifecycle-hooks/ 301
@@ -40,7 +40,7 @@ error_msgs = []
# pip should be installed when Python is installed, but just in case...
if not (shutil.which('pip') or shutil.which('pip3')):
    error_msgs.append(
-        "Install pip so you can install PyYAML. https://pip.pypa.io/en/stable/installing")
+        "Install pip so you can install PyYAML. https://pip.pypa.io/en/stable/installation")

reqs = subprocess.check_output([sys.executable, '-m', 'pip', 'freeze'])
installed_packages = [r.decode().split('==')[0] for r in reqs.split()]