Merge pull request #38052 from krol3/merged-main-dev-1.26

Merge main branch into dev-1.26
pull/38173/head
Kubernetes Prow Robot 2022-11-29 11:59:09 -08:00 committed by GitHub
commit cec61c1754
132 changed files with 5076 additions and 1318 deletions

View File

@ -212,6 +212,7 @@ aliases:
- ngtuna
- truongnh1992
sig-docs-ru-owners: # Admins for Russian content
- Arhell
- msheldyakov
- aisonaku
- potapy4
@ -245,11 +246,11 @@ aliases:
# authoritative source: git.k8s.io/community/OWNERS_ALIASES
committee-steering: # provide PR approvals for announcements
- cblecker
- cpanato
- bentheelder
- justaugustus
- mrbobbytables
- palnabarun
- parispittman
- tpepper
# authoritative source: https://git.k8s.io/sig-release/OWNERS_ALIASES
sig-release-leads:

View File

@ -42,7 +42,7 @@ baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl
{{< /tab >}}

View File

@ -30,7 +30,7 @@ This then points to the other benefit of next generation PaaS being built on top
Kubernetes is infrastructure for next generation applications, PaaS and more. Given this, I'm really excited by our [announcement](https://azure.microsoft.com/en-us/blog/kubernetes-now-generally-available-on-azure-container-service/) today that Kubernetes on Azure Container Service has reached general availability. When you deploy your next generation application to Azure, whether on a PaaS or deployed directly onto Kubernetes itself (or both), you can deploy it onto a managed, supported Kubernetes cluster.
Furthermore, because we know that the world of PaaS and software development in general is a hybrid one, we're excited to announce the preview availability of [Windows clusters in Azure Container Service](https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough). We're also working on [hybrid clusters](https://github.com/Azure/acs-engine/blob/master/docs/kubernetes/windows.md) in [ACS-Engine](https://github.com/Azure/acs-engine) and expect to roll those out to general availability in the coming months.
Furthermore, because we know that the world of PaaS and software development in general is a hybrid one, we're excited to announce the preview availability of [Windows clusters in Azure Container Service](https://learn.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough). We're also working on [hybrid clusters](https://github.com/Azure/acs-engine/blob/master/docs/kubernetes/windows.md) in [ACS-Engine](https://github.com/Azure/acs-engine) and expect to roll those out to general availability in the coming months.
I'm thrilled to see how containers and container as a service are changing the world of compute; I'm confident that we're only scratching the surface of the transformation we'll see in the coming months and years.

View File

@ -94,7 +94,7 @@ If youd like to try out Kubeflow, we have a number of options for you:
1. You can use sample walkthroughs hosted on [Katacoda](https://www.katacoda.com/kubeflow)
2. You can follow a guided tutorial with existing models from the [examples repository](https://github.com/kubeflow/examples). These include the [GitHub Issue Summarization](https://github.com/kubeflow/examples/tree/master/github_issue_summarization), [MNIST](https://github.com/kubeflow/examples/tree/master/mnist) and [Reinforcement Learning with Agents](https://github.com/kubeflow/examples/tree/v0.5.1/agents).
3. You can start a cluster on your own and try your own model. Any Kubernetes conformant cluster will support Kubeflow including those from contributors [Caicloud](https://www.prnewswire.com/news-releases/caicloud-releases-its-kubernetes-based-cluster-as-a-service-product-claas-20-and-the-first-tensorflow-as-a-service-taas-11-while-closing-6m-series-a-funding-300418071.html), [Canonical](https://jujucharms.com/canonical-kubernetes/), [Google](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-container-cluster), [Heptio](https://heptio.com/products/kubernetes-subscription/), [Mesosphere](https://github.com/mesosphere/dcos-kubernetes-quickstart), [Microsoft](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough), [IBM](https://cloud.ibm.com/docs/containers?topic=containers-cs_cluster_tutorial#cs_cluster_tutorial), [Red Hat/Openshift ](https://docs.openshift.com/container-platform/3.3/install_config/install/quick_install.html#install-config-install-quick-install)and [Weaveworks](https://www.weave.works/product/cloud/).
3. You can start a cluster on your own and try your own model. Any Kubernetes conformant cluster will support Kubeflow including those from contributors [Caicloud](https://www.prnewswire.com/news-releases/caicloud-releases-its-kubernetes-based-cluster-as-a-service-product-claas-20-and-the-first-tensorflow-as-a-service-taas-11-while-closing-6m-series-a-funding-300418071.html), [Canonical](https://jujucharms.com/canonical-kubernetes/), [Google](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-container-cluster), [Heptio](https://heptio.com/products/kubernetes-subscription/), [Mesosphere](https://github.com/mesosphere/dcos-kubernetes-quickstart), [Microsoft](https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough), [IBM](https://cloud.ibm.com/docs/containers?topic=containers-cs_cluster_tutorial#cs_cluster_tutorial), [Red Hat/Openshift ](https://docs.openshift.com/container-platform/3.3/install_config/install/quick_install.html#install-config-install-quick-install)and [Weaveworks](https://www.weave.works/product/cloud/).
There were also a number of sessions at KubeCon + CloudNativeCon EU 2018 covering Kubeflow. The links to the talks are here; the associated videos will be posted in the coming days.

View File

@ -10,11 +10,11 @@ date: 2018-10-08
With Kubernetes v1.12, Azure virtual machine scale sets (VMSS) and cluster-autoscaler have reached their General Availability (GA) and User Assigned Identity is available as a preview feature.
_Azure VMSS allow you to create and manage identical, load balanced VMs that automatically increase or decrease based on demand or a set schedule. This enables you to easily manage and scale multiple VMs to provide high availability and application resiliency, ideal for large-scale applications like container workloads [[1]](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview)._
_Azure VMSS allow you to create and manage identical, load balanced VMs that automatically increase or decrease based on demand or a set schedule. This enables you to easily manage and scale multiple VMs to provide high availability and application resiliency, ideal for large-scale applications like container workloads [[1]](https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview)._
Cluster autoscaler allows you to adjust the size of the Kubernetes clusters based on the load conditions automatically.
Another exciting feature which v1.12 brings to the table is the ability to use User Assigned Identities with Kubernetes clusters [[12]](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview).
Another exciting feature which v1.12 brings to the table is the ability to use User Assigned Identities with Kubernetes clusters [[12]](https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview).
In this article, we will do a brief overview of VMSS, cluster autoscaler and user assigned identity features on Azure.
@ -22,7 +22,7 @@ In this article, we will do a brief overview of VMSS, cluster autoscaler and use
Azure's Virtual Machine Scale Sets (VMSS) feature offers users the ability to automatically create VMs from a single central configuration, provides load balancing via L4 and L7 load balancing, provides a path to use availability zones for high availability, supports large-scale VM instances, and more.
VMSS consists of a group of virtual machines, which are identical and can be managed and configured at a group level. More details of this feature in Azure itself can be found at the following link [[1]](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview).
VMSS consists of a group of virtual machines, which are identical and can be managed and configured at a group level. More details of this feature in Azure itself can be found at the following link [[1]](https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview).
With Kubernetes v1.12, customers can create Kubernetes clusters out of VMSS instances and utilize VMSS features.
@ -254,7 +254,7 @@ Cluster Autoscaler currently supports four VM types: standard (VMAS), VMSS, ACS
## User Assigned Identity
In order for the Kubernetes cluster components to securely talk to the cloud services, they need to authenticate with the cloud provider. In Azure Kubernetes clusters, up until now this was done in two ways - Service Principals or Managed Identities. In the case of a service principal, the credentials are stored within the cluster, and there are password rotation and other challenges which the user needs to handle to accommodate this model. Managed service identities take this burden away from the user and manage the service instances directly [[12]](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview).
In order for the Kubernetes cluster components to securely talk to the cloud services, they need to authenticate with the cloud provider. In Azure Kubernetes clusters, up until now this was done in two ways - Service Principals or Managed Identities. In the case of a service principal, the credentials are stored within the cluster, and there are password rotation and other challenges which the user needs to handle to accommodate this model. Managed service identities take this burden away from the user and manage the service instances directly [[12]](https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview).
There are two kinds of managed identities possible - one is system assigned and another is user assigned. In the case of system assigned identity, each VM in the Kubernetes cluster is assigned a managed identity during creation. This identity is used by various Kubernetes components needing access to Azure resources. Examples of these operations are getting/updating load balancer configuration, getting/updating VM information, and so on. With the system assigned managed identity, the user has no control over the identity which is assigned to the underlying VM. The system automatically assigns it and this reduces the flexibility for the user.
@ -273,7 +273,7 @@ env.ServiceManagementEndpoint,
config.UserAssignedIdentityID)
```
This call hits either the instance metadata service or the VM extension [[12]](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview) to gather the token, which is then used to access various resources.
This call hits either the instance metadata service or the VM extension [[12]](https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview) to gather the token, which is then used to access various resources.
## Setting up a cluster with user assigned identity
@ -304,11 +304,11 @@ For azure specific discussions - please checkout the Azure SIG page at [[6]](htt
For CA, please checkout the Autoscaler project here [[7]](http://www.github.com/kubernetes/autoscaler) and join the [#sig-autoscaling](https://kubernetes.slack.com/messages/sig-autoscaling) Slack for more discussions.
For acs-engine (the unmanaged variety) on Azure, the docs can be found here: [[9]](https://github.com/Azure/acs-engine). More details about the managed offering, Azure Kubernetes Service (AKS), are here: [[5]](https://docs.microsoft.com/en-us/azure/aks/).
For acs-engine (the unmanaged variety) on Azure, the docs can be found here: [[9]](https://github.com/Azure/acs-engine). More details about the managed offering, Azure Kubernetes Service (AKS), are here: [[5]](https://learn.microsoft.com/en-us/azure/aks/).
## References
1) https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview
1) https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview
2) /docs/concepts/architecture/cloud-controller/
@ -316,7 +316,7 @@ For the acs-engine (the unmanaged variety) on Azure docs can be found here: [[9]
4) https://github.com/Azure/acs-engine/blob/master/docs/kubernetes/deploy.md
5) https://docs.microsoft.com/en-us/azure/aks/
5) https://learn.microsoft.com/en-us/azure/aks/
6) https://github.com/kubernetes/community/tree/master/sig-azure
@ -330,7 +330,7 @@ For the acs-engine (the unmanaged variety) on Azure docs can be found here: [[9]
11) /docs/concepts/architecture/
12) https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview
12) https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview
13) https://github.com/Azure/acs-engine/tree/master/examples/kubernetes-msi-userassigned

View File

@ -16,7 +16,7 @@ New to Windows 10 and WSL2, or new to Docker and Kubernetes? Welcome to this blo
Over the last few years, Kubernetes has become a de facto standard platform for running containerized services and applications in distributed environments. While a wide variety of distributions and installers exist to deploy Kubernetes in cloud environments (public, private or hybrid), or in bare metal environments, there is still a need to deploy and run Kubernetes locally, for example, on the developer's workstation.
Kubernetes was originally designed to be deployed and used in Linux environments. However, a good number of users (and not only application developers) use Windows OS as their daily driver. When Microsoft revealed WSL - [the Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/), the line between Windows and Linux environments became even less visible.
Kubernetes was originally designed to be deployed and used in Linux environments. However, a good number of users (and not only application developers) use Windows OS as their daily driver. When Microsoft revealed WSL - [the Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/), the line between Windows and Linux environments became even less visible.
Also, WSL brought the ability to run Kubernetes on Windows almost seamlessly!
@ -31,7 +31,7 @@ Since we will explain how to install KinD, we won't go into too much detail arou
However, here is the list of the prerequisites needed and their version/lane:
- OS: Windows 10 version 2004, Build 19041
- [WSL2 enabled](https://docs.microsoft.com/en-us/windows/wsl/wsl2-install)
- [WSL2 enabled](https://learn.microsoft.com/en-us/windows/wsl/wsl2-install)
- In order to install the distros as WSL2 by default, once WSL2 is installed, run the command `wsl.exe --set-default-version 2` in PowerShell
- WSL2 distro installed from the Windows Store - the distro used is Ubuntu-18.04
- [Docker Desktop for Windows](https://hub.docker.com/editions/community/docker-ce-desktop-windows), stable channel - the version used is 2.2.0.4

View File

@ -69,7 +69,7 @@ has 5 replicas, with `maxUnavailable` set to 2 and `partition` set to 0.
I can trigger a rolling update by changing the image to `k8s.gcr.io/nginx-slim:0.9`. Once I initiate the rolling update, I can
watch the pods update 2 at a time as the current value of maxUnavailable is 2. The below output shows a span of time and is not
complete. The maxUnavailable can be an absolute number (for example, 2) or a percentage of desired Pods (for example, 10%). The
absolute number is calculated from percentage by rounding down.
absolute number is calculated from percentage by rounding up to the nearest integer.
```
kubectl get pods --watch
```
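For reference, the kind of StatefulSet spec the example above describes might look roughly like the following sketch. The name, labels, and service name are assumptions for illustration, and `maxUnavailable` for StatefulSets is gated behind the `MaxUnavailableStatefulSet` feature gate.
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                       # hypothetical name for illustration
spec:
  serviceName: web                # hypothetical headless Service name
  replicas: 5
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.9
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0
      maxUnavailable: 2           # may also be a percentage, for example "10%"
```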

View File

@ -0,0 +1,136 @@
---
layout: blog
title: "Kubernetes Removals, Deprecations, and Major Changes in 1.26"
date: 2022-11-18
slug: upcoming-changes-in-kubernetes-1-26
---
**Author**: Frederico Muñoz (SAS)
Change is an integral part of the Kubernetes life-cycle: as Kubernetes grows and matures, features may be deprecated, removed, or replaced with improvements for the health of the project. For Kubernetes v1.26 there are several planned: this article identifies and describes some of them, based on the information available at this mid-cycle point in the v1.26 release process, which is still ongoing and can introduce additional changes.
## The Kubernetes API Removal and Deprecation process {#k8s-api-deprecation-process}
The Kubernetes project has a [well-documented deprecation policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/) for features. This policy states that stable APIs may only be deprecated when a newer, stable version of that same API is available and that APIs have a minimum lifetime for each stability level. A deprecated API is one that has been marked for removal in a future Kubernetes release; it will continue to function until removal (at least one year from the deprecation), but usage will result in a warning being displayed. Removed APIs are no longer available in the current version, at which point you must migrate to using the replacement.
* Generally available (GA) or stable API versions may be marked as deprecated but must not be removed within a major version of Kubernetes.
* Beta or pre-release API versions must be supported for 3 releases after deprecation.
* Alpha or experimental API versions may be removed in any release without prior deprecation notice.
Whether an API is removed as a result of a feature graduating from beta to stable or because that API simply did not succeed, all removals comply with this deprecation policy. Whenever an API is removed, migration options are communicated in the documentation.
## A note about the removal of the CRI `v1alpha2` API and containerd 1.5 support {#cri-api-removal}
Following the adoption of the [Container Runtime Interface](https://kubernetes.io/docs/concepts/architecture/cri/) (CRI) and the [removal of dockershim] in v1.24, the CRI is the supported and documented way through which Kubernetes interacts with different container runtimes. Each kubelet negotiates which version of CRI to use with the container runtime on that node.
The Kubernetes project recommends using CRI version `v1`; in Kubernetes v1.25 the kubelet can also negotiate the use of CRI `v1alpha2` (which was deprecated at the same time as support for the stable `v1` interface was added).
Kubernetes v1.26 will not support CRI `v1alpha2`. That [removal](https://github.com/kubernetes/kubernetes/pull/110618) will result in the kubelet not registering the node if the container runtime doesn't support CRI `v1`. This means that containerd minor version 1.5 and older will not be supported in Kubernetes 1.26; if you use containerd, you will need to upgrade to containerd version 1.6.0 or later **before** you upgrade that node to Kubernetes v1.26. Other container runtimes that only support the `v1alpha2` API are equally affected: if that affects you, you should contact the container runtime vendor for advice or check their website for additional instructions on how to move forward.
If you want to benefit from v1.26 features and still use an older container runtime, you can run an older kubelet. The [supported skew](/releases/version-skew-policy/#kubelet) for the kubelet allows you to run a v1.25 kubelet, which is still compatible with `v1alpha2` CRI support, even if you upgrade the control plane to the 1.26 minor release of Kubernetes.
As well as container runtimes themselves, there are tools like [stargz-snapshotter](https://github.com/containerd/stargz-snapshotter) that act as a proxy between the kubelet and the container runtime, and those might also be affected.
## Deprecations and removals in Kubernetes v1.26 {#deprecations-removals}
In addition to the above, Kubernetes v1.26 is targeted to include several additional removals and deprecations.
### Removal of the `v1beta1` flow control API group
The `flowcontrol.apiserver.k8s.io/v1beta1` API version of FlowSchema and PriorityLevelConfiguration [will no longer be served in v1.26](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#flowcontrol-resources-v126). Users should migrate manifests and API clients to use the `flowcontrol.apiserver.k8s.io/v1beta2` API version, available since v1.23.
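For example, a PriorityLevelConfiguration expressed against the `v1beta2` API version could look roughly like this sketch; the name and values are placeholders, not taken from the release notes.
```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1beta2
kind: PriorityLevelConfiguration
metadata:
  name: example-priority-level    # hypothetical name
spec:
  type: Limited
  limited:
    assuredConcurrencyShares: 10  # illustrative value
    limitResponse:
      type: Reject
```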
### Removal of the `v2beta2` HorizontalPodAutoscaler API
The `autoscaling/v2beta2` API version of HorizontalPodAutoscaler [will no longer be served in v1.26](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#horizontalpodautoscaler-v126). Users should migrate manifests and API clients to use the `autoscaling/v2` API version, available since v1.23.
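A rough sketch of a HorizontalPodAutoscaler manifest after migrating to `autoscaling/v2`; the object name and scale target are placeholders.
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa               # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment      # hypothetical workload to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```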
### Removal of in-tree credential management code
In this upcoming release, legacy vendor-specific authentication code that is part of Kubernetes
will be [removed](https://github.com/kubernetes/kubernetes/pull/112341) from both
`client-go` and `kubectl`.
The existing mechanism supports authentication for two specific cloud providers:
Azure and Google Cloud.
In its place, Kubernetes already offers a vendor-neutral
[authentication plugin mechanism](/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins) -
you can switch over right now, before the v1.26 release happens.
If you're affected, you can find additional guidance on how to proceed for
[Azure](https://github.com/Azure/kubelogin#readme) and for
[Google Cloud](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke).
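As a hedged sketch, a kubeconfig `users` entry that relies on the exec-based credential plugin mechanism might look like the fragment below; `my-credential-plugin` and its argument are placeholders for whichever plugin your provider supplies (for example, kubelogin for Azure).
```yaml
# Fragment of a kubeconfig file; only the credential plugin part is shown.
users:
- name: example-user              # hypothetical user entry
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: my-credential-plugin   # placeholder plugin binary
      args:
      - get-token                     # placeholder argument
      interactiveMode: IfAvailable
```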
### Removal of `kube-proxy` userspace modes
The `userspace` proxy mode, deprecated for over a year, is [no longer supported on either Linux or Windows](https://github.com/kubernetes/kubernetes/pull/112133) and will be removed in this release. Users should use `iptables` or `ipvs` on Linux, or `kernelspace` on Windows: using `--proxy-mode userspace` will now fail.
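For example, a kube-proxy configuration file that selects one of the supported modes might look roughly like this sketch:
```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# "iptables" or "ipvs" on Linux; "kernelspace" on Windows
mode: "ipvs"
```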
### Removal of in-tree OpenStack cloud provider
Kubernetes is switching from in-tree code for storage integrations, in favor of the Container Storage Interface (CSI).
As part of this, Kubernetes v1.26 will remove the deprecated in-tree storage integration for OpenStack
(the `cinder` volume type). You should migrate to the external cloud provider and CSI driver from
https://github.com/kubernetes/cloud-provider-openstack instead.
For more information, visit [Cinder in-tree to CSI driver migration](https://github.com/kubernetes/enhancements/issues/1489).
### Removal of the GlusterFS in-tree driver
The in-tree GlusterFS driver was [deprecated in v1.25](https://kubernetes.io/blog/2022/08/23/kubernetes-v1-25-release/#deprecations-and-removals), and will be removed from Kubernetes v1.26.
### Deprecation of non-inclusive `kubectl` flag
As part of the implementation effort of the [Inclusive Naming Initiative](https://www.cncf.io/announcements/2021/10/13/inclusive-naming-initiative-announces-new-community-resources-for-a-more-inclusive-future/),
the `--prune-whitelist` flag will be [deprecated](https://github.com/kubernetes/kubernetes/pull/113116), and replaced with `--prune-allowlist`.
Users that use this flag are strongly advised to make the necessary changes prior to the final removal of the flag, in a future release.
### Removal of dynamic kubelet configuration
_Dynamic kubelet configuration_ allowed [new kubelet configurations to be rolled out via the Kubernetes API](https://github.com/kubernetes/enhancements/tree/2cd758cc6ab617a93f578b40e97728261ab886ed/keps/sig-node/281-dynamic-kubelet-configuration), even in a live cluster.
A cluster operator could reconfigure the kubelet on a Node by specifying a ConfigMap
that contained the configuration data that the kubelet should use.
Dynamic kubelet configuration was removed from the kubelet in v1.24, and will be
[removed from the API server](https://github.com/kubernetes/kubernetes/pull/112643) in the v1.26 release.
### Deprecations for `kube-apiserver` command line arguments
The `--master-service-namespace` command line argument to the kube-apiserver doesn't have
any effect, and was already informally [deprecated](https://github.com/kubernetes/kubernetes/pull/38186).
That command line argument will be formally marked as deprecated in v1.26, preparing for its
removal in a future release.
The Kubernetes project does not expect any impact from this deprecation and removal.
### Deprecations for `kubectl run` command line arguments
Several unused option arguments for the `kubectl run` subcommand will be [marked as deprecated](https://github.com/kubernetes/kubernetes/pull/112261), including:
* `--cascade`
* `--filename`
* `--force`
* `--grace-period`
* `--kustomize`
* `--recursive`
* `--timeout`
* `--wait`
These arguments are already ignored so no impact is expected: the explicit deprecation sets a warning message and prepares for the removal of the arguments in a future release.
### Removal of legacy command line arguments relating to logging
Kubernetes v1.26 will [remove](https://github.com/kubernetes/kubernetes/pull/112120) some
command line arguments relating to logging. These command line arguments were
already deprecated.
For more information, see [Deprecate klog specific flags in Kubernetes Components](https://github.com/kubernetes/enhancements/tree/3cb66bd0a1ef973ebcc974f935f0ac5cba9db4b2/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components).
## Looking ahead {#looking-ahead}
The official list of [API removals](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-27) planned for Kubernetes 1.27 includes:
* All beta versions of the CSIStorageCapacity API; specifically: `storage.k8s.io/v1beta1`
### Want to know more?
Deprecations are announced in the Kubernetes release notes. You can see the announcements of pending deprecations in the release notes for:
* [Kubernetes 1.21](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#deprecation)
* [Kubernetes 1.22](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#deprecation)
* [Kubernetes 1.23](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#deprecation)
* [Kubernetes 1.24](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#deprecation)
* [Kubernetes 1.25](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#deprecation)
We will formally announce the deprecations that come with [Kubernetes 1.26](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#deprecation) as part of the CHANGELOG for that release.

View File

@ -106,7 +106,7 @@ updated to newer versions that support cgroup v2. For example:
## Identify the cgroup version on Linux Nodes {#check-cgroup-version}
The cgroup version depends on on the Linux distribution being used and the
The cgroup version depends on the Linux distribution being used and the
default cgroup version configured on the OS. To check which cgroup version your
distribution uses, run the `stat -fc %T /sys/fs/cgroup/` command on
the node:

View File

@ -39,7 +39,7 @@ and doesn't register as a node.
## Upgrading
When upgrading Kubernetes, then the kubelet tries to automatically select the
When upgrading Kubernetes, the kubelet tries to automatically select the
latest CRI version on restart of the component. If that fails, then the fallback
will take place as mentioned above. If a gRPC re-dial was required because the
container runtime has been upgraded, then the container runtime must also

View File

@ -101,9 +101,9 @@ the exact mechanisms for issuing and refreshing those session tokens.
There are several options to create a Secret:
- [create Secret using `kubectl` command](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- [create Secret from config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- [create Secret using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
- [Use `kubectl`](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- [Use a configuration file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- [Use the Kustomize tool](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
#### Constraints on Secret names and data {#restriction-names-data}
@ -132,41 +132,18 @@ number of Secrets (or other resources) in a namespace.
### Editing a Secret
You can edit an existing Secret using kubectl:
You can edit an existing Secret unless it is [immutable](#secret-immutable). To
edit a Secret, use one of the following methods:
```shell
kubectl edit secrets mysecret
```
* [Use `kubectl`](/docs/tasks/configmap-secret/managing-secret-using-kubectl/#edit-secret)
* [Use a configuration file](/docs/tasks/configmap-secret/managing-secret-using-config-file/#edit-secret)
This opens your default editor and allows you to update the base64 encoded Secret
values in the `data` field; for example:
You can also edit the data in a Secret using the [Kustomize tool](/docs/tasks/configmap-secret/managing-secret-using-kustomize/#edit-secret). However, this
method creates a new `Secret` object with the edited data.
```yaml
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file, it will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
kind: Secret
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: { ... }
creationTimestamp: 2020-01-22T18:41:56Z
name: mysecret
namespace: default
resourceVersion: "164619"
uid: cfee02d6-c137-11e5-8d73-42010af00002
type: Opaque
```
That example manifest defines a Secret with two keys in the `data` field: `username` and `password`.
The values are Base64 strings in the manifest; however, when you use the Secret with a Pod
then the kubelet provides the _decoded_ data to the Pod and its containers.
You can package many keys and values into one Secret, or use many Secrets, whichever is convenient.
Depending on how you created the Secret, as well as how the Secret is used in
your Pods, updates to existing `Secret` objects are propagated automatically to
Pods that use the data. For more information, refer to [Mounted Secrets are updated automatically](#mounted-secrets-are-updated-automatically).
### Using a Secret
@ -1195,7 +1172,7 @@ A bootstrap type Secret has the following keys specified under `data`:
- `token-secret`: A random 16 character string as the actual token secret. Required.
- `description`: A human-readable string that describes what the token is
used for. Optional.
- `expiration`: An absolute UTC time using RFC3339 specifying when the token
- `expiration`: An absolute UTC time using [RFC3339](https://datatracker.ietf.org/doc/html/rfc3339) specifying when the token
should be expired. Optional.
- `usage-bootstrap-<usage>`: A boolean flag indicating additional usage for
the bootstrap token.
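A hedged sketch of a bootstrap token Secret that uses the keys described above; the token ID, token secret, and expiration shown here are made-up placeholders.
```yaml
apiVersion: v1
kind: Secret
metadata:
  # the name must take the form bootstrap-token-<token-id>
  name: bootstrap-token-5emitj
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: "5emitj"                      # placeholder 6-character token ID
  token-secret: "kq4gihvszzgn1p0r"        # placeholder 16-character token secret
  description: "example bootstrap token"
  expiration: "2023-03-10T03:22:11Z"      # placeholder RFC3339 timestamp
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
```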

View File

@ -6,7 +6,6 @@ reviewers:
- erictune
- thockin
content_type: concept
no_list: true
---
<!-- overview -->
@ -18,7 +17,10 @@ run it.
Containers decouple applications from underlying host infrastructure.
This makes deployment easier in different cloud or OS environments.
Each {{< glossary_tooltip text="node" term_id="node" >}} in a Kubernetes
cluster runs the containers that form the
[Pods](/docs/concepts/workloads/pods/) assigned to that node.
Containers in a Pod are co-located and co-scheduled to run on the same node.
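As a minimal illustration of that co-location, a Pod manifest with two containers might look like this sketch; the Pod name and images are placeholders.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                        # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example/app:1.0        # placeholder image
  - name: log-shipper
    image: registry.example/logger:1.0     # placeholder image; runs on the same node as "app"
```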
<!-- body -->
@ -29,17 +31,23 @@ software package, containing everything needed to run an application:
the code and any runtime it requires, application and system libraries,
and default values for any essential settings.
By design, a container is immutable: you cannot change the code of a
container that is already running. If you have a containerized application
and want to make changes, you need to build a new image that includes
the change, then recreate the container to start from the updated image.
Containers are intended to be stateless and
[immutable](https://glossary.cncf.io/immutable-infrastructure/):
you should not change
the code of a container that is already running. If you have a containerized
application and want to make changes, the correct process is to build a new
image that includes the change, then recreate the container to start from the
updated image.
## Container runtimes
{{< glossary_definition term_id="container-runtime" length="all" >}}
## {{% heading "whatsnext" %}}
* Read about [container images](/docs/concepts/containers/images/)
* Read about [Pods](/docs/concepts/workloads/pods/)
Usually, you can allow your cluster to pick the default container runtime
for a Pod. If you need to use more than one container runtime in your cluster,
you can specify the [RuntimeClass](/docs/concepts/containers/runtime-class/)
for a Pod to make sure that Kubernetes runs those containers using a
particular container runtime.
You can also use RuntimeClass to run different Pods with the same container
runtime but with different settings.
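A minimal sketch of what that could look like, assuming a cluster administrator has already created a RuntimeClass named `gvisor`; the Pod name and image are placeholders.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod                 # hypothetical name
spec:
  runtimeClassName: gvisor            # assumes a RuntimeClass with this name exists
  containers:
  - name: app
    image: registry.example/app:1.0   # placeholder image
```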

View File

@ -5,6 +5,7 @@ reviewers:
title: Images
content_type: concept
weight: 10
hide_summary: true # Listed separately in section index
---
<!-- overview -->
@ -19,6 +20,12 @@ before referring to it in a
This page provides an outline of the container image concept.
{{< note >}}
If you are looking for the container images for a Kubernetes
release (such as v{{< skew latestVersion >}}, the latest minor release),
visit [Download Kubernetes](https://kubernetes.io/releases/download/).
{{< /note >}}
<!-- body -->
## Image names

View File

@ -5,6 +5,7 @@ reviewers:
title: Runtime Class
content_type: concept
weight: 30
hide_summary: true # Listed separately in section index
---
<!-- overview -->

View File

@ -18,9 +18,16 @@ This page is an overview of Kubernetes.
<!-- body -->
Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s as an abbreviation results from counting the eight letters between the "K" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes combines [over 15 years of Google's experience](/blog/2015/04/borg-predecessor-to-kubernetes/) running production workloads at scale with best-of-breed ideas and practices from the community.
Kubernetes is a portable, extensible, open source platform for managing containerized
workloads and services, that facilitates both declarative configuration and automation.
It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s as an abbreviation
results from counting the eight letters between the "K" and the "s". Google open-sourced the
Kubernetes project in 2014. Kubernetes combines
[over 15 years of Google's experience](/blog/2015/04/borg-predecessor-to-kubernetes/) running
production workloads at scale with best-of-breed ideas and practices from the community.
## Going back in time
@ -29,69 +36,136 @@ Let's take a look at why Kubernetes is so useful by going back in time.
![Deployment evolution](/images/docs/Container_Evolution.svg)
**Traditional deployment era:**
Early on, organizations ran applications on physical servers. There was no way to define resource boundaries for applications in a physical server, and this caused resource allocation issues. For example, if multiple applications run on a physical server, there can be instances where one application would take up most of the resources, and as a result, the other applications would underperform. A solution for this would be to run each application on a different physical server. But this did not scale as resources were underutilized, and it was expensive for organizations to maintain many physical servers.
Early on, organizations ran applications on physical servers. There was no way to define
resource boundaries for applications in a physical server, and this caused resource
allocation issues. For example, if multiple applications run on a physical server, there
can be instances where one application would take up most of the resources, and as a result,
the other applications would underperform. A solution for this would be to run each application
on a different physical server. But this did not scale as resources were underutilized, and it
was expensive for organizations to maintain many physical servers.
**Virtualized deployment era:** As a solution, virtualization was introduced. It allows you to run multiple Virtual Machines (VMs) on a single physical server's CPU. Virtualization allows applications to be isolated between VMs and provides a level of security as the information of one application cannot be freely accessed by another application.
**Virtualized deployment era:** As a solution, virtualization was introduced. It allows you
to run multiple Virtual Machines (VMs) on a single physical server's CPU. Virtualization
allows applications to be isolated between VMs and provides a level of security as the
information of one application cannot be freely accessed by another application.
Virtualization allows better utilization of resources in a physical server and allows better scalability because an application can be added or updated easily, reduces hardware costs, and much more. With virtualization you can present a set of physical resources as a cluster of disposable virtual machines.
Virtualization allows better utilization of resources in a physical server and allows
better scalability because an application can be added or updated easily, reduces
hardware costs, and much more. With virtualization you can present a set of physical
resources as a cluster of disposable virtual machines.
Each VM is a full machine running all the components, including its own operating system, on top of the virtualized hardware.
Each VM is a full machine running all the components, including its own operating
system, on top of the virtualized hardware.
**Container deployment era:** Containers are similar to VMs, but they have relaxed isolation properties to share the Operating System (OS) among the applications. Therefore, containers are considered lightweight. Similar to a VM, a container has its own filesystem, share of CPU, memory, process space, and more. As they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.
**Container deployment era:** Containers are similar to VMs, but they have relaxed
isolation properties to share the Operating System (OS) among the applications.
Therefore, containers are considered lightweight. Similar to a VM, a container
has its own filesystem, share of CPU, memory, process space, and more. As they
are decoupled from the underlying infrastructure, they are portable across clouds
and OS distributions.
Containers have become popular because they provide extra benefits, such as:
* Agile application creation and deployment: increased ease and efficiency of container image creation compared to VM image use.
* Continuous development, integration, and deployment: provides for reliable and frequent container image build and deployment with quick and efficient rollbacks (due to image immutability).
* Dev and Ops separation of concerns: create application container images at build/release time rather than deployment time, thereby decoupling applications from infrastructure.
* Observability: not only surfaces OS-level information and metrics, but also application health and other signals.
* Environmental consistency across development, testing, and production: Runs the same on a laptop as it does in the cloud.
* Cloud and OS distribution portability: Runs on Ubuntu, RHEL, CoreOS, on-premises, on major public clouds, and anywhere else.
* Application-centric management: Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
* Loosely coupled, distributed, elastic, liberated micro-services: applications are broken into smaller, independent pieces and can be deployed and managed dynamically not a monolithic stack running on one big single-purpose machine.
* Agile application creation and deployment: increased ease and efficiency of
container image creation compared to VM image use.
* Continuous development, integration, and deployment: provides for reliable
and frequent container image build and deployment with quick and efficient
rollbacks (due to image immutability).
* Dev and Ops separation of concerns: create application container images at
build/release time rather than deployment time, thereby decoupling
applications from infrastructure.
* Observability: not only surfaces OS-level information and metrics, but also
application health and other signals.
* Environmental consistency across development, testing, and production: Runs
the same on a laptop as it does in the cloud.
* Cloud and OS distribution portability: Runs on Ubuntu, RHEL, CoreOS, on-premises,
on major public clouds, and anywhere else.
* Application-centric management: Raises the level of abstraction from running an
OS on virtual hardware to running an application on an OS using logical resources.
* Loosely coupled, distributed, elastic, liberated micro-services: applications are
broken into smaller, independent pieces and can be deployed and managed dynamically,
not a monolithic stack running on one big single-purpose machine.
* Resource isolation: predictable application performance.
* Resource utilization: high efficiency and density.
## Why you need Kubernetes and what it can do {#why-you-need-kubernetes-and-what-can-it-do}
Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn't it be easier if this behavior was handled by a system?
Containers are a good way to bundle and run your applications. In a production
environment, you need to manage the containers that run the applications and
ensure that there is no downtime. For example, if a container goes down, another
container needs to start. Wouldn't it be easier if this behavior was handled by a system?
That's how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example: Kubernetes can easily manage a canary deployment for your system.
That's how Kubernetes comes to the rescue! Kubernetes provides you with a framework
to run distributed systems resiliently. It takes care of scaling and failover for
your application, provides deployment patterns, and more. For example: Kubernetes
can easily manage a canary deployment for your system.
Kubernetes provides you with:
* **Service discovery and load balancing**
Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
Kubernetes can expose a container using the DNS name or using their own IP address.
If traffic to a container is high, Kubernetes is able to load balance and distribute
the network traffic so that the deployment is stable.
* **Storage orchestration**
Kubernetes allows you to automatically mount a storage system of your choice, such as local storages, public cloud providers, and more.
Kubernetes allows you to automatically mount a storage system of your choice, such as
local storages, public cloud providers, and more.
* **Automated rollouts and rollbacks**
You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container.
You can describe the desired state for your deployed containers using Kubernetes,
and it can change the actual state to the desired state at a controlled rate.
For example, you can automate Kubernetes to create new containers for your
deployment, remove existing containers and adopt all their resources to the new container.
* **Automatic bin packing**
You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.
You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks.
You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit
containers onto your nodes to make the best use of your resources.
* **Self-healing**
Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
Kubernetes restarts containers that fail, replaces containers, kills containers that don't
respond to your user-defined health check, and doesn't advertise them to clients until they
are ready to serve.
* **Secret and configuration management**
Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens,
and SSH keys. You can deploy and update secrets and application configuration without
rebuilding your container images, and without exposing secrets in your stack configuration.
## What Kubernetes is not
Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Since Kubernetes operates at the container level rather than at the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, load balancing, and lets users integrate their logging, monitoring, and alerting solutions. However, Kubernetes is not monolithic, and these default solutions are optional and pluggable. Kubernetes provides the building blocks for building developer platforms, but preserves user choice and flexibility where it is important.
Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system.
Since Kubernetes operates at the container level rather than at the hardware level,
it provides some generally applicable features common to PaaS offerings, such as
deployment, scaling, load balancing, and lets users integrate their logging, monitoring,
and alerting solutions. However, Kubernetes is not monolithic, and these default solutions
are optional and pluggable. Kubernetes provides the building blocks for building developer
platforms, but preserves user choice and flexibility where it is important.
Kubernetes:
* Does not limit the types of applications supported. Kubernetes aims to support an extremely diverse variety of workloads, including stateless, stateful, and data-processing workloads. If an application can run in a container, it should run great on Kubernetes.
* Does not deploy source code and does not build your application. Continuous Integration, Delivery, and Deployment (CI/CD) workflows are determined by organization cultures and preferences as well as technical requirements.
* Does not provide application-level services, such as middleware (for example, message buses), data-processing frameworks (for example, Spark), databases (for example, MySQL), caches, nor cluster storage systems (for example, Ceph) as built-in services. Such components can run on Kubernetes, and/or can be accessed by applications running on Kubernetes through portable mechanisms, such as the [Open Service Broker](https://openservicebrokerapi.org/).
* Does not dictate logging, monitoring, or alerting solutions. It provides some integrations as proof of concept, and mechanisms to collect and export metrics.
* Does not provide nor mandate a configuration language/system (for example, Jsonnet). It provides a declarative API that may be targeted by arbitrary forms of declarative specifications.
* Does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems.
* Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the need for orchestration. The technical definition of orchestration is execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes comprises a set of independent, composable control processes that continuously drive the current state towards the provided desired state. It shouldn't matter how you get from A to C. Centralized control is also not required. This results in a system that is easier to use and more powerful, robust, resilient, and extensible.
* Does not limit the types of applications supported. Kubernetes aims to support an
extremely diverse variety of workloads, including stateless, stateful, and data-processing
workloads. If an application can run in a container, it should run great on Kubernetes.
* Does not deploy source code and does not build your application. Continuous Integration,
Delivery, and Deployment (CI/CD) workflows are determined by organization cultures and
preferences as well as technical requirements.
* Does not provide application-level services, such as middleware (for example, message buses),
data-processing frameworks (for example, Spark), databases (for example, MySQL), caches, nor
cluster storage systems (for example, Ceph) as built-in services. Such components can run on
Kubernetes, and/or can be accessed by applications running on Kubernetes through portable
mechanisms, such as the [Open Service Broker](https://openservicebrokerapi.org/).
* Does not dictate logging, monitoring, or alerting solutions. It provides some integrations
as proof of concept, and mechanisms to collect and export metrics.
* Does not provide nor mandate a configuration language/system (for example, Jsonnet). It provides
a declarative API that may be targeted by arbitrary forms of declarative specifications.
* Does not provide nor adopt any comprehensive machine configuration, maintenance, management,
or self-healing systems.
* Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the need
for orchestration. The technical definition of orchestration is execution of a defined workflow:
first do A, then B, then C. In contrast, Kubernetes comprises a set of independent, composable
control processes that continuously drive the current state towards the provided desired state.
It shouldn't matter how you get from A to C. Centralized control is also not required. This
results in a system that is easier to use and more powerful, robust, resilient, and extensible.
## {{% heading "whatsnext" %}}
* Take a look at the [Kubernetes Components](/docs/concepts/overview/components/)
* Take a look at the [The Kubernetes API](/docs/concepts/overview/kubernetes-api/)
* Take a look at the [Cluster Architecture](/docs/concepts/architecture/)
* Ready to [Get Started](/docs/setup/)?
* Take a look at the [Kubernetes Components](/docs/concepts/overview/components/)
* Take a look at the [The Kubernetes API](/docs/concepts/overview/kubernetes-api/)
* Take a look at the [Cluster Architecture](/docs/concepts/architecture/)
* Ready to [Get Started](/docs/setup/)?

View File

@ -2,16 +2,18 @@
title: Understanding Kubernetes Objects
content_type: concept
weight: 10
card:
card:
name: concepts
weight: 40
---
<!-- overview -->
This page explains how Kubernetes objects are represented in the Kubernetes API, and how you can
express them in `.yaml` format.
<!-- body -->
## Understanding Kubernetes objects {#kubernetes-objects}
*Kubernetes objects* are persistent entities in the Kubernetes system. Kubernetes uses these
@ -32,7 +34,7 @@ interface, for example, the CLI makes the necessary Kubernetes API calls for you
the Kubernetes API directly in your own programs using one of the
[Client Libraries](/docs/reference/using-api/client-libraries/).
### Object Spec and Status
### Object spec and status
Almost every Kubernetes object includes two nested object fields that govern
the object's configuration: the object *`spec`* and the object *`status`*.
@ -86,7 +88,7 @@ The output is similar to this:
deployment.apps/nginx-deployment created
```
### Required Fields
### Required fields
In the `.yaml` file for the Kubernetes object you want to create, you'll need to set values for the following fields:
@ -116,9 +118,9 @@ detail the structure of that `.status` field, and its content for each different
## {{% heading "whatsnext" %}}
Learn more about the following:
* [Pods](https://kubernetes.io/docs/concepts/workloads/pods/) which are the most important basic Kubernetes objects.
* [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) objects.
* [Controllers](https://kubernetes.io/docs/concepts/architecture/controller/) in Kubernetes.
* [Kubernetes API overview](https://kubernetes.io/docs/reference/using-api/) which explains some more API concepts.
* [kubectl](https://kubernetes.io/docs/reference/kubectl/) and [kubectl commands](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands).
* [Pods](/docs/concepts/workloads/pods/) which are the most important basic Kubernetes objects.
* [Deployment](/docs/concepts/workloads/controllers/deployment/) objects.
* [Controllers](/docs/concepts/architecture/controller/) in Kubernetes.
* [Kubernetes API overview](/docs/reference/using-api/) which explains some more API concepts.
* [kubectl](/docs/reference/kubectl/) and [kubectl commands](/docs/reference/generated/kubectl/kubectl-commands).

View File

@ -99,5 +99,5 @@ UUIDs are standardized as ISO/IEC 9834-8 and as ITU-T X.667.
## {{% heading "whatsnext" %}}
* Read about [labels](/docs/concepts/overview/working-with-objects/labels/) in Kubernetes.
* Read about [labels](/docs/concepts/overview/working-with-objects/labels/) and [annotations](/docs/concepts/overview/working-with-objects/annotations/) in Kubernetes.
* See the [Identifiers and Names in Kubernetes](https://git.k8s.io/design-proposals-archive/architecture/identifiers.md) design document.

View File

@ -32,6 +32,26 @@ resources, such as different versions of the same software: use
{{< glossary_tooltip text="labels" term_id="label" >}} to distinguish
resources within the same namespace.
{{< note >}}
For a production cluster, consider _not_ using the `default` namespace. Instead, make other namespaces and use those.
{{< /note >}}
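For example, a minimal manifest for an additional namespace might look like this sketch; the name `production` is only an illustration.
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production    # hypothetical namespace name
```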
## Initial namespaces
Kubernetes starts with four initial namespaces:
`default`
: Kubernetes includes this namespace so that you can start using your new cluster without first creating a namespace.
`kube-node-lease`
: This namespace holds [Lease](/docs/reference/kubernetes-api/cluster-resources/lease-v1/) objects associated with each node. Node leases allow the kubelet to send [heartbeats](/docs/concepts/architecture/nodes/#heartbeats) so that the control plane can detect node failure.
`kube-public`
: This namespace is readable by *all* clients (including those not authenticated). This namespace is mostly reserved for cluster usage, in case that some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.
`kube-system`
: The namespace for objects created by the Kubernetes system.
## Working with Namespaces
Creation and deletion of namespaces are described in the
@ -56,16 +76,7 @@ kube-public Active 1d
kube-system Active 1d
```
Kubernetes starts with four initial namespaces:
* `default` The default namespace for objects with no other namespace
* `kube-system` The namespace for objects created by the Kubernetes system
* `kube-public` This namespace is created automatically and is readable by all users (including those not authenticated). This namespace is mostly reserved for cluster usage, in case that some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.
* `kube-node-lease` This namespace holds [Lease](/docs/reference/kubernetes-api/cluster-resources/lease-v1/)
objects associated with each node. Node leases allow the kubelet to send
[heartbeats](/docs/concepts/architecture/nodes/#heartbeats) so that the control plane
can detect node failure.
### Setting the namespace for a request
To set the namespace for a current request, use the `--namespace` flag.
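For example, assuming a namespace named `my-team` already exists, a single request can be scoped to it, or that namespace can be made the default for subsequent `kubectl` commands:

```shell
# Scope one request to a specific namespace
kubectl get pods --namespace=my-team

# Optionally, make that namespace the default for the current context
kubectl config set-context --current --namespace=my-team
```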
@ -106,7 +117,7 @@ By creating namespaces with the same name as [public top-level
domains](https://data.iana.org/TLD/tlds-alpha-by-domain.txt), Services in these
namespaces can have short DNS names that overlap with public DNS records.
Workloads from any namespace performing a DNS lookup without a [trailing dot](https://datatracker.ietf.org/doc/html/rfc1034#page-8) will
be redirected to those services, taking precedence over public DNS.
be redirected to those services, taking precedence over public DNS.
To mitigate this, limit privileges for creating namespaces to trusted users. If
required, you could additionally configure third-party security controls, such
@ -116,13 +127,13 @@ to block creating any namespace with the name of [public
TLDs](https://data.iana.org/TLD/tlds-alpha-by-domain.txt).
{{< /warning >}}
## Not All Objects are in a Namespace
## Not all objects are in a namespace
Most Kubernetes resources (e.g. pods, services, replication controllers, and others) are
in some namespaces. However namespace resources are not themselves in a namespace.
And low-level resources, such as
[nodes](/docs/concepts/architecture/nodes/) and
persistentVolumes, are not in any namespace.
[persistentVolumes](/docs/concepts/storage/persistent-volumes/), are not in any namespace.
To see which Kubernetes resources are and aren't in a namespace:

View File

@ -98,7 +98,7 @@ your cluster. Those fields are:
{{< note >}}
The `minDomains` field is a beta field and enabled by default in 1.25. You can disable it by disabling the
`MinDomainsInPodToplogySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
`MinDomainsInPodTopologySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
{{< /note >}}
- The value of `minDomains` must be greater than 0, when specified.
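As an illustrative sketch only (the labels, zone topology key, and the value `3` are placeholders), a Pod that uses `minDomains` together with the other required topology spread fields might look like this:

```shell
# A Pod that spreads replicas across zones; minDomains requires
# whenUnsatisfiable to be DoNotSchedule. All names and values are examples.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      minDomains: 3
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: example
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
EOF
```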

View File

@ -220,13 +220,15 @@ following Pod-specific DNS policies. These policies are specified in the
See [related discussion](/docs/tasks/administer-cluster/dns-custom-nameservers)
for more details.
- "`ClusterFirst`": Any DNS query that does not match the configured cluster
domain suffix, such as "`www.kubernetes.io`", is forwarded to the upstream
nameserver inherited from the node. Cluster administrators may have extra
domain suffix, such as "`www.kubernetes.io`", is forwarded to an upstream
nameserver by the DNS server. Cluster administrators may have extra
stub-domain and upstream DNS servers configured.
See [related discussion](/docs/tasks/administer-cluster/dns-custom-nameservers)
for details on how DNS queries are handled in those cases.
- "`ClusterFirstWithHostNet`": For Pods running with hostNetwork, you should
explicitly set its DNS policy "`ClusterFirstWithHostNet`".
explicitly set its DNS policy to "`ClusterFirstWithHostNet`". Otherwise, Pods
  running with hostNetwork and `"ClusterFirst"` will fall back to the behavior
of the `"Default"` policy.
- Note: This is not supported on Windows. See [below](#dns-windows) for details
- "`None`": It allows a Pod to ignore DNS settings from the Kubernetes
environment. All DNS settings are supposed to be provided using the

View File

@ -15,11 +15,9 @@ description: >-
{{< feature-state for_k8s_version="v1.21" state="stable" >}}
_EndpointSlices_ provide a simple way to track network endpoints within a
Kubernetes cluster. They offer a more scalable and extensible alternative to
Endpoints.
Kubernetes' _EndpointSlice_ API provides a way to track network endpoints
within a Kubernetes cluster. EndpointSlices offer a more scalable and extensible
alternative to [Endpoints](/docs/concepts/services-networking/service/#endpoints).
<!-- body -->
@ -274,3 +272,5 @@ networking and topology-aware routing.
## {{% heading "whatsnext" %}}
* Follow the [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/) tutorial
* Read the [API reference](/docs/reference/kubernetes-api/service-resources/endpoint-slice-v1/) for the EndpointSlice API
* Read the [API reference](/docs/reference/kubernetes-api/service-resources/endpoints-v1/) for the Endpoints API

View File

@ -201,5 +201,5 @@ spec:
* Read about [enabling Service Topology](/docs/tasks/administer-cluster/enabling-service-topology)
* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)
* Read [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/)

View File

@ -46,8 +46,7 @@ parameters are nearly the same with two exceptions:
for each individual projection.
## serviceAccountToken projected volumes {#serviceaccounttoken}
When the `TokenRequestProjection` feature is enabled, you can inject the token
for the current [service account](/docs/reference/access-authn-authz/authentication/#service-account-tokens)
You can inject the token for the current [service account](/docs/reference/access-authn-authz/authentication/#service-account-tokens)
into a Pod at a specified path. For example:
{{< codenew file="pods/storage/projected-service-account-token.yaml" >}}

View File

@ -92,6 +92,7 @@ For example, the line below states that the task must be started every Friday at
To generate CronJob schedule expressions, you can also use web tools like [crontab.guru](https://crontab.guru/).
## Time zones
For CronJobs with no time zone specified, the kube-controller-manager interprets schedules relative to its local time zone.
{{< feature-state for_k8s_version="v1.25" state="beta" >}}
@ -101,7 +102,7 @@ you can specify a time zone for a CronJob (if you don't enable that feature gate
Kubernetes that does not have experimental time zone support, all CronJobs in your cluster have an unspecified
timezone).
When you have the feature enabled, you can set `spec.timeZone` to the name of a valid [time zone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) name. For example, setting
When you have the feature enabled, you can set `spec.timeZone` to the name of a valid [time zone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). For example, setting
`spec.timeZone: "Etc/UTC"` instructs Kubernetes to interpret the schedule relative to Coordinated Universal Time.
A time zone database from the Go standard library is included in the binaries and used as a fallback in case an external database is not available on the system.
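As a hedged sketch (the CronJob name, schedule, and container are placeholders), a CronJob whose schedule is interpreted in UTC rather than in the kube-controller-manager's local time zone could look like:

```shell
# A CronJob that pins its schedule to Etc/UTC; requires the time zone
# feature described above. Names and the schedule are examples only.
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-utc-cron
spec:
  schedule: "30 8 * * *"
  timeZone: "Etc/UTC"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: hello
              image: busybox:1.36
              command: ["sh", "-c", "date; echo Hello"]
EOF
```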
@ -121,15 +122,15 @@ If `startingDeadlineSeconds` is set to a value less than 10 seconds, the CronJob
{{< /caution >}}
For every CronJob, the CronJob {{< glossary_tooltip term_id="controller" >}} checks how many schedules it missed in the duration from its last scheduled time until now. If there are more than 100 missed schedules, then it does not start the job and logs the error
For every CronJob, the CronJob {{< glossary_tooltip term_id="controller" >}} checks how many schedules it missed in the duration from its last scheduled time until now. If there are more than 100 missed schedules, then it does not start the job and logs the error.
````
```
Cannot determine if job needs to be started. Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew.
````
```
It is important to note that if the `startingDeadlineSeconds` field is set (not `nil`), the controller counts how many missed jobs occurred from the value of `startingDeadlineSeconds` until now rather than from the last scheduled time until now. For example, if `startingDeadlineSeconds` is `200`, the controller counts how many missed jobs occurred in the last 200 seconds.
A CronJob is counted as missed if it has failed to be created at its scheduled time. For example, If `concurrencyPolicy` is set to `Forbid` and a CronJob was attempted to be scheduled when there was a previous schedule still running, then it would count as missed.
A CronJob is counted as missed if it has failed to be created at its scheduled time. For example, if `concurrencyPolicy` is set to `Forbid` and a CronJob was attempted to be scheduled when there was a previous schedule still running, then it would count as missed.
For example, suppose a CronJob is set to schedule a new Job every one minute beginning at `08:30:00`, and its
`startingDeadlineSeconds` field is not set. If the CronJob controller happens to
@ -137,7 +138,7 @@ be down from `08:29:00` to `10:21:00`, the job will not start as the number of m
To illustrate this concept further, suppose a CronJob is set to schedule a new Job every one minute beginning at `08:30:00`, and its
`startingDeadlineSeconds` is set to 200 seconds. If the CronJob controller happens to
be down for the same period as the previous example (`08:29:00` to `10:21:00`,) the Job will still start at 10:22:00. This happens as the controller now checks how many missed schedules happened in the last 200 seconds (ie, 3 missed schedules), rather than from the last scheduled time until now.
be down for the same period as the previous example (`08:29:00` to `10:21:00`,) the Job will still start at 10:22:00. This happens as the controller now checks how many missed schedules happened in the last 200 seconds (i.e., 3 missed schedules), rather than from the last scheduled time until now.
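To make that arithmetic concrete, here is a sketch of setting such a deadline on a CronJob; the value `200` matches the example above, and `example-utc-cron` is the placeholder name from the earlier sketch:

```shell
# Give an existing CronJob a 200-second window in which missed runs may still start
kubectl patch cronjob example-utc-cron \
  --type=merge \
  -p '{"spec": {"startingDeadlineSeconds": 200}}'
```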
The CronJob is only responsible for creating Jobs that match its schedule, and
the Job in turn is responsible for the management of the Pods it represents.
@ -146,7 +147,7 @@ the Job in turn is responsible for the management of the Pods it represents.
Starting with Kubernetes v1.21 the second version of the CronJob controller
is the default implementation. To disable the default CronJob controller
and use the original CronJob controller instead, one pass the `CronJobControllerV2`
and use the original CronJob controller instead, pass the `CronJobControllerV2`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
flag to the {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}},
and set this flag to `false`. For example:

View File

@ -4,6 +4,13 @@ reviewers:
- bprashanth
- madhusudancs
title: ReplicaSet
feature:
title: Self-healing
anchor: How a ReplicaSet works
description: >
Restarts containers that fail, replaces and reschedules containers when nodes die,
kills containers that don't respond to your user-defined health check,
and doesn't advertise them to clients until they are ready to serve.
content_type: concept
weight: 20
---

View File

@ -3,12 +3,6 @@ reviewers:
- bprashanth
- janetkuo
title: ReplicationController
feature:
title: Self-healing
anchor: How a ReplicationController Works
description: >
Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
content_type: concept
weight: 90
---

View File

@ -52,6 +52,10 @@ possible to add an ephemeral container using `kubectl edit`.
Like regular containers, you may not change or remove an ephemeral container
after you have added it to a Pod.
{{< note >}}
Ephemeral containers are not supported by [static pods](/docs/tasks/configure-pod-container/static-pod/).
{{< /note >}}
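One common way to add an ephemeral container is `kubectl debug`. In this hedged example, `my-pod` and the target container name `app` are placeholders for your own Pod and container:

```shell
# Attach an interactive debugging container to an existing Pod;
# --target shares the process namespace with the named container, if supported.
kubectl debug -it my-pod --image=busybox:1.36 --target=app
```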
## Uses for ephemeral containers
Ephemeral containers are useful for interactive troubleshooting when `kubectl

View File

@ -65,8 +65,8 @@ In the bootstrap initialization process, the following occurs:
6. kubelet now has limited credentials to create and retrieve a certificate signing request (CSR)
7. kubelet creates a CSR for itself with the signerName set to `kubernetes.io/kube-apiserver-client-kubelet`
8. CSR is approved in one of two ways:
* If configured, kube-controller-manager automatically approves the CSR
* If configured, an outside process, possibly a person, approves the CSR using the Kubernetes API or via `kubectl`
* If configured, kube-controller-manager automatically approves the CSR
* If configured, an outside process, possibly a person, approves the CSR using the Kubernetes API or via `kubectl`
9. Certificate is created for the kubelet
10. Certificate is issued to the kubelet
11. kubelet retrieves the certificate
@ -126,7 +126,7 @@ of provisioning.
1. [Bootstrap Tokens](#bootstrap-tokens)
2. [Token authentication file](#token-authentication-file)
Bootstrap tokens are a simpler and more easily managed method to authenticate kubelets, and do not require any additional flags when starting kube-apiserver.
Using bootstrap tokens is a simpler and more easily managed method to authenticate kubelets, and does not require any additional flags when starting kube-apiserver.
Whichever method you choose, the requirement is that the kubelet be able to authenticate as a user with the rights to:
@ -176,7 +176,7 @@ systems). There are multiple ways you can generate a token. For example:
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
```
will generate tokens that look like `02b50b05283e98dd0fd71db496ef01e8`.
This will generate tokens that look like `02b50b05283e98dd0fd71db496ef01e8`.
The token file should look like the following example, where the first three
values can be anything and the quoted group name should be as depicted:
@ -186,7 +186,7 @@ values can be anything and the quoted group name should be as depicted:
```
Add the `--token-auth-file=FILENAME` flag to the kube-apiserver command (in your
systemd unit file perhaps) to enable the token file. See docs
systemd unit file perhaps) to enable the token file. See docs
[here](/docs/reference/access-authn-authz/authentication/#static-token-file) for
further details.
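As a hedged sketch of that file's shape, a single line combines a token, a user name, a uid, and the quoted group; the token, user name, uid, and file path below are placeholders:

```shell
# Write a one-line token authentication file; the first three values are
# examples only, and the group name is the one expected for bootstrapping kubelets.
cat <<EOF > /etc/kubernetes/bootstrap-tokens.csv
02b50b05283e98dd0fd71db496ef01e8,kubelet-bootstrap,10001,"system:bootstrappers"
EOF
```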
@ -247,7 +247,7 @@ To provide the Kubernetes CA key and certificate to kube-controller-manager, use
--cluster-signing-cert-file="/etc/path/to/kubernetes/ca/ca.crt" --cluster-signing-key-file="/etc/path/to/kubernetes/ca/ca.key"
```
for example:
For example:
```shell
--cluster-signing-cert-file="/var/lib/kubernetes/ca.pem" --cluster-signing-key-file="/var/lib/kubernetes/ca-key.pem"
@ -312,7 +312,7 @@ by default. The controller uses the
[`SubjectAccessReview` API](/docs/reference/access-authn-authz/authorization/#checking-api-access) to
determine if a given user is authorized to request a CSR, then approves based on
the authorization outcome. To prevent conflicts with other approvers, the
builtin approver doesn't explicitly deny CSRs. It only ignores unauthorized
built-in approver doesn't explicitly deny CSRs. It only ignores unauthorized
requests. The controller also prunes expired certificates as part of garbage
collection.
@ -435,12 +435,12 @@ controller, or manually approve the serving certificate requests.
A deployment-specific approval process for kubelet serving certificates should typically only approve CSRs which:
1. are requested by nodes (ensure the `spec.username` field is of the form
`system:node:<nodeName>` and `spec.groups` contains `system:nodes`)
2. request usages for a serving certificate (ensure `spec.usages` contains `server auth`,
1. are requested by nodes (ensure the `spec.username` field is of the form
`system:node:<nodeName>` and `spec.groups` contains `system:nodes`)
2. request usages for a serving certificate (ensure `spec.usages` contains `server auth`,
optionally contains `digital signature` and `key encipherment`, and contains no other usages)
3. only have IP and DNS subjectAltNames that belong to the requesting node,
and have no URI and Email subjectAltNames (parse the x509 Certificate Signing Request
3. only have IP and DNS subjectAltNames that belong to the requesting node,
and have no URI and Email subjectAltNames (parse the x509 Certificate Signing Request
in `spec.request` to verify `subjectAltNames`)
{{< /note >}}
@ -460,7 +460,7 @@ You have several options for generating these credentials:
## kubectl approval
CSRs can be approved outside of the approval flows builtin to the controller
CSRs can be approved outside of the approval flows built into the controller
manager.
The signing controller does not immediately sign all certificate requests.
@ -469,6 +469,6 @@ appropriately-privileged user. This flow is intended to allow for automated
approval handled by an external approval controller or the approval controller
implemented in the core controller-manager. However cluster administrators can
also manually approve certificate requests using kubectl. An administrator can
list CSRs with `kubectl get csr` and describe one in detail with `kubectl
describe csr <name>`. An administrator can approve or deny a CSR with `kubectl
certificate approve <name>` and `kubectl certificate deny <name>`.
list CSRs with `kubectl get csr` and describe one in detail with
`kubectl describe csr <name>`. An administrator can approve or deny a CSR with
`kubectl certificate approve <name>` and `kubectl certificate deny <name>`.
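Put together, the manual flow looks like this; the CSR name is whatever `kubectl get csr` reports in your cluster:

```shell
# List pending certificate signing requests
kubectl get csr

# Inspect one in detail, then approve (or deny) it
kubectl describe csr my-csr-name
kubectl certificate approve my-csr-name
```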

View File

@ -96,7 +96,7 @@ Here's an example of how that looks for a launched Pod:
That manifest snippet defines a projected volume that consists of three sources. In this case,
each source also represents a single path within that volume. The three sources are:
1. A `serviceAccountToken` source, that contains a token that the kubelet acquires from kube-apiserver
1. A `serviceAccountToken` source, that contains a token that the kubelet acquires from kube-apiserver.
The kubelet fetches time-bound tokens using the TokenRequest API. A token served for a TokenRequest expires
either when the pod is deleted or after a defined lifespan (by default, that is 1 hour).
The token is bound to the specific Pod and has the kube-apiserver as its audience.
@ -105,7 +105,7 @@ each source also represents a single path within that volume. The three sources
1. A `configMap` source. The ConfigMap contains a bundle of certificate authority data. Pods can use these
  certificates to make sure that they are connecting to your cluster's kube-apiserver (and not to a middlebox
or an accidentally misconfigured peer).
1. A `downwardAPI` source that looks up the name of thhe namespace containing the Pod, and makes
1. A `downwardAPI` source that looks up the name of the namespace containing the Pod, and makes
that name information available to application code running inside the Pod.
Any container within the Pod that mounts this particular volume can access the above information.
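For orientation, here is a hedged sketch of such a three-source projected volume as it might appear in a Pod; the Pod name, mount path, and token expiry are illustrative, not prescriptive:

```shell
# A Pod with a projected volume combining a service account token,
# the cluster CA bundle from a ConfigMap, and the namespace via the downward API.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-example
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      volumeMounts:
        - name: kube-api-access
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          readOnly: true
  volumes:
    - name: kube-api-access
      projected:
        sources:
          - serviceAccountToken:
              path: token
              expirationSeconds: 3600
          - configMap:
              name: kube-root-ca.crt
              items:
                - key: ca.crt
                  path: ca.crt
          - downwardAPI:
              items:
                - path: namespace
                  fieldRef:
                    fieldPath: metadata.namespace
EOF
```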
@ -232,14 +232,14 @@ Here's an example of how that looks for a launched Pod:
That manifest snippet defines a projected volume that combines information from three sources:
1. A `serviceAccountToken` source, that contains a token that the kubelet acquires from kube-apiserver
1. A `serviceAccountToken` source, that contains a token that the kubelet acquires from kube-apiserver.
The kubelet fetches time-bound tokens using the TokenRequest API. A token served for a TokenRequest expires
either when the pod is deleted or after a defined lifespan (by default, that is 1 hour).
The token is bound to the specific Pod and has the kube-apiserver as its audience.
1. A `configMap` source. The ConfigMap contains a bundle of certificate authority data. Pods can use these
  certificates to make sure that they are connecting to your cluster's kube-apiserver (and not to a middlebox
or an accidentally misconfigured peer).
1. A `downwardAPI` source. This `downwardAPI` volume makes the name of the namespace container the Pod available
1. A `downwardAPI` source. This `downwardAPI` volume makes the name of the namespace containing the Pod available
to application code running inside the Pod.
Any container within the Pod that mounts this volume can access the above information.
@ -262,6 +262,7 @@ Here is a sample manifest for such a Secret:
{{< codenew file="secret/serviceaccount/mysecretname.yaml" >}}
To create a Secret based on this example, run:
```shell
kubectl -n examplens create -f https://k8s.io/examples/secret/serviceaccount/mysecretname.yaml
```
@ -273,6 +274,7 @@ kubectl -n examplens describe secret mysecretname
```
The output is similar to:
```
Name: mysecretname
Namespace: examplens
@ -306,7 +308,9 @@ Otherwise, first find the Secret for the ServiceAccount.
# This assumes that you already have a namespace named 'examplens'
kubectl -n examplens get serviceaccount/example-automated-thing -o yaml
```
The output is similar to:
```yaml
apiVersion: v1
kind: ServiceAccount
@ -321,9 +325,11 @@ metadata:
selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing
uid: f23fd170-66f2-4697-b049-e1e266b7f835
secrets:
- name: example-automated-thing-token-zyxwv
- name: example-automated-thing-token-zyxwv
```
Then, delete the Secret you now know the name of:
```shell
kubectl -n examplens delete secret/example-automated-thing-token-zyxwv
```
@ -334,6 +340,7 @@ and creates a replacement:
```shell
kubectl -n examplens get serviceaccount/example-automated-thing -o yaml
```
```yaml
apiVersion: v1
kind: ServiceAccount
@ -348,12 +355,13 @@ metadata:
selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing
uid: f23fd170-66f2-4697-b049-e1e266b7f835
secrets:
- name: example-automated-thing-token-4rdrh
- name: example-automated-thing-token-4rdrh
```
## Clean up
If you created a namespace `examplens` to experiment with, you can remove it:
```shell
kubectl delete namespace examplens
```

View File

@ -16,4 +16,4 @@ tags:
<!--more-->
Containers decouple applications from underlying host infrastructure to make deployment easier in different cloud or OS environments, and for easier scaling.
The applications that run inside containers are called containerized applications. The process of bundling these applications and their dependencies into a container image is called containerization.

View File

@ -16,3 +16,4 @@ A {{< glossary_tooltip term_id="container" >}} type that you can temporarily run
If you want to investigate a Pod that's running with problems, you can add an ephemeral container to that Pod and carry out diagnostics. Ephemeral containers have no resource or scheduling guarantees, and you should not use them to run any part of the workload itself.
Ephemeral containers are not supported by {{< glossary_tooltip text="static pods" term_id="static-pod" >}}.

View File

@ -4,7 +4,7 @@ id: etcd
date: 2018-04-12
full_link: /docs/tasks/administer-cluster/configure-upgrade-etcd/
short_description: >
Consistent and highly-available key value store used as Kubernetes' backing store for all cluster data.
  Consistent and highly-available key value store used as the backing store of Kubernetes for all cluster data.
aka:
tags:

View File

@ -15,4 +15,6 @@ A {{< glossary_tooltip text="pod" term_id="pod" >}} managed directly by the kube
daemon on a specific node,
<!--more-->
without the API server observing it.
without the API server observing it.
Static Pods do not support {{< glossary_tooltip text="ephemeral containers" term_id="ephemeral-container" >}}.

View File

@ -27,7 +27,7 @@ control, available resources, and expertise required to operate and manage a clu
You can [download Kubernetes](/releases/download/) to deploy a Kubernetes cluster
on a local machine, into the cloud, or for your own datacenter.
Several [Kubernetes components](/docs/concepts/overview/components/) such as `kube-apiserver` or `kube-proxy` can also be
Several [Kubernetes components](/docs/concepts/overview/components/) such as {{< glossary_tooltip text="kube-apiserver" term_id="kube-apiserver" >}} or {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}} can also be
deployed as [container images](/releases/download/#container-images) within the cluster.
It is **recommended** to run Kubernetes components as container images wherever

View File

@ -590,7 +590,7 @@ data and may need to be recreated from scratch.
Workarounds:
* Regularly [back up etcd](https://coreos.com/etcd/docs/latest/admin_guide.html). The
* Regularly [back up etcd](https://etcd.io/docs/v3.5/op-guide/recovery/). The
etcd data directory configured by kubeadm is at `/var/lib/etcd` on the control-plane node.
* Use multiple control-plane nodes. You can read

View File

@ -11,7 +11,7 @@ weight: 70
{{< note >}}
While kubeadm is being used as the management tool for external etcd nodes
in this guide, please note that kubeadm does not plan to support certificate rotation
or upgrades for such nodes. The long term plan is to empower the tool
or upgrades for such nodes. The long-term plan is to empower the tool
[etcdadm](https://github.com/kubernetes-sigs/etcdadm) to manage these
aspects.
{{< /note >}}
@ -32,7 +32,7 @@ etcd cluster of three members that can be used by kubeadm during cluster creatio
* Each host must have systemd and a bash compatible shell installed.
* Each host must [have a container runtime, kubelet, and kubeadm installed](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
* Each host should have access to the Kubernetes container image registry (`registry.k8s.io`) or list/pull the required etcd image using
`kubeadm config images list/pull`. This guide will setup etcd instances as
`kubeadm config images list/pull`. This guide will set up etcd instances as
[static pods](/docs/tasks/configure-pod-container/static-pod/) managed by a kubelet.
* Some infrastructure to copy files between hosts. For example `ssh` and `scp`
can satisfy this requirement.
@ -98,7 +98,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
export NAME1="infra1"
export NAME2="infra2"
# Create temp directories to store files that will end up on other hosts.
# Create temp directories to store files that will end up on other hosts
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/
HOSTS=(${HOST0} ${HOST1} ${HOST2})
@ -136,7 +136,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
done
```
1. Generate the certificate authority
1. Generate the certificate authority.
If you already have a CA then the only action needed is copying the CA's `crt` and
`key` file to `/etc/kubernetes/pki/etcd/ca.crt` and
@ -150,12 +150,12 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
kubeadm init phase certs etcd-ca
```
This creates two files
This creates two files:
- `/etc/kubernetes/pki/etcd/ca.crt`
- `/etc/kubernetes/pki/etcd/ca.key`
1. Create certificates for each member
1. Create certificates for each member.
```sh
kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
@ -184,7 +184,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
find /tmp/${HOST1} -name ca.key -type f -delete
```
1. Copy certificates and kubeadm configs
1. Copy certificates and kubeadm configs.
The certificates have been generated and now they must be moved to their
respective hosts.
@ -199,7 +199,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
root@HOST $ mv pki /etc/kubernetes/
```
1. Ensure all expected files exist
1. Ensure all expected files exist.
The complete list of required files on `$HOST0` is:
@ -240,7 +240,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
└── server.key
```
On `$HOST2`
On `$HOST2`:
```
$HOME
@ -259,7 +259,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
└── server.key
```
1. Create the static pod manifests
1. Create the static pod manifests.
Now that the certificates and configs are in place it's time to create the
manifests. On each host run the `kubeadm` command to generate a static manifest
@ -271,7 +271,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
root@HOST2 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml
```
1. Optional: Check the cluster health
1. Optional: Check the cluster health.
```sh
docker run --rm -it \
@ -286,7 +286,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
https://[HOST1 IP]:2379 is healthy: successfully committed proposal: took = 19.44402ms
https://[HOST2 IP]:2379 is healthy: successfully committed proposal: took = 35.926451ms
```
- Set `${ETCD_TAG}` to the version tag of your etcd image. For example `3.4.3-0`. To see the etcd image and tag that kubeadm uses execute `kubeadm config images list --kubernetes-version ${K8S_VERSION}`, where `${K8S_VERSION}` is for example `v1.17.0`
- Set `${ETCD_TAG}` to the version tag of your etcd image. For example `3.4.3-0`. To see the etcd image and tag that kubeadm uses execute `kubeadm config images list --kubernetes-version ${K8S_VERSION}`, where `${K8S_VERSION}` is for example `v1.17.0`.
- Set `${HOST0}` to the IP address of the host you are testing.
@ -294,7 +294,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
## {{% heading "whatsnext" %}}
Once you have a working 3 member etcd cluster, you can continue setting up a
highly available control plane using the [external etcd method with
kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/).
Once you have an etcd cluster with 3 working members, you can continue setting up a
highly available control plane using the
[external etcd method with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/).

View File

@ -43,12 +43,12 @@ kind: ClusterRole
metadata:
name: kubeadm:get-nodes
rules:
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
@ -59,16 +59,16 @@ roleRef:
kind: ClusterRole
name: kubeadm:get-nodes
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:bootstrappers:kubeadm:default-node-token
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:bootstrappers:kubeadm:default-node-token
```
## `ebtables` or some similar executable not found during installation
If you see the following warnings while running `kubeadm init`
```sh
```console
[preflight] WARNING: ebtables not found in system path
[preflight] WARNING: ethtool not found in system path
```
@ -82,7 +82,7 @@ Then you may be missing `ebtables`, `ethtool` or a similar executable on your no
If you notice that `kubeadm init` hangs after printing out the following line:
```sh
```console
[apiclient] Created API client, waiting for the control plane to become ready
```
@ -90,10 +90,10 @@ This may be caused by a number of problems. The most common are:
- network connection problems. Check that your machine has full network connectivity before continuing.
- the cgroup driver of the container runtime differs from that of the kubelet. To understand how to
configure it properly see [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
configure it properly see [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
- control plane containers are crashlooping or hanging. You can check this by running `docker ps`
and investigating each container by running `docker logs`. For other container runtime see
[Debugging Kubernetes nodes with crictl](/docs/tasks/debug/debug-cluster/crictl/).
and investigating each container by running `docker logs`. For other container runtime see
[Debugging Kubernetes nodes with crictl](/docs/tasks/debug/debug-cluster/crictl/).
## kubeadm blocks when removing managed containers
@ -204,21 +204,21 @@ in kube-apiserver logs. To fix the issue you must follow these steps:
1. Backup and delete `/etc/kubernetes/kubelet.conf` and `/var/lib/kubelet/pki/kubelet-client*` from the failed node.
1. From a working control plane node in the cluster that has `/etc/kubernetes/pki/ca.key` execute
`kubeadm kubeconfig user --org system:nodes --client-name system:node:$NODE > kubelet.conf`.
`$NODE` must be set to the name of the existing failed node in the cluster.
Modify the resulted `kubelet.conf` manually to adjust the cluster name and server endpoint,
or pass `kubeconfig user --config` (it accepts `InitConfiguration`). If your cluster does not have
the `ca.key` you must sign the embedded certificates in the `kubelet.conf` externally.
`kubeadm kubeconfig user --org system:nodes --client-name system:node:$NODE > kubelet.conf`.
`$NODE` must be set to the name of the existing failed node in the cluster.
Modify the resulted `kubelet.conf` manually to adjust the cluster name and server endpoint,
or pass `kubeconfig user --config` (it accepts `InitConfiguration`). If your cluster does not have
the `ca.key` you must sign the embedded certificates in the `kubelet.conf` externally.
1. Copy this resulted `kubelet.conf` to `/etc/kubernetes/kubelet.conf` on the failed node.
1. Restart the kubelet (`systemctl restart kubelet`) on the failed node and wait for
`/var/lib/kubelet/pki/kubelet-client-current.pem` to be recreated.
`/var/lib/kubelet/pki/kubelet-client-current.pem` to be recreated.
1. Manually edit the `kubelet.conf` to point to the rotated kubelet client certificates, by replacing
`client-certificate-data` and `client-key-data` with:
`client-certificate-data` and `client-key-data` with:
```yaml
client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
```
```yaml
client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
```
1. Restart the kubelet.
1. Make sure the node becomes `Ready`.
@ -241,7 +241,7 @@ Error from server (NotFound): the server could not find the requested resource
In some situations `kubectl logs` and `kubectl run` commands may return with the following errors in an otherwise functional cluster:
```sh
```console
Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc65b868-glc5m/mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host
```
@ -306,15 +306,17 @@ This version of Docker can prevent the kubelet from executing into the etcd cont
To work around the issue, choose one of these options:
- Roll back to an earlier version of Docker, such as 1.13.1-75
```
yum downgrade docker-1.13.1-75.git8633870.el7.centos.x86_64 docker-client-1.13.1-75.git8633870.el7.centos.x86_64 docker-common-1.13.1-75.git8633870.el7.centos.x86_64
```
```
yum downgrade docker-1.13.1-75.git8633870.el7.centos.x86_64 docker-client-1.13.1-75.git8633870.el7.centos.x86_64 docker-common-1.13.1-75.git8633870.el7.centos.x86_64
```
- Install one of the more recent recommended versions, such as 18.06:
```bash
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce-18.06.1.ce-3.el7.x86_64
```
```bash
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce-18.06.1.ce-3.el7.x86_64
```
## Not possible to pass a comma separated list of values to arguments inside a `--component-extra-args` flag

View File

@ -7,7 +7,7 @@ weight: 20
<!-- overview -->
When using client certificate authentication, you can generate certificates
manually through `easyrsa`, `openssl` or `cfssl`.
manually through [`easyrsa`](https://github.com/OpenVPN/easy-rsa), [`openssl`](https://github.com/openssl/openssl) or [`cfssl`](https://github.com/cloudflare/cfssl).
<!-- body -->
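As one hedged example using `openssl` (the subject names, key size, and lifetimes are placeholders), generating a CA and a client certificate signed by it might look like:

```shell
# Create a CA key and a self-signed CA certificate
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=example-ca" -days 365 -out ca.crt

# Create a client key and CSR, then sign the CSR with the CA
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=example-user" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out client.crt -days 365
```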

View File

@ -233,7 +233,9 @@ program to retrieve the contents of your Secret.
```
1. Verify the stored Secret is prefixed with `k8s:enc:aescbc:v1:` which indicates
the `aescbc` provider has encrypted the resulting data.
the `aescbc` provider has encrypted the resulting data. Confirm that the key name shown in `etcd`
matches the key name specified in the `EncryptionConfiguration` mentioned above. In this example,
you can see that the encryption key named `key1` is used in `etcd` and in `EncryptionConfiguration`.
1. Verify the Secret is correctly decrypted when retrieved via the API:

View File

@ -41,43 +41,39 @@ minikube version: v1.5.2
minikube start --network-plugin=cni
```
For minikube you can install Cilium using its CLI tool. Cilium will
automatically detect the cluster configuration and will install the appropriate
components for a successful installation:
For minikube you can install Cilium using its CLI tool. To do so, first download the latest
version of the CLI with the following command:
```shell
curl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz
```
Then extract the downloaded file to your `/usr/local/bin` directory with the following command:
```shell
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz
```
After running the above commands, you can now install Cilium with the following command:
```shell
cilium install
```
```
🔮 Auto-detected Kubernetes kind: minikube
✨ Running "minikube" validation checks
✅ Detected minikube version "1.20.0"
Cilium version not set, using default version "v1.10.0"
🔮 Auto-detected cluster name: minikube
🔮 Auto-detected IPAM mode: cluster-pool
🔮 Auto-detected datapath mode: tunnel
🔑 Generating CA...
2021/05/27 02:54:44 [INFO] generate received request
2021/05/27 02:54:44 [INFO] received CSR
2021/05/27 02:54:44 [INFO] generating key: ecdsa-256
2021/05/27 02:54:44 [INFO] encoded CSR
2021/05/27 02:54:44 [INFO] signed certificate with serial number 48713764918856674401136471229482703021230538642
🔑 Generating certificates for Hubble...
2021/05/27 02:54:44 [INFO] generate received request
2021/05/27 02:54:44 [INFO] received CSR
2021/05/27 02:54:44 [INFO] generating key: ecdsa-256
2021/05/27 02:54:44 [INFO] encoded CSR
2021/05/27 02:54:44 [INFO] signed certificate with serial number 3514109734025784310086389188421560613333279574
🚀 Creating Service accounts...
🚀 Creating Cluster roles...
🚀 Creating ConfigMap...
🚀 Creating Agent DaemonSet...
🚀 Creating Operator Deployment...
⌛ Waiting for Cilium to be installed...
```
Cilium will then automatically detect the cluster configuration and create and
install the appropriate components for a successful installation.
The components are:
- Certificate Authority (CA) in Secret `cilium-ca` and certificates for Hubble (Cilium's observability layer).
- Service accounts.
- Cluster roles.
- ConfigMap.
- Agent DaemonSet and an Operator Deployment.
After the installation, you can view the overall status of the Cilium deployment with the `cilium status` command.
See the expected output of the `status` command
[here](https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/#validate-the-installation).
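For example, a typical invocation is simply:

```shell
# Print the overall health of the Cilium deployment
cilium status
```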
The remainder of the Getting Started Guide explains how to enforce both L3/L4
(i.e., IP address + port) security policies, as well as L7 (e.g., HTTP) security

View File

@ -238,4 +238,3 @@ kubectl delete secret mysecret
- Read more about the [Secret concept](/docs/concepts/configuration/secret/)
- Learn how to [manage Secrets using kubectl](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- Learn how to [manage Secrets using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)

View File

@ -33,7 +33,7 @@ Run the following command:
```shell
kubectl create secret generic db-user-pass \
--from-literal=username=devuser \
--from-literal=username=admin \
--from-literal=password='S!B\*d$zDsb='
```
You must use single quotes `''` to escape special characters such as `$`, `\`,
@ -87,8 +87,8 @@ kubectl get secrets
The output is similar to:
```
NAME TYPE DATA AGE
db-user-pass Opaque 2 51s
NAME TYPE DATA AGE
db-user-pass Opaque 2 51s
```
View the details of the Secret:
@ -143,11 +143,13 @@ accidentally, or from being stored in a terminal log.
S!B\*d$zDsb=
```
{{<caution>}}This is an example for documentation purposes. In practice,
{{< caution >}}
This is an example for documentation purposes. In practice,
this method could cause the command with the encoded data to be stored in
your shell history. Anyone with access to your computer could find the
command and decode the secret. A better approach is to combine the view and
decode commands.{{</caution>}}
decode commands.
{{< /caution >}}
```shell
kubectl get secret db-user-pass -o jsonpath='{.data.password}' | base64 --decode
@ -193,10 +195,8 @@ To delete a Secret, run the following command:
kubectl delete secret db-user-pass
```
<!-- discussion -->
## {{% heading "whatsnext" %}}
- Read more about the [Secret concept](/docs/concepts/configuration/secret/)
- Learn how to [manage Secrets using config files](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- Learn how to [manage Secrets using config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- Learn how to [manage Secrets using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)

View File

@ -90,8 +90,7 @@ the Secret data and appending the hash value to the name. This ensures that
a new Secret is generated each time the data is modified.
To verify that the Secret was created and to decode the Secret data, refer to
[Managing Secrets using
kubectl](/docs/tasks/configmap-secret/managing-secret-using-kubectl/#verify-the-secret).
[Managing Secrets using kubectl](/docs/tasks/configmap-secret/managing-secret-using-kubectl/#verify-the-secret).
## Edit a Secret {#edit-secret}
@ -117,12 +116,11 @@ your Pods.
To delete a Secret, use `kubectl`:
```shell
kubectl delete secret <secret-name>
kubectl delete secret db-user-pass
```
<!-- Optional section; add links to information related to this topic. -->
## {{% heading "whatsnext" %}}
- Read more about the [Secret concept](/docs/concepts/configuration/secret/)
- Learn how to [manage Secrets with the `kubectl` command](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- Learn how to [manage Secrets using kubectl](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- Learn how to [manage Secrets using config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)

View File

@ -41,6 +41,7 @@ If you do not specify a ServiceAccount when you create a Pod, Kubernetes
automatically assigns the ServiceAccount named `default` in that namespace.
You can fetch the details for a Pod you have created. For example:
```shell
kubectl get pods/<podname> -o yaml
```
@ -75,6 +76,7 @@ automountServiceAccountToken: false
```
You can also opt out of automounting API credentials for a particular Pod:
```yaml
apiVersion: v1
kind: Pod
@ -92,8 +94,7 @@ If both the ServiceAccount and the Pod's `.spec` specify a value for
## Use more than one ServiceAccount {#use-multiple-service-accounts}
Every namespace has at least one ServiceAccount: the default ServiceAccount
resource, called `default`.
You can list all ServiceAccount resources in your
resource, called `default`. You can list all ServiceAccount resources in your
[current namespace](/docs/concepts/overview/working-with-objects/namespaces/#setting-the-namespace-preference)
with:
@ -157,7 +158,6 @@ If you want to remove the fields from a workload resource, set both fields to em
on the [pod template](/docs/concepts/workloads/pods#pod-templates).
{{< /note >}}
### Cleanup {#cleanup-use-multiple-service-accounts}
If you tried creating the `build-robot` ServiceAccount from the example above,
@ -185,15 +185,17 @@ token might be shorter, or could even be longer).
{{< note >}}
Versions of Kubernetes before v1.22 automatically created long term credentials for
accessing the Kubernetes API. This older mechanism was based on creating token Secrets
that could then be mounted into running Pods.
In more recent versions, including Kubernetes v{{< skew currentVersion >}}, API credentials
are obtained directly by using the [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) API,
and are mounted into Pods using a [projected volume](/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume).
that could then be mounted into running Pods. In more recent versions, including
Kubernetes v{{< skew currentVersion >}}, API credentials are obtained directly by using the
[TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) API,
and are mounted into Pods using a
[projected volume](/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume).
The tokens obtained using this method have bounded lifetimes, and are automatically
invalidated when the Pod they are mounted into is deleted.
You can still manually create a service account token Secret; for example, if you need a token that never expires.
However, using the [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)
You can still manually create a service account token Secret; for example,
if you need a token that never expires. However, using the
[TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)
subresource to obtain a token to access the API is recommended instead.
{{< /note >}}
@ -215,6 +217,7 @@ EOF
```
If you view the Secret using:
```shell
kubectl get secret/build-robot-secret -o yaml
```
@ -251,8 +254,7 @@ token: ...
The content of `token` is elided here.
Take care not to display the contents of a `kubernetes.io/service-account-token`
Secret somewhere that your terminal / computer screen could be seen by an
onlooker.
Secret somewhere that your terminal / computer screen could be seen by an onlooker.
{{< /note >}}
When you delete a ServiceAccount that has an associated Secret, the Kubernetes
@ -263,31 +265,32 @@ control plane automatically cleans up the long-lived token from that Secret.
First, [create an imagePullSecret](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod).
Next, verify it has been created. For example:
- Create an imagePullSecret, as described in [Specifying ImagePullSecrets on a Pod](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod).
- Create an imagePullSecret, as described in
[Specifying ImagePullSecrets on a Pod](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod).
```shell
kubectl create secret docker-registry myregistrykey --docker-server=DUMMY_SERVER \
--docker-username=DUMMY_USERNAME --docker-password=DUMMY_DOCKER_PASSWORD \
--docker-email=DUMMY_DOCKER_EMAIL
```
```shell
kubectl create secret docker-registry myregistrykey --docker-server=DUMMY_SERVER \
--docker-username=DUMMY_USERNAME --docker-password=DUMMY_DOCKER_PASSWORD \
--docker-email=DUMMY_DOCKER_EMAIL
```
- Verify it has been created.
```shell
kubectl get secrets myregistrykey
```
The output is similar to this:
```shell
kubectl get secrets myregistrykey
```
```
NAME TYPE DATA AGE
myregistrykey   kubernetes.io/.dockerconfigjson   1       1d
```
The output is similar to this:
```
NAME TYPE DATA AGE
myregistrykey   kubernetes.io/.dockerconfigjson   1       1d
```
### Add image pull secret to service account
Next, modify the default service account for the namespace to use this Secret as an imagePullSecret.
```shell
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
```
@ -313,8 +316,8 @@ metadata:
uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6
```
Using your editor, delete the line with key `resourceVersion`, add lines for `imagePullSecrets:` and save it.
Leave the `uid` value set the same as you found it.
Using your editor, delete the line with key `resourceVersion`, add lines for
`imagePullSecrets:` and save it. Leave the `uid` value set the same as you found it.
After you made those changes, the edited ServiceAccount looks something like this:
@ -327,12 +330,13 @@ metadata:
namespace: default
uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6
imagePullSecrets:
- name: myregistrykey
- name: myregistrykey
```
### Verify that imagePullSecrets are set for new Pods
Now, when a new Pod is created in the current namespace and using the default ServiceAccount, the new Pod has its `spec.imagePullSecrets` field set automatically:
Now, when a new Pod is created in the current namespace and using the default
ServiceAccount, the new Pod has its `spec.imagePullSecrets` field set automatically:
```shell
kubectl run nginx --image=nginx --restart=Never
@ -354,13 +358,31 @@ To enable and use token request projection, you must specify each of the followi
command line arguments to `kube-apiserver`:
`--service-account-issuer`
: defines the Identifier of the service account token issuer. You can specify the `--service-account-issuer` argument multiple times, this can be useful to enable a non-disruptive change of the issuer. When this flag is specified multiple times, the first is used to generate tokens and all are used to determine which issuers are accepted. You must be running Kubernetes v1.22 or later to be able to specify `--service-account-issuer` multiple times.
: defines the identifier of the service account token issuer. You can specify the
  `--service-account-issuer` argument multiple times; this can be useful to enable
a non-disruptive change of the issuer. When this flag is specified multiple times,
the first is used to generate tokens and all are used to determine which issuers
are accepted. You must be running Kubernetes v1.22 or later to be able to specify
`--service-account-issuer` multiple times.
`--service-account-key-file`
: specifies the path to a file containing PEM-encoded X.509 private or public keys (RSA or ECDSA), used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If specified multiple times, tokens signed by any of the specified keys are considered valid by the Kubernetes API server.
: specifies the path to a file containing PEM-encoded X.509 private or public keys
(RSA or ECDSA), used to verify ServiceAccount tokens. The specified file can contain
multiple keys, and the flag can be specified multiple times with different files.
If specified multiple times, tokens signed by any of the specified keys are considered
valid by the Kubernetes API server.
`--service-account-signing-key-file`
: specifies the path to a file that contains the current private key of the service account token issuer. The issuer signs issued ID tokens with this private key.
: specifies the path to a file that contains the current private key of the service
account token issuer. The issuer signs issued ID tokens with this private key.
`--api-audiences` (can be omitted)
: defines audiences for ServiceAccount tokens. The service account token authenticator validates that tokens used against the API are bound to at least one of these audiences. If `api-audiences` is specified multiple times, tokens for any of the specified audiences are considered valid by the Kubernetes API server. If you specify the `--service-account-issuer` command line argument but you don't set `--api-audiences`, the control plane defaults to a single element audience list that contains only the issuer URL.
: defines audiences for ServiceAccount tokens. The service account token authenticator
validates that tokens used against the API are bound to at least one of these audiences.
If `api-audiences` is specified multiple times, tokens for any of the specified audiences
are considered valid by the Kubernetes API server. If you specify the `--service-account-issuer`
command line argument but you don't set `--api-audiences`, the control plane defaults to
a single element audience list that contains only the issuer URL.
{{< /note >}}
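A hedged sketch of how these flags might appear together on a kube-apiserver invocation follows; the issuer URL and key file paths are illustrative placeholders, and many clusters use different values:

```shell
# Example kube-apiserver flags for ServiceAccount token volume projection;
# the issuer URL and key file paths below are placeholders.
kube-apiserver \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --service-account-key-file=/etc/kubernetes/pki/sa.pub \
  --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
  --api-audiences=https://kubernetes.default.svc.cluster.local
```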
@ -452,18 +474,19 @@ to the public endpoint, rather than the API server's address, by passing the
`--service-account-jwks-uri` flag to the API server. Like the issuer URL, the
JWKS URI is required to use the `https` scheme.
## {{% heading "whatsnext" %}}
See also:
* Read the [Cluster Admin Guide to Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/)
* Read about [Authorization in Kubernetes](/docs/reference/access-authn-authz/authorization/)
* Read about [Secrets](/docs/concepts/configuration/secret/)
* or learn to [distribute credentials securely using Secrets](/docs/tasks/inject-data-application/distribute-credentials-secure/)
* but also bear in mind that using Secrets for authenticating as a ServiceAccount
- Read the [Cluster Admin Guide to Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/)
- Read about [Authorization in Kubernetes](/docs/reference/access-authn-authz/authorization/)
- Read about [Secrets](/docs/concepts/configuration/secret/)
- or learn to [distribute credentials securely using Secrets](/docs/tasks/inject-data-application/distribute-credentials-secure/)
- but also bear in mind that using Secrets for authenticating as a ServiceAccount
is deprecated. The recommended alternative is
[ServiceAccount token volume projection](#service-account-token-volume-projection).
* Read about [projected volumes](/docs/tasks/configure-pod-container/configure-projected-volume-storage/).
* For background on OIDC discovery, read the [ServiceAccount signing key retrieval](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1393-oidc-discovery) Kubernetes Enhancement Proposal
* Read the [OIDC Discovery Spec](https://openid.net/specs/openid-connect-discovery-1_0.html)
- Read about [projected volumes](/docs/tasks/configure-pod-container/configure-projected-volume-storage/).
- For background on OIDC discovery, read the
[ServiceAccount signing key retrieval](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1393-oidc-discovery)
Kubernetes Enhancement Proposal
- Read the [OIDC Discovery Spec](https://openid.net/specs/openid-connect-discovery-1_0.html)

View File

@ -12,17 +12,12 @@ A Container's file system lives only as long as the Container does. So when a
Container terminates and restarts, filesystem changes are lost. For more
consistent storage that is independent of the Container, you can use a
[Volume](/docs/concepts/storage/volumes/). This is especially important for stateful
applications, such as key-value stores (such as Redis) and databases.
applications, such as key-value stores (such as Redis) and databases.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
<!-- steps -->
## Configure a volume for a Pod
@ -37,71 +32,71 @@ restarts. Here is the configuration file for the Pod:
1. Create the Pod:
```shell
kubectl apply -f https://k8s.io/examples/pods/storage/redis.yaml
```
```shell
kubectl apply -f https://k8s.io/examples/pods/storage/redis.yaml
```
1. Verify that the Pod's Container is running, and then watch for changes to
the Pod:
the Pod:
```shell
kubectl get pod redis --watch
```
The output looks like this:
```shell
kubectl get pod redis --watch
```
```shell
NAME READY STATUS RESTARTS AGE
redis 1/1 Running 0 13s
```
The output looks like this:
```shell
NAME READY STATUS RESTARTS AGE
redis 1/1 Running 0 13s
```
1. In another terminal, get a shell to the running Container:
```shell
kubectl exec -it redis -- /bin/bash
```
```shell
kubectl exec -it redis -- /bin/bash
```
1. In your shell, go to `/data/redis`, and then create a file:
```shell
root@redis:/data# cd /data/redis/
root@redis:/data/redis# echo Hello > test-file
```
```shell
root@redis:/data# cd /data/redis/
root@redis:/data/redis# echo Hello > test-file
```
1. In your shell, list the running processes:
```shell
root@redis:/data/redis# apt-get update
root@redis:/data/redis# apt-get install procps
root@redis:/data/redis# ps aux
```
```shell
root@redis:/data/redis# apt-get update
root@redis:/data/redis# apt-get install procps
root@redis:/data/redis# ps aux
```
The output is similar to this:
The output is similar to this:
```shell
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
redis 1 0.1 0.1 33308 3828 ? Ssl 00:46 0:00 redis-server *:6379
root 12 0.0 0.0 20228 3020 ? Ss 00:47 0:00 /bin/bash
root 15 0.0 0.0 17500 2072 ? R+ 00:48 0:00 ps aux
```
```shell
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
redis 1 0.1 0.1 33308 3828 ? Ssl 00:46 0:00 redis-server *:6379
root 12 0.0 0.0 20228 3020 ? Ss 00:47 0:00 /bin/bash
root 15 0.0 0.0 17500 2072 ? R+ 00:48 0:00 ps aux
```
1. In your shell, kill the Redis process:
```shell
root@redis:/data/redis# kill <pid>
```
```shell
root@redis:/data/redis# kill <pid>
```
where `<pid>` is the Redis process ID (PID).
where `<pid>` is the Redis process ID (PID).
1. In your original terminal, watch for changes to the Redis Pod. Eventually,
you will see something like this:
you will see something like this:
```shell
NAME READY STATUS RESTARTS AGE
redis 1/1 Running 0 13s
redis 0/1 Completed 0 6m
redis 1/1 Running 1 6m
```
```shell
NAME READY STATUS RESTARTS AGE
redis 1/1 Running 0 13s
redis 0/1 Completed 0 6m
redis 1/1 Running 1 6m
```
At this point, the Container has terminated and restarted. This is because the
Redis Pod has a
@ -110,38 +105,32 @@ of `Always`.
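If you want a quick confirmation of the restart before continuing, you can read the container's restart count; this extra check is not part of the original steps:
```shell
kubectl get pod redis -o jsonpath='{.status.containerStatuses[0].restartCount}'
```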
1. Get a shell into the restarted Container:
```shell
kubectl exec -it redis -- /bin/bash
```
1. In your shell, go to `/data/redis`, and verify that `test-file` is still there.
```shell
root@redis:/data/redis# cd /data/redis/
root@redis:/data/redis# ls
test-file
```
1. Delete the Pod that you created for this exercise:
```shell
kubectl delete pod redis
```
## {{% heading "whatsnext" %}}
* See [Volume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volume-v1-core).
* See [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core).
* In addition to the local disk storage provided by `emptyDir`, Kubernetes
supports many different network-attached storage solutions, including PD on
GCE and EBS on EC2, which are preferred for critical data and will handle
details such as mounting and unmounting the devices on the nodes. See
[Volumes](/docs/concepts/storage/volumes/) for more details.

View File

@ -38,6 +38,10 @@ The `spec` of a static Pod cannot refer to other API objects
{{< glossary_tooltip text="Secret" term_id="secret" >}}, etc).
{{< /note >}}
{{< note >}}
Static pods do not support [ephemeral containers](/docs/concepts/workloads/pods/ephemeral-containers/).
{{< /note >}}
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

View File

@ -244,7 +244,13 @@ So, now run the Job:
kubectl apply -f ./job.yaml
```
Now wait a bit, then check on the job.
You can wait for the Job to succeed, with a timeout:
```shell
# The check for condition name is case insensitive
kubectl wait --for=condition=complete --timeout=300s job/job-wq-1
```
Next, check on the Job:
```shell
kubectl describe jobs/job-wq-1
@ -285,7 +291,9 @@ Events:
14s 14s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-p17e0
```
All our pods succeeded. Yay.
All the pods for that Job succeeded. Yay.
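If you also want to see what each worker printed, one convenient option is to read the logs of every Pod the Job created via its `job-name` label (a sketch; pod names and log volume will differ in your cluster):
```shell
# Print logs from all Pods belonging to the Job
kubectl logs -l job-name=job-wq-1
```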

View File

@ -208,9 +208,18 @@ Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
33s 33s 1 {job-controller } Normal SuccessfulCreate Created pod: job-wq-2-lglf8
```
You can wait for the Job to succeed, with a timeout:
```shell
# The check for condition name is case insensitive
kubectl wait --for=condition=complete --timeout=300s job/job-wq-2
```
```shell
kubectl logs pods/job-wq-2-7r7b2
```
```
Worker with sessionID: bbd72d0a-9e5c-4dd6-abf6-416cc267991f
Initial queue state: empty=False
Working on banana

View File

@ -107,7 +107,14 @@ When you create this Job, the control plane creates a series of Pods, one for ea
Because `.spec.parallelism` is less than `.spec.completions`, the control plane waits for some of the first Pods to complete before starting more of them.
Once you have created the Job, wait a moment then check on progress:
You can wait for the Job to succeed, with a timeout:
```shell
# The check for condition name is case insensitive
kubectl wait --for=condition=complete --timeout=300s job/indexed-job
```
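You can also list the Pods that the Job created; the `job-name` label used here is applied automatically by the Job controller:
```shell
kubectl get pods -l job-name=indexed-job
```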
Now, describe the Job and check that it was successful.
```shell
kubectl describe jobs/indexed-job

View File

@ -94,7 +94,7 @@ recommended way to manage the creation and scaling of Pods.
Pod runs a Container based on the provided Docker image.
```shell
kubectl create deployment hello-node --image=registry.k8s.io/echoserver:1.4
kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.39 -- /agnhost netexec --http-port=8080
```
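The `agnhost netexec` image simply answers HTTP requests on the given port, which makes the Deployment easy to verify later in the tutorial; as a sketch of where this is heading (commands assume the defaults used throughout this page):
```shell
# Expose the Deployment and open the resulting Service in your browser
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
minikube service hello-node
```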
2. View the Deployment:

View File

@ -1042,7 +1042,7 @@ There are pending pods when an error occurred: Cannot evict pod as it would viol
pod/zk-2
```
Use `CTRL-C` to terminate to kubectl.
Use `CTRL-C` to terminate kubectl.
You cannot drain the third node because evicting `zk-2` would violate `zk-budget`. However, the node will remain cordoned.
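Once you have finished the exercise, remember to return any cordoned nodes to service; a minimal sketch, with the node name as a placeholder:
```shell
kubectl uncordon <node-name>
```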

View File

@ -14,6 +14,6 @@ spec:
spec:
containers:
- name: nginx
image: nginx:1.14.2
image: nginx:1.16.1
ports:
- containerPort: 80
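To roll out an image change like the one above, you can re-apply the edited manifest or set the image directly; both commands are sketches and assume the Deployment is named `nginx-deployment` and saved as `deployment.yaml`:
```shell
kubectl apply -f deployment.yaml
# or, equivalently, update the image in place and watch the rollout
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
kubectl rollout status deployment/nginx-deployment
```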

View File

@ -78,8 +78,7 @@ releases may also occur in between these.
| Monthly Patch Release | Cherry Pick Deadline | Target date |
| --------------------- | -------------------- | ----------- |
| November 2022 | 2022-11-04 | 2022-11-09 |
| December 2022 | 2022-12-09 | 2022-12-14 |
| December 2022 | 2022-12-02 | 2022-12-07 |
| January 2023 | 2023-01-13 | 2023-01-18 |
| February 2023 | 2023-02-10 | 2023-02-15 |

View File

@ -0,0 +1,140 @@
---
title: Affichage des pods et des nœuds
weight: 10
---
<!DOCTYPE html>
<html lang="fr">
<body>
<div class="layout" id="top">
<main class="content">
<div class="row">
<div class="col-md-8">
<h3>Objectifs</h3>
<ul>
<li>En savoir plus sur les pods Kubernetes.</li>
<li>En savoir plus sur les nœuds Kubernetes.</li>
<li>Dépannez les applications déployées.</li>
</ul>
</div>
<div class="col-md-8">
<h2>Pods de Kubernetes</h2>
<p>Lorsque vous avez créé un déploiement dans le Module <a href="/fr/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/">2</a>, Kubernetes a créé un <b>Pod</b> pour héberger votre instance d'application. Un pod est une abstraction Kubernetes qui représente un groupe d'un ou plusieurs conteneurs d'application (tels que Docker), et certaines ressources partagées pour ces conteneurs. Ces ressources comprennent:</p>
<ul>
<li>Stockage partagé, en tant que Volumes</li>
<li>Mise en réseau, en tant qu'adresse IP d'un unique cluster</li>
<li>Informations sur l'exécution de chaque conteneur, telles que la version de l'image du conteneur ou les ports spécifiques à utiliser</li>
</ul>
<p>Un pod modélise un "hôte logique" spécifique à l'application et peut contenir différents conteneurs d'applications qui sont relativement étroitement couplés. Par exemple, un pod peut inclure à la fois le conteneur avec votre application Node.js ainsi qu'un conteneur différent qui alimente les données à être publiées par le serveur Web Node.js. Les conteneurs d'un pod partagent une adresse IP et un espace de port, sont toujours co-localisés et co-planifiés, et exécutés dans un contexte partagé sur le même nœud.</p>
<p>Les pods sont l'unité atomique de la plate-forme Kubernetes. Lorsque nous créons un déploiement sur Kubernetes, ce déploiement crée des pods avec des conteneurs à l'intérieur (par opposition à la création directe de conteneurs). Chaque pod est lié au nœud où il est planifié et y reste jusqu'à la résiliation (selon la politique de redémarrage) ou la suppression. En cas de défaillance d'un nœud, des pods identiques sont programmés sur d'autres nœuds disponibles dans le cluster.</p>
</div>
<div class="col-md-4">
<div class="content__box content__box_lined">
<h3>Sommaire:</h3>
<ul>
<li>Pods</li>
<li>Nœuds</li>
<li>Commandes principales de Kubectl</li>
</ul>
</div>
<div class="content__box content__box_fill">
<p><i>
Un pod est un groupe d'un ou plusieurs conteneurs applicatifs (tels que Docker) et comprend un stockage partagé (volumes), une adresse IP et des informations sur la façon de les exécuter.
</i></p>
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-md-8">
<h2 style="color: #3771e3;">Aperçu des Pods</h2>
</div>
</div>
<div class="row">
<div class="col-md-8">
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_03_pods.svg"></p>
</div>
</div>
<br>
<div class="row">
<div class="col-md-8">
<h2>Nœuds</h2>
<p>Un Pod s'exécute toujours sur un <b>Nœud</b>. Un nœud est une machine de travail dans Kubernetes et peut être une machine virtuelle ou physique, selon le cluster. Chaque nœud est géré par le planificateur. Un nœud peut avoir plusieurs pods, et le planificateur Kubernetes gère automatiquement la planification des pods sur les nœuds du cluster. La planification automatique du planificateur tient compte des ressources disponibles sur chaque nœud.</p>
<p>Chaque nœud Kubernetes exécute au moins:</p>
<ul>
<li>Kubelet, un processus responsable de la communication entre le planificateur Kubernetes et le nœud ; il gère les Pods et les conteneurs s'exécutant sur une machine.</li>
<li>Un environnement d'exécution de conteneur (comme Docker) chargé d'extraire l'image du conteneur d'un registre, de décompresser le conteneur et d'exécuter l'application.</li>
</ul>
</div>
<div class="col-md-4">
<div class="content__box content__box_fill">
<p><i> Les conteneurs ne doivent être planifiés ensemble dans un seul pod que s'ils sont étroitement couplés et doivent partager des ressources telles que le disque. </i></p>
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-md-8">
<h2 style="color: #3771e3;">Aperçu des Nœuds</h2>
</div>
</div>
<div class="row">
<div class="col-md-8">
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_03_nodes.svg"></p>
</div>
</div>
<br>
<div class="row">
<div class="col-md-8">
<h2>Dépannage avec kubectl</h2>
<p>Dans le module <a href="/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/">2</a>, vous avez utilisé l'interface de ligne de commande Kubectl. Vous continuerez à l'utiliser dans le module 3 pour obtenir des informations sur les applications déployées et leurs environnements. Les opérations les plus courantes peuvent être effectuées avec les commandes kubectl suivantes:</p>
<ul>
<li><b>kubectl get</b> - liste les ressources</li>
<li><b>kubectl describe</b> - affiche des informations détaillées sur une ressource</li>
<li><b>kubectl logs</b> - imprime les journaux d'un conteneur dans un pod</li>
<li><b>kubectl exec</b> - exécute une commande sur un conteneur dans un pod</li>
</ul>
<p>Vous pouvez utiliser ces commandes pour voir quand les applications ont été déployées, quels sont leurs statuts actuels, où elles s'exécutent et quelles sont leurs configurations.</p>
<p>Maintenant que nous en savons plus sur nos composants de cluster et la ligne de commande, explorons notre application.</p>
</div>
<div class="col-md-4">
<div class="content__box content__box_fill">
<p><i> Un nœud est une machine de travail dans Kubernetes et peut être une machine virtuelle ou une machine physique, selon le cluster. Plusieurs pods peuvent s'exécuter sur un nœud. </i></p>
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/explore/explore-interactive/" role="button">Démarrer le didacticiel interactif <span class="btn__next"></span></a>
</div>
</div>
</main>
</div>
</body>
</html>

View File

@ -0,0 +1,7 @@
---
title: प्रलेखन शैली अवलोकन
main_menu: true
weight: 80
---
इस खंड के विषय लेखन शैली, सामग्री स्वरूपण, और संगठन, और कुबेरनेट्स प्रलेखन के लिए विशिष्ट Hugo अनुकूलन का उपयोग करने पर मार्गदर्शन प्रदान करते हैं।

View File

@ -0,0 +1,16 @@
---
title: ऐड-ऑन
id: addons
date: 2019-12-15
full_link: /docs/concepts/cluster-administration/addons/
short_description: >
संसाधन जो कुबेरनेट्स की कार्यक्षमता का विस्तार करते हैं।
aka:
tags:
- tool
---
संसाधन जो कुबेरनेट्स की कार्यक्षमता का विस्तार करते हैं।
<!--more-->
[ऐड-ऑन इंस्टॉल करना](/docs/concepts/cluster-administration/addons/) अपने क्लस्टर के साथ ऐड-ऑन का उपयोग करने के बारे में अधिक जानकारी देता है, और कुछ लोकप्रिय ऐड-ऑन को सूचीबद्ध करता है।

View File

@ -0,0 +1,22 @@
---
title: आत्मीयता
id: affinity
date: 2019-01-11
full_link: /docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
short_description: >
पॉड्स को कहां रखा जाए, यह निर्धारित करने के लिए शेड्यूलर द्वारा उपयोग किए जाने वाले नियम
aka:
tags:
- fundamental
---
कुबेरनेट्स में, _आत्मीयता_ नियमों का एक समूह है जो शेड्यूलर को संकेत देता है कि पॉड्स को कहाँ रखा जाए।
<!--more-->
आत्मीयता दो प्रकार की होती है:
* [नोड आत्मीयता](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity)
* [पॉड-टू-पॉड आत्मीयता](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)
नियमों को कुबेरनेट्स {{< glossary_tooltip term_id="label" text="लेबल">}} और {{< glossary_tooltip term_id="selector" text="सेलेक्टर">}}
का उपयोग करके परिभाषित किया गया है, जो {{< glossary_tooltip term_id="pod" text="पॉड्स" >}} में निर्दिष्ट हैं ,
और उनका उपयोग इस बात पर निर्भर करता है कि आप शेड्यूलर को कितनी सख्ती से लागू करना चाहते हैं।

View File

@ -0,0 +1,19 @@
---
title: API समूह
id: api-group
date: 2019-09-02
full_link: /docs/concepts/overview/kubernetes-api/#api-groups-and-versioning
short_description: >
कुबेरनेट्स API में संबंधित पथों का एक समूह।
aka:
tags:
- fundamental
- architecture
---
कुबेरनेट्स API में संबंधित पथों का एक समूह।
<!--more-->
आप अपने API सर्वर के कॉन्फ़िगरेशन को बदलकर प्रत्येक API समूह को सक्षम या अक्षम कर सकते हैं। आप विशिष्ट संसाधनों के लिए पथ अक्षम या सक्षम भी कर सकते हैं। API समूह कुबेरनेट्स API का विस्तार करना आसान बनाता है। API समूह एक REST पथ में और एक क्रमबद्ध वस्तु के `apiVersion` फ़ील्ड में निर्दिष्ट है।
* अधिक जानकारी के लिए [API समूह](/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning) पढ़ें।

View File

@ -0,0 +1,18 @@
---
title: क्यूबएडीएम (Kubeadm)
id: kubeadm
date: 2018-04-12
full_link: /docs/admin/kubeadm/
short_description: >
कुबेरनेट्स को जल्दी से इंस्टॉल करने और एक सुरक्षित क्लस्टर स्थापित करने के लिए एक उपकरण।
aka:
tags:
- tool
- operation
---
कुबेरनेट्स को जल्दी से इंस्टॉल करने और एक सुरक्षित क्लस्टर स्थापित करने के लिए एक उपकरण।
<!--more-->
आप कंट्रोल प्लेन और {{< glossary_tooltip text="वर्कर नोड्स" term_id="node" >}} दोनों घटकों को स्थापित करने के लिए क्यूबएडीएम का उपयोग कर सकते हैं।

View File

@ -0,0 +1,22 @@
---
title: लिमिटरेंज (LimitRange)
id: limitrange
date: 2019-04-15
full_link: /docs/concepts/policy/limit-range/
short_description: >
नेमस्पेस में प्रति कंटेनर या पॉड में संसाधन खपत को सीमित करने के लिए प्रतिबंध प्रदान करता है।
aka:
tags:
- core-object
- fundamental
- architecture
related:
- pod
- container
---
नेमस्पेस में प्रति {{< glossary_tooltip text="कंटेनर" term_id="container" >}} या {{< glossary_tooltip text="पॉड" term_id="pod" >}} में संसाधन खपत को सीमित करने के लिए प्रतिबंध प्रदान करता है।
<!--more-->
लिमिटरेंज, टाइप (type) द्वारा बनाई जा सकने वाले ऑब्जेक्ट्स और साथ ही नेमस्पेस में अलग-अलग {{< glossary_tooltip text="कंटेनर" term_id="container" >}} या {{< glossary_tooltip text="पॉड" term_id="pod" >}} द्वारा अनुरोध/उपभोग किए जा सकने वाले कंप्यूट संसाधनों की मात्रा को सीमित करता है।

View File

@ -0,0 +1,19 @@
---
title: पॉड जीवनचक्र (Pod Lifecycle)
id: pod-lifecycle
date: 2019-02-17
full-link: /docs/concepts/workloads/pods/pod-lifecycle/
related:
- pod
- container
tags:
- fundamental
short_description: >
अवस्थाओं का क्रम जिसके माध्यम से एक पॉड अपने जीवनकाल में गुजरता है।
---
अवस्थाओं का क्रम जिसके माध्यम से एक पॉड अपने जीवनकाल में गुजरता है।
<!--more-->
[पॉड जीवनचक्र](/docs/concepts/workloads/pods/pod-lifecycle/) को पॉड की अवस्थाओं या चरणों द्वारा परिभाषित किया जाता है। पाँच संभावित पॉड चरण हैं: Pending, Running, Succeeded, Failed और Unknown। पॉड स्थिति का एक उच्च-स्तरीय विवरण [पॉडस्टैटस](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podstatus-v1-core) `phase` फ़ील्ड में सारांशित किया गया है।

View File

@ -0,0 +1,22 @@
---
title: स्टेटफुलसेट (StatefulSet)
id: statefulset
date: 2018-04-12
full_link: /docs/concepts/workloads/controllers/statefulset/
short_description: >
प्रत्येक पॉड के लिए स्थायी स्टोरेज और दृढ़ पहचानकर्ता के साथ, पॉड्स के एक सेट की डिप्लॉयमेंट और स्केलिंग का प्रबंधन करता है।
aka:
tags:
- fundamental
- core-object
- workload
- storage
---
{{<glossary_tooltip text="पॉड्स" term_id="pod" >}} के एक सेट की डिप्लॉयमेंट और स्केलिंग का प्रबंधन करता है, और इन पॉड्स के *क्रम और विशिष्टता के बारे में गारंटी प्रदान करता है*
<!--more-->
एक {{<glossary_tooltip text="डिप्लॉयमेंट" term_id="deployment">}} की तरह, एक स्टेटफुलसेट एक सदृश कंटेनर विनिर्देश पर आधारित पॉड्स का प्रबंधन करता है। डिप्लॉयमेंट के विपरीत, स्टेटफुलसेट अपने प्रत्येक पॉड के लिए एक चिपचिपा पहचान बनाए रखता है। ये पॉड एक ही विनिर्देश से बनाए गए हैं, लेकिन विनिमय करने योग्य नहीं हैं; प्रत्येक का एक स्थायी पहचानकर्ता होता है जिसे वह किसी भी पुनर्निर्धारण के दौरान बनाए रखता है।
यदि आप अपने वर्कलोड को दृढ़ता प्रदान करने के लिए स्टोरेज वॉल्यूम का उपयोग करना चाहते हैं, तो आप समाधान के हिस्से के रूप में स्टेटफुलसेट का उपयोग कर सकते हैं। हालांकि स्टेटफुलसेट में अलग-अलग पॉड विफलता के लिए अतिसंवेदनशील होते हैं, दृढ़ पॉड पहचानकर्ता मौजूदा वॉल्यूम को नए पॉड्स से मिलाना आसान बनाते हैं जो असफल होने वाले किसी भी पॉड को प्रतिस्थापित करता है।

View File

@ -18,7 +18,7 @@ weight: 10
<div class="row">
<div class="col-md-8">
<h3>Objectives</h3>
<h3>उद्देश्य</h3>
<ul>
<li>जानें कुबेरनेट्स क्लस्टर क्या है।</li>
<li>जानें मिनिक्यूब क्या है।</li>

View File

@ -7,85 +7,48 @@ cid: partners
---
<section id="users">
<main class="main-section">
<h5>Kubernetes collabora con i partner per creare un codebase che supporti uno spettro di piattaforme complementari.</h5>
<div class="col-container">
<div class="col-nav">
<center>
<h5>
<b>Fornitori Certificati di Servizi su Kubernetes</b>
</h5>
<br>Fornitori di servizi riconosciuti e con grande esperienza nell'aiutare le imprese ad adottare con successo Kubernetes.
<br><br><br>
<button id="kcsp" class="button" onClick="updateSrc(this.id)">Guarda i Partners KCSP</button>
<br><br>Interessato a diventare un partner <a href="https://www.cncf.io/certification/kcsp/">KCSP</a>?
</center>
</div>
<div class="col-nav">
<center>
<h5>
<b>Distribuzioni di Kubernetes Certificate, Certified Hosted Platforms and Software di installazione Certificati</b>
</h5>La conformità del software assicura che le versioni di Kubernetes prodotte da ogni fornitore supportino coerentemente le API necessarie.
<br><br><br>
<button id="conformance" class="button" onClick="updateSrc(this.id)">Guarda i Partner certificati</button>
<br><br>Interessato a diventare un partner <a href="https://www.cncf.io/certification/software-conformance/">certificato Kubernetes</a>?
</center>
</div>
<div class="col-nav">
<center>
<h5><b>Partner per la Formazione su Kubernetes</b></h5>
<br>Professionisti riconosciuti e certificati, con solida esperienza nella formazione su tecnologie Cloud Native.
<br><br><br><br>
<button id="ktp" class="button" onClick="updateSrc(this.id)">Guarda i KTP partner</button>
<br><br>Interessato a diventare un partner <a href="https://www.cncf.io/certification/training/">KTP</a>?
</center>
</div>
</div>
<script src="https://code.jquery.com/jquery-3.3.1.min.js" integrity="sha256-FgpCb/KJQlLNfOu91ta32o/NMZxltwRo8QtmkMRdAu8=" crossorigin="anonymous"></script>
<script type="text/javascript">
var defaultLink = "https://landscape.cncf.io/category=kubernetes-certified-service-provider&format=card-mode&grouping=category&embed=yes";
var firstLink = "https://landscape.cncf.io/category=certified-kubernetes-distribution,certified-kubernetes-hosted,certified-kubernetes-installer&format=card-mode&grouping=category&embed=yes";
var secondLink = "https://landscape.cncf.io/category=kubernetes-training-partner&format=card-mode&grouping=category&embed=yes";
function updateSrc(buttonId) {
if (buttonId == "kcsp") {
$("#landscape").attr("src",defaultLink);
window.location.hash = "#kcsp";
}
if (buttonId == "conformance") {
$("#landscape").attr("src",firstLink);
window.location.hash = "#conformance";
}
if (buttonId == "ktp") {
$("#landscape").attr("src",secondLink);
window.location.hash = "#ktp";
}
}
// Automatically load the correct iframe based on the URL fragment
document.addEventListener('DOMContentLoaded', function() {
var showContent = "kcsp";
if (window.location.hash) {
console.log('hash is:', window.location.hash.substring(1));
showContent = window.location.hash.substring(1);
}
updateSrc(showContent);
});
</script>
<body>
<div id="frameHolder">
<iframe id="landscape" title="Panorama CNCF" frameBorder="0" scrolling="no" style="width: 1px; min-width: 100%" src=""></iframe>
<script src="https://landscape.cncf.io/iframeResizer.js"></script>
<h5>Kubernetes collabora con i partner per creare un codebase che supporti uno spettro di piattaforme complementari.</h5>
<div class="col-container">
<div class="col-nav">
<center>
<h5>
<b>Fornitori Certificati di Servizi su Kubernetes</b>
</h5>
<br>Fornitori di servizi riconosciuti e con grande esperienza nell'aiutare le imprese ad adottare con successo Kubernetes.
<br><br><br>
<button class="button landscape-trigger landscape-default" data-landscape-types="kubernetes-certified-service-provider" id="kcsp">Guarda i Partners KCSP</button>
<br><br>Interessato a diventare un partner
<a href="https://www.cncf.io/certification/kcsp/">KCSP</a>?
</center>
</div>
<div class="col-nav">
<center>
<h5>
<b>Distribuzioni di Kubernetes Certificate, Certified Hosted Platforms and Software di installazione Certificati</b>
</h5>La conformità del software assicura che le versioni di Kubernetes prodotte da ogni fornitore supportino coerentemente le API necessarie.
<br><br><br>
<button class="button landscape-trigger" data-landscape-types="certified-kubernetes-distribution,certified-kubernetes-hosted,certified-kubernetes-installer" id="conformance">Guarda i Partner certificati</button>
<br><br>Interessato a diventare un partner
<a href="https://www.cncf.io/certification/software-conformance/">certificato Kubernetes</a>?
</center>
</div>
<div class="col-nav">
<center>
<h5>
<b>Partner per la Formazione su Kubernetes</b>
</h5>
<br>Professionisti riconosciuti e certificati, con solida esperienza nella formazione su tecnologie Cloud Native.
<br><br><br>
<button class="button landscape-trigger" data-landscape-types="kubernetes-training-partner" id="ktp">Guarda i KTP partner</button>
<br><br>Interessato a diventare un partner
<a href="https://www.cncf.io/certification/training/">KTP</a>?
</center>
</div>
</div>
</body>
</main>
{{< cncf-landscape helpers=true >}}
</section>
<style>
{{< include "partner-style.css" >}}
</style>
<script>
{{< include "partner-script.js" >}}
</script>

View File

@ -596,7 +596,7 @@ Replication Controllerは、終了することが想定されていないPod(Web
* Jobのさまざまな実行方法について学ぶ:
* [ワークキューを用いた粒度の粗い並列処理](/docs/tasks/job/coarse-parallel-processing-work-queue/)
* [ワークキューを用いた粒度の細かい並列処理](/docs/tasks/job/fine-parallel-processing-work-queue/)
* [静的な処理の割り当てを使用した並列処理のためのインデックス付きJob](/ja/docs/tasks/job/indexed-parallel-processing-static/) を使う(beta段階)
* [静的な処理の割り当てを使用した並列処理のためのインデックス付きJob](/ja/docs/tasks/job/indexed-parallel-processing-static/) を使う
* テンプレートを元に複数のJobを作成: [拡張機能を用いた並列処理](/docs/tasks/job/parallel-processing-expansion/)
* [終了したJobの自動クリーンアップ](#clean-up-finished-jobs-automatically)のリンクから、クラスターが完了または失敗したJobをどのようにクリーンアップするかをご確認ください。
* `Job`はKubernetes REST APIの一部です。JobのAPIを理解するために、{{< api-reference page="workload-resources/job-v1" >}}オブジェクトの定義をお読みください。

View File

@ -42,7 +42,7 @@ Kubernetesコミュニティで効果的に働くためには、[git](https://gi
3. [プルリクエストのオープン](/docs/contribute/new-content/open-a-pr/)と[変更レビュー](/ja/docs/contribute/review/reviewing-prs/)の基本的なプロセスを理解していることを確認してください。
一部のタスクでは、Kubernetes organizationで、より多くの信頼とアクセス権限が必要です。
役割と権限についての詳細は、[SIG Docsへの参加](/docs/contribute/participating/)を参照してください。
役割と権限についての詳細は、[SIG Docsへの参加](/ja/docs/contribute/participate/)を参照してください。
## はじめての貢献
- 貢献のための複数の方法について学ぶために[貢献の概要](/ja/docs/contribute/new-content/overview/)を読んでください。
@ -56,12 +56,12 @@ Kubernetesコミュニティで効果的に働くためには、[git](https://gi
- リポジトリの[ローカルクローンでの作業](/docs/contribute/new-content/open-a-pr/#fork-the-repo)について学んでください。
- [リリース機能](/docs/contribute/new-content/new-features/)について記載してください。
- [SIG Docs](/docs/contribute/participate/)に参加し、[memberやreviewer](/docs/contribute/participate/roles-and-responsibilities/)になってください。
- [SIG Docs](/ja/docs/contribute/participate/)に参加し、[memberやreviewer](/docs/contribute/participate/roles-and-responsibilities/)になってください。
- [国際化](/ja/docs/contribute/localization/)を始めたり、支援したりしてください。
## SIG Docsに参加する
[SIG Docs](/docs/contribute/participate/)はKubernetesのドキュメントとウェブサイトを公開・管理するコントリビューターのグループです。SIG Docsに参加することはKubernetesコントリビューター機能開発でもそれ以外でもにとってKubernetesプロジェクトに大きな影響を与える素晴らしい方法の一つです。
[SIG Docs](/ja/docs/contribute/participate/)はKubernetesのドキュメントとウェブサイトを公開・管理するコントリビューターのグループです。SIG Docsに参加することはKubernetesコントリビューター機能開発でもそれ以外でもにとってKubernetesプロジェクトに大きな影響を与える素晴らしい方法の一つです。
SIG Docsは複数の方法でコミュニケーションをとっています。

View File

@ -22,7 +22,7 @@ spec:
- |
set -ex
# Generate mysql server-id from pod ordinal index.
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
[[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
echo [mysqld] > /mnt/conf.d/server-id.cnf
# Add an offset to avoid reserved server-id=0 value.

View File

@ -231,7 +231,7 @@ Para solicitar um volume maior para uma PVC, edite a PVC e especifique um tamanh
#### Expansão de volume CSI
{{< feature-state for_k8s_version="v1.16" state="beta" >}}
{{< feature-state for_k8s_version="v1.24" state="stable" >}}
O suporte à expansão de volumes CSI é habilitado por padrão, porém é necessário um driver CSI específico para suportar a expansão do volume. Verifique a documentação do driver CSI específico para mais informações.
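As a sketch of what such an edit looks like (the PVC name and target size below are placeholders), you can patch the claim's requested storage and let the CSI driver expand the volume:
```shell
kubectl patch pvc my-pvc -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'
```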

View File

@ -0,0 +1,250 @@
---
title: Visão Geral de Autorização
content_type: concept
weight: 60
---
<!-- overview -->
Aprenda mais sobre autorização no Kubernetes, incluindo detalhes sobre
criação de políticas utilizando módulos de autorização suportados.
<!-- body -->
No Kubernetes, você deve estar autenticado (conectado) antes que sua requisição possa ser
autorizada (permissão concedida para acesso). Para obter informações sobre autenticação,
visite [Controlando Acesso à API do Kubernetes](/pt-br/docs/concepts/security/controlling-access/).
O Kubernetes espera atributos que são comuns a requisições de APIs REST. Isto significa
que autorização no Kubernetes funciona com sistemas de controle de acesso a nível de organizações
ou de provedores de nuvem que possam lidar com outras APIs além das APIs do Kubernetes.
## Determinar se uma requisição é permitida ou negada
O Kubernetes autoriza requisições de API utilizando o servidor de API. Ele avalia
todos os atributos de uma requisição em relação a todas as políticas disponíveis e permite ou nega a requisição.
Todas as partes de uma requisição de API devem ser permitidas por alguma política para que possa prosseguir.
Isto significa que permissões são negadas por padrão.
(Embora o Kubernetes use o servidor de API, controles de acesso e políticas que
dependem de campos específicos de tipos específicos de objetos são tratados pelos controladores de admissão.)
Quando múltiplos módulos de autorização são configurados, cada um será verificado em sequência.
Se qualquer dos autorizadores aprovarem ou negarem uma requisição, a decisão é imediatamente
retornada e nenhum outro autorizador é consultado. Se nenhum módulo de autorização tiver
nenhuma opinião sobre requisição, então a requisição é negada. Uma negação retorna um
código de status HTTP 403.
## Revisão de atributos de sua requisição
O Kubernetes revisa somente os seguintes atributos de uma requisição de API:
* **user** - O string de `user` fornecido durante a autenticação.
* **group** - A lista de nomes de grupos aos quais o usuário autenticado pertence.
* **extra** - Um mapa de chaves de string arbitrárias para valores de string, fornecido pela camada de autenticação.
* **API** - Indica se a solicitação é para um recurso de API.
* **Caminho da requisição** - Caminho para diversos endpoints que não manipulam recursos, como `/api` ou `/healthz`.
* **Verbo de requisição de API** - Verbos da API como `get`, `list`, `create`, `update`, `patch`, `watch`, `delete` e `deletecollection` que são utilizados para solicitações de recursos. Para determinar o verbo de requisição para um endpoint de recurso de API , consulte [Determine o verbo da requisição](/pt-br/docs/reference/access-authn-authz/authorization/#determine-the-request-verb).
* **Verbo de requisição HTTP** - Métodos HTTP em letras minúsculas como `get`, `post`, `put` e `delete` que são utilizados para requisições que não são de recursos.
* **Recurso** - O identificador ou nome do recurso que está sendo acessado (somente para requisições de recursos) - para requisições de recursos usando os verbos `get`, `update`, `patch` e `delete`, deve-se fornecer o nome do recurso.
* **Subrecurso** - O sub-recurso que está sendo acessado (somente para solicitações de recursos).
* **Namespace** - O namespace do objeto que está sendo acessado (somente para solicitações de recursos com namespace).
* **Grupo de API** - O {{< glossary_tooltip text="API Group" term_id="api-group" >}} sendo acessado (somente para requisições de recursos). Uma string vazia designa o [Grupo de API](/docs/reference/using-api/#api-groups) _core_.
## Determine o verbo da requisição {#determine-the-request-verb}
**Requisições de não-recursos**
Requisições sem recursos de `/api/v1/...` ou `/apis/<group>/<version>/...`
são considerados "requisições sem recursos" e usam o método HTTP em letras minúsculas da solicitação como o verbo.
Por exemplo, uma solicitação `GET` para endpoints como `/api` ou `/healthz` usaria `get` como o verbo.
**Requisições de recursos**
Para determinar o verbo de requisição para um endpoint de API de recurso, revise o verbo HTTP
utilizado e se a requisição atua ou não em um recurso individual ou em uma
coleção de recursos:
Verbo HTTP | Verbo de Requisição
---------- |---------------
POST | create
GET, HEAD | get (para recursos individuais), list (para coleções, incluindo o conteúdo do objeto inteiro), watch (para observar um recurso individual ou coleção de recursos)
PUT | update
PATCH | patch
DELETE | delete (para recursos individuais), deletecollection (para coleções)
{{< caution >}}
Os verbos `get`, `list` e `watch` podem retornar todos os detalhes de um recurso. Eles são equivalentes em relação aos dados retornados. Por exemplo, `list` em `secrets` revelará os atributos de `data` de qualquer recurso retornado.
{{< /caution >}}
Às vezes, o Kubernetes verifica a autorização para permissões adicionais utilizando verbos especializados. Por exemplo:
* [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/)
* Verbo `use` em recursos `podsecuritypolicies` no grupo `policy` de API.
* [RBAC](/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping)
* Verbos `bind` e `escalate` em `roles` e recursos `clusterroles` no grupo `rbac.authorization.k8s.io` de API.
* [Authentication](/pt-br/docs/reference/access-authn-authz/authentication/)
* Verbo `impersonate` em `users`, `groups`, e `serviceaccounts` no grupo de API `core`, e o `userextras` no grupo `authentication.k8s.io` de API.
## Modos de Autorização {#authorization-modules}
O servidor da API Kubernetes pode autorizar uma solicitação usando um dos vários modos de autorização:
* **Node** - Um modo de autorização de finalidade especial que concede permissões a ```kubelets``` com base nos ```Pods``` que estão programados para execução. Para saber mais sobre como utilizar o modo de autorização do nó, consulte [Node Authorization](/docs/reference/access-authn-authz/node/).
* **ABAC** - Attribute-based access control (ABAC), ou Controle de acesso baseado em atributos, define um paradigma de controle de acesso pelo qual os direitos de acesso são concedidos aos usuários por meio do uso de políticas que combinam atributos. As políticas podem usar qualquer tipo de atributo (atributos de usuário, atributos de recurso, objeto, atributos de ambiente, etc.). Para saber mais sobre como usar o modo ABAC, consulte [ABAC Mode](/docs/reference/access-authn-authz/abac/).
* **RBAC** - Role-based access control (RBAC), ou controle de acesso baseado em função, é um método de regular o acesso a recursos computacionais ou de rede com base nas funções de usuários individuais dentro de uma empresa. Nesse contexto, acesso é a capacidade de um usuário individual realizar uma tarefa específica, como visualizar, criar ou modificar um arquivo. Para saber mais sobre como usar o modo RBAC, consulte [RBAC Mode](/docs/reference/access-authn-authz/rbac/)
* Quando especificado RBAC (Role-Based Access Control) usa o grupo de API `rbac.authorization.k8s.io` para orientar as decisões de autorização, permitindo que os administradores configurem dinamicamente as políticas de permissão por meio da API do Kubernetes.
* Para habilitar o modo RBAC, inicie o servidor de API (apiserver) com a opção `--authorization-mode=RBAC`.
* **Webhook** - Um WebHook é um retorno de chamada HTTP: um HTTP POST que ocorre quando algo acontece; uma simples notificação de evento via HTTP POST. Um aplicativo da Web que implementa WebHooks postará uma mensagem em um URL quando um determinado evento ocorrer. Para saber mais sobre como usar o modo Webhook, consulte [Webhook Mode](/docs/reference/access-authn-authz/webhook/).
#### Verificando acesso a API
`kubectl` fornece o subcomando `auth can-i` para consultar rapidamente a camada de autorização da API.
O comando usa a API `SelfSubjectAccessReview` para determinar se o usuário atual pode executar
uma determinada ação e funciona independentemente do modo de autorização utilizado.
```bash
# "can-i create" = "posso criar"
kubectl auth can-i create deployments --namespace dev
```
A saída é semelhante a esta:
```
yes
```
```shell
# "can-i create" = "posso criar"
kubectl auth can-i create deployments --namespace prod
```
A saída é semelhante a esta:
```
no
```
Os administradores podem combinar isso com [personificação de usuário](/pt-br/docs/reference/access-authn-authz/authentication/#personificação-de-usuário)
para determinar qual ação outros usuários podem executar.
```bash
# "can-i list" = "posso listar"
kubectl auth can-i list secrets --namespace dev --as dave
```
A saída é semelhante a esta:
```
no
```
Da mesma forma, para verificar se uma ServiceAccount chamada `dev-sa` no Namespace `dev`
pode listar ```Pods``` no namespace `target`:
```bash
# "can-i list" = "posso listar"
kubectl auth can-i list pods \
--namespace target \
--as system:serviceaccount:dev:dev-sa
```
A saída é semelhante a esta:
```
yes
```
`SelfSubjectAccessReview` faz parte do grupo de API `authorization.k8s.io`, que
expõe a autorização do servidor de API para serviços externos. Outros recursos
neste grupo inclui:
* `SubjectAccessReview` - Revisão de acesso para qualquer usuário, não apenas o atual. Útil para delegar decisões de autorização para o servidor de API. Por exemplo, o ```kubelet``` e extensões de servidores de API utilizam disso para determinar o acesso do usuário às suas próprias APIs.
* `LocalSubjectAccessReview` - Similar a `SubjectAccessReview`, mas restrito a um namespace específico.
* `SelfSubjectRulesReview` - Uma revisão que retorna o conjunto de ações que um usuário pode executar em um namespace. Útil para usuários resumirem rapidamente seu próprio acesso ou para interfaces de usuário mostrarem ações.
Essas APIs podem ser consultadas criando recursos normais do Kubernetes, onde a resposta no campo `status`
do objeto retornado é o resultado da consulta.
```bash
kubectl create -f - -o yaml << EOF
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
spec:
resourceAttributes:
group: apps
resource: deployments
verb: create
namespace: dev
EOF
```
A `SelfSubjectAccessReview` gerada seria:
```yaml
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
metadata:
creationTimestamp: null
spec:
resourceAttributes:
group: apps
resource: deployments
namespace: dev
verb: create
status:
allowed: true
denied: false
```
## Usando flags para seu módulo de autorização
Você deve incluir uma flag em sua política para indicar qual módulo de autorização
suas políticas incluem:
As seguintes flags podem ser utilizadas:
* `--authorization-mode=ABAC` O modo de controle de acesso baseado em atributos (ABAC) permite configurar políticas usando arquivos locais.
* `--authorization-mode=RBAC` O modo de controle de acesso baseado em função (RBAC) permite que você crie e armazene políticas usando a API do Kubernetes.
* `--authorization-mode=Webhook` WebHook é um modo de retorno de chamada HTTP que permite gerenciar a autorização usando endpoint REST.
* `--authorization-mode=Node` A autorização de nó é um modo de autorização de propósito especial que autoriza especificamente requisições de API feitas por ```kubelets```.
* `--authorization-mode=AlwaysDeny` Esta flag bloqueia todas as requisições. Utilize esta flag somente para testes.
* `--authorization-mode=AlwaysAllow` Esta flag permite todas as requisições. Utilize esta flag somente se não existam requisitos de autorização para as requisições de API.
Você pode escolher mais de um módulo de autorização. Módulos são verificados
em ordem, então, um módulo anterior tem maior prioridade para permitir ou negar uma requisição.
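As a sketch of combining modules (flag values assumed; adjust for your cluster), a common API server configuration enables Node authorization first and then RBAC:
```shell
kube-apiserver --authorization-mode=Node,RBAC
```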
## Escalonamento de privilégios através da criação ou edição da cargas de trabalho {#privilege-escalation-via-pod-creation}
Usuários que podem criar ou editar ```pods``` em um namespace diretamente ou através de um [controlador](/pt-br/docs/concepts/architecture/controller/)
como, por exemplo, um operador, conseguiriam escalar seus próprios privilégios naquele namespace.
{{< caution >}}
Administradores de sistemas, tenham cuidado ao permitir acesso para criar ou editar cargas de trabalho.
Detalhes de como estas permissões podem ser usadas de forma maliciosa podem ser encontradas em [caminhos para escalonamento](#escalation-paths).
{{< /caution >}}
### Caminhos para escalonamento {#escalation-paths}
- Montagem de Secret arbitrários nesse namespace
- Pode ser utilizado para acessar Secret destinados a outras cargas de trabalho
- Pode ser utilizado para obter um token da conta de serviço com maior privilégio
- Uso de contas de serviço arbitrárias nesse namespace
- Pode executar ações da API do Kubernetes como outra carga de trabalho (personificação)
- Pode executar quaisquer ações privilegiadas que a conta de serviço tenha acesso
- Montagem de configmaps destinados a outras cargas de trabalho nesse namespace
- Pode ser utilizado para obter informações destinadas a outras cargas de trabalho, como nomes de host de banco de dados.
- Montagem de volumes destinados a outras cargas de trabalho nesse namespace
- Pode ser utilizado para obter informações destinadas a outras cargas de trabalho e alterá-las.
{{< caution >}}
Administradores de sistemas devem ser cuidadosos ao instalar CRDs que
promovam mudanças nas áreas mencionadas acima. Estes podem abrir caminhos para escalonamento.
Isto deve ser considerado ao decidir os controles de acesso baseado em função (RBAC).
{{< /caution >}}
## {{% heading "whatsnext" %}}
* Para aprender mais sobre autenticação, visite **Authentication** in [Controlando acesso a APIs do Kubernetes](/pt-br/docs/concepts/security/controlling-access/).
* Para aprender mais sobre Admission Control, visite [Utilizando Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/).

View File

@ -0,0 +1,19 @@
---
title: Camada de Agregação
id: aggregation-layer
date: 2018-10-08
full_link: /pt-br/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/
short_description: >
A camada de agregação permite que você instale APIs adicionais no estilo Kubernetes em seu cluster.
aka:
tags:
- architecture
- extension
- operation
---
A camada de agregação permite que você instale APIs adicionais no estilo Kubernetes em seu cluster.
<!--more-->
Depois de configurar o {{< glossary_tooltip text="Servidor da API do Kubernetes" term_id="kube-apiserver" >}} para [suportar APIs adicionais](/docs/tasks/extend-kubernetes/configure-aggregation-layer/), você pode adicionar objetos `APIService` para obter a URL da API adicional.

View File

@ -0,0 +1,18 @@
---
title: Operações do Cluster
id: cluster-operations
date: 2019-05-12
full_link:
short_description: >
O trabalho envolvido no gerenciamento de um cluster Kubernetes.
aka:
tags:
- operation
---
O trabalho envolvido no gerenciamento de um cluster Kubernetes: gerenciamento das operações diárias e coordenação das atualizações.
<!--more-->
Exemplos das tarefas de operações do cluster incluem: implantação de novos nós para dimensionar o cluster; realização de atualizações de software; implementação de controles de segurança; adição ou remoção de armazenamento; configuração da rede do cluster; gerenciamento de observabilidade em todo o cluster; e resposta a eventos.

View File

@ -0,0 +1,18 @@
---
title: Desenvolvedor (desambiguação)
id: developer
date: 2018-04-12
full_link:
short_description: >
Pode se referir a&#58; Desenvolvedor de Aplicativos, Colaborador de Código ou Desenvolvedor de Plataforma.
aka:
tags:
- community
- user-type
---
Pode se referir a&#58; {{< glossary_tooltip text="Desenvolvedor de Aplicativos" term_id="application-developer" >}}, {{< glossary_tooltip text="Colaborador de Código" term_id="code-contributor" >}}, ou {{< glossary_tooltip text="Desenvolvedor de Plataforma" term_id="platform-developer" >}}.
<!--more-->
Esse termo pode ter significados diferentes, dependendo do contexto.

View File

@ -0,0 +1,22 @@
---
title: StatefulSet
id: statefulset
date: 2018-04-12
full_link: /docs/concepts/workloads/controllers/statefulset/
short_description: >
Gerencia deployment e escalonamento de um conjunto de Pods, com armazenamento durável e identificadores persistentes para cada Pod.
aka:
tags:
- fundamental
- core-object
- workload
- storage
---
Gerencia o deployment e escalonamento de um conjunto de {{< glossary_tooltip text="Pods" term_id="pod" >}}, *e fornece garantias sobre a ordem e unicidade* desses Pods.
<!--more-->
Como o {{< glossary_tooltip term_id="deployment" >}}, um StatefulSet gerencia Pods que são baseados em uma especificação de container idêntica. Diferente do Deployment, um StatefulSet mantém uma identidade fixa para cada um de seus Pods. Esses pods são criados da mesma especificação, mas não são intercambiáveis: cada um tem uma identificação persistente que se mantém em qualquer reagendamento.
Se você quiser usar volumes de armazenamento para fornecer persistência para sua carga de trabalho, você pode usar um StatefulSet como parte da sua solução. Embora os Pods individuais em um StatefulSet sejam suscetíveis a falhas, os identificadores de pods persistentes facilitam a correspondência de volumes existentes com os novos pods que substituem qualquer um que tenha falhado.

View File

@ -0,0 +1,22 @@
---
title: Carga de Trabalho
id: workloads
date: 2019-02-13
full_link: /docs/concepts/workloads/
short_description: >
Uma carga de trabalho é uma aplicação sendo executada no Kubernetes.
aka:
tags:
- fundamental
---
Uma carga de trabalho é uma aplicação sendo executada no Kubernetes.
<!--more-->
Vários objetos principais que representam diferentes tipos ou partes de uma carga de trabalho
incluem os objetos DaemonSet, Deployment, Job, ReplicaSet, e StatefulSet.
Por exemplo, uma carga de trabalho que tem um servidor web e um banco de dados pode rodar o
banco de dados em um {{< glossary_tooltip term_id="StatefulSet" >}} e o servidor web
em um {{< glossary_tooltip term_id="Deployment" >}}.

View File

@ -0,0 +1,267 @@
Rode este comando para configurar a camada de gerenciamento do Kubernetes
### Sinopse
Rode este comando para configurar a camada de gerenciamento do Kubernetes
O comando "init" executa as fases abaixo:
```
preflight Efetua as verificações pré-execução
certs Geração de certificados
/ca Gera a autoridade de certificação (CA) auto-assinada do Kubernetes para provisionamento de identidades para outros componentes do Kubernetes
/apiserver Gera o certificado para o servidor da API do Kubernetes
/apiserver-kubelet-client Gera o certificado para o servidor da API se conectar ao Kubelet
/front-proxy-ca Gera a autoridade de certificação (CA) auto-assinada para provisionamento de identidades para o front proxy
/front-proxy-client Gera o certificado para o cliente do front proxy
/etcd-ca Gera a autoridade de certificação (CA) auto-assinada para provisionamento de identidades para o etcd
/etcd-server Gera o certificado para servir o etcd
/etcd-peer Gera o certificado para comunicação entre nós do etcd
/etcd-healthcheck-client Gera o certificado para liveness probes fazerem a verificação de integridade do etcd
/apiserver-etcd-client Gera o certificado que o servidor da API utiliza para comunicar-se com o etcd
/sa Gera uma chave privada para assinatura de tokens de conta de serviço, juntamente com sua chave pública
kubeconfig Gera todos os arquivos kubeconfig necessários para estabelecer a camada de gerenciamento e o arquivo kubeconfig de administração
/admin Gera um arquivo kubeconfig para o administrador e o próprio kubeadm utilizarem
/kubelet Gera um arquivo kubeconfig para o kubelet utilizar *somente* para fins de inicialização do cluster
/controller-manager Gera um arquivo kubeconfig para o gerenciador de controladores utilizar
/scheduler Gera um arquivo kubeconfig para o escalonador do Kubernetes utilizar
kubelet-start Escreve as configurações do kubelet e (re)inicializa o kubelet
control-plane Gera todos os manifestos de Pods estáticos necessários para estabelecer a camada de gerenciamento
/apiserver Gera o manifesto do Pod estático do kube-apiserver
/controller-manager Gera o manifesto do Pod estático do kube-controller-manager
/scheduler Gera o manifesto do Pod estático do kube-scheduler
etcd Gera o manifesto do Pod estático para um etcd local
/local Gera o manifesto do Pod estático para uma instância local e de nó único do etcd
upload-config Sobe a configuração do kubeadm e do kubelet para um ConfigMap
/kubeadm Sobe a configuração ClusterConfiguration do kubeadm para um ConfigMap
/kubelet Sobe a configuração do kubelet para um ConfigMap
upload-certs Sobe os certificados para o kubeadm-certs
mark-control-plane Marca um nó como parte da camada de gerenciamento
bootstrap-token Gera tokens de autoinicialização utilizados para associar um nó a um cluster
kubelet-finalize Atualiza configurações relevantes ao kubelet após a inicialização TLS
/experimental-cert-rotation Habilita rotação de certificados do cliente do kubelet
addon Instala os addons requeridos para passar nos testes de conformidade
/coredns Instala o addon CoreDNS em um cluster Kubernetes
/kube-proxy Instala o addon kube-proxy em um cluster Kubernetes
```
```
kubeadm init [flags]
```
### Opções
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--apiserver-advertise-address string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>O endereço IP que o servidor da API irá divulgar que está escutando. Quando não informado, a interface de rede padrão é utilizada.</p></td>
</tr>
<tr>
<td colspan="2">--apiserver-bind-port int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Padrão: 6443</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Porta para o servidor da API conectar-se.</p></td>
</tr>
<tr>
<td colspan="2">--apiserver-cert-extra-sans strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Nomes alternativos (<i>Subject Alternative Names</i>, ou SANs) opcionais a serem adicionados ao certificado utilizado pelo servidor da API. Pode conter endereços IP ou nomes DNS.</p></td>
</tr>
<tr>
<td colspan="2">--cert-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Padrão: "/etc/kubernetes/pki"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>O caminho para salvar e armazenar certificados.</p></td>
</tr>
<tr>
<td colspan="2">--certificate-key string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Chave utilizada para encriptar os certificados da camada de gerenciamento no Secret kubeadm-certs.</p></td>
</tr>
<tr>
<td colspan="2">--config string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Caminho para um arquivo de configuração do kubeadm.</p></td>
</tr>
<tr>
<td colspan="2">--control-plane-endpoint string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Especifica um endereço IP estável ou nome DNS para a camada de gerenciamento.</p></td>
</tr>
<tr>
<td colspan="2">--cri-socket string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Caminho para o soquete CRI se conectar. Se vazio, o kubeadm tentará autodetectar este valor; utilize esta opção somente se você possui mais que um CRI instalado ou se você possui um soquete CRI fora do padrão.</p></td>
</tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Não aplica as modificações; apenas imprime as alterações que seriam efetuadas.</p></td>
</tr>
<tr>
<td colspan="2">--feature-gates string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Um conjunto de pares chave=valor que descreve <i>feature gates</i> para várias funcionalidades. As opções são:<br/>PublicKeysECDSA=true|false (ALFA - padrão=false)<br/>RootlessControlPlane=true|false (ALFA - padrão=false)<br/>UnversionedKubeletConfigMap=true|false (BETA - padrão=true)</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>ajuda para init</p></td>
</tr>
<tr>
<td colspan="2">--ignore-preflight-errors strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Uma lista de verificações para as quais erros serão exibidos como avisos. Exemplos: 'IsPrivilegedUser,Swap'. O valor 'all' ignora erros de todas as verificações.</p></td>
</tr>
<tr>
<td colspan="2">--image-repository string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Padrão: "k8s.gcr.io"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Seleciona um registro de contêineres de onde baixar imagens.</p></td>
</tr>
<tr>
<td colspan="2">--kubernetes-version string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Padrão: "stable-1"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Seleciona uma versão do Kubernetes específica para a camada de gerenciamento.</p></td>
</tr>
<tr>
<td colspan="2">--node-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Especifica o nome do nó.</p></td>
</tr>
<tr>
<td colspan="2">--patches string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>
Caminho para um diretório contendo arquivos nomeados no padrão &quot;target[suffix][+patchtype].extension&quot;. Por exemplo, &quot;kube-apiserver0+merge.yaml&quot; ou somente &quot;etcd.json&quot;.
&quot;target&quot; pode ser um dos seguintes valores: &quot;kube-apiserver&quot;, &quot;kube-controller-manager&quot;, &quot;kube-scheduler&quot;, &quot;etcd&quot;.
&quot;patchtype&quot; pode ser &quot;strategic&quot;, &quot;merge&quot; ou &quot;json&quot; e corresponde aos formatos de patch suportados pelo kubectl. O valor padrão para &quot;patchtype&quot; é &quot;strategic&quot;.
&quot;extension&quot; deve ser &quot;json&quot; ou &quot;yaml&quot;. &quot;suffix&quot; é uma string opcional utilizada para determinar quais patches são aplicados primeiro em ordem alfanumérica.
</p></td>
</tr>
<tr>
<td colspan="2">--pod-network-cidr string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Especifica um intervalo de endereços IP para a rede do Pod. Quando especificado, a camada de gerenciamento irá automaticamente alocar CIDRs para cada nó.</p></td>
</tr>
<tr>
<td colspan="2">--service-cidr string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Padrão: "10.96.0.0/12"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Utiliza um intervalo alternativo de endereços IP para VIPs de serviço.</p></td>
</tr>
<tr>
<td colspan="2">--service-dns-domain string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Padrão: "cluster.local"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Utiliza um domínio alternativo para os serviços. Por exemplo, &quot;myorg.internal&quot;.</p></td>
</tr>
<tr>
<td colspan="2">--skip-certificate-key-print</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Não exibe a chave utilizada para encriptar os certificados da camada de gerenciamento.</p></td>
</tr>
<tr>
<td colspan="2">--skip-phases strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Lista de fases a serem ignoradas.</p></td>
</tr>
<tr>
<td colspan="2">--skip-token-print</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Pula a impressão do token de autoinicialização padrão gerado pelo comando 'kubeadm init'.</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>O token a ser utilizado para estabelecer confiança bidirecional entre nós de carga de trabalho e nós da camada de gerenciamento. O formato segue a expressão regular [a-z0-9]{6}.[a-z0-9]{16} - por exemplo, abcdef.0123456789abcdef.</p></td>
</tr>
<tr>
<td colspan="2">--token-ttl duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Padrão: 24h0m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A duração de tempo de um token antes deste ser automaticamente apagado (por exemplo, 1s, 2m, 3h). Quando informado '0', o token não expira.</p></td>
</tr>
<tr>
<td colspan="2">--upload-certs</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Sobe os certificados da camada de gerenciamento para o Secret kubeadm-certs.</p></td>
</tr>
</tbody>
</table>
### Opções herdadas de comandos superiores
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--rootfs string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>[EXPERIMENTAL] O caminho para o sistema de arquivos raiz 'real' do host.</p></td>
</tr>
</tbody>
</table>

View File

@ -0,0 +1,431 @@
---
title: kubeadm init
content_type: concept
weight: 20
---
<!-- overview -->
Este comando inicializa um nó da camada de gerenciamento do Kubernetes.
<!-- body -->
{{< include "generated/kubeadm_init.md" >}}
### Fluxo do comando Init {#init-workflow}
O comando `kubeadm init` inicializa um nó da camada de gerenciamento do Kubernetes
através da execução dos passos abaixo:
1. Roda uma série de verificações pré-execução para validar o estado do sistema
antes de efetuar mudanças. Algumas verificações emitem apenas avisos, outras
são consideradas erros e cancelam a execução do kubeadm até que o problema
seja corrigido ou que o usuário especifique a opção
`--ignore-preflight-errors=<lista-de-erros-a-ignorar>`.
1. Gera uma autoridade de certificação (CA) auto-assinada para criar identidades
para cada um dos componentes do cluster. O usuário pode informar seu próprio
certificado CA e/ou chave ao instalar estes arquivos no diretório de
certificados configurado através da opção `--cert-dir` (por padrão, este
diretório é `/etc/kubernetes/pki`).
Os certificados do servidor da API terão entradas adicionais para nomes
alternativos (_subject alternative names_, ou SANs) especificados através da
opção `--apiserver-cert-extra-sans`. Estes argumentos serão modificados para
caracteres minúsculos quando necessário.
1. Escreve arquivos kubeconfig adicionais no diretório `/etc/kubernetes` para o
kubelet, para o gerenciador de controladores e para o escalonador utilizarem
ao conectarem-se ao servidor da API, cada um com sua própria identidade, bem
como um arquivo kubeconfig adicional para administração do cluster chamado
`admin.conf`.
1. Gera manifestos de Pods estáticos para o servidor da API, para o gerenciador
de controladores e para o escalonador. No caso de uma instância externa do
etcd não ter sido providenciada, um manifesto de Pod estático adicional é
gerado para o etcd.
Manifestos de Pods estáticos são escritos no diretório `/etc/kubernetes/manifests`;
o kubelet lê este diretório em busca de manifestos de Pods para criar na
inicialização.
Uma vez que os Pods da camada de gerenciamento estejam criados e rodando,
a sequência de execução do comando `kubeadm init` pode continuar.
1. Aplica _labels_ e _taints_ ao nó da camada de gerenciamento de modo que cargas
de trabalho adicionais não sejam escalonadas para executar neste nó.
1. Gera o token que nós adicionais podem utilizar para associarem-se a uma
camada de gerenciamento no futuro. Opcionalmente, o usuário pode fornecer um
token através da opção `--token`, conforme descrito na documentação do
comando [kubeadm token](/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-token/).
1. Prepara todas as configurações necessárias para permitir que nós se associem
ao cluster utilizando os mecanismos de
[Tokens de Inicialização](/pt-br/docs/reference/access-authn-authz/bootstrap-tokens/)
e [Inicialização TLS](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/):
- Escreve um ConfigMap para disponibilizar toda a informação necessária para
associar-se a um cluster e para configurar regras de controle de acesso
     baseado em funções (RBAC).
- Permite o acesso dos tokens de inicialização à API de assinaturas CSR.
- Configura a auto-aprovação de novas requisições CSR.
Para mais informações, consulte
[kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/).
1. Instala um servidor DNS (CoreDNS) e os componentes adicionais do kube-proxy
através do servidor da API. A partir da versão 1.11 do Kubernetes, CoreDNS é
o servidor DNS padrão. Mesmo que o servidor DNS seja instalado nessa etapa,
o seu Pod não será escalonado até que um CNI seja instalado.
{{< warning >}}
O uso do kube-dns com o kubeadm foi descontinuado na versão v1.18 e removido
na versão v1.21 do Kubernetes.
{{< /warning >}}
### Utilizando fases de inicialização com o kubeadm {#init-phases}
O kubeadm permite que você crie um nó da camada de gerenciamento em fases
utilizando o comando `kubeadm init phase`.
Para visualizar a lista ordenada de fases e subfases, você pode rodar o comando
`kubeadm init --help`. A lista estará localizada no topo da ajuda e cada fase
tem sua descrição listada juntamente com o comando. Perceba que ao rodar o
comando `kubeadm init` todas as fases e subfases são executadas nesta ordem
exata.
Algumas fases possuem flags específicas. Caso você deseje ver uma lista de todas
as opções disponíveis, utilize a flag `--help`. Por exemplo:
```shell
sudo kubeadm init phase control-plane controller-manager --help
```
Você também pode utilizar a flag `--help` para ver uma lista de subfases de uma
fase superior:
```shell
sudo kubeadm init phase control-plane --help
```
`kubeadm init` também expõe uma flag chamada `--skip-phases` que pode ser
utilizada para pular a execução de certas fases. Esta flag aceita uma lista de
nomes de fases. Os nomes de fases aceitos estão descritos na lista ordenada
acima.
Um exemplo:
```shell
sudo kubeadm init phase control-plane all --config=configfile.yaml
sudo kubeadm init phase etcd local --config=configfile.yaml
# agora você pode modificar os manifestos da camada de gerenciamento e do etcd
sudo kubeadm init --skip-phases=control-plane,etcd --config=configfile.yaml
```
O que este exemplo faz é escrever os manifestos da camada de gerenciamento e do
etcd no diretório `/etc/kubernetes/manifests`, baseados na configuração descrita
no arquivo `configfile.yaml`. Isto permite que você modifique os arquivos e
então pule estas fases utilizando a opção `--skip-phases`. Ao chamar o último
comando, você cria um nó da camada de gerenciamento com os manifestos
personalizados.
{{< feature-state for_k8s_version="v1.22" state="beta" >}}
Como alternativa, você pode também utilizar o campo `skipPhases` na configuração
`InitConfiguration`.
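Um esboço mínimo utilizando o campo `skipPhases` (os nomes de fases abaixo são apenas ilustrativos; consulte a saída de `kubeadm init --help` para a lista válida de fases):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
# Pula a instalação do complemento kube-proxy e a aplicação de labels/taints
# no nó da camada de gerenciamento; ajuste conforme a sua necessidade.
skipPhases:
  - addon/kube-proxy
  - mark-control-plane
```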
### Utilizando kubeadm init com um arquivo de configuração {#config-file}
{{< caution >}}
O arquivo de configuração ainda é considerado uma funcionalidade de estado beta
e pode mudar em versões futuras.
{{< /caution >}}
É possível configurar o comando `kubeadm init` com um arquivo de configuração ao
invés de argumentos de linha de comando, e algumas funcionalidades mais avançadas
podem estar disponíveis apenas como opções do arquivo de configuração. Este
arquivo é fornecido utilizando a opção `--config` e deve conter uma estrutura
`ClusterConfiguration` e, opcionalmente, mais estruturas separadas por `---\n`.
Combinar a opção `--config` com outras opções de linha de comando pode não ser
permitido em alguns casos.
A configuração padrão pode ser emitida utilizando o comando
[kubeadm config print](/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-config/).
Se a sua configuração não estiver utilizando a última versão, é **recomendado**
que você migre utilizando o comando
[kubeadm config migrate](/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-config/).
Para mais informações sobre os campos e utilização da configuração, você pode
consultar a
[página de referência da API](/docs/reference/config-api/kubeadm-config.v1beta3/).
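A título de ilustração, um esboço mínimo de arquivo de configuração com mais de uma estrutura separada por `---` (todos os valores abaixo são apenas exemplos):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  name: "meu-no"                 # nome do nó (exemplo)
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "v1.25.0"     # versão desejada (exemplo)
networking:
  podSubnet: "10.244.0.0/16"     # CIDR para a rede de Pods (exemplo)
```

Este arquivo pode então ser fornecido ao kubeadm com `sudo kubeadm init --config config.yaml`.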
### Utilizando kubeadm init com _feature gates_ {#feature-gates}
O kubeadm suporta um conjunto de _feature gates_ que são exclusivos do kubeadm e
podem ser utilizados somente durante a criação de um cluster com `kubeadm init`.
Estas funcionalidades podem controlar o comportamento do cluster. Os
_feature gates_ são removidos assim que uma funcionalidade atinge a disponibilidade
geral (_general availability_, ou GA).
Para informar um _feature gate_, você pode utilizar a opção `--feature-gates`
do comando `kubeadm init`, ou pode adicioná-las no campo `featureGates` quando
um [arquivo de configuração](/docs/reference/config-api/kubeadm-config.v1beta3/#kubeadm-k8s-io-v1beta3-ClusterConfiguration)
é utilizado através da opção `--config`.
A utilização de
[_feature gates_ dos componentes principais do Kubernetes](/docs/reference/command-line-tools-reference/feature-gates)
com o kubeadm não é suportada. Ao invés disso, é possível enviá-los através da
[personalização de componentes com a API do kubeadm](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/).
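Por exemplo, para habilitar um dos _feature gates_ listados abaixo (esboço ilustrativo):

```shell
sudo kubeadm init --feature-gates=PublicKeysECDSA=true
```

ou, de forma equivalente, em um arquivo de configuração:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
featureGates:
  PublicKeysECDSA: true
```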
Lista dos _feature gates_:
{{< table caption="_feature gates_ do kubeadm" >}}
_Feature gate_ | Valor-padrão | Versão Alfa | Versão Beta
:-----------------------------|:-------------|:------------|:-----------
`PublicKeysECDSA` | `false` | 1.19 | -
`RootlessControlPlane` | `false` | 1.22 | -
`UnversionedKubeletConfigMap` | `true` | 1.22 | 1.23
{{< /table >}}
{{< note >}}
Assim que um _feature gate_ atinge a disponibilidade geral, ele é removido desta
lista e o seu valor fica bloqueado em `true` por padrão. Ou seja, a funcionalidade
estará sempre ativa.
{{< /note >}}
Descrição dos _feature gates_:
`PublicKeysECDSA`
: Pode ser utilizado para criar um cluster que utilize certificados ECDSA no
lugar do algoritmo RSA padrão. A renovação dos certificados ECDSA existentes
também é suportada utilizando o comando `kubeadm certs renew`, mas você não pode
alternar entre os algoritmos RSA e ECDSA dinamicamente ou durante atualizações.
`RootlessControlPlane`
: Quando habilitada esta opção, os componentes da camada de gerenciamento cuja
instalação de Pods estáticos é controlada pelo kubeadm, como o `kube-apiserver`,
`kube-controller-manager`, `kube-scheduler` e `etcd`, têm seus contêineres
configurados para rodarem como usuários não-root. Se a opção não for habilitada,
estes componentes são executados como root. Você pode alterar o valor deste
_feature gate_ antes de atualizar seu cluster para uma versão mais recente do
Kubernetes.
`UnversionedKubeletConfigMap`
: Esta opção controla o nome do {{< glossary_tooltip text="ConfigMap" term_id="configmap" >}}
onde o kubeadm armazena os dados de configuração do kubelet. Quando esta opção
não for especificada ou estiver especificada com o valor `true`, o ConfigMap
será nomeado `kubelet-config`. Caso esteja especificada com o valor `false`, o
nome do ConfigMap incluirá as versões maior e menor do Kubernetes instalado
(por exemplo, `kubelet-config-{{< skew currentVersion >}}`). O kubeadm garante
que as regras de RBAC para leitura e escrita deste ConfigMap serão apropriadas
para o valor escolhido. Quando o kubeadm cria este ConfigMap (durante a execução
dos comandos `kubeadm init` ou `kubeadm upgrade apply`), o kubeadm irá respeitar
o valor da opção `UnversionedKubeletConfigMap`. Quando tal ConfigMap for lido
(durante a execução dos comandos `kubeadm join`, `kubeadm reset`,
`kubeadm upgrade...`), o kubeadm tentará utilizar o nome do ConfigMap sem a
versão primeiro. Se esta operação não for bem-sucedida, então o kubeadm irá
utilizar o nome legado (versionado) para este ConfigMap.
{{< note >}}
Informar a opção `UnversionedKubeletConfigMap` com o valor `false` é suportado,
mas está **descontinuado**.
{{< /note >}}
### Adicionando parâmetros do kube-proxy {#kube-proxy}
Para informações sobre como utilizar parâmetros do kube-proxy na configuração
do kubeadm, veja:
- [referência do kube-proxy](/docs/reference/config-api/kube-proxy-config.v1alpha1/)
Para informações sobre como habilitar o modo IPVS com o kubeadm, veja:
- [IPVS](https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/ipvs/README.md)
### Informando opções personalizadas em componentes da camada de gerenciamento {#control-plane-flags}
Para informações sobre como passar as opções aos componentes da camada de
gerenciamento, veja:
- [opções da camada de gerenciamento](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/)
### Executando o kubeadm sem uma conexão à internet {#without-internet-connection}
Para executar o kubeadm sem uma conexão à internet, você precisa baixar as imagens
de contêiner requeridas pela camada de gerenciamento.
Você pode listar e baixar as imagens utilizando o subcomando
`kubeadm config images`:
```shell
kubeadm config images list
kubeadm config images pull
```
Você pode passar a opção `--config` para os comandos acima através de um
[arquivo de configuração do kubeadm](#config-file) para controlar os campos
`kubernetesVersion` e `imageRepository`.
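Por exemplo, assumindo um arquivo `config.yaml` com estes campos definidos:

```shell
kubeadm config images list --config config.yaml
kubeadm config images pull --config config.yaml
```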
Todas as imagens padrão hospedadas em `k8s.gcr.io` que o kubeadm requer suportam
múltiplas arquiteturas.
### Utilizando imagens personalizadas {#custom-images}
Por padrão, o kubeadm baixa imagens hospedadas no repositório de contêineres
`k8s.gcr.io`. Se a versão requisitada do Kubernetes é um rótulo de integração
contínua (por exemplo, `ci/latest`), o repositório de contêineres
`gcr.io/k8s-staging-ci-images` é utilizado.
Você pode sobrescrever este comportamento utilizando o
[kubeadm com um arquivo de configuração](#config-file). Personalizações permitidas
são:
* Fornecer um valor para o campo `kubernetesVersion` que afeta a versão das
imagens.
* Fornecer um repositório de contêineres alternativo através do campo
`imageRepository` para ser utilizado no lugar de `k8s.gcr.io`.
* Fornecer um valor específico para os campos `imageRepository` e `imageTag`,
correspondendo ao repositório de contêineres e tag a ser utilizada, para as imagens
dos componentes etcd ou CoreDNS.
Caminhos de imagens do repositório de contêineres padrão `k8s.gcr.io` podem diferir
dos utilizados em repositórios de contêineres personalizados através do campo
`imageRepository` devido a razões de retrocompatibilidade. Por exemplo, uma
imagem pode ter um subcaminho em `k8s.gcr.io/subcaminho/imagem`, mas quando
utilizado um repositório de contêineres personalizado, o valor padrão será
`meu.repositoriopersonalizado.io/imagem`.
Para garantir que você terá as imagens no seu repositório personalizado em
caminhos que o kubeadm consiga consumir, você deve:
* Baixar as imagens dos caminhos padrão `k8s.gcr.io` utilizando o comando
`kubeadm config images {list|pull}`.
* Subir as imagens para os caminhos listados no resultado do comando
`kubeadm config images list --config=config.yaml`, onde `config.yaml` contém
o valor customizado do campo `imageRepository`, e/ou `imageTag` para os
componentes etcd e CoreDNS.
* Utilizar o mesmo arquivo `config.yaml` quando executar o comando `kubeadm init`.
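
Um exemplo de `config.yaml` com estas personalizações (o repositório e as tags abaixo são apenas ilustrativos):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
# Repositório alternativo utilizado no lugar de k8s.gcr.io
imageRepository: "meu.repositoriopersonalizado.io"
etcd:
  local:
    imageRepository: "meu.repositoriopersonalizado.io"
    imageTag: "3.5.5-0"          # tag ilustrativa
dns:
  imageRepository: "meu.repositoriopersonalizado.io/coredns"
  imageTag: "v1.9.3"             # tag ilustrativa
```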
#### Imagens personalizadas para o _sandbox_ (imagem `pause`) {#custom-pause-image}
Para configurar uma imagem personalizada para o _sandbox_, você precisará
configurar o {{< glossary_tooltip text="agente de execução de contêineres" term_id="container-runtime" >}}
para utilizar a imagem.
Verifique a documentação para o seu agente de execução de contêineres para
mais informações sobre como modificar esta configuração; para alguns agentes de
execução de contêiner você também encontrará informações no tópico
[Agentes de Execução de Contêineres](/docs/setup/production-environment/container-runtimes/).
### Carregando certificados da camada de gerenciamento no cluster
Ao adicionar a opção `--upload-certs` ao comando `kubeadm init` você pode
subir temporariamente certificados da camada de gerenciamento em um Secret no
cluster. Este Secret expira automaticamente após 2 horas. Os certificados são
encriptados utilizando uma chave de 32 bytes que pode ser especificada através
da opção `--certificate-key`. A mesma chave pode ser utilizada para baixar
certificados quando nós adicionais da camada de gerenciamento estão se associando
ao cluster, utilizando as opções `--control-plane` e `--certificate-key` ao rodar
`kubeadm join`.
O seguinte comando de fase pode ser usado para subir os certificados novamente
após a sua expiração:
```shell
kubeadm init phase upload-certs --upload-certs --certificate-key=ALGUM_VALOR --config=ALGUM_ARQUIVO_YAML
```
Se a opção `--certificate-key` não for passada aos comandos `kubeadm init`
e `kubeadm init phase upload-certs`, uma nova chave será gerada automaticamente.
O comando abaixo pode ser utilizado para gerar uma nova chave sob demanda:
```shell
kubeadm certs certificate-key
```
### Gerenciamento de certificados com o kubeadm
Para informações detalhadas sobre gerenciamento de certificados com o kubeadm,
consulte [Gerenciamento de Certificados com o kubeadm](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/).
O documento inclui informações sobre a utilização de autoridades de certificação
(CA) externas, certificados personalizados e renovação de certificados.
### Gerenciando o arquivo _drop-in_ do kubeadm para o kubelet {#kubelet-drop-in}
O pacote `kubeadm` é distribuído com um arquivo de configuração para rodar o
`kubelet` utilizando `systemd`. Note que o kubeadm nunca altera este arquivo.
Este arquivo _drop-in_ é parte do pacote DEB/RPM do kubeadm.
Para mais informações, consulte
[Gerenciando o arquivo drop-in do kubeadm para o systemd](/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#the-kubelet-drop-in-file-for-systemd).
### Usando o kubeadm com agentes de execução CRI
Por padrão, o kubeadm tenta detectar seu agente de execução de contêineres. Para
mais detalhes sobre esta detecção, consulte o
[guia de instalação CRI do kubeadm](/pt-br/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#instalando-agente-de-execucao).
### Configurando o nome do nó
Por padrão, o `kubeadm` gera um nome para o nó baseado no endereço da máquina.
Você pode sobrescrever esta configuração utilizando a opção `--node-name`. Esta
opção passa o valor apropriado para a opção [`--hostname-override`](/docs/reference/command-line-tools-reference/kubelet/#options)
do kubelet.
Note que sobrescrever o hostname de um nó pode
[interferir com provedores de nuvem](https://github.com/kubernetes/website/pull/8873).
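Por exemplo (substitua o valor pelo nome desejado para o nó):

```shell
sudo kubeadm init --node-name=meu-no-de-gerenciamento
```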
### Automatizando o kubeadm
Ao invés de copiar o token que você obteve do comando `kubeadm init` para cada nó,
como descrito no [tutorial básico do kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/),
você pode paralelizar a distribuição do token para facilitar a automação.
Para implementar esta automação, você precisa saber o endereço IP que o nó da
camada de gerenciamento irá ter após a sua inicialização, ou utilizar um nome
DNS ou um endereço de um balanceador de carga.
1. Gere um token. Este token deve ter a forma `<string de 6 caracteres>.<string de 16 caracteres>`.
Mais especificamente, o token precisa ser compatível com a expressão regular:
`[a-z0-9]{6}\.[a-z0-9]{16}`.
O kubeadm pode gerar um token para você:
```shell
kubeadm token generate
```
1. Inicialize o nó da camada de gerenciamento e os nós de carga de trabalho de
forma concorrente com este token. Conforme os nós forem iniciando, eles
deverão encontrar uns aos outros e formar o cluster. O mesmo argumento
`--token` pode ser utilizado em ambos os comandos `kubeadm init` e
`kubeadm join`.
1. O mesmo procedimento pode ser feito para a opção `--certificate-key` quando
nós adicionais da camada de gerenciamento associarem-se ao cluster. A chave
pode ser gerada utilizando:
```shell
kubeadm certs certificate-key
```
Uma vez que o cluster esteja inicializado, você pode buscar as credenciais para
a camada de gerenciamento no caminho `/etc/kubernetes/admin.conf` e utilizá-las
para conectar-se ao cluster.
Note que este tipo de inicialização tem algumas garantias de segurança relaxadas
pois ele não permite que o hash do CA raiz seja validado com a opção
`--discovery-token-ca-cert-hash` (pois este hash não é gerado quando os nós são
provisionados). Para detalhes, veja a documentação do comando
[kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/).
## {{% heading "whatsnext" %}}
* [kubeadm init phase](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/)
para entender mais sobre as fases do comando `kubeadm init`
* [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/) para
inicializar um nó de carga de trabalho do Kubernetes e associá-lo ao cluster
* [kubeadm upgrade](/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/)
para atualizar um cluster do Kubernetes para uma versão mais recente
* [kubeadm reset](/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-reset/)
para reverter quaisquer mudanças feitas neste host pelos comandos
  `kubeadm init` ou `kubeadm join`
@ -0,0 +1,202 @@
---
title: Instalando Kubernetes com kOps
content_type: task
weight: 20
---
<!-- overview -->
Este início rápido mostra como instalar facilmente um cluster Kubernetes na AWS usando uma ferramenta chamada [`kOps`](https://github.com/kubernetes/kops).
`kOps` é um sistema de provisionamento automatizado:
* Instalação totalmente automatizada
* Usa DNS para identificar clusters
* Auto-recuperação: tudo é executado em grupos de Auto-Scaling
* Suporte de vários sistemas operacionais (Amazon Linux, Debian, Flatcar, RHEL, Rocky e Ubuntu) - veja em [imagens](https://github.com/kubernetes/kops/blob/master/docs/operations/images.md)
* Suporte a alta disponibilidade - consulte a [documentação sobre alta disponibilidade](https://github.com/kubernetes/kops/blob/master/docs/operations/high_availability.md)
* Pode provisionar diretamente ou gerar manifestos do terraform - veja a [documentação sobre como fazer isso com Terraform](https://github.com/kubernetes/kops/blob/master/docs/terraform.md)
## {{% heading "prerequisites" %}}
* Você deve ter o [kubectl](/docs/tasks/tools/) instalado.
* Você deve [instalar](https://github.com/kubernetes/kops#installing) `kops` em uma arquitetura de dispositivo de 64 bits (AMD64 e Intel 64).
* Você deve ter uma [conta da AWS](https://docs.aws.amazon.com/polly/latest/dg/setting-up.html), gerar as [chaves do IAM](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) e [configurá-las](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration). O usuário do IAM precisará de [permissões adequadas](https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md#setup-iam-user).
<!-- steps -->
## Como criar um cluster
### (1/5) Instalar kops
#### Instalação
Faça o download do kops na [página de downloads](https://github.com/kubernetes/kops/releases) (também é conveniente gerar um binário a partir do código-fonte):
{{< tabs name="kops_installation" >}}
{{% tab name="macOS" %}}
Baixe a versão mais recente com o comando:
```shell
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-darwin-amd64
```
Para baixar uma versão específica, substitua a seguinte parte do comando pela versão específica do kops.
```shell
$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)
```
Por exemplo, para baixar kops versão v1.20.0 digite:
```shell
curl -LO https://github.com/kubernetes/kops/releases/download/v1.20.0/kops-darwin-amd64
```
Dê a permissão de execução ao binário do kops.
```shell
chmod +x kops-darwin-amd64
```
Mova o binário do kops para o seu PATH.
```shell
sudo mv kops-darwin-amd64 /usr/local/bin/kops
```
Você também pode instalar kops usando [Homebrew](https://brew.sh/).
```shell
brew update && brew install kops
```
{{% /tab %}}
{{% tab name="Linux" %}}
Baixe a versão mais recente com o comando:
```shell
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
```
Para baixar uma versão específica do kops, substitua a seguinte parte do comando pela versão específica do kops.
```shell
$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)
```
Por exemplo, para baixar kops versão v1.20.0 digite:
```shell
curl -LO https://github.com/kubernetes/kops/releases/download/v1.20.0/kops-linux-amd64
```
Dê a permissão de execução ao binário do kops
```shell
chmod +x kops-linux-amd64
```
Mova o binário do kops para o seu PATH.
```shell
sudo mv kops-linux-amd64 /usr/local/bin/kops
```
Você também pode instalar kops usando [Homebrew](https://docs.brew.sh/Homebrew-on-Linux).
```shell
brew update && brew install kops
```
{{% /tab %}}
{{< /tabs >}}
### (2/5) Crie um domínio route53 para seu cluster
O kops usa DNS para descoberta, tanto dentro do cluster quanto fora, para que você possa acessar o servidor da API do kubernetes a partir dos clientes.
kops tem uma opinião forte sobre o nome do cluster: deve ser um nome DNS válido. Ao fazer isso, você não confundirá mais seus clusters, poderá compartilhar clusters com seus colegas de forma inequívoca e alcançá-los sem ter de lembrar de um endereço IP.
Você pode e provavelmente deve usar subdomínios para dividir seus clusters. Como nosso exemplo usaremos
`useast1.dev.example.com`. O endpoint do servidor de API será então `api.useast1.dev.example.com`.
Uma zona hospedada do Route53 pode servir subdomínios. Sua zona hospedada pode ser `useast1.dev.example.com`,
mas também `dev.example.com` ou até `example.com`. kops funciona com qualquer um deles, então normalmente você escolhe por motivos de organização (por exemplo, você tem permissão para criar registros em `dev.example.com`,
mas não em `example.com`).
Vamos supor que você esteja usando `dev.example.com` como sua zona hospedada. Você cria essa zona hospedada usando o [processo normal](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingNewSubdomain.html), ou
com um comando como `aws route53 create-hosted-zone --name dev.example.com --caller-reference 1`.
Você deve então configurar seus registros NS no domínio principal, para que os registros no domínio sejam resolvidos. Aqui, você criaria registros NS no `example.com` para `dev`. Se for um nome de domínio raiz, você configuraria os registros NS em seu registrador de domínio (por exemplo `example.com`, precisaria ser configurado onde você comprou `example.com`).
Verifique a configuração do seu domínio no Route53 (é a causa número 1 de problemas!). Se você tiver a ferramenta dig instalada, pode conferir novamente se a configuração está correta executando:
`dig NS dev.example.com`
Você deve ver os 4 registros NS que o Route53 atribuiu à sua zona hospedada.
### (3/5) Crie um bucket do S3 para armazenar o estado dos clusters
O kops permite que você gerencie seus clusters mesmo após a instalação. Para fazer isso, ele deve acompanhar os clusters que você criou, juntamente com suas configurações, as chaves que estão usando etc. Essas informações são armazenadas em um bucket do S3. As permissões do S3 são usadas para controlar o acesso ao bucket.
Vários clusters podem usar o mesmo bucket do S3 e você pode compartilhar um bucket do S3 entre seus colegas que administram os mesmos clusters - isso é muito mais fácil do que transmitir arquivos kubecfg. Mas qualquer pessoa com acesso ao bucket do S3 terá acesso administrativo a todos os seus clusters, portanto, você não deseja compartilhá-lo além da equipe de operações.
Portanto, normalmente você tem um bucket do S3 para cada equipe de operações (e geralmente o nome corresponderá ao nome da zona hospedada acima!)
Em nosso exemplo, escolhemos `dev.example.com` como nossa zona hospedada, então vamos escolher `clusters.dev.example.com` como o nome do bucket do S3.
* Exporte `AWS_PROFILE` (se precisar selecione um perfil para que a AWS CLI funcione)
* Crie o bucket do S3 usando `aws s3 mb s3://clusters.dev.example.com`
* Você pode rodar `export KOPS_STATE_STORE=s3://clusters.dev.example.com` e, em seguida, o kops usará esse local por padrão. Sugerimos colocar isso em seu perfil bash ou similar.
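
Em resumo, um esboço dos comandos acima (o nome do bucket e o perfil são apenas exemplos):

```shell
export AWS_PROFILE=meu-perfil                           # caso precise selecionar um perfil
aws s3 mb s3://clusters.dev.example.com                 # cria o bucket de estado
export KOPS_STATE_STORE=s3://clusters.dev.example.com   # local de estado padrão para o kops
```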
### (4/5) Crie sua configuração de cluster
Execute `kops create cluster` para criar sua configuração de cluster:
`kops create cluster --zones=us-east-1c useast1.dev.example.com`
kops criará a configuração para seu cluster. Observe que ele _apenas_ cria a configuração; na verdade, não cria os recursos de nuvem. Você fará isso na próxima etapa com o comando `kops update cluster`. Isso lhe dá a oportunidade de revisar a configuração ou alterá-la.
Ele exibe comandos que você pode usar para explorar mais:
* Liste seus clusters com: `kops get cluster`
* Edite este cluster com: `kops edit cluster useast1.dev.example.com`
* Edite seu grupo de instâncias de nós: `kops edit ig --name=useast1.dev.example.com nodes`
* Edite seu grupo de instâncias principal: `kops edit ig --name=useast1.dev.example.com master-us-east-1c`
Se esta é sua primeira vez usando kops, gaste alguns minutos para experimentá-los! Um grupo de instâncias é um conjunto de instâncias que serão registradas como nós do kubernetes. Na AWS, isso é implementado por meio de grupos de auto-scaling.
Você pode ter vários grupos de instâncias, por exemplo, se quiser nós que sejam uma combinação de instâncias spot e sob demanda ou instâncias de GPU e não GPU.
### (5/5) Crie o cluster na AWS
Execute `kops update cluster` para criar seu cluster na AWS:
`kops update cluster useast1.dev.example.com --yes`
Isso leva alguns segundos para ser executado, mas seu cluster provavelmente levará alguns minutos para estar realmente pronto.
`kops update cluster` será a ferramenta que você usará sempre que alterar a configuração do seu cluster; ele aplica as alterações que você fez na configuração ao seu cluster - reconfigurando AWS ou kubernetes conforme necessário.
Por exemplo, depois de executar `kops edit ig nodes`, execute `kops update cluster --yes` para aplicar sua configuração e, às vezes, você também precisará executar `kops rolling-update cluster` para implementar a configuração imediatamente.
Sem `--yes`, `kops update cluster` mostrará uma prévia do que ele fará. Isso é útil para clusters de produção!
### Explore outros complementos
Consulte a [lista de complementos](/pt-br/docs/concepts/cluster-administration/addons/) para explorar outros complementos, incluindo ferramentas para registro, monitoramento, política de rede, visualização e controle de seu cluster Kubernetes.
## Limpeza
* Para excluir seu cluster: `kops delete cluster useast1.dev.example.com --yes`
## {{% heading "whatsnext" %}}
* Saiba mais sobre os [conceitos do Kubernetes](/pt-br/docs/concepts/) e o [`kubectl`](/docs/reference/kubectl/).
* Saiba mais sobre o [uso avançado](https://kops.sigs.k8s.io/) do `kOps` para tutoriais, práticas recomendadas e opções de configuração avançada.
* Siga as discussões da comunidade do `kOps` no Slack: [discussões da comunidade](https://github.com/kubernetes/kops#other-ways-to-communicate-with-the-contributors).
* Contribua para o `kOps` endereçando ou levantando um problema [GitHub Issues](https://github.com/kubernetes/kops/issues).
@ -1,14 +1,16 @@
---
title: "Оркестрация контейнеров промышленного уровня"
abstract: "Автоматизированное развёртывание, масштабирование и управление контейнерами."
abstract: "Автоматизированное развёртывание, масштабирование и управление контейнерами"
cid: home
sitemap:
priority: 1.0
---
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) - это открытое программное обеспечение для автоматизации развёртывания, масштабирования и управления контейнеризированными приложениями.
### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) это открытое программное обеспечение для автоматизации развёртывания, масштабирования и управления контейнеризированными приложениями.
Kubernetes группирует контейнеры, составляющие приложение, в логические единицы для более простого управления и обнаружения. При создании Kubernetes использован [15-летний опыт эксплуатации производственных нагрузок Google](http://queue.acm.org/detail.cfm?id=2898444), совмещённый с лучшими идеями и практиками сообщества.
Kubernetes группирует контейнеры, составляющие приложение, в логические единицы для более простого управления и обнаружения. При создании Kubernetes использован [15-летний опыт эксплуатации производственных нагрузок Google](http://queue.acm.org/detail.cfm?id=2898444), который был совмещён с лучшими идеями и практиками сообщества.
{{% /blocks/feature %}}
{{% blocks/feature image="scalable" %}}
@ -21,7 +23,7 @@ Kubernetes группирует контейнеры, составляющие
{{% blocks/feature image="blocks" %}}
#### Бесконечная гибкость
Будь то локальное тестирование или работа в корпорации, гибкость Kubernetes растёт вместе с вами, обеспечивая бесперебойную и простую доставку приложений, независимо от сложности ваших потребностей.
Будь то локальное тестирование или работа в корпорации, гибкость Kubernetes растёт вместе с вами, обеспечивая бесперебойную и простую доставку приложений независимо от сложности ваших потребностей.
{{% /blocks/feature %}}
@ -37,16 +39,16 @@ Kubernetes — это проект с открытым исходным кодо
{{< blocks/section id="video" background-image="kub_video_banner_homepage" >}}
<div class="light-text">
<h2>О сложности миграции 150+ микросервисов в Kubernetes</h2>
<p>Сара Уелльс, технический директор по эксплуатации и надёжности в Financial Times</p>
<p>Сара Уэллс, технический директор по эксплуатации и надёжности в Financial Times</p>
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Смотреть видео</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe-2022/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu22" button id="desktopKCButton">Посетите KubeCon в Европе, 17-20 мая 2022 года</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna22" button id="desktopKCButton">Посетите KubeCon в Северной Америке, 24-28 октября 2022 года</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna21" button id="desktopKCButton">Посетите KubeCon в Северной Америке, 24-28 октября 2022 года</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe-2023/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu23" button id="desktopKCButton">Посетите KubeCon в Европе, 17-21 апреля 2023 года</a>
</div>
<div id="videoPlayer">
    <iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>
@ -18,16 +18,16 @@ content_type: concept
* [ACI](https://www.github.com/noironetworks/aci-containers) обеспечивает интегрированную сеть контейнеров и сетевую безопасность с помощью Cisco ACI.
* [Antrea](https://antrea.io/) работает на уровне 3, обеспечивая сетевые службы и службы безопасности для Kubernetes, используя Open vSwitch в качестве уровня сетевых данных.
* [Calico](https://docs.projectcalico.org/latest/introduction/) Calico поддерживает гибкий набор сетевых опций, поэтому вы можете выбрать наиболее эффективный вариант для вашей ситуации, включая сети без оверлея и оверлейные сети, с или без BGP. Calico использует тот же механизм для обеспечения соблюдения сетевой политики для хостов, модулей и (при использовании Istio и Envoy) приложений на уровне сервисной сети (mesh layer).
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) объединяет Flannel и Calico, обеспечивая сеть и сетевую политику.
* [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel) объединяет Flannel и Calico, обеспечивая сеть и сетевую политику.
* [Cilium](https://github.com/cilium/cilium) - это плагин сети L3 и сетевой политики, который может прозрачно применять политики HTTP/API/L7. Поддерживаются как режим маршрутизации, так и режим наложения/инкапсуляции, и он может работать поверх других подключаемых модулей CNI.
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) позволяет Kubernetes легко подключаться к выбору плагинов CNI, таких как Calico, Canal, Flannel, Romana или Weave.
* [CNI-Genie](https://github.com/cni-genie/CNI-Genie) позволяет Kubernetes легко подключаться к выбору плагинов CNI, таких как Calico, Canal, Flannel, Romana или Weave.
* [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), основан на [Tungsten Fabric](https://tungsten.io), представляет собой платформу для виртуализации мультиоблачных сетей с открытым исходным кодом и управления политиками. Contrail и Tungsten Fabric интегрированы с системами оркестрации, такими как Kubernetes, OpenShift, OpenStack и Mesos, и обеспечивают режимы изоляции для виртуальных машин, контейнеров/подов и рабочих нагрузок без операционной системы.
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) - это поставщик оверлейной сети, который можно использовать с Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) - это плагин для поддержки нескольких сетевых интерфейсов Kubernetes подов.
* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) - это плагин Multi для работы с несколькими сетями в Kubernetes, который поддерживает большинство самых популярных [CNI](https://github.com/containernetworking/cni) (например: Calico, Cilium, Contiv, Flannel), в дополнение к рабочим нагрузкам основанных на SRIOV, DPDK, OVS-DPDK и VPP в Kubernetes.
* [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) - это сетевой провайдер для Kubernetes основанный на [OVN (Open Virtual Network)](https://github.com/ovn-org/ovn/), реализация виртуальной сети, появившийся в результате проекта Open vSwitch (OVS). OVN-Kubernetes обеспечивает сетевую реализацию на основе наложения для Kubernetes, включая реализацию балансировки нагрузки и сетевой политики на основе OVS.
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) - это подключаемый модуль контроллера CNI на основе OVN для обеспечения облачной цепочки сервисных функций (SFC), несколько наложенных сетей OVN, динамического создания подсети, динамического создания виртуальных сетей, сети поставщика VLAN, сети прямого поставщика и подключаемого к другим Multi Сетевые плагины, идеально подходящие для облачных рабочих нагрузок на периферии в сети с несколькими кластерами.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) плагин для контейнера (NCP) обеспечивающий интеграцию между VMware NSX-T и контейнерами оркестраторов, таких как Kubernetes, а так же интеграцию между NSX-T и контейнеров на основе платформы CaaS/PaaS, таких как Pivotal Container Service (PKS) и OpenShift.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) плагин для контейнера (NCP) обеспечивающий интеграцию между VMware NSX-T и контейнерами оркестраторов, таких как Kubernetes, а так же интеграцию между NSX-T и контейнеров на основе платформы CaaS/PaaS, таких как Pivotal Container Service (PKS) и OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) - эта платформа SDN, которая обеспечивает сетевое взаимодействие на основе политик между Kubernetes подами и не Kubernetes окружением, с отображением и мониторингом безопасности.
* [Romana](https://github.com/romana/romana) - это сетевое решение уровня 3 для сетей подов, которое также поддерживает [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Подробности установки Kubeadm доступны [здесь](https://github.com/romana/romana/tree/master/containerize).
* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) предоставляет сеть и обеспечивает сетевую политику, будет работать на обеих сторонах сетевого раздела и не требует внешней базы данных.
@ -72,7 +72,7 @@ baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
sudo yum install -y kubectl
{{< /tab >}}
@ -26,12 +26,12 @@ card:
<p>В данном руководстве вы познакомитесь с основами системы оркестрации кластеров Kubernetes. Каждый модуль содержит краткую справочную информацию по основной функциональности и концепциям Kubernetes, а также включает интерактивные онлайн-уроки. С их помощью вы научитесь самостоятельно управлять простым кластером и контейнеризированными приложениями, которые были в нём развернуты.</p>
<p>Пройдя интерактивные уроки, вы узнаете, как:</p>
<ul>
<li>развёртывать контейнеризированное приложение в кластер.</li>
<li>масштабировать развёртывание.</li>
<li>обновить контейнеризированное приложение на новую версию ПО.</li>
<li>развёртывать контейнеризированное приложение в кластер;</li>
<li>масштабировать развёртывание;</li>
<li>обновить контейнеризированное приложение на новую версию ПО;</li>
<li>отлаживать контейнеризированное приложение.</li>
</ul>
<p>Все руководства используют сервис Katacoda, поэтому в вашем браузере будет показан виртуальный терминал с работающим Minikube, небольшой локальной средой Kubernetes, которая может работать где угодно. Вам не потребуется устанавливать дополнительное ПО или вообще что-либо настраивать. Каждый интерактивный урок запускается непосредственно в вашем браузере.</p>
<p>Все руководства используют сервис Katacoda, поэтому в вашем браузере будет показан виртуальный терминал с запущенным Minikube — небольшой локальной средой Kubernetes, которая может работать где угодно. Вам не потребуется устанавливать дополнительное ПО или вообще что-либо настраивать. Каждый интерактивный урок запускается непосредственно в вашем браузере.</p>
</div>
</div>
@ -40,7 +40,7 @@ card:
<div class="row">
<div class="col-md-9">
<h2>Чем может Kubernetes помочь вам?</h2>
<p>От современных веб-сервисов пользователи ожидают, что приложения будут доступны 24/7, а разработчики — развёртывать новые версии приложений по нескольку раз в день. Контейнеризация направлена на достижение этой цели, упаковывая ПО и позволяя выпускать и обновлять приложения просто, быстро и без простоев. Kubernetes гарантирует вам, что ваши контейнеризованные приложения будет запущены где угодно и когда угодно, вместе со всеми необходимыми для их работы ресурсами и инструментами. Kubernetes — это готовая к промышленному использованию платформа с открытым исходным кодом, разработанная исходя из накопленного опыта Google по оркестровке контейнеров и лучшими идеями от сообщества.</p>
        <p>От современных веб-сервисов пользователи ожидают, что приложения будут доступны 24/7, а разработчики — развёртывать новые версии приложений по нескольку раз в день. Контейнеризация направлена на достижение этой цели, поскольку позволяет выпускать и обновлять приложения без простоев. Kubernetes гарантирует, что ваши контейнеризованные приложения будут запущены где угодно и когда угодно, вместе со всеми необходимыми для их работы ресурсами и инструментами. Kubernetes — это готовая к промышленному использованию платформа с открытым исходным кодом, разработанная на основе накопленного опыта Google по оркестровке контейнеров и вобравшая в себя лучшие идеи от сообщества.</p>
</div>
</div>
@ -63,7 +63,7 @@ card:
<div class="thumbnail">
<a href="/ru/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_02.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="/ru/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/"><h5>2. Развёртывание приложение</h5></a>
<a href="/ru/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/"><h5>2. Развёртывание приложения</h5></a>
</div>
</div>
</div>
@ -0,0 +1,555 @@
---
layout: blog
title: "警告: 有用的预警"
date: 2020-09-03
slug: warnings
evergreen: true
---
<!--
layout: blog
title: "Warning: Helpful Warnings Ahead"
date: 2020-09-03
slug: warnings
evergreen: true
-->
<!--
**Author**: [Jordan Liggitt](https://github.com/liggitt) (Google)
-->
**作者**: [Jordan Liggitt](https://github.com/liggitt) (Google)
<!--
As Kubernetes maintainers, we're always looking for ways to improve usability while preserving compatibility.
As we develop features, triage bugs, and answer support questions, we accumulate information that would be helpful for Kubernetes users to know.
In the past, sharing that information was limited to out-of-band methods like release notes, announcement emails, documentation, and blog posts.
Unless someone knew to seek out that information and managed to find it, they would not benefit from it.
-->
作为 Kubernetes 维护者,我们一直在寻找在保持兼容性的同时提高可用性的方法。
在开发功能、分类 Bug 和回答支持问题的过程中,我们积累了对 Kubernetes 用户有帮助的信息。
过去,共享这些信息仅限于发布说明、公告电子邮件、文档和博客文章等带外方法。
除非有人知道需要寻找这些信息并成功找到它们,否则他们不会从中受益。
<!--
In Kubernetes v1.19, we added a feature that allows the Kubernetes API server to
[send warnings to API clients](https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1693-warnings).
The warning is sent using a [standard `Warning` response header](https://tools.ietf.org/html/rfc7234#section-5.5),
so it does not change the status code or response body in any way.
This allows the server to send warnings easily readable by any API client, while remaining compatible with previous client versions.
-->
在 Kubernetes v1.19 中,我们添加了一个功能,允许 Kubernetes API
服务器[向 API 客户端发送警告](https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1693-warnings)。
警告信息使用[标准 `Warning` 响应头](https://tools.ietf.org/html/rfc7234#section-5.5)发送,
因此它不会以任何方式更改状态代码或响应体。
这一设计使得服务能够发送任何 API 客户端都可以轻松读取的警告,同时保持与以前的客户端版本兼容。
<!--
Warnings are surfaced by `kubectl` v1.19+ in `stderr` output, and by the `k8s.io/client-go` client library v0.19.0+ in log output.
The `k8s.io/client-go` behavior can be [overridden per-process or per-client](#customize-client-handling).
-->
警告在 `kubectl` v1.19+ 的 `stderr` 输出中和 `k8s.io/client-go` v0.19.0+ 客户端库的日志中出现。
`k8s.io/client-go` 行为可以[在进程或客户端层面重载](#customize-client-handling)。
<!--
## Deprecation Warnings
-->
## 弃用警告 {#deprecation-warnings}
<!--
The first way we are using this new capability is to send warnings for use of deprecated APIs.
-->
我们第一次使用此新功能是针对已弃用的 API 调用发送警告。
<!--
Kubernetes is a [big, fast-moving project](https://www.cncf.io/cncf-kubernetes-project-journey/#development-velocity).
Keeping up with the [changes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#changelog-since-v1180)
in each release can be daunting, even for people who work on the project full-time. One important type of change is API deprecations.
As APIs in Kubernetes graduate to GA versions, pre-release API versions are deprecated and eventually removed.
-->
Kubernetes 是一个[大型、快速发展的项目](https://www.cncf.io/cncf-kubernetes-project-journey/#development-velocity)。
跟上每个版本的[变更](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#changelog-since-v1180)可能是令人生畏的,
即使对于全职从事该项目的人来说也是如此。一种重要的变更是 API 弃用。
随着 Kubernetes 中的 API 升级到 GA 版本,预发布的 API 版本会被弃用并最终被删除。
<!--
Even though there is an [extended deprecation period](/docs/reference/using-api/deprecation-policy/),
and deprecations are [included in release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#deprecation),
they can still be hard to track. During the deprecation period, the pre-release API remains functional,
allowing several releases to transition to the stable API version. However, we have found that users often don't even realize
they are depending on a deprecated API version until they upgrade to the release that stops serving it.
-->
即使有[延长的弃用期](/zh-cn/docs/reference/using-api/deprecation-policy/)
并且[在发布说明中](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#deprecation)也包含了弃用信息,
它们仍然很难被追踪。在弃用期内,预发布 API 仍然有效,
允许多个版本过渡到稳定的 API 版本。
然而,我们发现用户往往甚至没有意识到他们依赖于已弃用的 API 版本,
直到升级到不再提供相应服务的新版本。
<!--
Starting in v1.19, whenever a request is made to a deprecated REST API, a warning is returned along with the API response.
This warning includes details about the release in which the API will no longer be available, and the replacement API version.
-->
从 v1.19 开始,系统每当收到针对已弃用的 REST API 的请求时,都会返回警告以及 API 响应。
此警告包括有关 API 将不再可用的版本以及替换 API 版本的详细信息。
<!--
Because the warning originates at the server, and is intercepted at the client level, it works for all kubectl commands,
including high-level commands like `kubectl apply`, and low-level commands like `kubectl get --raw`:
<img alt="kubectl applying a manifest file, then displaying a warning message 'networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress'."
src="kubectl-warnings.png"
style="width:637px;max-width:100%;">
-->
因为警告源自服务器端,并在客户端层级被拦截,所以它适用于所有 kubectl 命令,
包括像 `kubectl apply` 这样的高级命令,以及像 `kubectl get --raw` 这样的低级命令:
<img alt="kubectl 执行一个清单文件, 然后显示警告信息 'networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress'。"
src="kubectl-warnings.png"
style="width:637px;max-width:100%;">
<!--
This helps people affected by the deprecation to know the request they are making is deprecated,
how long they have to address the issue, and what API they should use instead.
This is especially helpful when the user is applying a manifest they didn't create,
so they have time to reach out to the authors to ask for an updated version.
-->
这有助于受弃用影响的人们知道他们所请求的 API 已被弃用,
他们有多长时间来解决这个问题,以及他们应该使用什么 API。
这在用户应用不是由他们创建的清单文件时特别有用,
所以他们有时间联系作者要一个更新的版本。
<!--
We also realized that the person *using* a deprecated API is often not the same person responsible for upgrading the cluster,
so we added two administrator-facing tools to help track use of deprecated APIs and determine when upgrades are safe.
-->
我们还意识到**使用**已弃用的 API 的人通常不是负责升级集群的人,
因此,我们添加了两个面向管理员的工具,帮助跟踪已弃用 API 的使用情况,并确定何时可以安全地升级。
<!--
### Metrics
-->
### 度量指标 {#metrics}
<!--
Starting in Kubernetes v1.19, when a request is made to a deprecated REST API endpoint,
an `apiserver_requested_deprecated_apis` gauge metric is set to `1` in the kube-apiserver process.
This metric has labels for the API `group`, `version`, `resource`, and `subresource`,
and a `removed_release` label that indicates the Kubernetes release in which the API will no longer be served.
-->
从 Kubernetes v1.19 开始,当向已弃用的 REST API 端点发出请求时,
在 kube-apiserver 进程中,`apiserver_requested_deprecated_apis` 度量指标会被设置为 `1`。
该指标具有 API `group`、`version`、`resource` 和 `subresource` 的标签,
和一个 `removed_release` 标签,表明不再提供 API 的 Kubernetes 版本。
<!--
This is an example query using `kubectl`, [prom2json](https://github.com/prometheus/prom2json),
and [jq](https://stedolan.github.io/jq/) to determine which deprecated APIs have been requested
from the current instance of the API server:
-->
下面是一个使用 `kubectl` 的查询示例,[prom2json](https://github.com/prometheus/prom2json)
和 [jq](https://stedolan.github.io/jq/) 用来确定当前 API
服务器实例上收到了哪些对已弃用的 API 请求:
```sh
kubectl get --raw /metrics | prom2json | jq '
.[] | select(.name=="apiserver_requested_deprecated_apis").metrics[].labels
'
```
<!--
Output:
-->
输出:
```json
{
"group": "extensions",
"removed_release": "1.22",
"resource": "ingresses",
"subresource": "",
"version": "v1beta1"
}
{
"group": "rbac.authorization.k8s.io",
"removed_release": "1.22",
"resource": "clusterroles",
"subresource": "",
"version": "v1beta1"
}
```
<!--
This shows the deprecated `extensions/v1beta1` Ingress and `rbac.authorization.k8s.io/v1beta1` ClusterRole APIs
have been requested on this server, and will be removed in v1.22.
We can join that information with the `apiserver_request_total` metrics to get more details about the requests being made to these APIs:
-->
输出展示在此服务器上请求了已弃用的 `extensions/v1beta1` Ingress 和 `rbac.authorization.k8s.io/v1beta1`
ClusterRole API这两个 API 都将在 v1.22 中被删除。
我们可以将该信息与 `apiserver_request_total` 指标结合起来,以获取有关这些 API 请求的更多详细信息:
```sh
kubectl get --raw /metrics | prom2json | jq '
# set $deprecated to a list of deprecated APIs
[
.[] |
select(.name=="apiserver_requested_deprecated_apis").metrics[].labels |
{group,version,resource}
] as $deprecated
|
# select apiserver_request_total metrics which are deprecated
.[] | select(.name=="apiserver_request_total").metrics[] |
select(.labels | {group,version,resource} as $key | $deprecated | index($key))
'
```
<!--
Output:
-->
输出:
```json
{
"labels": {
"code": "0",
"component": "apiserver",
"contentType": "application/vnd.kubernetes.protobuf;stream=watch",
"dry_run": "",
"group": "extensions",
"resource": "ingresses",
"scope": "cluster",
"subresource": "",
"verb": "WATCH",
"version": "v1beta1"
},
"value": "21"
}
{
"labels": {
"code": "200",
"component": "apiserver",
"contentType": "application/vnd.kubernetes.protobuf",
"dry_run": "",
"group": "extensions",
"resource": "ingresses",
"scope": "cluster",
"subresource": "",
"verb": "LIST",
"version": "v1beta1"
},
"value": "1"
}
{
"labels": {
"code": "200",
"component": "apiserver",
"contentType": "application/json",
"dry_run": "",
"group": "rbac.authorization.k8s.io",
"resource": "clusterroles",
"scope": "cluster",
"subresource": "",
"verb": "LIST",
"version": "v1beta1"
},
"value": "1"
}
```
<!--
The output shows that only read requests are being made to these APIs, and the most requests have been made to watch the deprecated Ingress API.
You can also find that information through the following Prometheus query,
which returns information about requests made to deprecated APIs which will be removed in v1.22:
-->
上面的输出展示,对这些 API 发出的都只是读请求,并且大多数请求都用来监测已弃用的 Ingress API。
你还可以通过以下 Prometheus 查询获取这一信息,
该查询返回关于已弃用的、将在 v1.22 中删除的 API 请求的信息:
```promql
apiserver_requested_deprecated_apis{removed_release="1.22"} * on(group,version,resource,subresource)
group_right() apiserver_request_total
```
<!--
### Audit Annotations
-->
### 审计注解 {#audit-annotations}
<!--
Metrics are a fast way to check whether deprecated APIs are being used, and at what rate,
but they don't include enough information to identify particular clients or API objects.
Starting in Kubernetes v1.19, [audit events](/docs/tasks/debug/debug-cluster/audit/)
for requests to deprecated APIs include an audit annotation of `"k8s.io/deprecated":"true"`.
Administrators can use those audit events to identify specific clients or objects that need to be updated.
-->
度量指标是检查是否正在使用已弃用的 API 以及使用率如何的快速方法,
但它们没有包含足够的信息来识别特定的客户端或 API 对象。
从 Kubernetes v1.19 开始,
对已弃用的 API 的请求进行审计时,[审计事件](/zh-cn/docs/tasks/debug/debug-cluster/audit/)中会包括
审计注解 `"k8s.io/deprecated":"true"`
管理员可以使用这些审计事件来识别需要更新的特定客户端或对象。
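例如,假设审计后端以 JSON 行的形式把事件写入 `/var/log/kubernetes/audit.log`(路径和格式仅为示例,具体取决于你的审计配置),
可以用 [jq](https://stedolan.github.io/jq/) 粗略筛选出带有该注解的事件:

```sh
# 筛选带有 "k8s.io/deprecated":"true" 注解的审计事件,
# 并输出动词、请求 URI 以及发起请求的用户
jq -c 'select(.annotations["k8s.io/deprecated"] == "true")
       | {verb, requestURI, user: .user.username}' \
  /var/log/kubernetes/audit.log
```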
<!--
## Custom Resource Definitions
-->
## 自定义资源定义 {#custom-resource-definitions}
<!--
Along with the API server ability to warn about deprecated API use, starting in v1.19, a CustomResourceDefinition can indicate a
[particular version of the resource it defines is deprecated](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-deprecation).
When API requests to a deprecated version of a custom resource are made, a warning message is returned, matching the behavior of built-in APIs.
The author of the CustomResourceDefinition can also customize the warning for each version if they want to.
This allows them to give a pointer to a migration guide or other information if needed.
-->
除了 API 服务器对已弃用的 API 使用发出警告的能力外,从 v1.19 开始CustomResourceDefinition
可以指示[它定义的资源的特定版本已被弃用](/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-deprecation)。
当对自定义资源的已弃用的版本发出 API 请求时,将返回一条警告消息,与内置 API 的行为相匹配。
CustomResourceDefinition 的作者还可以根据需要自定义每个版本的警告。
这允许他们在需要时提供指向迁移指南的信息或其他信息。
<!--
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
name: crontabs.example.com
spec:
versions:
- name: v1alpha1
# This indicates the v1alpha1 version of the custom resource is deprecated.
# API requests to this version receive a warning in the server response.
deprecated: true
# This overrides the default warning returned to clients making v1alpha1 API requests.
deprecationWarning: "example.com/v1alpha1 CronTab is deprecated; use example.com/v1 CronTab (see http://example.com/v1alpha1-v1)"
...
- name: v1beta1
# This indicates the v1beta1 version of the custom resource is deprecated.
# API requests to this version receive a warning in the server response.
# A default warning message is returned for this version.
deprecated: true
...
- name: v1
...
```
-->
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
name: crontabs.example.com
spec:
versions:
- name: v1alpha1
# 这表示 v1alpha1 版本的自定义资源已经废弃了。
# 对此版本的 API 请求会在服务器响应中收到警告。
deprecated: true
# 这会把返回给发出 v1alpha1 API 请求的客户端的默认警告覆盖。
deprecationWarning: "example.com/v1alpha1 CronTab is deprecated; use example.com/v1 CronTab (see http://example.com/v1alpha1-v1)"
...
- name: v1beta1
# 这表示 v1beta1 版本的自定义资源已经废弃了。
# 对此版本的 API 请求会在服务器响应中收到警告。
# 此版本返回默认警告消息。
deprecated: true
...
- name: v1
...
```
<!--
## Admission Webhooks
-->
## 准入 Webhook {#admission-webhooks}
<!--
[Admission webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers)
are the primary way to integrate custom policies or validation with Kubernetes.
Starting in v1.19, admission webhooks can [return warning messages](/docs/reference/access-authn-authz/extensible-admission-controllers/#response)
that are passed along to the requesting API client. Warnings can be returned with allowed or rejected admission responses.
As an example, to allow a request but warn about a configuration known not to work well, an admission webhook could send this response:
-->
[准入 Webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers)是将自定义策略或验证与
Kubernetes 集成的主要方式。
从 v1.19 开始Admission Webhook 可以[返回警告消息](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#response)
传递给发送请求的 API 客户端。警告可以与允许或拒绝的响应一起返回。
例如,允许请求但警告已知某个配置无法正常运行时,准入 Webhook 可以发送以下响应:
```json
{
"apiVersion": "admission.k8s.io/v1",
"kind": "AdmissionReview",
"response": {
"uid": "<value from request.uid>",
"allowed": true,
"warnings": [
".spec.memory: requests >1GB do not work on Fridays"
]
}
}
```
<!--
If you are implementing a webhook that returns a warning message, here are some tips:
* Don't include a "Warning:" prefix in the message (that is added by clients on output)
* Use warning messages to describe problems the client making the API request should correct or be aware of
* Be brief; limit warnings to 120 characters if possible
-->
如果你在实现一个返回警告消息的 Webhook这里有一些提示
* 不要在消息中包含 “Warning:” 前缀(由客户端在输出时添加)
* 使用警告消息描述发出 API 请求的客户端应当纠正或需要了解的问题
* 保持简洁;如果可能,将警告限制为 120 个字符以内
<!--
There are many ways admission webhooks could use this new feature, and I'm looking forward to seeing what people come up with.
Here are a couple ideas to get you started:
* webhook implementations adding a "complain" mode, where they return warnings instead of rejections,
to allow trying out a policy to verify it is working as expected before starting to enforce it
* "lint" or "vet"-style webhooks, inspecting objects and surfacing warnings when best practices are not followed
-->
准入 Webhook 可以通过多种方式使用这个新功能,我期待看到大家想出来的方法。
这里有一些想法可以帮助你入门:
* 添加 “complain” 模式的 Webhook 实现,它们返回警告而不是拒绝,
允许在开始执行之前尝试策略以验证它是否按预期工作
* “lint” 或 “vet” 风格的 Webhook检查对象并在未遵循最佳实践时显示警告
<!--
## Customize Client Handling
-->
## 自定义客户端处理方式 {#customize-client-handling}
<!--
Applications that use the `k8s.io/client-go` library to make API requests can customize
how warnings returned from the server are handled. By default, warnings are logged to
stderr as they are received, but this behavior can be customized
[per-process](https://godoc.org/k8s.io/client-go/rest#SetDefaultWarningHandler)
or [per-client](https://godoc.org/k8s.io/client-go/rest#Config).
-->
使用 `k8s.io/client-go` 库发出 API 请求的应用程序可以定制如何处理从服务器返回的警告。
默认情况下,收到的警告会以日志形式输出到 stderr
但[在进程层面](https://godoc.org/k8s.io/client-go/rest#SetDefaultWarningHandler)或[客户端层面](https://godoc.org/k8s.io/client-go/rest#Config)均可定制这一行为。
<!--
This example shows how to make your application behave like `kubectl`,
overriding message handling process-wide to deduplicate warnings
and highlighting messages using colored output where supported:
-->
这个例子展示了如何让你的应用程序表现得像 `kubectl`
在进程层面重载整个消息处理逻辑以删除重复的警告,
并在支持的情况下使用彩色输出突出显示消息:
```go
import (
"os"
"k8s.io/client-go/rest"
"k8s.io/kubectl/pkg/util/term"
...
)
func main() {
rest.SetDefaultWarningHandler(
rest.NewWarningWriter(os.Stderr, rest.WarningWriterOptions{
// only print a given warning the first time we receive it
Deduplicate: true,
// highlight the output with color when the output supports it
Color: term.AllowsColorOutput(os.Stderr),
},
),
)
...
```
<!--
The next example shows how to construct a client that ignores warnings.
This is useful for clients that operate on metadata for all resource types
(found dynamically at runtime using the discovery API)
and do not benefit from warnings about a particular resource being deprecated.
Suppressing deprecation warnings is not recommended for clients that require use of particular APIs.
-->
下一个示例展示如何构建一个忽略警告的客户端。
这对于那些操作所有资源类型的元数据(通过发现 API 在运行时动态发现)、
因而不会从“某个特定资源已被弃用”这类警告中受益的客户端很有用。
对于需要使用特定 API 的客户端,不建议抑制弃用警告。
```go
import (
"k8s.io/client-go/rest"
"k8s.io/client-go/kubernetes"
)
func getClientWithoutWarnings(config *rest.Config) (kubernetes.Interface, error) {
// copy to avoid mutating the passed-in config
config = rest.CopyConfig(config)
// set the warning handler for this client to ignore warnings
config.WarningHandler = rest.NoWarnings{}
// construct and return the client
return kubernetes.NewForConfig(config)
}
```
<!--
## Kubectl Strict Mode
-->
## Kubectl 强制模式 {#kubectl-strict-mode}
<!--
If you want to be sure you notice deprecations as soon as possible and get a jump start on addressing them,
`kubectl` added a `--warnings-as-errors` option in v1.19. When invoked with this option,
`kubectl` treats any warnings it receives from the server as errors and exits with a non-zero exit code:
<img alt="kubectl applying a manifest file with a --warnings-as-errors flag, displaying a warning message and exiting with a non-zero exit code."
src="kubectl-warnings-as-errors.png"
style="width:637px;max-width:100%;">
This could be used in a CI job to apply manifests to a current server,
and required to pass with a zero exit code in order for the CI job to succeed.
-->
如果你想确保及时注意到弃用问题并立即着手解决它们,
`kubectl` 在 v1.19 中添加了 `--warnings-as-errors` 选项。使用此选项调用时,
`kubectl` 将从服务器收到的所有警告视为错误,并以非零退出码退出:
<img alt="kubectl 在设置 --warnings-as-errors 标记的情况下执行一个清单文件, 返回警告消息和非零退出码。"
src="kubectl-warnings-as-errors.png"
style="width:637px;max-width:100%;">
这可以在 CI 作业中用来将清单文件应用到当前服务器,
并要求命令以零退出码通过,CI 作业才能成功。
<!--
## Future Possibilities
-->
## 未来的可能性 {#future-possibilities}
<!--
Now that we have a way to communicate helpful information to users in context,
we're already considering other ways we can use this to improve people's experience with Kubernetes.
A couple areas we're looking at next are warning about [known problematic values](http://issue.k8s.io/64841#issuecomment-395141013)
we cannot reject outright for compatibility reasons, and warning about use of deprecated fields or field values
(like selectors using beta os/arch node labels, [deprecated in v1.14](/docs/reference/labels-annotations-taints/#beta-kubernetes-io-arch-deprecated)).
I'm excited to see progress in this area, continuing to make it easier to use Kubernetes.
-->
现在我们有了一种在上下文中向用户传达有用信息的方法,
我们已经在考虑使用其他方法来改善人们使用 Kubernetes 的体验。
我们接下来要研究的几个领域包括:针对出于兼容性原因无法直接拒绝的[已知有问题的值](http://issue.k8s.io/64841#issuecomment-395141013)给出警告,
以及针对已弃用字段或字段值的使用
(例如使用 beta os/arch 节点标签的选择器,
[在 v1.14 中已弃用](/zh-cn/docs/reference/labels-annotations-taints/#beta-kubernetes-io-arch-deprecated))
给出警告。
我很高兴看到这方面的进展,继续让 Kubernetes 更容易使用。


View File

@ -1117,7 +1117,7 @@ the kubelet, and the `--fail-swap-on` command line flag or `failSwapOn`
[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
must be set to false.
-->
要在节点上启用交换内存必须启用kubelet 的 `NodeSwap` 特性门控,
要在节点上启用交换内存,必须启用 kubelet 的 `NodeSwap` 特性门控,
同时使用 `--fail-swap-on` 命令行参数或者将 `failSwapOn`
[配置](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)设置为 false。

View File

@ -484,7 +484,7 @@ incoming request is for a resource or non-resource URL) matches the request.
当给定的请求与某个 FlowSchema 的 `rules` 的其中一条匹配,那么就认为该请求与该 FlowSchema 匹配。
判断规则与该请求是否匹配,**不仅**要求该条规则的 `subjects` 字段至少存在一个与该请求相匹配,
**而且**要求该条规则的 `resourceRules``nonResourceRules`
取决于传入请求是针对资源URL还是非资源URL字段至少存在一个与该请求相匹配。
(取决于传入请求是针对资源 URL 还是非资源 URL字段至少存在一个与该请求相匹配。
<!--
For the `name` field in subjects, and the `verbs`, `apiGroups`, `resources`,
@ -907,15 +907,16 @@ poorly-behaved workloads that may be harming system health.
-->
* `apiserver_flowcontrol_read_vs_write_request_count_samples` 是一个直方图向量,
记录当前请求数量的观察值,
由标签 `phase`(取值为 `waiting` `executing`)和 `request_kind`
(取值 `mutating` `readOnly`)拆分。定期以高速率观察该值。
由标签 `phase`(取值为 `waiting` `executing`)和 `request_kind`
(取值 `mutating` `readOnly`)拆分。定期以高速率观察该值。
每个观察到的值是一个介于 0 和 1 之间的比值,计算方式为请求数除以该请求数的对应限制
(等待的队列长度限制和执行所用的并发限制)。
<!--
* `apiserver_flowcontrol_read_vs_write_request_count_watermarks` is a
histogram vector of high or low water marks of the number of
requests broken down by the labels `phase` (which takes on the
requests (divided by the corresponding limit to get a ratio in the
range 0 to 1) broken down by the labels `phase` (which takes on the
values `waiting` and `executing`) and `request_kind` (which takes on
the values `mutating` and `readOnly`); the label `mark` takes on
values `high` and `low`. The water marks are accumulated over
@ -923,21 +924,21 @@ poorly-behaved workloads that may be harming system health.
`apiserver_flowcontrol_read_vs_write_request_count_samples`. These
water marks show the range of values that occurred between samples.
-->
* `apiserver_flowcontrol_read_vs_write_request_count_watermarks` 是一个直方图向量,
记录请求数量的高/低水位线
由标签 `phase`(取值为 `waiting` `executing`)和 `request_kind`
(取值为 `mutating` `readOnly`)拆分;标签 `mark` 取值为 `high``low`
* `apiserver_flowcontrol_read_vs_write_request_count_watermarks`
是请求数量的高或低水位线的直方图向量(除以相应的限制,得到介于 0 至 1 的比率)
由标签 `phase`(取值为 `waiting` `executing`)和 `request_kind`
(取值为 `mutating` `readOnly`)拆分;标签 `mark` 取值为 `high``low`
`apiserver_flowcontrol_read_vs_write_request_count_samples` 向量观察到有值新增,
则该向量累积。这些水位线显示了样本值的范围。
<!--
* `apiserver_flowcontrol_current_inqueue_requests` is a gauge vector
holding the instantaneous number of queued (not executing) requests,
broken down by the labels `priorityLevel` and `flowSchema`.
broken down by the labels `priority_level` and `flow_schema`.
-->
* `apiserver_flowcontrol_current_inqueue_requests` 是一个表向量,
记录包含排队中的(未执行)请求的瞬时数量,
由标签 `priorityLevel` 和 `flowSchema` 拆分。
由标签 `priority_level` 和 `flow_schema` 拆分。
<!--
* `apiserver_flowcontrol_current_executing_requests` is a gauge vector
@ -964,17 +965,23 @@ poorly-behaved workloads that may be harming system health.
values `waiting` and `executing`) and `priority_level`. Each
histogram gets observations taken periodically, up through the last
activity of the relevant sort. The observations are made at a high
rate.
rate. Each observed value is a ratio, between 0 and 1, of a number
of requests divided by the corresponding limit on the number of
requests (queue length limit for waiting and concurrency limit for
executing).
-->
* `apiserver_flowcontrol_priority_level_request_count_samples` 是一个直方图向量,
记录当前请求的观测值,由标签 `phase`(取值为`waiting` 和 `executing`)和
记录当前请求的观测值,由标签 `phase`(取值为`waiting` `executing`)和
`priority_level` 进一步区分。
每个直方图都会定期进行观察,直到相关类别的最后活动为止。观察频率高。
所观察到的值都是请求数除以相应的请求数限制(等待的队列长度限制和执行的并发限制)的比率,
介于 0 和 1 之间。
<!--
* `apiserver_flowcontrol_priority_level_request_count_watermarks` is a
histogram vector of high or low water marks of the number of
requests broken down by the labels `phase` (which takes on the
requests (divided by the corresponding limit to get a ratio in the
range 0 to 1) broken down by the labels `phase` (which takes on the
values `waiting` and `executing`) and `priority_level`; the label
`mark` takes on values `high` and `low`. The water marks are
accumulated over windows bounded by the times when an observation
@ -982,8 +989,9 @@ poorly-behaved workloads that may be harming system health.
`apiserver_flowcontrol_priority_level_request_count_samples`. These
water marks show the range of values that occurred between samples.
-->
* `apiserver_flowcontrol_priority_level_request_count_watermarks` 是一个直方图向量,
记录请求数的高/低水位线,由标签 `phase`(取值为 `waiting``executing`)和
* `apiserver_flowcontrol_priority_level_request_count_watermarks`
是请求数量的高或低水位线的直方图向量(除以相应的限制,得到 0 到 1 的范围内的比率),
由标签 `phase`(取值为 `waiting``executing`)和
`priority_level` 拆分;
标签 `mark` 取值为 `high``low`
`apiserver_flowcontrol_priority_level_request_count_samples` 向量观察到有值新增,
@ -1020,7 +1028,7 @@ poorly-behaved workloads that may be harming system health.
<!--
* `apiserver_flowcontrol_request_concurrency_limit` is a gauge vector
hoding the computed concurrency limit (based on the API server's
holding the computed concurrency limit (based on the API server's
total concurrency limit and PriorityLevelConfigurations' concurrency
shares), broken down by the label `priority_level`.
-->
@ -1031,8 +1039,8 @@ poorly-behaved workloads that may be harming system health.
<!--
* `apiserver_flowcontrol_request_wait_duration_seconds` is a histogram
vector of how long requests spent queued, broken down by the labels
`flowSchema` (indicating which one matched the request),
`priorityLevel` (indicating the one to which the request was
`flow_schema` (indicating which one matched the request),
`priority_level` (indicating the one to which the request was
assigned), and `execute` (indicating whether the request started
executing).
-->
@ -1056,8 +1064,8 @@ poorly-behaved workloads that may be harming system health.
<!--
* `apiserver_flowcontrol_request_execution_seconds` is a histogram
vector of how long requests took to actually execute, broken down by
the labels `flowSchema` (indicating which one matched the request)
and `priorityLevel` (indicating the one to which the request was
the labels `flow_schema` (indicating which one matched the request)
and `priority_level` (indicating the one to which the request was
assigned).
-->
* `apiserver_flowcontrol_request_execution_seconds` 是一个直方图向量,
@ -1065,6 +1073,39 @@ poorly-behaved workloads that may be harming system health.
由标签 `flow_schema`(表示与请求匹配的 FlowSchema
`priority_level`(表示分配给该请求的优先级)进一步区分。
<!--
* `apiserver_flowcontrol_watch_count_samples` is a histogram vector of
the number of active WATCH requests relevant to a given write,
broken down by `flow_schema` and `priority_level`.
-->
* `apiserver_flowcontrol_watch_count_samples` 是一个直方图向量,
记录给定写的相关活动 WATCH 请求数量,
由标签 `flow_schema``priority_level` 进一步区分。
<!--
* `apiserver_flowcontrol_work_estimated_seats` is a histogram vector
of the number of estimated seats (maximum of initial and final stage
of execution) associated with requests, broken down by `flow_schema`
and `priority_level`.
-->
* `apiserver_flowcontrol_work_estimated_seats` 是一个直方图向量,
  记录与各请求相关联的估计席位数(取执行的初始阶段与最终阶段两者中的较大值),
  由标签 `flow_schema` 和 `priority_level` 进一步区分。
<!--
* `apiserver_flowcontrol_request_dispatch_no_accommodation_total` is a
counter vec of the number of events that in principle could have led
to a request being dispatched but did not, due to lack of available
concurrency, broken down by `flow_schema` and `priority_level`. The
relevant sorts of events are arrival of a request and completion of
a request.
-->
* `apiserver_flowcontrol_request_dispatch_no_accommodation_total`
是一个事件数量的计数器,这些事件在原则上可能导致请求被分派,
但由于并发度不足而没有被分派,
由标签 `flow_schema``priority_level` 进一步区分。
相关的事件类型是请求的到达和请求的完成。
<!--
### Debug endpoints

View File

@ -495,7 +495,7 @@ kubectl scale deployment/my-nginx --replicas=1
```
```
deployment.extensions/my-nginx scaled
deployment.apps/my-nginx scaled
```
<!--

View File

@ -59,7 +59,7 @@ There are several different proxies you may encounter when using Kubernetes:
2. [apiserver proxy](/zh-cn/docs/tasks/access-application-cluster/access-cluster-services/#discovering-builtin-services)
- 是一个建立在 apiserver 内部的“堡垒”
- 将集群外部的用户与集群 IP 相连接这些IP是无法通过其他方式访问的
- 将集群外部的用户与集群 IP 相连接,这些 IP 是无法通过其他方式访问的
- 运行在 apiserver 进程内
- 客户端到代理使用 HTTPS 协议 (如果配置 apiserver 使用 HTTP 协议,则使用 HTTP 协议)
- 通过可用信息进行选择,代理到目的地可能使用 HTTP 或 HTTPS 协议

View File

@ -26,10 +26,10 @@ When you specify a {{< glossary_tooltip term_id="pod" >}}, you can optionally sp
much of each resource a {{< glossary_tooltip text="container" term_id="container" >}} needs.
The most common resources to specify are CPU and memory (RAM); there are others.
When you specify the resource _request_ for Containers in a Pod, the
When you specify the resource _request_ for containers in a Pod, the
{{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} uses this
information to decide which node to place the Pod on. When you specify a resource _limit_
for a Container, the kubelet enforces those limits so that the running container is not
for a container, the kubelet enforces those limits so that the running container is not
allowed to use more of that resource than the limit you set. The kubelet also reserves
at least the _request_ amount of that system resource specifically for that container
to use.
@ -273,6 +273,7 @@ MiB of memory, and a limit of 1 CPU and 256MiB of memory.
你可以认为该 Pod 的资源请求为 0.5 CPU 和 128 MiB 内存,资源限制为 1 CPU 和 256MiB 内存。
```yaml
---
apiVersion: v1
kind: Pod
metadata:
@ -382,7 +383,7 @@ limits you defined.
而不是临时存储用量。
<!--
If a container exceeds its memory request, and the node that it runs on becomes short of
If a container exceeds its memory request and the node that it runs on becomes short of
memory overall, it is likely that the Pod the container belongs to will be
{{< glossary_tooltip text="evicted" term_id="eviction" >}}.
@ -401,7 +402,7 @@ see the [Troubleshooting](#troubleshooting) section.
要确定某容器是否会由于资源限制而无法调度或被杀死,请参阅[疑难解答](#troubleshooting)节。
<!--
## Monitoring compute & memory resource usage
### Monitoring compute & memory resource usage
The kubelet reports the resource usage of a Pod as part of the Pod
[`status`](/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status).
@ -411,7 +412,7 @@ are available in your cluster, then Pod resource usage can be retrieved either
from the [Metrics API](/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-api)
directly or from your monitoring tools.
-->
## 监控计算和内存资源用量 {#monitoring-compute-memory-resource-usage}
### 监控计算和内存资源用量 {#monitoring-compute-memory-resource-usage}
kubelet 会将 Pod 的资源使用情况作为 Pod
[`status`](/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status)
@ -431,12 +432,11 @@ locally-attached writeable devices or, sometimes, by RAM.
Pods use ephemeral local storage for scratch space, caching, and for logs.
The kubelet can provide scratch space to Pods using local ephemeral storage to
mount [`emptyDir`](/docs/concepts/storage/volumes/#emptydir)
{{< glossary_tooltip term_id="volume" text="volumes" >}} into containers.
{{< glossary_tooltip term_id="volume" text="volumes" >}} into containers.
-->
## 本地临时存储 {#local-ephemeral-storage}
<!-- feature gate LocalStorageCapacityIsolation -->
{{< feature-state for_k8s_version="v1.25" state="stable" >}}
节点通常还可以具有本地的临时性存储,由本地挂接的可写入设备或者有时也用 RAM
@ -633,12 +633,14 @@ or 400 megabytes (`400M`).
In the following example, the Pod has two containers. Each container has a request of
2GiB of local ephemeral storage. Each container has a limit of 4GiB of local ephemeral
storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and
a limit of 8GiB of local ephemeral storage.
a limit of 8GiB of local ephemeral storage. 500Mi of that limit could be
consumed by the `emptyDir` volume.
-->
在下面的例子中Pod 包含两个容器。每个容器请求 2 GiB 大小的本地临时性存储。
每个容器都设置了 4 GiB 作为其本地临时性存储的限制。
因此,整个 Pod 的本地临时性存储请求是 4 GiB且其本地临时性存储的限制为 8 GiB。
该限制值中有 500Mi 可供 `emptyDir` 卷使用。
```yaml
apiVersion: v1
@ -669,7 +671,8 @@ spec:
mountPath: "/tmp"
volumes:
- name: ephemeral
emptyDir: {}
emptyDir:
sizeLimit: 500Mi
```
<!--
@ -1017,9 +1020,9 @@ cluster-level extended resource "example.com/foo" is handled by the scheduler
extender.
- The scheduler sends a Pod to the scheduler extender only if the Pod requests
"example.com/foo".
"example.com/foo".
- The `ignoredByScheduler` field specifies that the scheduler does not check
the "example.com/foo" resource in its `PodFitsResources` predicate.
the "example.com/foo" resource in its `PodFitsResources` predicate.
-->
**示例:**
@ -1235,7 +1238,7 @@ Allocated resources:
In the preceding output, you can see that if a Pod requests more than 1.120 CPUs
or more than 6.23Gi of memory, that Pod will not fit on the node.
By looking at the "Pods" section, you can see which Pods are taking up space on
By looking at the “Pods” section, you can see which Pods are taking up space on
the node.
-->
在上面的输出中,你可以看到如果 Pod 请求超过 1.120 CPU 或者 6.23Gi 内存,节点将无法满足。
@ -1347,7 +1350,7 @@ Events:
<!--
In the preceding example, the `Restart Count: 5` indicates that the `simmemleak`
Container in the Pod was terminated and restarted five times (so far).
container in the Pod was terminated and restarted five times (so far).
The `OOMKilled` reason shows that the container tried to use more memory than its limit.
-->
在上面的例子中,`Restart Count: 5` 意味着 Pod 中的 `simmemleak`

View File

@ -60,6 +60,7 @@ Kubernetes Secrets are, by default, stored unencrypted in the API server's under
Additionally, anyone who is authorized to create a Pod in a namespace can use that access to read
any Secret in that namespace; this includes indirect access such as the ability to create a
Deployment.
In order to safely use Secrets, take at least the following steps:
1. [Enable Encryption at Rest](/docs/tasks/administer-cluster/encrypt-data/) for Secrets.
@ -190,17 +191,19 @@ the exact mechanisms for issuing and refreshing those session tokens.
There are several options to create a Secret:
- [create Secret using `kubectl` command](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- [create Secret from config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- [create Secret using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
- [Use `kubectl`](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- [Use a configuration file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- [Use the Kustomize tool](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
-->
## 使用 Secret {#working-with-secrets}
### 创建 Secret {#creating-a-secret}
- [使用 `kubectl` 命令来创建 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- [基于配置文件来创建 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- [使用 kustomize 来创建 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
创建 Secret 有以下几种可选方式:
- [使用 `kubectl`](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- [使用配置文件](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- [使用 Kustomize 工具](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
<!--
#### Constraints on Secret names and data {#restriction-names-data}
@ -255,56 +258,36 @@ Secret或其他资源的个数。
<!--
### Editing a Secret
You can edit an existing Secret using kubectl:
You can edit an existing Secret unless it is [immutable](#secret-immutable). To
edit a Secret, use one of the following methods:
-->
### 编辑 Secret {#editing-a-secret}
你可以使用 kubectl 来编辑一个已有的 Secret
```shell
kubectl edit secrets mysecret
```
你可以编辑一个已有的 Secret除非它是[不可变更的](#secret-immutable)。
要编辑一个 Secret可使用以下方法之一
<!--
This opens your default editor and allows you to update the base64 encoded Secret
values in the `data` field; for example:
* [Use `kubectl`](/docs/tasks/configmap-secret/managing-secret-using-kubectl/#edit-secret)
* [Use a configuration file](/docs/tasks/configmap-secret/managing-secret-using-config-file/#edit-secret)
-->
这一命令会启动你的默认编辑器,允许你更新 `data` 字段中存放的 base64 编码的 Secret 值;
例如:
```yaml
# 请编辑以下对象。以 `#` 开头的几行将被忽略,
# 且空文件将放弃编辑。如果保存此文件时出错,
# 则重新打开此文件时也会有相关故障。
apiVersion: v1
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
kind: Secret
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: { ... }
creationTimestamp: 2020-01-22T18:41:56Z
name: mysecret
namespace: default
resourceVersion: "164619"
uid: cfee02d6-c137-11e5-8d73-42010af00002
type: Opaque
```
* [使用 `kubectl`](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl/#edit-secret)
* [使用配置文件](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-config-file/#edit-secret)
<!--
That example manifest defines a Secret with two keys in the `data` field: `username` and `password`.
The values are Base64 strings in the manifest; however, when you use the Secret with a Pod
then the kubelet provides the _decoded_ data to the Pod and its containers.
You can also edit the data in a Secret using the [Kustomize tool](/docs/tasks/configmap-secret/managing-secret-using-kustomize/#edit-secret). However, this
method creates a new `Secret` object with the edited data.
You can package many keys and values into one Secret, or use many Secrets, whichever is convenient.
Depending on how you created the Secret, as well as how the Secret is used in
your Pods, updates to existing `Secret` objects are propagated automatically to
Pods that use the data. For more information, refer to [Mounted Secrets are updated automatically](#mounted-secrets-are-updated-automatically).
-->
这一示例清单定义了一个 Secret`data` 字段中包含两个主键:`username` 和 `password`
清单中的字段值是 Base64 字符串,不过,当你在 Pod 中使用 Secret 时kubelet 为 Pod
及其中的容器提供的是**解码**后的数据
你也可以使用
[Kustomize 工具](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kustomize/#edit-secret)编辑数据。
然而这种方法会用编辑过的数据创建新的 `Secret` 对象。
你可以在一个 Secret 中打包多个主键和数值,也可以选择使用多个 Secret
完全取决于哪种方式最方便。
根据你创建 Secret 的方式以及该 Secret 在 Pod 中被使用的方式,对已有 `Secret`
对象的更新将自动扩散到使用此数据的 Pod。有关更多信息
请参阅[自动更新挂载的 Secret](#mounted-secrets-are-updated-automatically)。
<!--
### Using a Secret
@ -706,8 +689,8 @@ in a Pod:
-->
### 以环境变量的方式使用 Secret {#using-secrets-as-environment-variables}
如果需要在 Pod 中以{{< glossary_tooltip text="环境变量" term_id="container-env-variables" >}}
的形式使用 Secret
如果需要在 Pod
中以{{< glossary_tooltip text="环境变量" term_id="container-env-variables" >}}的形式使用 Secret
<!--
1. Create a Secret (or use an existing one). Multiple Pods can reference the same Secret.
@ -865,7 +848,7 @@ The `imagePullSecrets` field for a Pod is a list of references to Secrets in the
as the Pod.
You can use an `imagePullSecrets` to pass image registry access credentials to
the kubelet. The kubelet uses this information to pull a private image on behalf of your Pod.
See the [PodSpec API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core)
See `PodSpec` in the [Pod API reference](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec)
for more information about the `imagePullSecrets` field.
-->
Pod 的 `imagePullSecrets` 字段是一个对 Pod 所在的名字空间中的 Secret
@ -880,7 +863,8 @@ kubelet 使用这个信息来替你的 Pod 拉取私有镜像。
The `imagePullSecrets` field is a list of references to secrets in the same namespace.
You can use an `imagePullSecrets` to pass a secret that contains a Docker (or other) image registry
password to the kubelet. The kubelet uses this information to pull a private image on behalf of your Pod.
See the [PodSpec API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core) for more information about the `imagePullSecrets` field.
See the [PodSpec API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core)
for more information about the `imagePullSecrets` field.
-->
#### 使用 imagePullSecrets {#using-imagepullsecrets-1}
@ -1137,6 +1121,7 @@ For example, if your actual password is `S!B\*d$zDsb=`, you should execute the c
```shell
kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb='
```
<!--
You do not need to escape special characters in passwords from files (`--from-file`).
-->
@ -1949,7 +1934,7 @@ A bootstrap type Secret has the following keys specified under `data`:
- `token-secret`: A random 16 character string as the actual token secret. Required.
- `description`: A human-readable string that describes what the token is
used for. Optional.
- `expiration`: An absolute UTC time using RFC3339 specifying when the token
- `expiration`: An absolute UTC time using [RFC3339](https://datatracker.ietf.org/doc/html/rfc3339) specifying when the token
should be expired. Optional.
- `usage-bootstrap-<usage>`: A boolean flag indicating additional usage for
the bootstrap token.
@ -1961,7 +1946,8 @@ A bootstrap type Secret has the following keys specified under `data`:
- `token-id`:由 6 个随机字符组成的字符串,作为令牌的标识符。必需。
- `token-secret`:由 16 个随机字符组成的字符串,包含实际的令牌机密。必需。
- `description`:供用户阅读的字符串,描述令牌的用途。可选。
- `expiration`:一个使用 RFC3339 来编码的 UTC 绝对时间,给出令牌要过期的时间。可选。
- `expiration`:一个使用 [RFC3339](https://datatracker.ietf.org/doc/html/rfc3339)
来编码的 UTC 绝对时间,给出令牌要过期的时间。可选。
- `usage-bootstrap-<usage>`:布尔类型的标志,用来标明启动引导令牌的其他用途。
- `auth-extra-groups`:用逗号分隔的组名列表,身份认证时除被认证为
`system:bootstrappers` 组之外,还会被添加到所列的用户组中。
@ -2148,7 +2134,6 @@ Secrets used on that node.
- Learn how to [manage Secrets using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
- Read the [API reference](/docs/reference/kubernetes-api/config-and-storage-resources/secret-v1/) for `Secret`
-->
- 有关管理和提升 Secret 安全性的指南,请参阅 [Kubernetes Secret 良好实践](/zh-cn/docs/concepts/security/secrets-good-practices)
- 学习如何[使用 `kubectl` 管理 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- 学习如何[使用配置文件管理 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-config-file/)

View File

@ -2,6 +2,7 @@
title: 镜像
content_type: concept
weight: 10
hide_summary: true # 在章节索引中单独列出
---
<!--
reviewers:
@ -10,6 +11,7 @@ reviewers:
title: Images
content_type: concept
weight: 10
hide_summary: true # Listed separately in section index
-->
<!-- overview -->
@ -33,6 +35,16 @@ This page provides an outline of the container image concept.
本页概要介绍容器镜像的概念。
{{< note >}}
<!--
If you are looking for the container images for a Kubernetes
release (such as v{{< skew latestVersion >}}, the latest minor release),
visit [Download Kubernetes](https://kubernetes.io/releases/download/).
-->
如果你正在寻找 Kubernetes 某个发行版本(如最新次要版本 v{{< skew latestVersion >}}
的容器镜像,请访问[下载 Kubernetes](/zh-cn/releases/download/)。
{{< /note >}}
<!-- body -->
<!--
@ -192,7 +204,7 @@ When you (or a controller) submit a new Pod to the API server, your cluster sets
-->
#### 默认镜像拉取策略 {#imagepullpolicy-defaulting}
当你(或控制器)向 API 服务器提交一个新的 Pod 时,你的集群会在满足特定条件时设置 `imagePullPolicy `字段:
当你(或控制器)向 API 服务器提交一个新的 Pod 时,你的集群会在满足特定条件时设置 `imagePullPolicy` 字段:
<!--
- if you omit the `imagePullPolicy` field, and the tag for the container image is
@ -346,20 +358,11 @@ These options are explained in more detail below.
Specific instructions for setting credentials depends on the container runtime and registry you chose to use. You should refer to your solution's documentation for the most accurate information.
-->
### 配置 Node 对私有仓库认证 {configuring-nodes-to-authenticate-to-a-private-registry}
### 配置 Node 对私有仓库认证 {#configuring-nodes-to-authenticate-to-a-private-registry}
设置凭据的具体说明取决于你选择使用的容器运行时和仓库。
你应该参考解决方案的文档来获取最准确的信息。
<!--
Default Kubernetes only supports the `auths` and `HttpHeaders` section in Docker configuration.
Docker credential helpers (`credHelpers` or `credsStore`) are not supported.
-->
{{< note >}}
Kubernetes 默认仅支持 Docker 配置中的 `auths``HttpHeaders` 部分,
不支持 Docker 凭据辅助程序(`credHelpers` 或 `credsStore`)。
{{< /note >}}
<!--
For an example of configuring a private container image registry, see the
[Pull an Image from a Private Registry](/docs/tasks/configure-pod-container/pull-image-private-registry)
@ -592,7 +595,7 @@ reference a Secret in the same namespace.
For example:
-->
#### 在 Pod 中引用 ImagePullSecrets {referring-to-an-imagepullsecrets-on-a-pod}
#### 在 Pod 中引用 ImagePullSecrets {#referring-to-an-imagepullsecrets-on-a-pod}
现在,在创建 Pod 时,可以在 Pod 定义中增加 `imagePullSecrets` 部分来引用该 Secret。
`imagePullSecrets` 数组中的每一项只能引用同一名字空间中的 Secret。
@ -705,10 +708,8 @@ common use cases and suggested solutions.
<!--
If you need access to multiple registries, you can create one secret for each registry.
Kubelet will merge any `imagePullSecrets` into a single virtual `.docker/config.json`
-->
如果你需要访问多个仓库,可以为每个仓库创建一个 Secret。
`kubelet` 将所有 `imagePullSecrets` 合并为一个虚拟的 `.docker/config.json` 文件。
## {{% heading "whatsnext" %}}

View File

@ -192,8 +192,8 @@ nested fields specific to that object. The [Kubernetes API Reference](/docs/refe
can help you find the spec format for all of the objects you can create using Kubernetes.
-->
对每个 Kubernetes 对象而言,其 `spec` 之精确格式都是不同的,包含了特定于该对象的嵌套字段。
我们能在 [Kubernetes API 参考](/zh-cn/docs/reference/kubernetes-api/)
找到我们想要在 Kubernetes 上创建的任何对象的规约格式。
[Kubernetes API 参考](/zh-cn/docs/reference/kubernetes-api/)可以帮助你找到想要使用
Kubernetes 创建的所有对象的规约格式。
<!--
For example, see the [`spec` field](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec)
@ -224,11 +224,17 @@ detail the structure of that `.status` field, and its content for each different
## {{% heading "whatsnext" %}}
<!--
* Learn about the most important basic Kubernetes objects, such as [Pod](/docs/concepts/workloads/pods/).
* Learn about [controllers](/docs/concepts/architecture/controller/) in Kubernetes.
* [Using the Kubernetes API](/docs/reference/using-api/) explains some more API concepts.
Learn more about the following:
* [Pods](https://kubernetes.io/docs/concepts/workloads/pods/) which are the most important basic Kubernetes objects.
* [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) objects.
* [Controllers](https://kubernetes.io/docs/concepts/architecture/controller/) in Kubernetes.
* [Kubernetes API overview](https://kubernetes.io/docs/reference/using-api/) which explains some more API concepts.
* [kubectl](https://kubernetes.io/docs/reference/kubectl/) and [kubectl commands](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands).
-->
* 了解最重要的 Kubernetes 基本对象,例如 [Pod](/zh-cn/docs/concepts/workloads/pods/)。
* 了解 Kubernetes 中的[控制器](/zh-cn/docs/concepts/architecture/controller/)。
* [使用 Kubernetes API](/zh-cn/docs/reference/using-api/) 一节解释了一些 API 概念。
进一步了解以下信息:
* 最重要的 Kubernetes 基本对象 [Pod](/zh-cn/docs/concepts/workloads/pods/)。
* [Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/) 对象。
* Kubernetes 中的[控制器](/zh-cn/docs/concepts/architecture/controller/)。
* 解释了一些 API 概念的 [Kubernetes API 概述](/zh-cn/docs/reference/using-api/)。
* [kubectl](/zh-cn/docs/reference/kubectl/) 和 [kubectl 命令](/docs/reference/generated/kubectl/kubectl-commands)。

View File

@ -26,7 +26,7 @@ For example, you can only have one Pod named `myapp-1234` within the same [names
每个 Kubernetes 对象也有一个 [**UID**](#uids) 来标识在整个集群中的唯一性。
比如,在同一个[名字空间](/zh-cn/docs/concepts/overview/working-with-objects/namespaces/)
中有一个名为 `myapp-1234` 的 Pod但是可以命名一个 Pod 和一个 Deployment 同为 `myapp-1234`
只能有一个名为 `myapp-1234` 的 Pod但是可以命名一个 Pod 和一个 Deployment 同为 `myapp-1234`
<!--
For non-unique user-provided attributes, Kubernetes provides [labels](/docs/concepts/overview/working-with-objects/labels/) and [annotations](/docs/concepts/overview/working-with-objects/annotations/).
@ -178,8 +178,8 @@ UUID 是标准化的,见 ISO/IEC 9834-8 和 ITU-T X.667。
## {{% heading "whatsnext" %}}
<!--
* Read about [labels](/docs/concepts/overview/working-with-objects/labels/) in Kubernetes.
* Read about [labels](/docs/concepts/overview/working-with-objects/labels/) and [annotations](/docs/concepts/overview/working-with-objects/annotations/) in Kubernetes.
* See the [Identifiers and Names in Kubernetes](https://git.k8s.io/design-proposals-archive/architecture/identifiers.md) design document.
-->
* 进一步了解 Kubernetes [标签](/zh-cn/docs/concepts/overview/working-with-objects/labels/)
* 进一步了解 Kubernetes [标签](/zh-cn/docs/concepts/overview/working-with-objects/labels/)和[注解](/zh-cn/docs/concepts/overview/working-with-objects/annotations/)。
* 参阅 [Kubernetes 标识符和名称](https://git.k8s.io/design-proposals-archive/architecture/identifiers.md)的设计文档

View File

@ -154,7 +154,7 @@ map[cpu:250m memory:120Mi]
If a [ResourceQuota](/docs/concepts/policy/resource-quotas/) is defined, the sum of container requests as well as the
`overhead` field are counted.
-->
如果定义了 [ResourceQuata](/zh-cn/docs/concepts/policy/resource-quotas/),
如果定义了 [ResourceQuota](/zh-cn/docs/concepts/policy/resource-quotas/),
则容器请求的总量以及 `overhead` 字段都将计算在内。
<!--

View File

@ -134,7 +134,7 @@ The name of a Service object must be a valid
[RFC 1035 label name](/docs/concepts/overview/working-with-objects/names#rfc-1035-label-names).
For example, suppose you have a set of Pods where each listens on TCP port 9376
and contains a label `app=MyApp`:
and contains a label `app.kubernetes.io/name=MyApp`:
-->
## 定义 Service {#defining-a-service}
@ -143,7 +143,7 @@ Service 在 Kubernetes 中是一个 REST 对象,和 Pod 类似。
Service 对象的名称必须是合法的
[RFC 1035 标签名称](/zh-cn/docs/concepts/overview/working-with-objects/names#rfc-1035-label-names)。
例如,假定有一组 Pod它们对外暴露了 9376 端口,同时还被打上 `app=MyApp` 标签:
例如,假定有一组 Pod它们对外暴露了 9376 端口,同时还被打上 `app.kubernetes.io/name=MyApp` 标签:
```yaml
apiVersion: v1
@ -582,7 +582,7 @@ thus is only available to use as-is.
Note that the kube-proxy starts up in different modes, which are determined by its configuration.
- The kube-proxy's configuration is done via a ConfigMap, and the ConfigMap for kube-proxy
effectively deprecates the behaviour for almost all of the flags for the kube-proxy.
effectively deprecates the behavior for almost all of the flags for the kube-proxy.
- The ConfigMap for the kube-proxy does not support live reloading of configuration.
- The ConfigMap parameters for the kube-proxy cannot all be validated and verified on startup.
For example, if your operating system doesn't allow you to run iptables commands,
@ -603,7 +603,7 @@ Note that the kube-proxy starts up in different modes, which are determined by i
<!--
### User space proxy mode {#proxy-mode-userspace}
In this mode, kube-proxy watches the Kubernetes control plane for the addition and
In this (legacy) mode, kube-proxy watches the Kubernetes control plane for the addition and
removal of Service and Endpoint objects. For each Service it opens a
port (randomly chosen) on the local node. Any connections to this "proxy port"
are proxied to one of the Service's backend Pods (as reported via
@ -620,7 +620,7 @@ By default, kube-proxy in userspace mode chooses a backend via a round-robin alg
-->
### userspace 代理模式 {#proxy-mode-userspace}
这种模式kube-proxy 会监视 Kubernetes 控制平面对 Service 对象和 Endpoints 对象的添加和移除操作。
这种(遗留)模式kube-proxy 会监视 Kubernetes 控制平面对 Service 对象和 Endpoints 对象的添加和移除操作。
对每个 Service它会在本地 Node 上打开一个端口(随机选择)。
任何连接到“代理端口”的请求,都会被代理到 Service 的后端 `Pods` 中的某个上面(如 `Endpoints` 所报告的一样)。
使用哪个后端 Pod是 kube-proxy 基于 `SessionAffinity` 来确定的。
@ -639,7 +639,7 @@ In this mode, kube-proxy watches the Kubernetes control plane for the addition a
removal of Service and Endpoint objects. For each Service, it installs
iptables rules, which capture traffic to the Service's `clusterIP` and `port`,
and redirect that traffic to one of the Service's
backend sets. For each Endpoint object, it installs iptables rules which
backend sets. For each Endpoint object, it installs iptables rules which
select a backend Pod.
By default, kube-proxy in iptables mode chooses a backend at random.
@ -701,7 +701,7 @@ The IPVS proxy mode is based on netfilter hook function that is similar to
iptables mode, but uses a hash table as the underlying data structure and works
in the kernel space.
That means kube-proxy in IPVS mode redirects traffic with lower latency than
kube-proxy in iptables mode, with much better performance when synchronising
kube-proxy in iptables mode, with much better performance when synchronizing
proxy rules. Compared to the other proxy modes, IPVS mode also supports a
higher throughput of network traffic.
@ -819,7 +819,7 @@ also start and end with an alphanumeric character.
For example, the names `123-abc` and `web` are valid, but `123_abc` and `-web` are not.
-->
与一般的Kubernetes名称一样端口名称只能包含小写字母数字字符 和 `-`
与一般的 Kubernetes 名称一样,端口名称只能包含小写字母数字字符和 `-`。
端口名称还必须以字母数字字符开头和结尾。
例如,名称 `123-abc``web` 有效,但是 `123_abc``-web` 无效。
@ -874,7 +874,7 @@ endpoints, the kube-proxy does not forward any traffic for the relevant Service.
<!--
If you enable the `ProxyTerminatingEndpoints`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
`ProxyTerminatingEndpoints` for the kube-proxy, the kube-proxy checks if the node
for the kube-proxy, the kube-proxy checks if the node
has local endpoints and whether or not all the local endpoints are marked as terminating.
-->
如果你启用了 kube-proxy 的 `ProxyTerminatingEndpoints`
@ -934,7 +934,11 @@ Kubernetes 支持两种基本的服务发现模式 —— 环境变量和 DNS。
### Environment variables
When a Pod is run on a Node, the kubelet adds a set of environment variables
for each active Service. It adds `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables, where the Service name is upper-cased and dashes are converted to underscores. It also supports variables (see [makeLinkVariables](https://github.com/kubernetes/kubernetes/blob/dd2d12f6dc0e654c15d5db57a5f9f6ba61192726/pkg/kubelet/envvars/envvars.go#L72)) that are compatible with Docker Engine's "_[legacy container links](https://docs.docker.com/network/links/)_" feature.
for each active Service. It adds `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables,
where the Service name is upper-cased and dashes are converted to underscores.
It also supports variables (see [makeLinkVariables](https://github.com/kubernetes/kubernetes/blob/dd2d12f6dc0e654c15d5db57a5f9f6ba61192726/pkg/kubelet/envvars/envvars.go#L72))
that are compatible with Docker Engine's
"_[legacy container links](https://docs.docker.com/network/links/)_" feature.
For example, the Service `redis-primary` which exposes TCP port 6379 and has been
allocated cluster IP address 10.0.0.11, produces the following environment
@ -1002,7 +1006,7 @@ create a DNS record for `my-service.my-ns`. Pods in the `my-ns` namespace
should be able to find the service by doing a name lookup for `my-service`
(`my-service.my-ns` would also work).
Pods in other Namespaces must qualify the name as `my-service.my-ns`. These names
Pods in other namespaces must qualify the name as `my-service.my-ns`. These names
will resolve to the cluster IP assigned for the Service.
-->
例如,如果你在 Kubernetes 命名空间 `my-ns` 中有一个名为 `my-service` 的服务,
@ -1145,7 +1149,10 @@ Kubernetes `ServiceTypes` 允许指定你所需要的 Service 类型。
{{< /note >}}
<!--
You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster. It lets you consolidate your routing rules into a single resource as it can expose multiple services under the same IP address.
You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service.
Ingress is not a Service type, but it acts as the entry point for your cluster.
It lets you consolidate your routing rules into a single resource as it can expose multiple
services under the same IP address.
-->
你也可以使用 [Ingress](/zh-cn/docs/concepts/services-networking/ingress/) 来暴露自己的服务。
Ingress 不是一种服务类型,但它充当集群的入口点。
@ -1260,10 +1267,6 @@ kube-proxy only selects the loopback interface for NodePort Services.
The default for `--nodeport-addresses` is an empty list.
This means that kube-proxy should consider all available network interfaces for NodePort.
(That's also compatible with earlier Kubernetes releases.)
Note that this Service is visible as `<NodeIP>:spec.ports[*].nodePort`
and `.spec.clusterIP:spec.ports[*].port`.
If the `--nodeport-addresses` flag for kube-proxy or the equivalent field
in the kube-proxy configuration file is set, `<NodeIP>` would be a filtered node IP address (or possibly IP addresses).
-->
此标志采用逗号分隔的 IP 段列表(例如 `10.0.0.0/8`、`192.0.2.0/25`)来指定 kube-proxy 应视为该节点本地的
IP 地址范围。
@ -1273,9 +1276,17 @@ IP 地址范围。
`--nodeport-addresses` 的默认值是一个空列表。
这意味着 kube-proxy 应考虑 NodePort 的所有可用网络接口。
(这也与早期的 Kubernetes 版本兼容。)
请注意,此服务显示为 `<NodeIP>:spec.ports[*].nodePort``.spec.clusterIP:spec.ports[*].port`
{{< note >}}
<!--
This Service is visible as `<NodeIP>:spec.ports[*].nodePort` and `.spec.clusterIP:spec.ports[*].port`.
If the `--nodeport-addresses` flag for kube-proxy or the equivalent field
in the kube-proxy configuration file is set, `<NodeIP>` would be a filtered node IP address (or possibly IP addresses).
-->
此服务呈现为 `<NodeIP>:spec.ports[*].nodePort``.spec.clusterIP:spec.ports[*].port`
如果设置了 kube-proxy 的 `--nodeport-addresses` 标志或 kube-proxy 配置文件中的等效字段,
`<NodeIP>` 将是过滤的节点 IP 地址(或可能的 IP 地址)。
{{< /note >}}
<!--
### Type LoadBalancer {#loadbalancer}
@ -1317,7 +1328,8 @@ status:
```
<!--
Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.
Traffic from the external load balancer is directed at the backend Pods.
The cloud provider decides how it is load balanced.
-->
来自外部负载均衡器的流量将直接重定向到后端 Pod 上,不过实际它们是如何工作的,这要依赖于云提供商。
@ -1439,13 +1451,13 @@ LoadBalancer 类型的服务继续分配节点端口。
`spec.loadBalancerClass` enables you to use a load balancer implementation other than the cloud provider default.
By default, `spec.loadBalancerClass` is `nil` and a `LoadBalancer` type of Service uses
the cloud provider's default load balancer implementation if the cluster is configured with
a cloud provider using the `--cloud-provider` component flag.
a cloud provider using the `--cloud-provider` component flag.
If `spec.loadBalancerClass` is specified, it is assumed that a load balancer
implementation that matches the specified class is watching for Services.
Any default load balancer implementation (for example, the one provided by
the cloud provider) will ignore Services that have this field set.
`spec.loadBalancerClass` can be set on a Service of type `LoadBalancer` only.
Once set, it cannot be changed.
Once set, it cannot be changed.
-->
`spec.loadBalancerClass` 允许你不使用云提供商的默认负载均衡器实现,转而使用指定的负载均衡器实现。
默认情况下,`.spec.loadBalancerClass` 的取值是 `nil`,如果集群使用 `--cloud-provider` 配置了云提供商,
@ -1469,7 +1481,8 @@ Unprefixed names are reserved for end-users.
In a mixed environment it is sometimes necessary to route traffic from Services inside the same
(virtual) network address block.
In a split-horizon DNS environment you would need two Services to be able to route both external and internal traffic to your endpoints.
In a split-horizon DNS environment you would need two Services to be able to route both external
and internal traffic to your endpoints.
To set an internal load balancer, add one of the following annotations to your Service
depending on the cloud Service provider you're using.
@ -1667,7 +1680,9 @@ TCP 和 SSL 选择第4层代理ELB 转发流量而不修改报头。
In the above example, if the Service contained three ports, `80`, `443`, and
`8443`, then `443` and `8443` would use the SSL certificate, but `80` would be proxied HTTP.
From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services.
From Kubernetes v1.9 onwards you can use
[predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html)
with HTTPS or SSL listeners for your Services.
To see which policies are available for use, you can use the `aws` command line tool:
-->
在上例中,如果服务包含 `80`、`443` 和 `8443` 三个端口, 那么 `443``8443` 将使用 SSL 证书,
@ -1777,7 +1792,8 @@ Connection draining for Classic ELBs can be managed with the annotation
`service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled` set
to the value of `"true"`. The annotation
`service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout` can
also be used to set maximum time, in seconds, to keep the existing connections open before deregistering the instances.
also be used to set maximum time, in seconds, to keep the existing connections open before
deregistering the instances.
-->
#### AWS 上的连接排空
@ -1879,7 +1895,8 @@ To use a Network Load Balancer on AWS, use the annotation `service.beta.kubernet
{{< note >}}
<!--
NLB only works with certain instance classes; see the [AWS documentation](http://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets)
NLB only works with certain instance classes; see the
[AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets)
on Elastic Load Balancing for a list of supported instance types.
-->
NLB 仅适用于某些实例类。有关受支持的实例类型的列表,
@ -1901,9 +1918,9 @@ the NLB Target Group's health check on the auto-assigned
`.spec.healthCheckNodePort` and not receive any traffic.
-->
与经典弹性负载均衡器不同网络负载均衡器NLB将客户端的 IP 地址转发到该节点。
如果服务的 `.spec.externalTrafficPolicy` 设置为 `Cluster` 则客户端的IP地址不会传达到最终的 Pod。
如果服务的 `.spec.externalTrafficPolicy` 设置为 `Cluster` ,则客户端的 IP 地址不会传达到最终的 Pod。
通过将 `.spec.externalTrafficPolicy` 设置为 `Local`客户端IP地址将传播到最终的 Pod
通过将 `.spec.externalTrafficPolicy` 设置为 `Local`,客户端 IP 地址将传播到最终的 Pod
但这可能导致流量分配不均。
没有针对特定 LoadBalancer 服务的任何 Pod 的节点将无法通过自动分配的
`.spec.healthCheckNodePort` 进行 NLB 目标组的运行状况检查,并且不会收到任何流量。
@ -2066,7 +2083,8 @@ spec:
{{< note >}}
<!--
ExternalName accepts an IPv4 address string, but as a DNS name comprised of digits, not as an IP address. ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx because ExternalName
ExternalName accepts an IPv4 address string, but as a DNS name comprised of digits, not as an IP address.
ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx because ExternalName
is intended to specify a canonical DNS name. To hardcode an IP address, consider using
[headless Services](#headless-services).
-->
@ -2091,9 +2109,13 @@ Service's `type`.
{{< warning >}}
<!--
You may have trouble using ExternalName for some common protocols, including HTTP and HTTPS. If you use ExternalName then the hostname used by clients inside your cluster is different from the name that the ExternalName references.
You may have trouble using ExternalName for some common protocols, including HTTP and HTTPS.
If you use ExternalName then the hostname used by clients inside your cluster is different from
the name that the ExternalName references.
For protocols that use hostnames this difference may lead to errors or unexpected responses. HTTP requests will have a `Host:` header that the origin server does not recognize; TLS servers will not be able to provide a certificate matching the hostname that the client connected to.
For protocols that use hostnames this difference may lead to errors or unexpected responses.
HTTP requests will have a `Host:` header that the origin server does not recognize;
TLS servers will not be able to provide a certificate matching the hostname that the client connected to.
-->
对于一些常见的协议,包括 HTTP 和 HTTPS你使用 ExternalName 可能会遇到问题。
如果你使用 ExternalName那么集群内客户端使用的主机名与 ExternalName 引用的名称不同。
@ -2191,7 +2213,7 @@ The previous information should be sufficient for many people who want to
use Services. However, there is a lot going on behind the scenes that may be
worth understanding.
-->
## 虚拟IP实施 {#the-gory-details-of-virtual-ips}
## 虚拟 IP 实施 {#the-gory-details-of-virtual-ips}
对很多想使用 Service 的人来说,前面的信息应该足够了。
然而,有很多内部原理性的内容,还是值得去理解的。
@ -2219,7 +2241,7 @@ fail with a message indicating an IP address could not be allocated.
In the control plane, a background controller is responsible for creating that
map (needed to support migrating from older versions of Kubernetes that used
in-memory locking). Kubernetes also uses controllers to check for invalid
assignments (eg due to administrator intervention) and for cleaning up allocated
assignments (e.g. due to administrator intervention) and for cleaning up allocated
IP addresses that are no longer used by any Services.
-->
### 避免冲突 {#avoiding-collisions}
@ -2374,8 +2396,11 @@ through a load-balancer, though in those cases the client IP does get altered.
#### IPVS
<!--
iptables operations slow down dramatically in large scale cluster e.g 10,000 Services.
IPVS is designed for load balancing and based on in-kernel hash tables. So you can achieve performance consistency in large number of Services from IPVS-based kube-proxy. Meanwhile, IPVS-based kube-proxy has more sophisticated load balancing algorithms (least conns, locality, weighted, persistence).
iptables operations slow down dramatically in large scale cluster e.g. 10,000 Services.
IPVS is designed for load balancing and based on in-kernel hash tables.
So you can achieve performance consistency in large number of Services from IPVS-based kube-proxy.
Meanwhile, IPVS-based kube-proxy has more sophisticated load balancing algorithms
(least conns, locality, weighted, persistence).
-->
在大规模集群(例如 10000 个服务iptables 操作会显着降低速度。
IPVS 专为负载均衡而设计,并基于内核内哈希表。
@ -2386,14 +2411,15 @@ IPVS 专为负载均衡而设计,并基于内核内哈希表。
## API Object
Service is a top-level resource in the Kubernetes REST API. You can find more details
about the API object at: [Service API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core).
about the [Service API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core).
## Supported protocols {#protocol-support}
-->
## API 对象 {#api-object}
Service 是 Kubernetes REST API 中的顶级资源。你可以在以下位置找到有关 API 对象的更多详细信息:
[Service 对象 API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core).
Service 是 Kubernetes REST API 中的顶级资源。你可以找到有关
[Service 对象 API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core)
的更多详细信息。
## 受支持的协议 {#protocol-support}
@ -2437,11 +2463,12 @@ provider offering this facility. (Most do not).
{{< warning >}}
<!--
The support of multihomed SCTP associations requires that the CNI plugin can support the assignment of multiple interfaces and IP addresses to a Pod.
The support of multihomed SCTP associations requires that the CNI plugin can support the
assignment of multiple interfaces and IP addresses to a Pod.
NAT for multihomed SCTP associations requires special logic in the corresponding kernel modules.
-->
支持多宿主SCTP关联要求 CNI 插件能够支持为一个 Pod 分配多个接口和 IP 地址。
支持多宿主 SCTP 关联要求 CNI 插件能够支持为一个 Pod 分配多个接口和 IP 地址。
用于多宿主 SCTP 关联的 NAT 在相应的内核模块中需要特殊的逻辑。
{{< /warning >}}
@ -2483,7 +2510,7 @@ HTTP/HTTPS 反向代理,并将其转发到该服务的 Endpoints。
{{< note >}}
<!--
You can also use {{< glossary_tooltip term_id="ingress" >}} in place of Service
to expose HTTP / HTTPS Services.
to expose HTTP/HTTPS Services.
-->
你还可以使用 {{< glossary_tooltip text="Ingress" term_id="ingress" >}} 代替
Service 来公开 HTTP/HTTPS 服务。
@ -2522,11 +2549,10 @@ followed by the data from the client.
## {{% heading "whatsnext" %}}
<!--
* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)
* Follow the [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/) tutorial
* Read about [Ingress](/docs/concepts/services-networking/ingress/)
* Read about [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/)
-->
* 阅读[使用服务访问应用](/zh-cn/docs/concepts/services-networking/connect-applications-service/)
* 遵循[使用 Service 连接到应用](/zh-cn/docs/tutorials/services/connect-applications-service/)教程
* 阅读了解 [Ingress](/zh-cn/docs/concepts/services-networking/ingress/)
* 阅读了解[端点切片Endpoint Slices](/zh-cn/docs/concepts/services-networking/endpoint-slices/)

View File

@ -51,8 +51,8 @@ Currently, the following types of volume sources can be projected:
All sources are required to be in the same namespace as the Pod. For more details,
see the [all-in-one volume](https://git.k8s.io/design-proposals-archive/node/all-in-one-volume.md) design document.
-->
所有的卷源都要求处于 Pod 所在的同一个名字空间内。进一步的详细信息,可参考
[一体化卷](https://git.k8s.io/design-proposals-archive/node/all-in-one-volume.md)设计文档。
所有的卷源都要求处于 Pod 所在的同一个名字空间内。更多详细信息,
可参考[一体化卷](https://git.k8s.io/design-proposals-archive/node/all-in-one-volume.md)设计文档。
<!--
### Example configuration with a secret, a downwardAPI, and a configMap {#example-configuration-secret-downwardapi-configmap}
@ -86,15 +86,13 @@ parameters are nearly the same with two exceptions:
<!--
## serviceAccountToken projected volumes {#serviceaccounttoken}
When the `TokenRequestProjection` feature is enabled, you can inject the token
for the current [service account](/docs/reference/access-authn-authz/authentication/#service-account-tokens)
You can inject the token for the current [service account](/docs/reference/access-authn-authz/authentication/#service-account-tokens)
into a Pod at a specified path. For example:
-->
## serviceAccountToken 投射卷 {#serviceaccounttoken}
`TokenRequestProjection` 特性被启用时,你可以将当前
[服务账号](/zh-cn/docs/reference/access-authn-authz/authentication/#service-account-tokens)
的令牌注入到 Pod 中特定路径下。例如:
你可以将当前[服务账号](/zh-cn/docs/reference/access-authn-authz/authentication/#service-account-tokens)的令牌注入到
Pod 中特定路径下。例如:
{{< codenew file="pods/storage/projected-service-account-token.yaml" >}}
@ -159,6 +157,39 @@ ownership.
中设置了 `RunAsUser` 属性的 Linux Pod 中,投射文件具有正确的属主属性设置,
其中包含了容器用户属主。
<!--
When all containers in a pod have the same `runAsUser` set in their
[`PodSecurityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context)
or container
[`SecurityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context-1),
then the kubelet ensures that the contents of the `serviceAccountToken` volume are owned by that user,
and the token file has its permission mode set to `0600`.
-->
当 Pod 中的所有容器在其
[`PodSecurityContext`](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context)
或容器
[`SecurityContext`](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context-1)
中设置了相同的 `runAsUser`kubelet 将确保 `serviceAccountToken`
卷的内容归该用户所有,并且令牌文件的权限模式会被设置为 `0600`
{{< note >}}
<!--
{{< glossary_tooltip text="Ephemeral containers" term_id="ephemeral-container" >}}
added to a Pod after it is created do *not* change volume permissions that were
set when the pod was created.
If a Pod's `serviceAccountToken` volume permissions were set to `0600` because
all other containers in the Pod have the same `runAsUser`, ephemeral
containers must use the same `runAsUser` to be able to read the token.
-->
在某 Pod 被创建后为其添加的{{< glossary_tooltip text="临时容器" term_id="ephemeral-container" >}}**不会**更改创建该
Pod 时设置的卷权限。
如果 Pod 的 `serviceAccountToken` 卷权限被设为 `0600`
是因为 Pod 中的其他所有容器都具有相同的 `runAsUser`
则临时容器必须使用相同的 `runAsUser` 才能读取令牌。
{{< /note >}}
### Windows
<!--

View File

@ -1,19 +1,27 @@
---
title: 卷快照
content_type: concept
weight: 40
weight: 60
---
<!--
reviewers:
- saad-ali
- thockin
- msau42
- jingxu97
- xing-yang
- yuxiangqian
title: Volume Snapshots
content_type: concept
weight: 40
weight: 60
-->
<!-- overview -->
<!--
In Kubernetes, a _VolumeSnapshot_ represents a snapshot of a volume on a storage system. This document assumes that you are already familiar with Kubernetes [persistent volumes](/docs/concepts/storage/persistent-volumes/).
In Kubernetes, a _VolumeSnapshot_ represents a snapshot of a volume on a storage
system. This document assumes that you are already familiar with Kubernetes
[persistent volumes](/docs/concepts/storage/persistent-volumes/).
-->
在 Kubernetes 中,**卷快照** 是一个存储系统上卷的快照,本文假设你已经熟悉了 Kubernetes
的[持久卷](/zh-cn/docs/concepts/storage/persistent-volumes/)。
@ -23,34 +31,45 @@ In Kubernetes, a _VolumeSnapshot_ represents a snapshot of a volume on a storage
<!--
## Introduction
-->
## 介绍 {#introduction}
<!--
Similar to how API resources `PersistentVolume` and `PersistentVolumeClaim` are used to provision volumes for users and administrators, `VolumeSnapshotContent` and `VolumeSnapshot` API resources are provided to create volume snapshots for users and administrators.
Similar to how API resources `PersistentVolume` and `PersistentVolumeClaim` are
used to provision volumes for users and administrators, `VolumeSnapshotContent`
and `VolumeSnapshot` API resources are provided to create volume snapshots for
users and administrators.
-->
`PersistentVolume``PersistentVolumeClaim` 这两个 API 资源用于给用户和管理员制备卷类似,
`VolumeSnapshotContent``VolumeSnapshot` 这两个 API 资源用于给用户和管理员创建卷快照。
<!--
A `VolumeSnapshotContent` is a snapshot taken from a volume in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a PersistentVolume is a cluster resource.
A `VolumeSnapshotContent` is a snapshot taken from a volume in the cluster that
has been provisioned by an administrator. It is a resource in the cluster just
like a PersistentVolume is a cluster resource.
-->
`VolumeSnapshotContent` 是从一个卷获取的一种快照,该卷由管理员在集群中进行制备。
就像持久卷PersistentVolume是集群的资源一样它也是集群中的资源。
<!--
A `VolumeSnapshot` is a request for snapshot of a volume by a user. It is similar to a PersistentVolumeClaim.
A `VolumeSnapshot` is a request for snapshot of a volume by a user. It is similar
to a PersistentVolumeClaim.
-->
`VolumeSnapshot` 是用户对于卷的快照的请求。它类似于持久卷声明PersistentVolumeClaim
<!--
`VolumeSnapshotClass` allows you to specify different attributes belonging to a `VolumeSnapshot`. These attributes may differ among snapshots taken from the same volume on the storage system and therefore cannot be expressed by using the same `StorageClass` of a `PersistentVolumeClaim`.
`VolumeSnapshotClass` allows you to specify different attributes belonging to a
`VolumeSnapshot`. These attributes may differ among snapshots taken from the same
volume on the storage system and therefore cannot be expressed by using the same
`StorageClass` of a `PersistentVolumeClaim`.
-->
`VolumeSnapshotClass` 允许指定属于 `VolumeSnapshot` 的不同属性。在从存储系统的相同卷上获取的快照之间,
这些属性可能有所不同,因此不能通过使用与 `PersistentVolumeClaim` 相同的 `StorageClass` 来表示。
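<!--
As an illustrative sketch (the class name and CSI driver below are assumptions), a VolumeSnapshotClass
might look like this; the exact `parameters` depend on your storage provider.
-->
作为示意(下面的类名和 CSI 驱动均为假设),一个 VolumeSnapshotClass 可能如下所示;
具体的 `parameters` 取决于你的存储提供商:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass    # 假设的名称
driver: hostpath.csi.k8s.io       # 假设的 CSI 驱动
deletionPolicy: Delete
parameters: {}
```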
<!--
Volume snapshots provide Kubernetes users with a standardized way to copy a volume's contents at a particular point in time without creating an entirely new volume. This functionality enables, for example, database administrators to backup databases before performing edit or delete modifications.
Volume snapshots provide Kubernetes users with a standardized way to copy a volume's
contents at a particular point in time without creating an entirely new volume. This
functionality enables, for example, database administrators to backup databases before
performing edit or delete modifications.
-->
卷快照能力为 Kubernetes 用户提供了一种标准的方式来在指定时间点复制卷的内容,并且不需要创建全新的卷。
例如,这一功能使得数据库管理员能够在执行编辑或删除之类的修改之前对数据库执行备份。
@ -61,34 +80,49 @@ Users need to be aware of the following when using this feature:
当使用该功能时,用户需要注意以下几点:
<!--
* API Objects `VolumeSnapshot`, `VolumeSnapshotContent`, and `VolumeSnapshotClass` are {{< glossary_tooltip term_id="CustomResourceDefinition" text="CRDs" >}}, not part of the core API.
* `VolumeSnapshot` support is only available for CSI drivers.
* As part of the deployment process of `VolumeSnapshot`, the Kubernetes team provides a snapshot controller to be deployed into the control plane, and a sidecar helper container called csi-snapshotter to be deployed together with the CSI driver. The snapshot controller watches `VolumeSnapshot` and `VolumeSnapshotContent` objects and is responsible for the creation and deletion of `VolumeSnapshotContent` object. The sidecar csi-snapshotter watches `VolumeSnapshotContent` objects and triggers `CreateSnapshot` and `DeleteSnapshot` operations against a CSI endpoint.
* There is also a validating webhook server which provides tightened validation on snapshot objects. This should be installed by the Kubernetes distros along with the snapshot controller and CRDs, not CSI drivers. It should be installed in all Kubernetes clusters that has the snapshot feature enabled.
* CSI drivers may or may not have implemented the volume snapshot functionality. The CSI drivers that have provided support for volume snapshot will likely use the csi-snapshotter. See [CSI Driver documentation](https://kubernetes-csi.github.io/docs/) for details.
* The CRDs and snapshot controller installations are the responsibility of the Kubernetes distribution.
- API Objects `VolumeSnapshot`, `VolumeSnapshotContent`, and `VolumeSnapshotClass`
are {{< glossary_tooltip term_id="CustomResourceDefinition" text="CRDs" >}}, not
part of the core API.
- `VolumeSnapshot` support is only available for CSI drivers.
- As part of the deployment process of `VolumeSnapshot`, the Kubernetes team provides
a snapshot controller to be deployed into the control plane, and a sidecar helper
container called csi-snapshotter to be deployed together with the CSI driver.
The snapshot controller watches `VolumeSnapshot` and `VolumeSnapshotContent` objects
and is responsible for the creation and deletion of `VolumeSnapshotContent` object.
The sidecar csi-snapshotter watches `VolumeSnapshotContent` objects and triggers
`CreateSnapshot` and `DeleteSnapshot` operations against a CSI endpoint.
- There is also a validating webhook server which provides tightened validation on
snapshot objects. This should be installed by the Kubernetes distros along with
the snapshot controller and CRDs, not CSI drivers. It should be installed in all
Kubernetes clusters that have the snapshot feature enabled.
- CSI drivers may or may not have implemented the volume snapshot functionality.
The CSI drivers that have provided support for volume snapshot will likely use
the csi-snapshotter. See [CSI Driver documentation](https://kubernetes-csi.github.io/docs/) for details.
- The CRDs and snapshot controller installations are the responsibility of the Kubernetes distribution.
-->
* API 对象 `VolumeSnapshot``VolumeSnapshotContent` 和 `VolumeSnapshotClass`
- API 对象 `VolumeSnapshot``VolumeSnapshotContent` 和 `VolumeSnapshotClass`
是 {{< glossary_tooltip term_id="CustomResourceDefinition" text="CRD" >}}
不属于核心 API。
* `VolumeSnapshot` 支持仅可用于 CSI 驱动。
* 作为 `VolumeSnapshot` 部署过程的一部分Kubernetes 团队提供了一个部署于控制平面的快照控制器,
- `VolumeSnapshot` 支持仅可用于 CSI 驱动。
- 作为 `VolumeSnapshot` 部署过程的一部分Kubernetes 团队提供了一个部署于控制平面的快照控制器,
并且提供了一个叫做 `csi-snapshotter` 的边车Sidecar辅助容器和 CSI 驱动程序一起部署。
快照控制器监视 `VolumeSnapshot``VolumeSnapshotContent` 对象,
并且负责创建和删除 `VolumeSnapshotContent` 对象。
边车 csi-snapshotter 监视 `VolumeSnapshotContent` 对象,
并且触发针对 CSI 端点的 `CreateSnapshot``DeleteSnapshot` 的操作。
* 还有一个验证性质的 Webhook 服务器,可以对快照对象进行更严格的验证。
- 还有一个验证性质的 Webhook 服务器,可以对快照对象进行更严格的验证。
Kubernetes 发行版应将其与快照控制器和 CRD而非 CSI 驱动程序)一起安装。
此服务器应该安装在所有启用了快照功能的 Kubernetes 集群中。
* CSI 驱动可能实现也可能没有实现卷快照功能。CSI 驱动可能会使用 csi-snapshotter
- CSI 驱动可能实现也可能没有实现卷快照功能。CSI 驱动可能会使用 csi-snapshotter
来提供对卷快照的支持。详见 [CSI 驱动程序文档](https://kubernetes-csi.github.io/docs/)
* Kubernetes 负责 CRD 和快照控制器的安装。
- 安装 CRD 和快照控制器是 Kubernetes 发行版的责任。
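<!--
If you are not sure whether your distribution has installed these components, you can check for the
snapshot CRDs with a command similar to the one below (the output depends on your cluster):
-->
如果不确定你的发行版是否已安装这些组件,可以用类似下面的命令检查快照相关的
CRD 是否存在(输出取决于你的集群):

```shell
kubectl get crd volumesnapshots.snapshot.storage.k8s.io \
  volumesnapshotcontents.snapshot.storage.k8s.io \
  volumesnapshotclasses.snapshot.storage.k8s.io
```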
<!--
## Lifecycle of a volume snapshot and volume snapshot content
`VolumeSnapshotContents` are resources in the cluster. `VolumeSnapshots` are requests for those resources. The interaction between `VolumeSnapshotContents` and `VolumeSnapshots` follow this lifecycle:
`VolumeSnapshotContents` are resources in the cluster. `VolumeSnapshots` are requests
for those resources. The interaction between `VolumeSnapshotContents` and `VolumeSnapshots`
follows this lifecycle:
-->
## 卷快照和卷快照内容的生命周期 {#lifecycle-of-a-volume-snapshot-and-volume-snapshot-content}
@ -106,7 +140,10 @@ There are two ways snapshots may be provisioned: pre-provisioned or dynamically
<!--
#### Pre-provisioned {#static}
A cluster administrator creates a number of `VolumeSnapshotContents`. They carry the details of the real volume snapshot on the storage system which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.
A cluster administrator creates a number of `VolumeSnapshotContents`. They carry the details
of the real volume snapshot on the storage system which is available for use by cluster users.
They exist in the Kubernetes API and are available for consumption.
-->
#### 预制备 {#static}
@ -116,7 +153,9 @@ A cluster administrator creates a number of `VolumeSnapshotContents`. They carry
<!--
#### Dynamic
Instead of using a pre-existing snapshot, you can request that a snapshot to be dynamically taken from a PersistentVolumeClaim. The [VolumeSnapshotClass](/docs/concepts/storage/volume-snapshot-classes/) specifies storage provider-specific parameters to use when taking a snapshot.
Instead of using a pre-existing snapshot, you can request that a snapshot be dynamically
taken from a PersistentVolumeClaim. The [VolumeSnapshotClass](/docs/concepts/storage/volume-snapshot-classes/)
specifies storage provider-specific parameters to use when taking a snapshot.
-->
#### 动态制备 {#dynamic}
@ -127,7 +166,9 @@ Instead of using a pre-existing snapshot, you can request that a snapshot to be
<!--
### Binding
The snapshot controller handles the binding of a `VolumeSnapshot` object with an appropriate `VolumeSnapshotContent` object, in both pre-provisioned and dynamically provisioned scenarios. The binding is a one-to-one mapping.
The snapshot controller handles the binding of a `VolumeSnapshot` object with an appropriate
`VolumeSnapshotContent` object, in both pre-provisioned and dynamically provisioned scenarios.
The binding is a one-to-one mapping.
-->
### 绑定 {#binding}
@ -135,7 +176,8 @@ The snapshot controller handles the binding of a `VolumeSnapshot` object with an
绑定关系是一对一的。
<!--
In the case of pre-provisioned binding, the VolumeSnapshot will remain unbound until the requested VolumeSnapshotContent object is created.
In the case of pre-provisioned binding, the VolumeSnapshot will remain unbound until the
requested VolumeSnapshotContent object is created.
-->
在预制备快照绑定场景下,`VolumeSnapshotContent` 对象创建之后,才会和 `VolumeSnapshot` 进行绑定。
@ -144,31 +186,32 @@ In the case of pre-provisioned binding, the VolumeSnapshot will remain unbound u
The purpose of this protection is to ensure that in-use
{{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}}
API objects are not removed from the system while a snapshot is being taken from it (as this may result in data loss).
API objects are not removed from the system while a snapshot is being taken from it
(as this may result in data loss).
-->
### 快照源的持久性卷声明保护
### 快照源的持久性卷声明保护 {#persistent-volume-claim-as-snapshot-source-protection}
这种保护的目的是确保在从系统中获取快照时,不会将正在使用的
{{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}}
API 对象从系统中删除(因为这可能会导致数据丢失)。
{{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}}
API 对象从系统中删除(因为这可能会导致数据丢失)。
<!--
While a snapshot is being taken of a PersistentVolumeClaim, that PersistentVolumeClaim is in-use. If you delete a PersistentVolumeClaim API object in active use as a snapshot source, the PersistentVolumeClaim object is not removed immediately. Instead, removal of the PersistentVolumeClaim object is postponed until the snapshot is readyToUse or aborted.
While a snapshot is being taken of a PersistentVolumeClaim, that PersistentVolumeClaim
is in-use. If you delete a PersistentVolumeClaim API object in active use as a snapshot
source, the PersistentVolumeClaim object is not removed immediately. Instead, removal of
the PersistentVolumeClaim object is postponed until the snapshot is readyToUse or aborted.
-->
如果一个 PVC 正在被快照用来作为源进行快照创建,则该 PVC 是使用中的。如果用户删除正作为快照源的 PVC API 对象,
则 PVC 对象不会立即被删除掉。相反PVC 对象的删除将推迟到任何快照不在主动使用它为止。
当快照的 `Status` 中的 `ReadyToUse`值为 `true`PVC 将不再用作快照源。
当从 `PersistentVolumeClaim` 中生成快照时,`PersistentVolumeClaim` 就在被使用了。
如果删除一个作为快照源的 `PersistentVolumeClaim` 对象,这个 `PersistentVolumeClaim` 对象不会立即被删除的。
相反,删除 `PersistentVolumeClaim` 对象的动作会被放弃,或者推迟到快照的 Status 为 ReadyToUse 时再执行。
在为某 `PersistentVolumeClaim` 生成快照时,该 `PersistentVolumeClaim` 处于被使用状态。
如果删除一个正作为快照源使用的 `PersistentVolumeClaim` API 对象,该 `PersistentVolumeClaim` 对象不会立即被移除。
相反,移除 `PersistentVolumeClaim` 对象的动作会被推迟,直到快照状态变为 ReadyToUse 或快照操作被中止时再执行。
<!--
### Delete
Deletion is triggered by deleting the `VolumeSnapshot` object, and the `DeletionPolicy` will be followed. If the `DeletionPolicy` is `Delete`, then the underlying storage snapshot will be deleted along with the `VolumeSnapshotContent` object. If the `DeletionPolicy` is `Retain`, then both the underlying snapshot and `VolumeSnapshotContent` remain.
Deletion is triggered by deleting the `VolumeSnapshot` object, and the `DeletionPolicy`
will be followed. If the `DeletionPolicy` is `Delete`, then the underlying storage snapshot
will be deleted along with the `VolumeSnapshotContent` object. If the `DeletionPolicy` is
`Retain`, then both the underlying snapshot and `VolumeSnapshotContent` remain.
-->
### 删除 {#delete}
@ -197,11 +240,13 @@ spec:
```
<!--
`persistentVolumeClaimName` is the name of the PersistentVolumeClaim data source for the snapshot. This field is required for dynamically provisioning a snapshot.
`persistentVolumeClaimName` is the name of the PersistentVolumeClaim data source
for the snapshot. This field is required for dynamically provisioning a snapshot.
A volume snapshot can request a particular class by specifying the name of a
[VolumeSnapshotClass](/docs/concepts/storage/volume-snapshot-classes/)
using the attribute `volumeSnapshotClassName`. If nothing is set, then the default class is used if available.
using the attribute `volumeSnapshotClassName`. If nothing is set, then the
default class is used if available.
-->
`persistentVolumeClaimName` 是作为快照数据源的 `PersistentVolumeClaim` 的名称。
这个字段是动态制备快照中的必填字段。
@ -210,7 +255,9 @@ using the attribute `volumeSnapshotClassName`. If nothing is set, then the defau
使用 `volumeSnapshotClassName` 属性来请求特定类。如果没有设置,那么使用默认类(如果有)。
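<!--
For example, a dynamically provisioned VolumeSnapshot might look like the sketch below
(the object name, PVC name and class name are illustrative assumptions):
-->
例如,一个动态制备的 VolumeSnapshot 大致如下所示其中对象名、PVC 名和类名均为示意性假设):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo                          # 假设的名称
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass  # 省略时使用默认类(如果有)
  source:
    persistentVolumeClaimName: pvc-demo            # 假设的 PVC 名称
```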
<!--
For pre-provisioned snapshots, you need to specify a `volumeSnapshotContentName` as the source for the snapshot as shown in the following example. The `volumeSnapshotContentName` source field is required for pre-provisioned snapshots.
For pre-provisioned snapshots, you need to specify a `volumeSnapshotContentName`
as the source for the snapshot as shown in the following example. The
`volumeSnapshotContentName` source field is required for pre-provisioned snapshots.
-->
如下面例子所示,对于预制备的快照,需要给快照指定 `volumeSnapshotContentName` 作为来源。
对于预制备的快照 `source` 中的`volumeSnapshotContentName` 字段是必填的。
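<!--
A pre-provisioned VolumeSnapshot might look like the sketch below (the names are illustrative assumptions):
-->
一个预制备的 VolumeSnapshot 大致如下所示(其中的名称均为示意性假设):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pre-provisioned-snapshot-demo              # 假设的名称
spec:
  source:
    volumeSnapshotContentName: existing-snapshot-content  # 假设的 VolumeSnapshotContent 名称
```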
@ -228,9 +275,11 @@ spec:
<!--
## Volume Snapshot Contents
Each VolumeSnapshot contains a spec and a status, which is the specification and status of the volume snapshot.
Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates `VolumeSnapshotContent` objects. Here is an example:
Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning,
the snapshot common controller creates `VolumeSnapshotContent` objects. Here is an example:
-->
## 卷快照内容 {#volume-snapshot-contents}
每个 VolumeSnapshotContent 对象包含 spec 和 status。
在动态制备时,快照通用控制器创建 `VolumeSnapshotContent` 对象。下面是例子:
@ -253,11 +302,16 @@ spec:
```
<!--
`volumeHandle` is the unique identifier of the volume created on the storage backend and returned by the CSI driver during the volume creation. This field is required for dynamically provisioning a snapshot. It specifies the volume source of the snapshot.
`volumeHandle` is the unique identifier of the volume created on the storage
backend and returned by the CSI driver during the volume creation. This field
is required for dynamically provisioning a snapshot.
It specifies the volume source of the snapshot.
For pre-provisioned snapshots, you (as cluster administrator) are responsible for creating the `VolumeSnapshotContent` object as follows.
For pre-provisioned snapshots, you (as cluster administrator) are responsible
for creating the `VolumeSnapshotContent` object as follows.
-->
`volumeHandle` 是存储后端创建卷的唯一标识符,在卷创建期间由 CSI 驱动程序返回。动态设置快照需要此字段。它指出了快照的卷源。
`volumeHandle` 是存储后端创建卷的唯一标识符,在卷创建期间由 CSI 驱动程序返回。
动态设置快照需要此字段。它指出了快照的卷源。
对于预制备快照,你(作为集群管理员)要按如下命令来创建 `VolumeSnapshotContent` 对象。
@ -276,23 +330,29 @@ spec:
name: new-snapshot-test
namespace: default
```
<!--
`snapshotHandle` is the unique identifier of the volume snapshot created on the storage backend. This field is required for the pre-provisioned snapshots. It specifies the CSI snapshot id on the storage system that this `VolumeSnapshotContent` represents.
`snapshotHandle` is the unique identifier of the volume snapshot created on
the storage backend. This field is required for the pre-provisioned snapshots.
It specifies the CSI snapshot id on the storage system that this
`VolumeSnapshotContent` represents.
-->
`snapshotHandle` 是存储后端创建卷的唯一标识符。对于预设置快照,这个字段是必须的。
`snapshotHandle` 是存储后端创建卷的唯一标识符。对于预制备的快照,这个字段是必需的。
它指定此 `VolumeSnapshotContent` 表示的存储系统上的 CSI 快照 ID。
<!--
`sourceVolumeMode` is the mode of the volume whose snapshot is taken. The value
of the `sourceVolumeMode` field can be either `Filesystem` or `Block`. If the
source volume mode is not specified, Kubernetes treats the snapshot as if the
`sourceVolumeMode` is the mode of the volume whose snapshot is taken. The value
of the `sourceVolumeMode` field can be either `Filesystem` or `Block`. If the
source volume mode is not specified, Kubernetes treats the snapshot as if the
source volume's mode is unknown.
-->
`sourceVolumeMode` 是创建快照的卷的模式。`sourceVolumeMode` 字段的值可以是
`Filesystem``Block`。如果没有指定源卷模式Kubernetes 会将快照视为未知的源卷模式。
<!--
`volumeSnapshotRef` is the reference of the corresponding `VolumeSnapshot`. Note that when the `VolumeSnapshotContent` is being created as a pre-provisioned snapshot, the `VolumeSnapshot` referenced in `volumeSnapshotRef` might not exist yet.
`volumeSnapshotRef` is the reference of the corresponding `VolumeSnapshot`. Note that
when the `VolumeSnapshotContent` is being created as a pre-provisioned snapshot, the
`VolumeSnapshot` referenced in `volumeSnapshotRef` might not exist yet.
-->
`volumeSnapshotRef` 字段是对相应的 `VolumeSnapshot` 的引用。
请注意,当 `VolumeSnapshotContent` 作为预制备快照被创建时,`volumeSnapshotRef` 所引用的 `VolumeSnapshot` 可能还不存在。
@ -314,8 +374,8 @@ To check if your cluster has capability for this feature, run the following comm
要检查你的集群是否具有此特性的能力,可以运行如下命令:
```yaml
$ kubectl get crd volumesnapshotcontent -o yaml
```
```shell
kubectl get crd volumesnapshotcontent -o yaml
```
<!--
@ -328,6 +388,7 @@ the `VolumeSnapshotContent` that corresponds to the `VolumeSnapshot`.
但是使用与源卷不同的卷模式,则需要添加注解
`snapshot.storage.kubernetes.io/allowVolumeModeChange: "true"`
到对应 `VolumeSnapshot``VolumeSnapshotContent` 中。
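<!--
As a sketch (the driver, names and snapshot handle below are illustrative assumptions), the annotation is set
on the VolumeSnapshotContent like this:
-->
作为示意其中的驱动、名称和快照句柄均为假设该注解设置在 VolumeSnapshotContent 上,形式如下:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: snapcontent-demo                                     # 假设的名称
  annotations:
    snapshot.storage.kubernetes.io/allowVolumeModeChange: "true"
spec:
  deletionPolicy: Delete
  driver: hostpath.csi.k8s.io                                # 假设的 CSI 驱动
  source:
    snapshotHandle: 7bdd0de3-demo                            # 假设的存储系统快照 ID
  sourceVolumeMode: Filesystem
  volumeSnapshotRef:
    name: pre-provisioned-snapshot-demo
    namespace: default
```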
<!--
For pre-provisioned snapshots, `Spec.SourceVolumeMode` needs to be populated
by the cluster administrator.
@ -363,13 +424,13 @@ spec:
<!--
You can provision a new volume, pre-populated with data from a snapshot, by using
the *dataSource* field in the `PersistentVolumeClaim` object.
the _dataSource_ field in the `PersistentVolumeClaim` object.
-->
你可以制备一个新卷,该卷预填充了快照中的数据,在 `持久卷声明` 对象中使用 **dataSource** 字段。
你可以制备一个新卷,该卷预填充了快照中的数据,在 `PersistentVolumeClaim` 对象中使用 **dataSource** 字段。
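<!--
A sketch of such a PersistentVolumeClaim (the names and size below are illustrative assumptions):
-->
这样的 PersistentVolumeClaim 大致如下所示(其中的名称和容量均为示意性假设):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc-demo             # 假设的名称
spec:
  storageClassName: csi-hostpath-sc   # 假设的 StorageClass
  dataSource:
    name: new-snapshot-demo           # 要从中还原数据的 VolumeSnapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```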
<!--
For more details, see
[Volume Snapshot and Restore Volume from Snapshot](/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support).
-->
更多详细信息,请参阅
[卷快照和从快照还原卷](/zh-cn/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support)。
更多详细信息,
请参阅[卷快照和从快照还原卷](/zh-cn/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support)。
View File
@ -1,5 +1,12 @@
---
title: ReplicaSet
feature:
title: 自我修复
anchor: ReplicationController 如何工作
description: >
重新启动失败的容器,在节点死亡时替换并重新调度容器,
杀死不响应用户定义的健康检查的容器,
并且在它们准备好服务之前不会将它们公布给客户端。
content_type: concept
weight: 20
---
@ -9,6 +16,13 @@ reviewers:
- bprashanth
- madhusudancs
title: ReplicaSet
feature:
title: Self-healing
anchor: How a ReplicaSet works
description: >
Restarts containers that fail, replaces and reschedules containers when nodes die,
kills containers that don't respond to your user-defined health check,
and doesn't advertise them to clients until they are ready to serve.
content_type: concept
weight: 20
-->
View File
@ -183,7 +183,10 @@ After submitting at least 5 substantial pull requests and meeting the other [req
<!--
## Reviewers
Reviewers are responsible for reviewing open pull requests. Unlike member feedback, you must address reviewer feedback. Reviewers are members of the [@kubernetes/sig-docs-{language}-reviews](https://github.com/orgs/kubernetes/teams?query=sig-docs) GitHub team.
Reviewers are responsible for reviewing open pull requests. Unlike member
feedback, the PR author must address reviewer feedback. Reviewers are members of the
[@kubernetes/sig-docs-{language}-reviews](https://github.com/orgs/kubernetes/teams?query=sig-docs)
GitHub team.
Reviewers can:
@ -202,7 +205,7 @@ You can be a SIG Docs reviewer, or a reviewer for docs in a specific subject are
## 评审人Reviewers {#reviewers}
评审人负责评审悬决的 PR。
与成员所给的反馈不同,必须处理评审人的反馈。
与成员所给的反馈不同,身为 PR 作者必须处理评审人的反馈。
评审人是 [@kubernetes/sig-docs-{language}-reviews](https://github.com/orgs/kubernetes/teams?query=sig-docs) GitHub 团队的成员。
评审人可以:
@ -268,7 +271,7 @@ To apply:
申请流程如下:
<!--
1. Open a pull request that adds your GitHub user name to a section of the
1. Open a pull request that adds your GitHub username to a section of the
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS_ALIASES) file
in the `kubernetes/website` repository.
@ -276,7 +279,7 @@ in the `kubernetes/website` repository.
If you aren't sure where to add yourself, add yourself to `sig-docs-en-reviews`.
{{< /note >}}
2. Assign the PR to one or more SIG-Docs approvers (user names listed under `sig-docs-{language}-owners`).
2. Assign the PR to one or more SIG-Docs approvers (usernames listed under `sig-docs-{language}-owners`).
If approved, a SIG Docs lead adds you to the appropriate GitHub team. Once added,
[@k8s-ci-robot](https://github.com/kubernetes/test-infra/tree/master/prow#bots-home) assigns and suggests you as a reviewer on new pull requests.
@ -341,7 +344,8 @@ Approvers and SIG Docs leads are the only ones who can merge pull requests into
A careless merge can break the site, so be sure that when you merge something, you mean it.
{{< /warning >}}
- Make sure that proposed changes meet the [contribution guidelines](/docs/contribute/style/content-guide/#contributing-content).
- Make sure that proposed changes meet the
[documentation content guide](/docs/contribute/style/content-guide/).
If you ever have a question, or you're not sure about something, feel free to call for additional review.
-->
@ -356,7 +360,7 @@ Approvers and SIG Docs leads are the only ones who can merge pull requests into
不小心的合并可能会破坏整个站点。在执行合并操作时,务必小心。
{{< /warning >}}
- 确保所提议的变更满足[贡献指南](/zh-cn/docs/contribute/style/content-guide/#contributing-content)要求。
- 确保所提议的变更满足[文档内容指南](/zh-cn/docs/contribute/style/content-guide/)要求。
如果有问题或者疑惑,可以根据需要请他人帮助评审。
View File
@ -3,34 +3,60 @@ title: 管理服务账号
content_type: concept
weight: 50
---
<!--
reviewers:
- bprashanth
- davidopp
- lavalamp
- liggitt
- bprashanth
- davidopp
- lavalamp
- liggitt
title: Managing Service Accounts
content_type: concept
weight: 50
-->
<!-- overview -->
<!--
This is a Cluster Administrator guide to service accounts. You should be familiar with
[configuring Kubernetes service accounts](/docs/tasks/configure-pod-container/configure-service-account/).
A _ServiceAccount_ provides an identity for processes that run in a Pod.
Support for authorization and user accounts is planned but incomplete. Sometimes
incomplete features are referred to in order to better describe service accounts.
A process inside a Pod can use the identity of its associated service account to
authenticate to the cluster's API server.
-->
这是一篇针对服务账号的集群管理员指南。
你应该熟悉[配置 Kubernetes 服务账号](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/)。
**ServiceAccount** 为 Pod 中运行的进程提供了一个身份。
对鉴权和用户账号的支持已在规划中,当前并不完备。
为了更好地描述服务账号,有时这些不完善的特性也会被提及。
Pod 内的进程可以使用其关联服务账号的身份,向集群的 API 服务器进行身份认证。
<!--
For an introduction to service accounts, read [configure service accounts](/docs/tasks/configure-pod-container/configure-service-account/).
This task guide explains some of the concepts behind ServiceAccounts. The
guide also explains how to obtain or revoke tokens that represent
ServiceAccounts.
-->
有关服务账号的介绍,
请参阅[配置服务账号](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/)。
本任务指南阐述有关 ServiceAccount 的几个概念。
本指南还讲解如何获取或撤销代表 ServiceAccount 的令牌。
<!-- body -->
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}}
<!--
To be able to follow these steps exactly, ensure you have a namespace named
`examplens`.
If you don't, create one by running:
-->
为了能够准确地跟随这些步骤,确保你有一个名为 `examplens` 的名字空间。
如果你没有,运行以下命令创建一个名字空间:
```shell
kubectl create namespace examplens
```
<!--
## User accounts versus service accounts
@ -42,162 +68,196 @@ for a number of reasons:
Kubernetes 区分用户账号和服务账号的概念,主要基于以下原因:
<!--
- User accounts are for humans. Service accounts are for processes, which run
in pods.
- User accounts are intended to be global. Names must be unique across all
namespaces of a cluster. Service accounts are namespaced.
- Typically, a cluster's user accounts might be synced from a corporate
- User accounts are for humans. Service accounts are for application processes,
which (for Kubernetes) run in containers that are part of pods.
- User accounts are intended to be global: names must be unique across all
namespaces of a cluster. No matter what namespace you look at, a particular
username that represents a user represents the same user.
In Kubernetes, service accounts are namespaced: two different namespaces can
contain ServiceAccounts that have identical names.
-->
- 用户账号是针对人而言的。而服务账号是针对运行在 Pod 中的应用进程而言的,
在 Kubernetes 中这些进程运行在容器中,而容器是 Pod 的一部分。
- 用户账号是全局性的。其名称在某集群中的所有名字空间中必须是唯一的。
无论你查看哪个名字空间,代表用户的特定用户名都代表着同一个用户。
在 Kubernetes 中,服务账号是名字空间作用域的。
两个不同的名字空间可以包含具有相同名称的 ServiceAccount。
<!--
- Typically, a cluster's user accounts might be synchronised from a corporate
database, where new user account creation requires special privileges and is
tied to complex business processes. Service account creation is intended to be
more lightweight, allowing cluster users to create service accounts for
specific tasks by following the principle of least privilege.
- Auditing considerations for humans and service accounts may differ.
- A config bundle for a complex system may include definition of various service
tied to complex business processes. By contrast, service account creation is
intended to be more lightweight, allowing cluster users to create service accounts
for specific tasks on demand. Separating ServiceAccount creation from the steps to
onboard human users makes it easier for workloads to follow the principle of
least privilege.
-->
- 通常情况下,集群的用户账号可能会从企业数据库进行同步,创建新用户需要特殊权限,并且涉及到复杂的业务流程。
服务账号创建有意做得更轻量,允许集群用户为了具体的任务按需创建服务账号。
将 ServiceAccount 的创建与新用户注册的步骤分离开来,使工作负载更易于遵从权限最小化原则。
<!--
- Auditing considerations for humans and service accounts may differ; the separation
makes that easier to achieve.
- A configuration bundle for a complex system may include definition of various service
accounts for components of that system. Because service accounts can be created
without many constraints and have namespaced names, such config is portable.
without many constraints and have namespaced names, such configuration is
usually portable.
-->
- 用户账号是针对人而言的。而服务账号是针对运行在 Pod 中的进程而言的。
- 用户账号是全局性的。其名称在某集群中的所有名字空间中必须是唯一的。服务账号是名字空间作用域的。
- 通常情况下,集群的用户账号可能会从企业数据库进行同步,其创建需要特殊权限,
并且涉及到复杂的业务流程。
服务账号创建有意做得更轻量,允许集群用户为了具体的任务创建服务账号以遵从权限最小化原则。
- 对人员和服务账号审计所考虑的因素可能不同。
- 对人员和服务账号审计所考虑的因素可能不同;这种分离更容易区分不同之处。
- 针对复杂系统的配置包可能包含系统组件相关的各种服务账号的定义。
因为服务账号的创建约束不多并且有名字空间域的名称,这种配置是很轻量的。
<!--
## Service account automation
Three separate components cooperate to implement the automation around service accounts:
- A `ServiceAccount` admission controller
- A Token controller
- A `ServiceAccount` controller
-->
## 服务账号的自动化 {#service-account-automation}
以下三个独立组件协作完成服务账号相关的自动化:
- `ServiceAccount` 准入控制器
- Token 控制器
- `ServiceAccount` 控制器
因为服务账号的创建约束不多并且有名字空间域的名称,所以这种配置通常是轻量的。
<!--
### ServiceAccount Admission Controller
The modification of pods is implemented via a plugin
called an [Admission Controller](/docs/reference/access-authn-authz/admission-controllers/).
It is part of the API server.
It acts synchronously to modify pods as they are created or updated. When this plugin is active
(and it is by default on most distributions), then it does the following when a pod is created or modified:
## Bound service account token volume mechanism {#bound-service-account-token-volume}
-->
### ServiceAccount 准入控制器 {#serviceaccount-admission-controller}
对 Pod 的改动通过一个被称为[准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/)的插件来实现。
它是 API 服务器的一部分。当 Pod 被创建或更新时,它会同步地修改 Pod。
如果该插件处于激活状态(在大多数发行版中都是默认激活的),
当 Pod 被创建或更新时它会进行以下操作:
<!--
1. If the pod does not have a `ServiceAccount` set, it sets the `ServiceAccount` to `default`.
1. It ensures that the `ServiceAccount` referenced by the pod exists, and otherwise rejects it.
1. It adds a `volume` to the pod which contains a token for API access if neither the
ServiceAccount `automountServiceAccountToken` nor the Pod's `automountServiceAccountToken`
is set to `false`.
1. It adds a `volumeSource` to each container of the pod mounted at
`/var/run/secrets/kubernetes.io/serviceaccount`, if the previous step has created a volume
for the ServiceAccount token.
1. If the pod does not contain any `imagePullSecrets`, then `imagePullSecrets` of the
`ServiceAccount` are added to the pod.
-->
1. 如果该 Pod 没有设置 `ServiceAccount`,将其 `ServiceAccount` 设为 `default`
1. 保证 Pod 所引用的 `ServiceAccount` 确实存在,否则拒绝该 Pod。
1. 如果服务账号的 `automountServiceAccountToken` 或 Pod 的
`automountServiceAccountToken` 都未显式设置为 `false`,则为 Pod 创建一个
`volume`,在其中包含用来访问 API 的令牌。
1. 如果前一步中为服务账号令牌创建了卷,则为 Pod 中的每个容器添加一个
`volumeSource`,挂载在其 `/var/run/secrets/kubernetes.io/serviceaccount`
目录下。
1. 如果 Pod 不包含 `imagePullSecrets` 设置,将 `ServiceAccount`
所引用的服务账号中的 `imagePullSecrets` 信息添加到 Pod 中。
<!--
#### Bound Service Account Token Volume
-->
#### 绑定的服务账号令牌卷 {#bound-service-account-token-volume}
## 绑定的服务账号令牌卷机制 {#bound-service-account-token-volume}
{{< feature-state for_k8s_version="v1.22" state="stable" >}}
<!--
The ServiceAccount admission controller will add the following projected volume instead of a
Secret-based volume for the non-expiring service account token created by the Token controller.
By default, the Kubernetes control plane (specifically, the
[ServiceAccount admission controller](#serviceaccount-admission-controller))
adds a [projected volume](/docs/concepts/storage/projected-volumes/) to Pods,
and this volume includes a token for Kubernetes API access.
Here's an example of how that looks for a launched Pod:
-->
ServiceAccount 准入控制器将添加如下投射卷,
而不是为令牌控制器所生成的不过期的服务账号令牌而创建的基于 Secret 的卷。
默认情况下Kubernetes 控制平面(特别是 [ServiceAccount 准入控制器](#serviceaccount-admission-controller)
添加一个[投射卷](/zh-cn/docs/concepts/storage/projected-volumes/)到 Pod
此卷包括了访问 Kubernetes API 的令牌。
以下示例展示在已启动的 Pod 中此卷看起来的样子:
```yaml
  - name: kube-api-access-<随机后缀>
    projected:
      defaultMode: 420 # 0644
      sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
              - key: ca.crt
                path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
              - fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
                path: namespace
...
  - name: kube-api-access-<随机后缀>
    projected:
      sources:
        - serviceAccountToken:
            path: token # 必须与应用所预期的路径匹配
        - configMap:
            items:
              - key: ca.crt
                path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
              - fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
                path: namespace
```
<!--
This projected volume consists of three sources:
That manifest snippet defines a projected volume that consists of three sources. In this case,
each source also represents a single path within that volume. The three sources are:
1. A `serviceAccountToken` acquired from kube-apiserver via TokenRequest API. It will expire
after 1 hour by default or when the pod is deleted. It is bound to the pod and it has
its audience set to match the audience of the `kube-apiserver`.
1. A `configMap` containing a CA bundle used for verifying connections to the kube-apiserver.
1. A `downwardAPI` that references the namespace of the pod.
1. A `serviceAccountToken` source, that contains a token that the kubelet acquires from kube-apiserver.
The kubelet fetches time-bound tokens using the TokenRequest API. A token served for a TokenRequest expires
either when the pod is deleted or after a defined lifespan (by default, that is 1 hour).
The token is bound to the specific Pod and has the kube-apiserver as its audience.
This mechanism superseded an earlier mechanism that added a volume based on a Secret,
where the Secret represented the ServiceAccount for the Pod, but did not expire.
1. A `configMap` source. The ConfigMap contains a bundle of certificate authority data. Pods can use these
certificates to make sure that they are connecting to your cluster's kube-apiserver (and not to a middlebox
or an accidentally misconfigured peer).
1. A `downwardAPI` source that looks up the name of the namespace containing the Pod, and makes
that name information available to application code running inside the Pod.
-->
此投射卷有三个数据源:
该清单片段定义了由三个数据源组成的投射卷。在当前场景中,每个数据源也代表该卷内的一条独立路径。这三个数据源是:
1. 通过 TokenRequest API 从 kube-apiserver 处获得的 `serviceAccountToken`
这一令牌默认会在一个小时之后或者 Pod 被删除时过期。
该令牌绑定到 Pod 上,并将其 audience受众设置为与 `kube-apiserver` 的 audience 相匹配。
1. 包含用来验证与 kube-apiserver 连接的 CA 证书包的 `configMap` 对象。
1. 引用 Pod 名字空间的一个 `downwardAPI`
1. `serviceAccountToken` 数据源,包含 kubelet 从 kube-apiserver 获取的令牌。
kubelet 使用 TokenRequest API 获取有时间限制的令牌。为 TokenRequest 服务的这个令牌会在
Pod 被删除或定义的生命周期(默认为 1 小时)结束之后过期。该令牌绑定到特定的 Pod
并将其 audience受众设置为与 `kube-apiserver` 的 audience 相匹配。
这种机制取代了之前基于 Secret 添加卷的机制,之前 Secret 代表了针对 Pod 的 ServiceAccount 但不会过期。
1. `configMap` 数据源。ConfigMap 包含一组证书颁发机构数据。
Pod 可以使用这些证书来确保自己连接到集群的 kube-apiserver而不是连接到中间件或意外配置错误的对等点上
1. `downwardAPI` 数据源,用于查找包含 Pod 的名字空间的名称,
并使该名称信息可用于在 Pod 内运行的应用程序代码。
<!--
See more details about [projected volumes](/docs/tasks/configure-pod-container/configure-projected-volume-storage/).
Any container within the Pod that mounts this particular volume can access the above information.
-->
参阅[投射卷](/zh-cn/docs/tasks/configure-pod-container/configure-projected-volume-storage/)了解进一步的细节。
Pod 内挂载这个特定卷的所有容器都可以访问上述信息。
{{< note >}}
<!--
There is no specific mechanism to invalidate a token issued via TokenRequest. If you no longer
trust a bound service account token for a Pod, you can delete that Pod. Deleting a Pod expires
its bound service account tokens.
-->
没有特定的机制可以使通过 TokenRequest 签发的令牌无效。如果你不再信任为某个 Pod 绑定的服务账号令牌,
你可以删除该 Pod。删除 Pod 将使其绑定的服务账号令牌过期。
{{< /note >}}
<!--
### Token Controller
## Manual Secret management for ServiceAccounts
TokenController runs as part of `kube-controller-manager`. It acts asynchronously. It:
Versions of Kubernetes before v1.22 automatically created credentials for accessing
the Kubernetes API. This older mechanism was based on creating token Secrets that
could then be mounted into running Pods.
-->
## 手动管理 ServiceAccount 的 Secret {#manual-secret-management-for-serviceaccounts}
- watches ServiceAccount creation and creates a corresponding
ServiceAccount token Secret to allow API access.
- watches ServiceAccount deletion and deletes all corresponding ServiceAccount
v1.22 之前的 Kubernetes 版本会自动创建用于访问 Kubernetes API 的凭据。
这种较老的机制是先创建令牌 Secret再将其挂载到运行中的 Pod 内。
<!--
In more recent versions, including Kubernetes v{{< skew currentVersion >}}, API credentials
are [obtained directly](#bound-service-account-token-volume) using the
[TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) API,
and are mounted into Pods using a projected volume.
The tokens obtained using this method have bounded lifetimes, and are automatically
invalidated when the Pod they are mounted into is deleted.
-->
在包括 Kubernetes v{{< skew currentVersion >}} 在内最近的几个版本中,使用
[TokenRequest](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) API
[直接获得](#bound-service-account-token-volume) API 凭据,并使用投射卷挂载到 Pod 中。
使用这种方法获得的令牌具有绑定的生命周期,当挂载的 Pod 被删除时这些令牌将自动失效。
<!--
You can still [manually create](/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount) a Secret to hold a service account token; for example, if you need a token that never expires.
Once you manually create a Secret and link it to a ServiceAccount, the Kubernetes control plane automatically populates the token into that Secret.
-->
你仍然可以[手动创建](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount)
Secret 来保存服务账号令牌;例如在你需要一个永不过期的令牌的时候。
一旦你手动创建一个 Secret 并将其关联到 ServiceAccountKubernetes 控制平面就会自动将令牌填充到该 Secret 中。
{{< note >}}
<!--
Although the manual mechanism for creating a long-lived ServiceAccount token exists,
using [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)
to obtain short-lived API access tokens is recommended instead.
-->
尽管存在手动创建长久 ServiceAccount 令牌的机制,但还是推荐使用
[TokenRequest](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)
获得短期的 API 访问令牌。
{{< /note >}}
<!--
## Control plane details
### Token controller
The service account token controller runs as part of `kube-controller-manager`.
This controller acts asynchronously. It:
- watches for ServiceAccount deletion and deletes all corresponding ServiceAccount
token Secrets.
- watches ServiceAccount token Secret addition, and ensures the referenced
- watches for ServiceAccount token Secret addition, and ensures the referenced
ServiceAccount exists, and adds a token to the Secret if needed.
- watches Secret deletion and removes a reference from the corresponding
- watches for Secret deletion and removes a reference from the corresponding
ServiceAccount if needed.
-->
### Token 控制器 {#token-controller}
## 控制平面细节 {#control-plane-details}
TokenController 作为 `kube-controller-manager` 的一部分运行,以异步的形式工作。
### 令牌控制器 {#token-controller}
服务账号令牌控制器作为 `kube-controller-manager` 的一部分运行,以异步的形式工作。
其职责包括:
- 监测 ServiceAccount 的创建并创建相应的服务账号令牌 Secret 以允许访问 API。
- 监测 ServiceAccount 的删除并删除所有相应的服务账号令牌 Secret。
- 监测服务账号令牌 Secret 的添加,保证相应的 ServiceAccount 存在,如有需要,
向 Secret 中添加令牌。
@ -217,57 +277,374 @@ verify the tokens during authentication.
kube-apiserver。公钥用于在身份认证过程中校验令牌。
<!--
#### To create additional API tokens
### ServiceAccount admission controller
A controller loop ensures a Secret with an API token exists for each
ServiceAccount. To create additional API tokens for a ServiceAccount, create a
Secret of type `kubernetes.io/service-account-token` with an annotation
referencing the ServiceAccount, and the controller will update it with a
generated token:
Below is a sample configuration for such a Secret:
The modification of pods is implemented via a plugin
called an [Admission Controller](/docs/reference/access-authn-authz/admission-controllers/).
It is part of the API server.
This admission controller acts synchronously to modify pods as they are created.
When this plugin is active (and it is by default on most distributions), then
it does the following when a Pod is created:
-->
#### 创建额外的 API 令牌 {#to-create-additional-api-tokens}
### ServiceAccount 准入控制器 {#serviceaccount-admission-controller}
控制器中有专门的循环来保证每个 ServiceAccount 都存在对应的包含 API 令牌的 Secret。
当需要为 ServiceAccount 创建额外的 API 令牌时,可以创建一个类型为
`kubernetes.io/service-account-token` 的 Secret并在其注解中引用对应的
ServiceAccount。控制器会生成令牌并更新该 Secret
对 Pod 的改动通过一个被称为[准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/)的插件来实现。
它是 API 服务器的一部分。当 Pod 被创建时,该准入控制器会同步地修改 Pod。
如果该插件处于激活状态(在大多数发行版中都是默认激活的),当 Pod 被创建时它会进行以下操作:
下面是这种 Secret 的一个示例配置:
<!--
1. If the pod does not have a `.spec.serviceAccountName` set, the admission controller sets the name of the
ServiceAccount for this incoming Pod to `default`.
1. The admission controller ensures that the ServiceAccount referenced by the incoming Pod exists. If there
is no ServiceAccount with a matching name, the admission controller rejects the incoming Pod. That check
applies even for the `default` ServiceAccount.
-->
1. 如果该 Pod 没有设置 `.spec.serviceAccountName`
准入控制器为新来的 Pod 将 ServiceAccount 的名称设为 `default`
2. 准入控制器保证新来的 Pod 所引用的 ServiceAccount 确实存在。
如果没有 ServiceAccount 具有匹配的名称,则准入控制器拒绝新来的 Pod。
这个检查甚至适用于 `default` ServiceAccount。
<!--
1. Provided that neither the ServiceAccount's `automountServiceAccountToken` field nor the
Pod's `automountServiceAccountToken` field is set to `false`:
- the admission controller mutates the incoming Pod, adding an extra
{{< glossary_tooltip text="volume" term_id="volume" >}} that contains
a token for API access.
- the admission controller adds a `volumeMount` to each container in the Pod,
skipping any containers that already have a volume mount defined for the path
`/var/run/secrets/kubernetes.io/serviceaccount`.
For Linux containers, that volume is mounted at `/var/run/secrets/kubernetes.io/serviceaccount`;
on Windows nodes, the mount is at the equivalent path.
1. If the spec of the incoming Pod does not already contain any `imagePullSecrets`, then the
admission controller adds `imagePullSecrets`, copying them from the `ServiceAccount`.
-->
3. 如果服务账号的 `automountServiceAccountToken` 字段或 Pod 的
`automountServiceAccountToken` 字段都未显式设置为 `false`
- 准入控制器变更新来的 Pod添加一个包含 API
访问令牌的额外{{< glossary_tooltip text="卷" term_id="volume" >}}。
- 准入控制器会为 Pod 中的每个容器添加 `volumeMount`
跳过已经为 `/var/run/secrets/kubernetes.io/serviceaccount` 路径定义了卷挂载的容器。
对于 Linux 容器,此卷挂载在 `/var/run/secrets/kubernetes.io/serviceaccount`
在 Windows 节点上,此卷挂载在等价的路径上。
4. 如果新来 Pod 的规约中尚未包含任何 `imagePullSecrets`,则准入控制器会从
`ServiceAccount` 复制 `imagePullSecrets` 并添加到该 Pod 中。
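<!--
As an illustration of step 3 above (the Pod name and image below are assumptions), setting
`automountServiceAccountToken: false` on a Pod opts it out of the injected token volume:
-->
作为对上面第 3 步的示意(其中的 Pod 名称和镜像均为假设),在 Pod 上设置
`automountServiceAccountToken: false` 可以让准入控制器不再注入令牌卷:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-token-demo                     # 假设的名称
spec:
  serviceAccountName: default
  automountServiceAccountToken: false     # 不注入服务账号令牌卷
  containers:
  - name: app
    image: busybox:1.36                   # 仅作示意
    command: ["sleep", "infinity"]
```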
### TokenRequest API
{{< feature-state for_k8s_version="v1.22" state="stable" >}}
<!--
You use the [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)
subresource of a ServiceAccount to obtain a time-bound token for that ServiceAccount.
You don't need to call this to obtain an API token for use within a container, since
the kubelet sets this up for you using a _projected volume_.
If you want to use the TokenRequest API from `kubectl`, see
[Manually create an API token for a ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount).
-->
你使用 ServiceAccount 的
[TokenRequest](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)
子资源为该 ServiceAccount 获取有时间限制的令牌。
你不需要调用它来获取在容器中使用的 API 令牌,因为 kubelet 使用 **投射卷** 对此进行了设置。
如果你想要从 `kubectl` 使用 TokenRequest API
请参阅[为 ServiceAccount 手动创建 API 令牌](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount)。
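<!--
If your kubectl is v1.24 or later, a sketch of requesting such a token from the command line
(the namespace and ServiceAccount names reuse examples from this page):
-->
如果你的 kubectl 为 v1.24 或更高版本,可以用类似下面的命令从命令行请求这样的令牌
(名字空间和 ServiceAccount 名称沿用本页中的示例):

```shell
# 为 ServiceAccount example-automated-thing 请求一个限时 1 小时的令牌
kubectl -n examplens create token example-automated-thing --duration=1h
```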
<!--
The Kubernetes control plane (specifically, the ServiceAccount admission controller)
adds a projected volume to Pods, and the kubelet ensures that this volume contains a token
that lets containers authenticate as the right ServiceAccount.
(This mechanism superseded an earlier mechanism that added a volume based on a Secret,
where the Secret represented the ServiceAccount for the Pod but did not expire.)
Here's an example of how that looks for a launched Pod:
-->
Kubernetes 控制平面(特别是 ServiceAccount 准入控制器)向 Pod 添加了一个投射卷,
kubelet 确保该卷包含允许容器作为正确 ServiceAccount 进行身份认证的令牌。
(这种机制取代了之前基于 Secret 添加卷的机制,之前 Secret 代表了 Pod 所用的 ServiceAccount 但不会过期。)
以下示例展示在已启动的 Pod 中此卷看起来的样子:
```yaml
...
  - name: kube-api-access-<random-suffix>
    projected:
      defaultMode: 420 # 这个十进制数等同于八进制 0644
      sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
              - key: ca.crt
                path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
              - fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
                path: namespace
```
<!--
That manifest snippet defines a projected volume that combines information from three sources:
1. A `serviceAccountToken` source, that contains a token that the kubelet acquires from kube-apiserver.
The kubelet fetches time-bound tokens using the TokenRequest API. A token served for a TokenRequest expires
either when the pod is deleted or after a defined lifespan (by default, that is 1 hour).
The token is bound to the specific Pod and has the kube-apiserver as its audience.
1. A `configMap` source. The ConfigMap contains a bundle of certificate authority data. Pods can use these
certificates to make sure that they are connecting to your cluster's kube-apiserver (and not to a middlebox
or an accidentally misconfigured peer).
1. A `downwardAPI` source. This `downwardAPI` volume makes the name of the namespace containing the Pod available
to application code running inside the Pod.
-->
该清单片段定义了由三个数据源信息组成的投射卷。
1. `serviceAccountToken` 数据源,包含 kubelet 从 kube-apiserver 获取的令牌。
kubelet 使用 TokenRequest API 获取有时间限制的令牌。为 TokenRequest 服务的这个令牌会在
Pod 被删除或定义的生命周期(默认为 1 小时)结束之后过期。该令牌绑定到特定的 Pod
并将其 audience受众设置为与 `kube-apiserver` 的 audience 相匹配。
1. `configMap` 数据源。ConfigMap 包含一组证书颁发机构数据。
Pod 可以使用这些证书来确保自己连接到集群的 kube-apiserver而不是连接到中间件或意外配置错误的对等点上
1. `downwardAPI` 数据源。这个 `downwardAPI` 卷获得包含 Pod 的名字空间的名称,
并使该名称信息可用于在 Pod 内运行的应用程序代码。
<!--
Any container within the Pod that mounts this volume can access the above information.
## Create additional API tokens {#create-token}
-->
挂载此卷的 Pod 内的所有容器均可以访问上述信息。
## 创建额外的 API 令牌 {#create-token}
{{< caution >}}
<!--
Only create long-lived API tokens if the [token request](#tokenrequest-api) mechanism
is not suitable. The token request mechanism provides time-limited tokens; because these
expire, they represent a lower risk to information security.
-->
只有在[令牌请求](#tokenrequest-api)机制不适用时,才需要创建长期有效的 API 令牌。
令牌请求机制提供的是有时间限制的令牌;由于这些令牌会过期,它们带来的信息安全风险也更低。
{{< /caution >}}
<!--
To create a non-expiring, persisted API token for a ServiceAccount, create a
Secret of type `kubernetes.io/service-account-token` with an annotation
referencing the ServiceAccount. The control plane then generates a long-lived token and
updates that Secret with that generated token data.
Here is a sample manifest for such a Secret:
-->
要为 ServiceAccount 创建一个不过期、持久化的 API 令牌,
请创建一个类型为 `kubernetes.io/service-account-token` 的 Secret附带引用 ServiceAccount 的注解。
控制平面随后生成一个长久的令牌,并使用生成的令牌数据更新该 Secret。
以下是此类 Secret 的示例清单:
{{< codenew file="secret/serviceaccount/mysecretname.yaml" >}}
<!--
To create a Secret based on this example, run:
-->
若要基于此示例创建 Secret运行以下命令
```shell
kubectl -n examplens create -f https://k8s.io/examples/secret/serviceaccount/mysecretname.yaml
```
<!--
To see the details for that Secret, run:
-->
若要查看该 Secret 的详细信息,运行以下命令:
```shell
kubectl -n examplens describe secret mysecretname
```
<!--
The output is similar to:
-->
输出类似于:
```
Name: mysecretname
Namespace: examplens
Labels: <none>
Annotations: kubernetes.io/service-account.name=myserviceaccount
kubernetes.io/service-account.uid=8a85c4c4-8483-11e9-bc42-526af7764f64
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1362 bytes
namespace: 9 bytes
token: ...
```
<!--
If you launch a new Pod into the `examplens` namespace, it can use the `myserviceaccount`
service-account-token Secret that you just created.
-->
如果你在 `examplens` 名字空间中启动新的 Pod可以使用你刚刚创建的
`myserviceaccount` service-account-token Secret。
<!--
## Delete/invalidate a ServiceAccount token {#delete-token}
If you know the name of the Secret that contains the token you want to remove:
-->
## 删除/废止 ServiceAccount 令牌 {#delete-token}
如果你知道 Secret 的名称且该 Secret 包含要移除的令牌:
```shell
kubectl delete secret name-of-secret
```
<!--
Otherwise, first find the Secret for the ServiceAccount.
-->
否则,先找到 ServiceAccount 所用的 Secret。
```shell
# 此处假设你已有一个名为 'examplens' 的名字空间
kubectl -n examplens get serviceaccount/example-automated-thing -o yaml
```
<!--
The output is similar to:
-->
输出类似于:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecretname
  annotations:
    kubernetes.io/service-account.name: myserviceaccount
type: kubernetes.io/service-account-token
```
```shell
kubectl create -f ./secret.yaml
kubectl describe secret mysecretname
```
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"example-automated-thing","namespace":"examplens"}}
  creationTimestamp: "2019-07-21T07:07:07Z"
  name: example-automated-thing
  namespace: examplens
  resourceVersion: "777"
  selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing
  uid: f23fd170-66f2-4697-b049-e1e266b7f835
secrets:
- name: example-automated-thing-token-zyxwv
```
<!--
#### To delete/invalidate a ServiceAccount token Secret
Then, delete the Secret you now know the name of:
-->
#### 删除/废止服务账号令牌 Secret
随后删除你现在知道名称的 Secret
```shell
kubectl delete secret mysecretname
kubectl -n examplens delete secret/example-automated-thing-token-zyxwv
```
<!--
The control plane spots that the ServiceAccount is missing its Secret,
and creates a replacement:
-->
控制平面发现 ServiceAccount 缺少其 Secret并创建一个替代项
```shell
kubectl -n examplens get serviceaccount/example-automated-thing -o yaml
```
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"example-automated-thing","namespace":"examplens"}}
  creationTimestamp: "2019-07-21T07:07:07Z"
  name: example-automated-thing
  namespace: examplens
  resourceVersion: "1026"
  selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing
  uid: f23fd170-66f2-4697-b049-e1e266b7f835
secrets:
- name: example-automated-thing-token-4rdrh
```
<!--
## Clean up
If you created a namespace `examplens` to experiment with, you can remove it:
-->
## 清理 {#clean-up}
如果创建了一个 `examplens` 名字空间进行试验,你可以移除它:
```shell
kubectl delete namespace examplens
```
<!--
## Control plane details
### ServiceAccount controller
A ServiceAccount controller manages the ServiceAccounts inside namespaces, and
ensures a ServiceAccount named "default" exists in every active namespace.
-->
### 服务账号控制器 {#serviceaccount-controller}
## 控制平面细节 {#control-plane-details}
服务账号控制器管理各名字空间下的 ServiceAccount 对象,
并且保证每个活跃的名字空间下存在一个名为 "default" 的 ServiceAccount。
### ServiceAccount 控制器 {#serviceaccount-controller}
ServiceAccount 控制器管理名字空间内的 ServiceAccount并确保每个活跃的名字空间中都存在名为
“default” 的 ServiceAccount。
<!--
### Token controller
The service account token controller runs as part of `kube-controller-manager`.
This controller acts asynchronously. It:
- watches for ServiceAccount creation and creates a corresponding
ServiceAccount token Secret to allow API access.
- watches for ServiceAccount deletion and deletes all corresponding ServiceAccount
token Secrets.
- watches for ServiceAccount token Secret addition, and ensures the referenced
ServiceAccount exists, and adds a token to the Secret if needed.
- watches for Secret deletion and removes a reference from the corresponding
ServiceAccount if needed.
-->
### 令牌控制器 {#token-controller}
服务账号令牌控制器作为 `kube-controller-manager` 的一部分运行,以异步的形式工作。
其职责包括:
- 监测 ServiceAccount 的创建并创建相应的服务账号令牌 Secret 以允许 API 访问。
- 监测 ServiceAccount 的删除并删除所有相应的服务账号令牌 Secret。
- 监测服务账号令牌 Secret 的添加,保证相应的 ServiceAccount 存在,如有需要,
向 Secret 中添加令牌。
- 监测 Secret 的删除,如有需要,从相应的 ServiceAccount 中移除引用。
<!--
You must pass a service account private key file to the token controller in
the `kube-controller-manager` using the `--service-account-private-key-file`
flag. The private key is used to sign generated service account tokens.
Similarly, you must pass the corresponding public key to the `kube-apiserver`
using the `--service-account-key-file` flag. The public key will be used to
verify the tokens during authentication.
-->
你必须通过 `--service-account-private-key-file` 标志为 `kube-controller-manager`
的令牌控制器传入一个服务账号私钥文件。该私钥用于为所生成的服务账号令牌签名。
同样地,你需要通过 `--service-account-key-file` 标志将对应的公钥通知给
kube-apiserver。公钥用于在身份认证过程中校验令牌。
## {{% heading "whatsnext" %}}
<!--
- Read more details about [projected volumes](/docs/concepts/storage/projected-volumes/).
-->
- 查阅有关[投射卷](/zh-cn/docs/concepts/storage/projected-volumes/)的更多细节。
Some files were not shown because too many files have changed in this diff