diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index a34ed8d9c4..2c1452beab 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -212,6 +212,7 @@ aliases: - ngtuna - truongnh1992 sig-docs-ru-owners: # Admins for Russian content + - Arhell - msheldyakov - aisonaku - potapy4 @@ -245,11 +246,11 @@ aliases: # authoritative source: git.k8s.io/community/OWNERS_ALIASES committee-steering: # provide PR approvals for announcements - cblecker + - cpanato - bentheelder - justaugustus - mrbobbytables - palnabarun - - parispittman - tpepper # authoritative source: https://git.k8s.io/sig-release/OWNERS_ALIASES sig-release-leads: diff --git a/content/de/docs/tasks/tools/install-kubectl.md b/content/de/docs/tasks/tools/install-kubectl.md index 2354fad25f..53f04a9047 100644 --- a/content/de/docs/tasks/tools/install-kubectl.md +++ b/content/de/docs/tasks/tools/install-kubectl.md @@ -42,7 +42,7 @@ baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 enabled=1 gpgcheck=1 repo_gpgcheck=1 -gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg +gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg EOF yum install -y kubectl {{< /tab >}} diff --git a/content/en/blog/_posts/2017-02-00-Caas-The-Foundation-For-Next-Gen-Paas.md b/content/en/blog/_posts/2017-02-00-Caas-The-Foundation-For-Next-Gen-Paas.md index c613e8e29f..66c810591b 100644 --- a/content/en/blog/_posts/2017-02-00-Caas-The-Foundation-For-Next-Gen-Paas.md +++ b/content/en/blog/_posts/2017-02-00-Caas-The-Foundation-For-Next-Gen-Paas.md @@ -30,7 +30,7 @@ This then points to the other benefit of next generation PaaS being built on top Kubernetes is infrastructure for next generation applications, PaaS and more. Given this, I’m really excited by our [announcement](https://azure.microsoft.com/en-us/blog/kubernetes-now-generally-available-on-azure-container-service/) today that Kubernetes on Azure Container Service has reached general availability. When you deploy your next generation application to Azure, whether on a PaaS or deployed directly onto Kubernetes itself (or both) you can deploy it onto a managed, supported Kubernetes cluster. -Furthermore, because we know that the world of PaaS and software development in general is a hybrid one, we’re excited to announce the preview availability of [Windows clusters in Azure Container Service](https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough). We’re also working on [hybrid clusters](https://github.com/Azure/acs-engine/blob/master/docs/kubernetes/windows.md) in [ACS-Engine](https://github.com/Azure/acs-engine) and expect to roll those out to general availability in the coming months. +Furthermore, because we know that the world of PaaS and software development in general is a hybrid one, we’re excited to announce the preview availability of [Windows clusters in Azure Container Service](https://learn.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough). We’re also working on [hybrid clusters](https://github.com/Azure/acs-engine/blob/master/docs/kubernetes/windows.md) in [ACS-Engine](https://github.com/Azure/acs-engine) and expect to roll those out to general availability in the coming months. I’m thrilled to see how containers and container as a service is changing the world of compute, I’m confident that we’re only scratching the surface of the transformation we’ll see in the coming months and years. 
diff --git a/content/en/blog/_posts/2018-05-04-Announcing-Kubeflow-0-1.md b/content/en/blog/_posts/2018-05-04-Announcing-Kubeflow-0-1.md index d60dbf22f6..1be435c931 100644 --- a/content/en/blog/_posts/2018-05-04-Announcing-Kubeflow-0-1.md +++ b/content/en/blog/_posts/2018-05-04-Announcing-Kubeflow-0-1.md @@ -94,7 +94,7 @@ If you’d like to try out Kubeflow, we have a number of options for you: 1. You can use sample walkthroughs hosted on [Katacoda](https://www.katacoda.com/kubeflow) 2. You can follow a guided tutorial with existing models from the [examples repository](https://github.com/kubeflow/examples). These include the [GitHub Issue Summarization](https://github.com/kubeflow/examples/tree/master/github_issue_summarization), [MNIST](https://github.com/kubeflow/examples/tree/master/mnist) and [Reinforcement Learning with Agents](https://github.com/kubeflow/examples/tree/v0.5.1/agents). -3. You can start a cluster on your own and try your own model. Any Kubernetes conformant cluster will support Kubeflow including those from contributors [Caicloud](https://www.prnewswire.com/news-releases/caicloud-releases-its-kubernetes-based-cluster-as-a-service-product-claas-20-and-the-first-tensorflow-as-a-service-taas-11-while-closing-6m-series-a-funding-300418071.html), [Canonical](https://jujucharms.com/canonical-kubernetes/), [Google](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-container-cluster), [Heptio](https://heptio.com/products/kubernetes-subscription/), [Mesosphere](https://github.com/mesosphere/dcos-kubernetes-quickstart), [Microsoft](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough), [IBM](https://cloud.ibm.com/docs/containers?topic=containers-cs_cluster_tutorial#cs_cluster_tutorial), [Red Hat/Openshift ](https://docs.openshift.com/container-platform/3.3/install_config/install/quick_install.html#install-config-install-quick-install)and [Weaveworks](https://www.weave.works/product/cloud/). +3. You can start a cluster on your own and try your own model. Any Kubernetes conformant cluster will support Kubeflow including those from contributors [Caicloud](https://www.prnewswire.com/news-releases/caicloud-releases-its-kubernetes-based-cluster-as-a-service-product-claas-20-and-the-first-tensorflow-as-a-service-taas-11-while-closing-6m-series-a-funding-300418071.html), [Canonical](https://jujucharms.com/canonical-kubernetes/), [Google](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-container-cluster), [Heptio](https://heptio.com/products/kubernetes-subscription/), [Mesosphere](https://github.com/mesosphere/dcos-kubernetes-quickstart), [Microsoft](https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough), [IBM](https://cloud.ibm.com/docs/containers?topic=containers-cs_cluster_tutorial#cs_cluster_tutorial), [Red Hat/Openshift ](https://docs.openshift.com/container-platform/3.3/install_config/install/quick_install.html#install-config-install-quick-install)and [Weaveworks](https://www.weave.works/product/cloud/). There were also a number of sessions at KubeCon + CloudNativeCon EU 2018 covering Kubeflow. The links to the talks are here; the associated videos will be posted in the coming days. 
diff --git a/content/en/blog/_posts/2018-10-08-support-for-azure-vmss.md b/content/en/blog/_posts/2018-10-08-support-for-azure-vmss.md index 42746f49e2..ca942ac013 100644 --- a/content/en/blog/_posts/2018-10-08-support-for-azure-vmss.md +++ b/content/en/blog/_posts/2018-10-08-support-for-azure-vmss.md @@ -10,11 +10,11 @@ date: 2018-10-08 With Kubernetes v1.12, Azure virtual machine scale sets (VMSS) and cluster-autoscaler have reached their General Availability (GA) and User Assigned Identity is available as a preview feature. -_Azure VMSS allow you to create and manage identical, load balanced VMs that automatically increase or decrease based on demand or a set schedule. This enables you to easily manage and scale multiple VMs to provide high availability and application resiliency, ideal for large-scale applications like container workloads [[1]](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview)._ +_Azure VMSS allow you to create and manage identical, load balanced VMs that automatically increase or decrease based on demand or a set schedule. This enables you to easily manage and scale multiple VMs to provide high availability and application resiliency, ideal for large-scale applications like container workloads [[1]](https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview)._ Cluster autoscaler allows you to adjust the size of the Kubernetes clusters based on the load conditions automatically. -Another exciting feature which v1.12 brings to the table is the ability to use User Assigned Identities with Kubernetes clusters [[12]](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview). +Another exciting feature which v1.12 brings to the table is the ability to use User Assigned Identities with Kubernetes clusters [[12]](https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview). In this article, we will do a brief overview of VMSS, cluster autoscaler and user assigned identity features on Azure. @@ -22,7 +22,7 @@ In this article, we will do a brief overview of VMSS, cluster autoscaler and use Azure’s Virtual Machine Scale sets (VMSS) feature offers users an ability to automatically create VMs from a single central configuration, provide load balancing via L4 and L7 load balancing, provide a path to use availability zones for high availability, provides large-scale VM instances et. al. -VMSS consists of a group of virtual machines, which are identical and can be managed and configured at a group level. More details of this feature in Azure itself can be found at the following link [[1]](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview). +VMSS consists of a group of virtual machines, which are identical and can be managed and configured at a group level. More details of this feature in Azure itself can be found at the following link [[1]](https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview). With Kubernetes v1.12 customers can create k8s cluster out of VMSS instances and utilize VMSS features. @@ -254,7 +254,7 @@ Cluster Autoscaler currently supports four VM types: standard (VMAS), VMSS, ACS ## User Assigned Identity -Inorder for the Kubernetes cluster components to securely talk to the cloud services, it needs to authenticate with the cloud provider. In Azure Kubernetes clusters, up until now this was done using two ways - Service Principals or Managed Identities. 
In case of service principal the credentials are stored within the cluster and there are password rotation and other challenges which user needs to incur to accommodate this model. Managed service identities takes out this burden from the user and manages the service instances directly [[12]](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview). +In order for the Kubernetes cluster components to securely talk to the cloud services, they need to authenticate with the cloud provider. In Azure Kubernetes clusters, up until now this was done in one of two ways - Service Principals or Managed Identities. In the case of a service principal, the credentials are stored within the cluster, and there are password rotation and other challenges which the user needs to handle to accommodate this model. Managed service identities take this burden away from the user and manage the service instances directly [[12]](https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview). There are two kinds of managed identities possible - one is system assigned and another is user assigned. In case of system assigned identity each vm in the Kubernetes cluster is assigned a managed identity during creation. This identity is used by various Kubernetes components needing access to Azure resources. Examples to these operations are getting/updating load balancer configuration, getting/updating vm information etc. With the system assigned managed identity, user has no control over the identity which is assigned to the underlying vm. The system automatically assigns it and this reduces the flexibility for the user. @@ -273,7 +273,7 @@ env.ServiceManagementEndpoint, config.UserAssignedIdentityID) ``` -This calls hits either the instance metadata service or the vm extension [[12]](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview) to gather the token which is then used to access various resources. +This call hits either the instance metadata service or the vm extension [[12]](https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview) to gather the token, which is then used to access various resources. ## Setting up a cluster with user assigned identity @@ -304,11 +304,11 @@ For azure specific discussions - please checkout the Azure SIG page at [[6]](htt For CA, please checkout the Autoscaler project here [[7]](http://www.github.com/kubernetes/autoscaler) and join the [#sig-autoscaling](https://kubernetes.slack.com/messages/sig-autoscaling) Slack for more discussions. -For the acs-engine (the unmanaged variety) on Azure docs can be found here: [[9]](https://github.com/Azure/acs-engine). More details about the managed service from Azure Kubernetes Service (AKS) here [[5]](https://docs.microsoft.com/en-us/azure/aks/). +For the acs-engine (the unmanaged variety) on Azure, docs can be found here: [[9]](https://github.com/Azure/acs-engine). More details about the managed service, Azure Kubernetes Service (AKS), are here [[5]](https://learn.microsoft.com/en-us/azure/aks/). 
## References -1) https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview +1) https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview 2) /docs/concepts/architecture/cloud-controller/ @@ -316,7 +316,7 @@ For the acs-engine (the unmanaged variety) on Azure docs can be found here: [[9] 4) https://github.com/Azure/acs-engine/blob/master/docs/kubernetes/deploy.md -5) https://docs.microsoft.com/en-us/azure/aks/ +5) https://learn.microsoft.com/en-us/azure/aks/ 6) https://github.com/kubernetes/community/tree/master/sig-azure @@ -330,7 +330,7 @@ For the acs-engine (the unmanaged variety) on Azure docs can be found here: [[9] 11) /docs/concepts/architecture/ -12) https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview +12) https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview 13) https://github.com/Azure/acs-engine/tree/master/examples/kubernetes-msi-userassigned diff --git a/content/en/blog/_posts/2020-05-21-wsl2-dockerdesktop-k8s.md b/content/en/blog/_posts/2020-05-21-wsl2-dockerdesktop-k8s.md index 1166d8b766..9a1d476030 100644 --- a/content/en/blog/_posts/2020-05-21-wsl2-dockerdesktop-k8s.md +++ b/content/en/blog/_posts/2020-05-21-wsl2-dockerdesktop-k8s.md @@ -16,7 +16,7 @@ New to Windows 10 and WSL2, or new to Docker and Kubernetes? Welcome to this blo For the last few years, Kubernetes became a de-facto standard platform for running containerized services and applications in distributed environments. While a wide variety of distributions and installers exist to deploy Kubernetes in the cloud environments (public, private or hybrid), or within the bare metal environments, there is still a need to deploy and run Kubernetes locally, for example, on the developer's workstation. -Kubernetes has been originally designed to be deployed and used in the Linux environments. However, a good number of users (and not only application developers) use Windows OS as their daily driver. When Microsoft revealed WSL - [the Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/), the line between Windows and Linux environments became even less visible. +Kubernetes has been originally designed to be deployed and used in the Linux environments. However, a good number of users (and not only application developers) use Windows OS as their daily driver. When Microsoft revealed WSL - [the Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/), the line between Windows and Linux environments became even less visible. Also, WSL brought an ability to run Kubernetes on Windows almost seamlessly! 
@@ -31,7 +31,7 @@ Since we will explain how to install KinD, we won't go into too much detail arou However, here is the list of the prerequisites needed and their version/lane: - OS: Windows 10 version 2004, Build 19041 -- [WSL2 enabled](https://docs.microsoft.com/en-us/windows/wsl/wsl2-install) +- [WSL2 enabled](https://learn.microsoft.com/en-us/windows/wsl/wsl2-install) - In order to install the distros as WSL2 by default, once WSL2 installed, run the command `wsl.exe --set-default-version 2` in Powershell - WSL2 distro installed from the Windows Store - the distro used is Ubuntu-18.04 - [Docker Desktop for Windows](https://hub.docker.com/editions/community/docker-ce-desktop-windows), stable channel - the version used is 2.2.0.4 diff --git a/content/en/blog/_posts/2022-05-27-maxunavailable-for-statefulset.md b/content/en/blog/_posts/2022-05-27-maxunavailable-for-statefulset.md index aa6257eb3e..5dd786d9ad 100644 --- a/content/en/blog/_posts/2022-05-27-maxunavailable-for-statefulset.md +++ b/content/en/blog/_posts/2022-05-27-maxunavailable-for-statefulset.md @@ -69,7 +69,7 @@ has 5 replicas, with `maxUnavailable` set to 2 and `partition` set to 0. I can trigger a rolling update by changing the image to `k8s.gcr.io/nginx-slim:0.9`. Once I initiate the rolling update, I can watch the pods update 2 at a time as the current value of maxUnavailable is 2. The below output shows a span of time and is not complete. The maxUnavailable can be an absolute number (for example, 2) or a percentage of desired Pods (for example, 10%). The -absolute number is calculated from percentage by rounding down. +absolute number is calculated from percentage by rounding up to the nearest integer. ``` kubectl get pods --watch ``` diff --git a/content/en/blog/_posts/2022-10-18-kubernetes-1.26-deprecations-and-removals.md b/content/en/blog/_posts/2022-10-18-kubernetes-1.26-deprecations-and-removals.md new file mode 100644 index 0000000000..8574208d87 --- /dev/null +++ b/content/en/blog/_posts/2022-10-18-kubernetes-1.26-deprecations-and-removals.md @@ -0,0 +1,136 @@ +--- +layout: blog +title: "Kubernetes Removals, Deprecations, and Major Changes in 1.26" +date: 2022-11-18 +slug: upcoming-changes-in-kubernetes-1-26 +--- + +**Author**: Frederico Muñoz (SAS) + +Change is an integral part of the Kubernetes life-cycle: as Kubernetes grows and matures, features may be deprecated, removed, or replaced with improvements for the health of the project. For Kubernetes v1.26 there are several planned: this article identifies and describes some of them, based on the information available at this mid-cycle point in the v1.26 release process, which is still ongoing and can introduce additional changes. + +## The Kubernetes API Removal and Deprecation process {#k8s-api-deprecation-process} + +The Kubernetes project has a [well-documented deprecation policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/) for features. This policy states that stable APIs may only be deprecated when a newer, stable version of that same API is available and that APIs have a minimum lifetime for each stability level. A deprecated API is one that has been marked for removal in a future Kubernetes release; it will continue to function until removal (at least one year from the deprecation), but usage will result in a warning being displayed. Removed APIs are no longer available in the current version, at which point you must migrate to using the replacement. 
+
+* Generally available (GA) or stable API versions may be marked as deprecated but must not be removed within a major version of Kubernetes.
+* Beta or pre-release API versions must be supported for 3 releases after deprecation.
+* Alpha or experimental API versions may be removed in any release without prior deprecation notice.
+
+Whether an API is removed as a result of a feature graduating from beta to stable or because that API simply did not succeed, all removals comply with this deprecation policy. Whenever an API is removed, migration options are communicated in the documentation.
+
+## A note about the removal of the CRI `v1alpha2` API and containerd 1.5 support {#cri-api-removal}
+
+Following the adoption of the [Container Runtime Interface](https://kubernetes.io/docs/concepts/architecture/cri/) (CRI) and the removal of dockershim in v1.24, the CRI is the supported and documented way through which Kubernetes interacts with different container runtimes. Each kubelet negotiates which version of CRI to use with the container runtime on that node.
+
+The Kubernetes project recommends using CRI version `v1`; in Kubernetes v1.25 the kubelet can also negotiate the use of CRI `v1alpha2` (which was deprecated at the same time that support for the stable `v1` interface was added).
+
+Kubernetes v1.26 will not support CRI `v1alpha2`. That [removal](https://github.com/kubernetes/kubernetes/pull/110618) will result in the kubelet not registering the node if the container runtime doesn't support CRI `v1`. This means that containerd minor version 1.5 and older will not be supported in Kubernetes 1.26; if you use containerd, you will need to upgrade to containerd version 1.6.0 or later **before** you upgrade that node to Kubernetes v1.26. Other container runtimes that only support the `v1alpha2` API are equally affected: if that affects you, you should contact the container runtime vendor for advice or check their website for additional instructions on how to move forward.
+
+If you want to benefit from v1.26 features and still use an older container runtime, you can run an older kubelet. The [supported skew](/releases/version-skew-policy/#kubelet) for the kubelet allows you to run a v1.25 kubelet, which is still compatible with `v1alpha2` CRI support, even if you upgrade the control plane to the 1.26 minor release of Kubernetes.
+
+As well as container runtimes themselves, there are tools like [stargz-snapshotter](https://github.com/containerd/stargz-snapshotter) that act as a proxy between the kubelet and the container runtime, and those might also be affected.
+
+## Deprecations and removals in Kubernetes v1.26 {#deprecations-removals}
+
+In addition to the above, Kubernetes v1.26 is targeted to include several additional removals and deprecations.
+
+### Removal of the `v1beta1` flow control API group
+
+The `flowcontrol.apiserver.k8s.io/v1beta1` API version of FlowSchema and PriorityLevelConfiguration [will no longer be served in v1.26](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#flowcontrol-resources-v126). Users should migrate manifests and API clients to use the `flowcontrol.apiserver.k8s.io/v1beta2` API version, available since v1.23.
+
+### Removal of the `v2beta2` HorizontalPodAutoscaler API
+
+The `autoscaling/v2beta2` API version of HorizontalPodAutoscaler [will no longer be served in v1.26](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#horizontalpodautoscaler-v126). 
Users should migrate manifests and API clients to use the `autoscaling/v2` API version, available since v1.23.
+
+### Removal of in-tree credential management code
+
+In this upcoming release, legacy vendor-specific authentication code that is part of Kubernetes
+will be [removed](https://github.com/kubernetes/kubernetes/pull/112341) from both
+`client-go` and `kubectl`.
+The existing mechanism supports authentication for two specific cloud providers:
+Azure and Google Cloud.
+In its place, Kubernetes already offers a vendor-neutral
+[authentication plugin mechanism](/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins) -
+you can switch over right now, before the v1.26 release happens.
+If you're affected, you can find additional guidance on how to proceed for
+[Azure](https://github.com/Azure/kubelogin#readme) and for
+[Google Cloud](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke).
+
+### Removal of `kube-proxy` userspace modes
+
+The `userspace` proxy mode, deprecated for over a year, is [no longer supported on either Linux or Windows](https://github.com/kubernetes/kubernetes/pull/112133) and will be removed in this release. Users should use `iptables` or `ipvs` on Linux, or `kernelspace` on Windows: using `--mode userspace` will now fail.
+
+### Removal of in-tree OpenStack cloud provider
+
+Kubernetes is switching from in-tree code for storage integrations, in favor of the Container Storage Interface (CSI).
+As part of this, Kubernetes v1.26 will remove the deprecated in-tree storage integration for OpenStack
+(the `cinder` volume type). You should migrate to the external cloud provider and CSI driver from
+https://github.com/kubernetes/cloud-provider-openstack instead.
+For more information, visit [Cinder in-tree to CSI driver migration](https://github.com/kubernetes/enhancements/issues/1489).
+
+### Removal of the GlusterFS in-tree driver
+
+The in-tree GlusterFS driver was [deprecated in v1.25](https://kubernetes.io/blog/2022/08/23/kubernetes-v1-25-release/#deprecations-and-removals), and will be removed from Kubernetes v1.26.
+
+### Deprecation of non-inclusive `kubectl` flag
+
+As part of the implementation effort of the [Inclusive Naming Initiative](https://www.cncf.io/announcements/2021/10/13/inclusive-naming-initiative-announces-new-community-resources-for-a-more-inclusive-future/),
+the `--prune-whitelist` flag will be [deprecated](https://github.com/kubernetes/kubernetes/pull/113116), and replaced with `--prune-allowlist`.
+Users that use this flag are strongly advised to make the necessary changes prior to the final removal of the flag, in a future release.
+
+### Removal of dynamic kubelet configuration
+
+_Dynamic kubelet configuration_ allowed [new kubelet configurations to be rolled out via the Kubernetes API](https://github.com/kubernetes/enhancements/tree/2cd758cc6ab617a93f578b40e97728261ab886ed/keps/sig-node/281-dynamic-kubelet-configuration), even in a live cluster.
+A cluster operator could reconfigure the kubelet on a Node by specifying a ConfigMap
+that contained the configuration data that the kubelet should use.
+Dynamic kubelet configuration was removed from the kubelet in v1.24, and will be
+[removed from the API server](https://github.com/kubernetes/kubernetes/pull/112643) in the v1.26 release. 
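For the HorizontalPodAutoscaler change described above, a minimal manifest written against the replacement `autoscaling/v2` API might look like the following sketch; the object name, target Deployment, replica bounds, and CPU threshold are placeholder values rather than recommendations:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa                  # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment        # placeholder workload to scale
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80      # scale out when average CPU utilization exceeds 80%
```

For a simple resource-metric setup like this, the spec is unchanged from `autoscaling/v2beta2`, so migrating is often just a matter of updating the `apiVersion` and re-applying the manifest.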
+
+### Deprecations for `kube-apiserver` command line arguments
+
+The `--master-service-namespace` command line argument to the kube-apiserver doesn't have
+any effect, and was already informally [deprecated](https://github.com/kubernetes/kubernetes/pull/38186).
+That command line argument will be formally marked as deprecated in v1.26, preparing for its
+removal in a future release.
+The Kubernetes project does not expect any impact from this deprecation and removal.
+
+### Deprecations for `kubectl run` command line arguments
+
+Several unused option arguments for the `kubectl run` subcommand will be [marked as deprecated](https://github.com/kubernetes/kubernetes/pull/112261), including:
+
+* `--cascade`
+* `--filename`
+* `--force`
+* `--grace-period`
+* `--kustomize`
+* `--recursive`
+* `--timeout`
+* `--wait`
+
+These arguments are already ignored so no impact is expected: the explicit deprecation sets a warning message and prepares the removal of the arguments in a future release.
+
+### Removal of legacy command line arguments relating to logging
+
+Kubernetes v1.26 will [remove](https://github.com/kubernetes/kubernetes/pull/112120) some
+command line arguments relating to logging. These command line arguments were
+already deprecated.
+For more information, see [Deprecate klog specific flags in Kubernetes Components](https://github.com/kubernetes/enhancements/tree/3cb66bd0a1ef973ebcc974f935f0ac5cba9db4b2/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components).
+
+## Looking ahead {#looking-ahead}
+
+The official list of [API removals](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-27) planned for Kubernetes 1.27 includes:
+
+* All beta versions of the CSIStorageCapacity API; specifically: `storage.k8s.io/v1beta1`
+
+### Want to know more?
+
+Deprecations are announced in the Kubernetes release notes. You can see the announcements of pending deprecations in the release notes for:
+* [Kubernetes 1.21](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#deprecation)
+* [Kubernetes 1.22](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#deprecation)
+* [Kubernetes 1.23](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#deprecation)
+* [Kubernetes 1.24](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#deprecation)
+* [Kubernetes 1.25](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#deprecation)
+
+We will formally announce the deprecations that come with [Kubernetes 1.26](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#deprecation) as part of the CHANGELOG for that release.
+ diff --git a/content/en/docs/concepts/architecture/cgroups.md b/content/en/docs/concepts/architecture/cgroups.md index 6347c608b5..377c073b42 100644 --- a/content/en/docs/concepts/architecture/cgroups.md +++ b/content/en/docs/concepts/architecture/cgroups.md @@ -106,7 +106,7 @@ updated to newer versions that support cgroup v2. For example: ## Identify the cgroup version on Linux Nodes {#check-cgroup-version} -The cgroup version depends on on the Linux distribution being used and the +The cgroup version depends on the Linux distribution being used and the default cgroup version configured on the OS. 
To check which cgroup version your distribution uses, run the `stat -fc %T /sys/fs/cgroup/` command on the node: diff --git a/content/en/docs/concepts/architecture/cri.md b/content/en/docs/concepts/architecture/cri.md index 1bf20e9c84..2b8fe79a5b 100644 --- a/content/en/docs/concepts/architecture/cri.md +++ b/content/en/docs/concepts/architecture/cri.md @@ -39,7 +39,7 @@ and doesn't register as a node. ## Upgrading -When upgrading Kubernetes, then the kubelet tries to automatically select the +When upgrading Kubernetes, the kubelet tries to automatically select the latest CRI version on restart of the component. If that fails, then the fallback will take place as mentioned above. If a gRPC re-dial was required because the container runtime has been upgraded, then the container runtime must also diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md index a58c1a3322..52408e6022 100644 --- a/content/en/docs/concepts/configuration/secret.md +++ b/content/en/docs/concepts/configuration/secret.md @@ -101,9 +101,9 @@ the exact mechanisms for issuing and refreshing those session tokens. There are several options to create a Secret: -- [create Secret using `kubectl` command](/docs/tasks/configmap-secret/managing-secret-using-kubectl/) -- [create Secret from config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/) -- [create Secret using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/) +- [Use `kubectl`](/docs/tasks/configmap-secret/managing-secret-using-kubectl/) +- [Use a configuration file](/docs/tasks/configmap-secret/managing-secret-using-config-file/) +- [Use the Kustomize tool](/docs/tasks/configmap-secret/managing-secret-using-kustomize/) #### Constraints on Secret names and data {#restriction-names-data} @@ -132,41 +132,18 @@ number of Secrets (or other resources) in a namespace. ### Editing a Secret -You can edit an existing Secret using kubectl: +You can edit an existing Secret unless it is [immutable](#secret-immutable). To +edit a Secret, use one of the following methods: -```shell -kubectl edit secrets mysecret -``` +* [Use `kubectl`](/docs/tasks/configmap-secret/managing-secret-using-kubectl/#edit-secret) +* [Use a configuration file](/docs/tasks/configmap-secret/managing-secret-using-config-file/#edit-secret) -This opens your default editor and allows you to update the base64 encoded Secret -values in the `data` field; for example: +You can also edit the data in a Secret using the [Kustomize tool](/docs/tasks/configmap-secret/managing-secret-using-kustomize/#edit-secret). However, this +method creates a new `Secret` object with the edited data. -```yaml -# Please edit the object below. Lines beginning with a '#' will be ignored, -# and an empty file will abort the edit. If an error occurs while saving this file, it will be -# reopened with the relevant failures. -# -apiVersion: v1 -data: - username: YWRtaW4= - password: MWYyZDFlMmU2N2Rm -kind: Secret -metadata: - annotations: - kubectl.kubernetes.io/last-applied-configuration: { ... } - creationTimestamp: 2020-01-22T18:41:56Z - name: mysecret - namespace: default - resourceVersion: "164619" - uid: cfee02d6-c137-11e5-8d73-42010af00002 -type: Opaque -``` - -That example manifest defines a Secret with two keys in the `data` field: `username` and `password`. -The values are Base64 strings in the manifest; however, when you use the Secret with a Pod -then the kubelet provides the _decoded_ data to the Pod and its containers. 
- -You can package many keys and values into one Secret, or use many Secrets, whichever is convenient. +Depending on how you created the Secret, as well as how the Secret is used in +your Pods, updates to existing `Secret` objects are propagated automatically to +Pods that use the data. For more information, refer to [Mounted Secrets are updated automatically](#mounted-secrets-are-updated-automatically). ### Using a Secret @@ -1195,7 +1172,7 @@ A bootstrap type Secret has the following keys specified under `data`: - `token-secret`: A random 16 character string as the actual token secret. Required. - `description`: A human-readable string that describes what the token is used for. Optional. -- `expiration`: An absolute UTC time using RFC3339 specifying when the token +- `expiration`: An absolute UTC time using [RFC3339](https://datatracker.ietf.org/doc/html/rfc3339) specifying when the token should be expired. Optional. - `usage-bootstrap-`: A boolean flag indicating additional usage for the bootstrap token. diff --git a/content/en/docs/concepts/containers/_index.md b/content/en/docs/concepts/containers/_index.md index 746e1b7fc9..7a1c1c9eec 100644 --- a/content/en/docs/concepts/containers/_index.md +++ b/content/en/docs/concepts/containers/_index.md @@ -6,7 +6,6 @@ reviewers: - erictune - thockin content_type: concept -no_list: true --- @@ -18,7 +17,10 @@ run it. Containers decouple applications from underlying host infrastructure. This makes deployment easier in different cloud or OS environments. - +Each {{< glossary_tooltip text="node" term_id="node" >}} in a Kubernetes +cluster runs the containers that form the +[Pods](/docs/concepts/workloads/pods/) assigned to that node. +Containers in a Pod are co-located and co-scheduled to run on the same node. @@ -29,17 +31,23 @@ software package, containing everything needed to run an application: the code and any runtime it requires, application and system libraries, and default values for any essential settings. -By design, a container is immutable: you cannot change the code of a -container that is already running. If you have a containerized application -and want to make changes, you need to build a new image that includes -the change, then recreate the container to start from the updated image. +Containers are intended to be stateless and +[immutable](https://glossary.cncf.io/immutable-infrastructure/): +you should not change +the code of a container that is already running. If you have a containerized +application and want to make changes, the correct process is to build a new +image that includes the change, then recreate the container to start from the +updated image. ## Container runtimes {{< glossary_definition term_id="container-runtime" length="all" >}} -## {{% heading "whatsnext" %}} - -* Read about [container images](/docs/concepts/containers/images/) -* Read about [Pods](/docs/concepts/workloads/pods/) +Usually, you can allow your cluster to pick the default container runtime +for a Pod. If you need to use more than one container runtime in your cluster, +you can specify the [RuntimeClass](/docs/concepts/containers/runtime-class/) +for a Pod to make sure that Kubernetes runs those containers using a +particular container runtime. +You can also use RuntimeClass to run different Pods with the same container +runtime but with different settings. 
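As a rough sketch of that RuntimeClass flow, the following manifests define a RuntimeClass and a Pod that selects it. The `gvisor` name and `runsc` handler are illustrative placeholders; the handler must match one that the container runtime on your nodes is actually configured with:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor                 # placeholder; any descriptive name works
handler: runsc                 # must match a handler configured in the node's CRI runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-nginx
spec:
  runtimeClassName: gvisor     # containers in this Pod run via the handler above
  containers:
  - name: nginx
    image: nginx:1.23
```

If `runtimeClassName` is omitted, the Pod simply runs with the cluster's default runtime configuration.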
diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md index c696fbb3ea..d7d037d21b 100644 --- a/content/en/docs/concepts/containers/images.md +++ b/content/en/docs/concepts/containers/images.md @@ -5,6 +5,7 @@ reviewers: title: Images content_type: concept weight: 10 +hide_summary: true # Listed separately in section index --- @@ -19,6 +20,12 @@ before referring to it in a This page provides an outline of the container image concept. +{{< note >}} +If you are looking for the container images for a Kubernetes +release (such as v{{< skew latestVersion >}}, the latest minor release), +visit [Download Kubernetes](https://kubernetes.io/releases/download/). +{{< /note >}} + ## Image names diff --git a/content/en/docs/concepts/containers/runtime-class.md b/content/en/docs/concepts/containers/runtime-class.md index ff4bbcd57a..b43bde9a2f 100644 --- a/content/en/docs/concepts/containers/runtime-class.md +++ b/content/en/docs/concepts/containers/runtime-class.md @@ -5,6 +5,7 @@ reviewers: title: Runtime Class content_type: concept weight: 30 +hide_summary: true # Listed separately in section index --- diff --git a/content/en/docs/concepts/overview/_index.md b/content/en/docs/concepts/overview/_index.md index e326f68716..72abe7298a 100644 --- a/content/en/docs/concepts/overview/_index.md +++ b/content/en/docs/concepts/overview/_index.md @@ -18,9 +18,16 @@ This page is an overview of Kubernetes. -Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available. -The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s as an abbreviation results from counting the eight letters between the "K" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes combines [over 15 years of Google's experience](/blog/2015/04/borg-predecessor-to-kubernetes/) running production workloads at scale with best-of-breed ideas and practices from the community. +Kubernetes is a portable, extensible, open source platform for managing containerized +workloads and services, that facilitates both declarative configuration and automation. +It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available. + +The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s as an abbreviation +results from counting the eight letters between the "K" and the "s". Google open-sourced the +Kubernetes project in 2014. Kubernetes combines +[over 15 years of Google's experience](/blog/2015/04/borg-predecessor-to-kubernetes/) running +production workloads at scale with best-of-breed ideas and practices from the community. ## Going back in time @@ -29,69 +36,136 @@ Let's take a look at why Kubernetes is so useful by going back in time. ![Deployment evolution](/images/docs/Container_Evolution.svg) **Traditional deployment era:** -Early on, organizations ran applications on physical servers. There was no way to define resource boundaries for applications in a physical server, and this caused resource allocation issues. For example, if multiple applications run on a physical server, there can be instances where one application would take up most of the resources, and as a result, the other applications would underperform. 
A solution for this would be to run each application on a different physical server. But this did not scale as resources were underutilized, and it was expensive for organizations to maintain many physical servers. +Early on, organizations ran applications on physical servers. There was no way to define +resource boundaries for applications in a physical server, and this caused resource +allocation issues. For example, if multiple applications run on a physical server, there +can be instances where one application would take up most of the resources, and as a result, +the other applications would underperform. A solution for this would be to run each application +on a different physical server. But this did not scale as resources were underutilized, and it +was expensive for organizations to maintain many physical servers. -**Virtualized deployment era:** As a solution, virtualization was introduced. It allows you to run multiple Virtual Machines (VMs) on a single physical server's CPU. Virtualization allows applications to be isolated between VMs and provides a level of security as the information of one application cannot be freely accessed by another application. +**Virtualized deployment era:** As a solution, virtualization was introduced. It allows you +to run multiple Virtual Machines (VMs) on a single physical server's CPU. Virtualization +allows applications to be isolated between VMs and provides a level of security as the +information of one application cannot be freely accessed by another application. -Virtualization allows better utilization of resources in a physical server and allows better scalability because an application can be added or updated easily, reduces hardware costs, and much more. With virtualization you can present a set of physical resources as a cluster of disposable virtual machines. +Virtualization allows better utilization of resources in a physical server and allows +better scalability because an application can be added or updated easily, reduces +hardware costs, and much more. With virtualization you can present a set of physical +resources as a cluster of disposable virtual machines. -Each VM is a full machine running all the components, including its own operating system, on top of the virtualized hardware. +Each VM is a full machine running all the components, including its own operating +system, on top of the virtualized hardware. -**Container deployment era:** Containers are similar to VMs, but they have relaxed isolation properties to share the Operating System (OS) among the applications. Therefore, containers are considered lightweight. Similar to a VM, a container has its own filesystem, share of CPU, memory, process space, and more. As they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions. +**Container deployment era:** Containers are similar to VMs, but they have relaxed +isolation properties to share the Operating System (OS) among the applications. +Therefore, containers are considered lightweight. Similar to a VM, a container +has its own filesystem, share of CPU, memory, process space, and more. As they +are decoupled from the underlying infrastructure, they are portable across clouds +and OS distributions. Containers have become popular because they provide extra benefits, such as: -* Agile application creation and deployment: increased ease and efficiency of container image creation compared to VM image use. 
-* Continuous development, integration, and deployment: provides for reliable and frequent container image build and deployment with quick and efficient rollbacks (due to image immutability). -* Dev and Ops separation of concerns: create application container images at build/release time rather than deployment time, thereby decoupling applications from infrastructure. -* Observability: not only surfaces OS-level information and metrics, but also application health and other signals. -* Environmental consistency across development, testing, and production: Runs the same on a laptop as it does in the cloud. -* Cloud and OS distribution portability: Runs on Ubuntu, RHEL, CoreOS, on-premises, on major public clouds, and anywhere else. -* Application-centric management: Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources. -* Loosely coupled, distributed, elastic, liberated micro-services: applications are broken into smaller, independent pieces and can be deployed and managed dynamically – not a monolithic stack running on one big single-purpose machine. +* Agile application creation and deployment: increased ease and efficiency of + container image creation compared to VM image use. +* Continuous development, integration, and deployment: provides for reliable + and frequent container image build and deployment with quick and efficient + rollbacks (due to image immutability). +* Dev and Ops separation of concerns: create application container images at + build/release time rather than deployment time, thereby decoupling + applications from infrastructure. +* Observability: not only surfaces OS-level information and metrics, but also + application health and other signals. +* Environmental consistency across development, testing, and production: Runs + the same on a laptop as it does in the cloud. +* Cloud and OS distribution portability: Runs on Ubuntu, RHEL, CoreOS, on-premises, + on major public clouds, and anywhere else. +* Application-centric management: Raises the level of abstraction from running an + OS on virtual hardware to running an application on an OS using logical resources. +* Loosely coupled, distributed, elastic, liberated micro-services: applications are + broken into smaller, independent pieces and can be deployed and managed dynamically – + not a monolithic stack running on one big single-purpose machine. * Resource isolation: predictable application performance. * Resource utilization: high efficiency and density. ## Why you need Kubernetes and what it can do {#why-you-need-kubernetes-and-what-can-it-do} -Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn't it be easier if this behavior was handled by a system? +Containers are a good way to bundle and run your applications. In a production +environment, you need to manage the containers that run the applications and +ensure that there is no downtime. For example, if a container goes down, another +container needs to start. Wouldn't it be easier if this behavior was handled by a system? -That's how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. 
For example: Kubernetes can easily manage a canary deployment for your system. +That's how Kubernetes comes to the rescue! Kubernetes provides you with a framework +to run distributed systems resiliently. It takes care of scaling and failover for +your application, provides deployment patterns, and more. For example: Kubernetes +can easily manage a canary deployment for your system. Kubernetes provides you with: * **Service discovery and load balancing** -Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable. + Kubernetes can expose a container using the DNS name or using their own IP address. + If traffic to a container is high, Kubernetes is able to load balance and distribute + the network traffic so that the deployment is stable. * **Storage orchestration** -Kubernetes allows you to automatically mount a storage system of your choice, such as local storages, public cloud providers, and more. + Kubernetes allows you to automatically mount a storage system of your choice, such as + local storages, public cloud providers, and more. * **Automated rollouts and rollbacks** -You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container. + You can describe the desired state for your deployed containers using Kubernetes, + and it can change the actual state to the desired state at a controlled rate. + For example, you can automate Kubernetes to create new containers for your + deployment, remove existing containers and adopt all their resources to the new container. * **Automatic bin packing** -You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources. + You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. + You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit + containers onto your nodes to make the best use of your resources. * **Self-healing** -Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve. + Kubernetes restarts containers that fail, replaces containers, kills containers that don't + respond to your user-defined health check, and doesn't advertise them to clients until they + are ready to serve. * **Secret and configuration management** -Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration. + Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, + and SSH keys. You can deploy and update secrets and application configuration without + rebuilding your container images, and without exposing secrets in your stack configuration. 
## What Kubernetes is not -Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Since Kubernetes operates at the container level rather than at the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, load balancing, and lets users integrate their logging, monitoring, and alerting solutions. However, Kubernetes is not monolithic, and these default solutions are optional and pluggable. Kubernetes provides the building blocks for building developer platforms, but preserves user choice and flexibility where it is important. +Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. +Since Kubernetes operates at the container level rather than at the hardware level, +it provides some generally applicable features common to PaaS offerings, such as +deployment, scaling, load balancing, and lets users integrate their logging, monitoring, +and alerting solutions. However, Kubernetes is not monolithic, and these default solutions +are optional and pluggable. Kubernetes provides the building blocks for building developer +platforms, but preserves user choice and flexibility where it is important. Kubernetes: -* Does not limit the types of applications supported. Kubernetes aims to support an extremely diverse variety of workloads, including stateless, stateful, and data-processing workloads. If an application can run in a container, it should run great on Kubernetes. -* Does not deploy source code and does not build your application. Continuous Integration, Delivery, and Deployment (CI/CD) workflows are determined by organization cultures and preferences as well as technical requirements. -* Does not provide application-level services, such as middleware (for example, message buses), data-processing frameworks (for example, Spark), databases (for example, MySQL), caches, nor cluster storage systems (for example, Ceph) as built-in services. Such components can run on Kubernetes, and/or can be accessed by applications running on Kubernetes through portable mechanisms, such as the [Open Service Broker](https://openservicebrokerapi.org/). -* Does not dictate logging, monitoring, or alerting solutions. It provides some integrations as proof of concept, and mechanisms to collect and export metrics. -* Does not provide nor mandate a configuration language/system (for example, Jsonnet). It provides a declarative API that may be targeted by arbitrary forms of declarative specifications. -* Does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems. -* Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the need for orchestration. The technical definition of orchestration is execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes comprises a set of independent, composable control processes that continuously drive the current state towards the provided desired state. It shouldn't matter how you get from A to C. Centralized control is also not required. This results in a system that is easier to use and more powerful, robust, resilient, and extensible. - - +* Does not limit the types of applications supported. Kubernetes aims to support an + extremely diverse variety of workloads, including stateless, stateful, and data-processing + workloads. If an application can run in a container, it should run great on Kubernetes. 
+* Does not deploy source code and does not build your application. Continuous Integration, + Delivery, and Deployment (CI/CD) workflows are determined by organization cultures and + preferences as well as technical requirements. +* Does not provide application-level services, such as middleware (for example, message buses), + data-processing frameworks (for example, Spark), databases (for example, MySQL), caches, nor + cluster storage systems (for example, Ceph) as built-in services. Such components can run on + Kubernetes, and/or can be accessed by applications running on Kubernetes through portable + mechanisms, such as the [Open Service Broker](https://openservicebrokerapi.org/). +* Does not dictate logging, monitoring, or alerting solutions. It provides some integrations + as proof of concept, and mechanisms to collect and export metrics. +* Does not provide nor mandate a configuration language/system (for example, Jsonnet). It provides + a declarative API that may be targeted by arbitrary forms of declarative specifications. +* Does not provide nor adopt any comprehensive machine configuration, maintenance, management, + or self-healing systems. +* Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the need + for orchestration. The technical definition of orchestration is execution of a defined workflow: + first do A, then B, then C. In contrast, Kubernetes comprises a set of independent, composable + control processes that continuously drive the current state towards the provided desired state. + It shouldn't matter how you get from A to C. Centralized control is also not required. This + results in a system that is easier to use and more powerful, robust, resilient, and extensible. ## {{% heading "whatsnext" %}} -* Take a look at the [Kubernetes Components](/docs/concepts/overview/components/) -* Take a look at the [The Kubernetes API](/docs/concepts/overview/kubernetes-api/) -* Take a look at the [Cluster Architecture](/docs/concepts/architecture/) -* Ready to [Get Started](/docs/setup/)? +* Take a look at the [Kubernetes Components](/docs/concepts/overview/components/) +* Take a look at the [The Kubernetes API](/docs/concepts/overview/kubernetes-api/) +* Take a look at the [Cluster Architecture](/docs/concepts/architecture/) +* Ready to [Get Started](/docs/setup/)? diff --git a/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md index 6c9ce3fcb2..53969ab025 100644 --- a/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md +++ b/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md @@ -2,16 +2,18 @@ title: Understanding Kubernetes Objects content_type: concept weight: 10 -card: +card: name: concepts weight: 40 --- + This page explains how Kubernetes objects are represented in the Kubernetes API, and how you can express them in `.yaml` format. + ## Understanding Kubernetes objects {#kubernetes-objects} *Kubernetes objects* are persistent entities in the Kubernetes system. Kubernetes uses these @@ -32,7 +34,7 @@ interface, for example, the CLI makes the necessary Kubernetes API calls for you the Kubernetes API directly in your own programs using one of the [Client Libraries](/docs/reference/using-api/client-libraries/). 
-### Object Spec and Status +### Object spec and status Almost every Kubernetes object includes two nested object fields that govern the object's configuration: the object *`spec`* and the object *`status`*. @@ -86,7 +88,7 @@ The output is similar to this: deployment.apps/nginx-deployment created ``` -### Required Fields +### Required fields In the `.yaml` file for the Kubernetes object you want to create, you'll need to set values for the following fields: @@ -116,9 +118,9 @@ detail the structure of that `.status` field, and its content for each different ## {{% heading "whatsnext" %}} Learn more about the following: -* [Pods](https://kubernetes.io/docs/concepts/workloads/pods/) which are the most important basic Kubernetes objects. -* [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) objects. -* [Controllers](https://kubernetes.io/docs/concepts/architecture/controller/) in Kubernetes. -* [Kubernetes API overview](https://kubernetes.io/docs/reference/using-api/) which explains some more API concepts. -* [kubectl](https://kubernetes.io/docs/reference/kubectl/) and [kubectl commands](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands). +* [Pods](/docs/concepts/workloads/pods/) which are the most important basic Kubernetes objects. +* [Deployment](/docs/concepts/workloads/controllers/deployment/) objects. +* [Controllers](/docs/concepts/architecture/controller/) in Kubernetes. +* [Kubernetes API overview](/docs/reference/using-api/) which explains some more API concepts. +* [kubectl](/docs/reference/kubectl/) and [kubectl commands](/docs/reference/generated/kubectl/kubectl-commands). diff --git a/content/en/docs/concepts/overview/working-with-objects/names.md b/content/en/docs/concepts/overview/working-with-objects/names.md index 52586baf3e..22b0403dad 100644 --- a/content/en/docs/concepts/overview/working-with-objects/names.md +++ b/content/en/docs/concepts/overview/working-with-objects/names.md @@ -99,5 +99,5 @@ UUIDs are standardized as ISO/IEC 9834-8 and as ITU-T X.667. ## {{% heading "whatsnext" %}} -* Read about [labels](/docs/concepts/overview/working-with-objects/labels/) in Kubernetes. +* Read about [labels](/docs/concepts/overview/working-with-objects/labels/) and [annotations](/docs/concepts/overview/working-with-objects/annotations/) in Kubernetes. * See the [Identifiers and Names in Kubernetes](https://git.k8s.io/design-proposals-archive/architecture/identifiers.md) design document. diff --git a/content/en/docs/concepts/overview/working-with-objects/namespaces.md b/content/en/docs/concepts/overview/working-with-objects/namespaces.md index f28a82fba2..1eb96fe4a6 100644 --- a/content/en/docs/concepts/overview/working-with-objects/namespaces.md +++ b/content/en/docs/concepts/overview/working-with-objects/namespaces.md @@ -32,6 +32,26 @@ resources, such as different versions of the same software: use {{< glossary_tooltip text="labels" term_id="label" >}} to distinguish resources within the same namespace. +{{< note >}} +For a production cluster, consider _not_ using the `default` namespace. Instead, make other namespaces and use those. +{{< /note >}} + +## Initial namespaces + +Kubernetes starts with four initial namespaces: + +`default` +: Kubernetes includes this namespace so that you can start using your new cluster without first creating a namespace. + +`kube-node-lease` +: This namespace holds [Lease](/docs/reference/kubernetes-api/cluster-resources/lease-v1/) objects associated with each node. 
Node leases allow the kubelet to send [heartbeats](/docs/concepts/architecture/nodes/#heartbeats) so that the control plane can detect node failure. + +`kube-public` +: This namespace is readable by *all* clients (including those not authenticated). This namespace is mostly reserved for cluster usage, in case that some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement. + +`kube-system` +: The namespace for objects created by the Kubernetes system. + ## Working with Namespaces Creation and deletion of namespaces are described in the @@ -56,16 +76,7 @@ kube-public Active 1d kube-system Active 1d ``` -Kubernetes starts with four initial namespaces: - * `default` The default namespace for objects with no other namespace - * `kube-system` The namespace for objects created by the Kubernetes system - * `kube-public` This namespace is created automatically and is readable by all users (including those not authenticated). This namespace is mostly reserved for cluster usage, in case that some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement. - * `kube-node-lease` This namespace holds [Lease](/docs/reference/kubernetes-api/cluster-resources/lease-v1/) - objects associated with each node. Node leases allow the kubelet to send - [heartbeats](/docs/concepts/architecture/nodes/#heartbeats) so that the control plane - can detect node failure. - ### Setting the namespace for a request To set the namespace for a current request, use the `--namespace` flag. @@ -106,7 +117,7 @@ By creating namespaces with the same name as [public top-level domains](https://data.iana.org/TLD/tlds-alpha-by-domain.txt), Services in these namespaces can have short DNS names that overlap with public DNS records. Workloads from any namespace performing a DNS lookup without a [trailing dot](https://datatracker.ietf.org/doc/html/rfc1034#page-8) will -be redirected to those services, taking precedence over public DNS. +be redirected to those services, taking precedence over public DNS. To mitigate this, limit privileges for creating namespaces to trusted users. If required, you could additionally configure third-party security controls, such @@ -116,13 +127,13 @@ to block creating any namespace with the name of [public TLDs](https://data.iana.org/TLD/tlds-alpha-by-domain.txt). {{< /warning >}} -## Not All Objects are in a Namespace +## Not all objects are in a namespace Most Kubernetes resources (e.g. pods, services, replication controllers, and others) are in some namespaces. However namespace resources are not themselves in a namespace. And low-level resources, such as [nodes](/docs/concepts/architecture/nodes/) and -persistentVolumes, are not in any namespace. +[persistentVolumes](/docs/concepts/storage/persistent-volumes/), are not in any namespace. To see which Kubernetes resources are and aren't in a namespace: diff --git a/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md b/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md index 3474645d14..62098a0928 100644 --- a/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md +++ b/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md @@ -98,7 +98,7 @@ your cluster. Those fields are: {{< note >}} The `minDomains` field is a beta field and enabled by default in 1.25. 
You can disable it by disabling the - `MinDomainsInPodToplogySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/). + `MinDomainsInPodTopologySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/). {{< /note >}} - The value of `minDomains` must be greater than 0, when specified. diff --git a/content/en/docs/concepts/services-networking/dns-pod-service.md b/content/en/docs/concepts/services-networking/dns-pod-service.md index bf7ffae82b..30672a32fa 100644 --- a/content/en/docs/concepts/services-networking/dns-pod-service.md +++ b/content/en/docs/concepts/services-networking/dns-pod-service.md @@ -220,13 +220,15 @@ following Pod-specific DNS policies. These policies are specified in the See [related discussion](/docs/tasks/administer-cluster/dns-custom-nameservers) for more details. - "`ClusterFirst`": Any DNS query that does not match the configured cluster - domain suffix, such as "`www.kubernetes.io`", is forwarded to the upstream - nameserver inherited from the node. Cluster administrators may have extra + domain suffix, such as "`www.kubernetes.io`", is forwarded to an upstream + nameserver by the DNS server. Cluster administrators may have extra stub-domain and upstream DNS servers configured. See [related discussion](/docs/tasks/administer-cluster/dns-custom-nameservers) for details on how DNS queries are handled in those cases. - "`ClusterFirstWithHostNet`": For Pods running with hostNetwork, you should - explicitly set its DNS policy "`ClusterFirstWithHostNet`". + explicitly set its DNS policy to "`ClusterFirstWithHostNet`". Otherwise, Pods + running with hostNetwork and `"ClusterFirst"` will fallback to the behavior + of the `"Default"` policy. - Note: This is not supported on Windows. See [below](#dns-windows) for details - "`None`": It allows a Pod to ignore DNS settings from the Kubernetes environment. All DNS settings are supposed to be provided using the diff --git a/content/en/docs/concepts/services-networking/endpoint-slices.md b/content/en/docs/concepts/services-networking/endpoint-slices.md index ef987438bd..09d20a3597 100644 --- a/content/en/docs/concepts/services-networking/endpoint-slices.md +++ b/content/en/docs/concepts/services-networking/endpoint-slices.md @@ -15,11 +15,9 @@ description: >- {{< feature-state for_k8s_version="v1.21" state="stable" >}} -_EndpointSlices_ provide a simple way to track network endpoints within a -Kubernetes cluster. They offer a more scalable and extensible alternative to -Endpoints. - - +Kubernetes' _EndpointSlice_ API provides a way to track network endpoints +within a Kubernetes cluster. EndpointSlices offer a more scalable and extensible +alternative to [Endpoints](/docs/concepts/services-networking/service/#endpoints). @@ -274,3 +272,5 @@ networking and topology-aware routing. 
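As a hedged illustration of the EndpointSlice API described above (assuming an existing Service named `example-service`, which is not part of this patch), slices owned by a Service can be listed via the `kubernetes.io/service-name` label:

```shell
# List the EndpointSlices that the control plane manages for a Service.
kubectl get endpointslices -l kubernetes.io/service-name=example-service

# Inspect one slice to see its endpoints, conditions, and ports
# (the slice name below is illustrative; use a name from the previous output).
kubectl describe endpointslice example-service-abc12
```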
## {{% heading "whatsnext" %}} * Follow the [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/) tutorial +* Read the [API reference](/docs/reference/kubernetes-api/service-resources/endpoint-slice-v1/) for the EndpointSlice API +* Read the [API reference](/docs/reference/kubernetes-api/service-resources/endpoints-v1/) for the Endpoints API diff --git a/content/en/docs/concepts/services-networking/service-topology.md b/content/en/docs/concepts/services-networking/service-topology.md index 5c5429297c..3778bb4035 100644 --- a/content/en/docs/concepts/services-networking/service-topology.md +++ b/content/en/docs/concepts/services-networking/service-topology.md @@ -201,5 +201,5 @@ spec: * Read about [enabling Service Topology](/docs/tasks/administer-cluster/enabling-service-topology) -* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) +* Read [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/) diff --git a/content/en/docs/concepts/storage/projected-volumes.md b/content/en/docs/concepts/storage/projected-volumes.md index a7c74349f5..321ee8d8ae 100644 --- a/content/en/docs/concepts/storage/projected-volumes.md +++ b/content/en/docs/concepts/storage/projected-volumes.md @@ -46,8 +46,7 @@ parameters are nearly the same with two exceptions: for each individual projection. ## serviceAccountToken projected volumes {#serviceaccounttoken} -When the `TokenRequestProjection` feature is enabled, you can inject the token -for the current [service account](/docs/reference/access-authn-authz/authentication/#service-account-tokens) +You can inject the token for the current [service account](/docs/reference/access-authn-authz/authentication/#service-account-tokens) into a Pod at a specified path. For example: {{< codenew file="pods/storage/projected-service-account-token.yaml" >}} diff --git a/content/en/docs/concepts/workloads/controllers/cron-jobs.md b/content/en/docs/concepts/workloads/controllers/cron-jobs.md index 1f64045422..d2795e0efb 100644 --- a/content/en/docs/concepts/workloads/controllers/cron-jobs.md +++ b/content/en/docs/concepts/workloads/controllers/cron-jobs.md @@ -92,6 +92,7 @@ For example, the line below states that the task must be started every Friday at To generate CronJob schedule expressions, you can also use web tools like [crontab.guru](https://crontab.guru/). ## Time zones + For CronJobs with no time zone specified, the kube-controller-manager interprets schedules relative to its local time zone. {{< feature-state for_k8s_version="v1.25" state="beta" >}} @@ -101,7 +102,7 @@ you can specify a time zone for a CronJob (if you don't enable that feature gate Kubernetes that does not have experimental time zone support, all CronJobs in your cluster have an unspecified timezone). -When you have the feature enabled, you can set `spec.timeZone` to the name of a valid [time zone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) name. For example, setting +When you have the feature enabled, you can set `spec.timeZone` to the name of a valid [time zone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). For example, setting `spec.timeZone: "Etc/UTC"` instructs Kubernetes to interpret the schedule relative to Coordinated Universal Time. A time zone database from the Go standard library is included in the binaries and used as a fallback in case an external database is not available on the system. 
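To make the `spec.timeZone` behaviour concrete, here is a minimal sketch of a CronJob whose schedule is interpreted in UTC; the object name, image, and schedule are illustrative only and assume the time zone feature is enabled in your cluster.

```shell
# Illustrative CronJob that runs at 08:30 UTC regardless of the
# kube-controller-manager's local time zone.
cat <<EOF | kubectl apply -f -
apiVersion: batch/v1
kind: CronJob
metadata:
  name: utc-example
spec:
  schedule: "30 8 * * *"
  timeZone: "Etc/UTC"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox:1.28
            command: ["sh", "-c", "date; echo Hello from the CronJob"]
EOF
```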
@@ -121,15 +122,15 @@ If `startingDeadlineSeconds` is set to a value less than 10 seconds, the CronJob {{< /caution >}} -For every CronJob, the CronJob {{< glossary_tooltip term_id="controller" >}} checks how many schedules it missed in the duration from its last scheduled time until now. If there are more than 100 missed schedules, then it does not start the job and logs the error +For every CronJob, the CronJob {{< glossary_tooltip term_id="controller" >}} checks how many schedules it missed in the duration from its last scheduled time until now. If there are more than 100 missed schedules, then it does not start the job and logs the error. -```` +``` Cannot determine if job needs to be started. Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew. -```` +``` It is important to note that if the `startingDeadlineSeconds` field is set (not `nil`), the controller counts how many missed jobs occurred from the value of `startingDeadlineSeconds` until now rather than from the last scheduled time until now. For example, if `startingDeadlineSeconds` is `200`, the controller counts how many missed jobs occurred in the last 200 seconds. -A CronJob is counted as missed if it has failed to be created at its scheduled time. For example, If `concurrencyPolicy` is set to `Forbid` and a CronJob was attempted to be scheduled when there was a previous schedule still running, then it would count as missed. +A CronJob is counted as missed if it has failed to be created at its scheduled time. For example, if `concurrencyPolicy` is set to `Forbid` and a CronJob was attempted to be scheduled when there was a previous schedule still running, then it would count as missed. For example, suppose a CronJob is set to schedule a new Job every one minute beginning at `08:30:00`, and its `startingDeadlineSeconds` field is not set. If the CronJob controller happens to @@ -137,7 +138,7 @@ be down from `08:29:00` to `10:21:00`, the job will not start as the number of m To illustrate this concept further, suppose a CronJob is set to schedule a new Job every one minute beginning at `08:30:00`, and its `startingDeadlineSeconds` is set to 200 seconds. If the CronJob controller happens to -be down for the same period as the previous example (`08:29:00` to `10:21:00`,) the Job will still start at 10:22:00. This happens as the controller now checks how many missed schedules happened in the last 200 seconds (ie, 3 missed schedules), rather than from the last scheduled time until now. +be down for the same period as the previous example (`08:29:00` to `10:21:00`,) the Job will still start at 10:22:00. This happens as the controller now checks how many missed schedules happened in the last 200 seconds (i.e., 3 missed schedules), rather than from the last scheduled time until now. The CronJob is only responsible for creating Jobs that match its schedule, and the Job in turn is responsible for the management of the Pods it represents. @@ -146,7 +147,7 @@ the Job in turn is responsible for the management of the Pods it represents. Starting with Kubernetes v1.21 the second version of the CronJob controller is the default implementation. 
To disable the default CronJob controller -and use the original CronJob controller instead, one pass the `CronJobControllerV2` +and use the original CronJob controller instead, pass the `CronJobControllerV2` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) flag to the {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}, and set this flag to `false`. For example: diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index a282b8455a..da0aa76ddc 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -4,6 +4,13 @@ reviewers: - bprashanth - madhusudancs title: ReplicaSet +feature: + title: Self-healing + anchor: How a ReplicaSet works + description: > + Restarts containers that fail, replaces and reschedules containers when nodes die, + kills containers that don't respond to your user-defined health check, + and doesn't advertise them to clients until they are ready to serve. content_type: concept weight: 20 --- diff --git a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md index 90a04f6f17..1360bd69f0 100644 --- a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md @@ -3,12 +3,6 @@ reviewers: - bprashanth - janetkuo title: ReplicationController -feature: - title: Self-healing - anchor: How a ReplicationController Works - description: > - Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve. - content_type: concept weight: 90 --- diff --git a/content/en/docs/concepts/workloads/pods/ephemeral-containers.md b/content/en/docs/concepts/workloads/pods/ephemeral-containers.md index cc894debaf..dfd7c366c1 100644 --- a/content/en/docs/concepts/workloads/pods/ephemeral-containers.md +++ b/content/en/docs/concepts/workloads/pods/ephemeral-containers.md @@ -52,6 +52,10 @@ possible to add an ephemeral container using `kubectl edit`. Like regular containers, you may not change or remove an ephemeral container after you have added it to a Pod. +{{< note >}} +Ephemeral containers are not supported by [static pods](/docs/tasks/configure-pod-container/static-pod/). +{{< /note >}} + ## Uses for ephemeral containers Ephemeral containers are useful for interactive troubleshooting when `kubectl diff --git a/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md b/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md index b0076eae50..71beccb53f 100644 --- a/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md +++ b/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md @@ -65,8 +65,8 @@ In the bootstrap initialization process, the following occurs: 6. kubelet now has limited credentials to create and retrieve a certificate signing request (CSR) 7. kubelet creates a CSR for itself with the signerName set to `kubernetes.io/kube-apiserver-client-kubelet` 8. 
CSR is approved in one of two ways: - * If configured, kube-controller-manager automatically approves the CSR - * If configured, an outside process, possibly a person, approves the CSR using the Kubernetes API or via `kubectl` + * If configured, kube-controller-manager automatically approves the CSR + * If configured, an outside process, possibly a person, approves the CSR using the Kubernetes API or via `kubectl` 9. Certificate is created for the kubelet 10. Certificate is issued to the kubelet 11. kubelet retrieves the certificate @@ -126,7 +126,7 @@ of provisioning. 1. [Bootstrap Tokens](#bootstrap-tokens) 2. [Token authentication file](#token-authentication-file) -Bootstrap tokens are a simpler and more easily managed method to authenticate kubelets, and do not require any additional flags when starting kube-apiserver. +Using bootstrap tokens is a simpler and more easily managed method to authenticate kubelets, and does not require any additional flags when starting kube-apiserver. Whichever method you choose, the requirement is that the kubelet be able to authenticate as a user with the rights to: @@ -176,7 +176,7 @@ systems). There are multiple ways you can generate a token. For example: head -c 16 /dev/urandom | od -An -t x | tr -d ' ' ``` -will generate tokens that look like `02b50b05283e98dd0fd71db496ef01e8`. +This will generate tokens that look like `02b50b05283e98dd0fd71db496ef01e8`. The token file should look like the following example, where the first three values can be anything and the quoted group name should be as depicted: @@ -186,7 +186,7 @@ values can be anything and the quoted group name should be as depicted: ``` Add the `--token-auth-file=FILENAME` flag to the kube-apiserver command (in your -systemd unit file perhaps) to enable the token file. See docs +systemd unit file perhaps) to enable the token file. See docs [here](/docs/reference/access-authn-authz/authentication/#static-token-file) for further details. @@ -247,7 +247,7 @@ To provide the Kubernetes CA key and certificate to kube-controller-manager, use --cluster-signing-cert-file="/etc/path/to/kubernetes/ca/ca.crt" --cluster-signing-key-file="/etc/path/to/kubernetes/ca/ca.key" ``` -for example: +For example: ```shell --cluster-signing-cert-file="/var/lib/kubernetes/ca.pem" --cluster-signing-key-file="/var/lib/kubernetes/ca-key.pem" @@ -312,7 +312,7 @@ by default. The controller uses the [`SubjectAccessReview` API](/docs/reference/access-authn-authz/authorization/#checking-api-access) to determine if a given user is authorized to request a CSR, then approves based on the authorization outcome. To prevent conflicts with other approvers, the -builtin approver doesn't explicitly deny CSRs. It only ignores unauthorized +built-in approver doesn't explicitly deny CSRs. It only ignores unauthorized requests. The controller also prunes expired certificates as part of garbage collection. @@ -435,12 +435,12 @@ controller, or manually approve the serving certificate requests. A deployment-specific approval process for kubelet serving certificates should typically only approve CSRs which: -1. are requested by nodes (ensure the `spec.username` field is of the form - `system:node:` and `spec.groups` contains `system:nodes`) -2. request usages for a serving certificate (ensure `spec.usages` contains `server auth`, +1. are requested by nodes (ensure the `spec.username` field is of the form + `system:node:` and `spec.groups` contains `system:nodes`) +2. 
request usages for a serving certificate (ensure `spec.usages` contains `server auth`, optionally contains `digital signature` and `key encipherment`, and contains no other usages) -3. only have IP and DNS subjectAltNames that belong to the requesting node, - and have no URI and Email subjectAltNames (parse the x509 Certificate Signing Request +3. only have IP and DNS subjectAltNames that belong to the requesting node, + and have no URI and Email subjectAltNames (parse the x509 Certificate Signing Request in `spec.request` to verify `subjectAltNames`) {{< /note >}} @@ -460,7 +460,7 @@ You have several options for generating these credentials: ## kubectl approval -CSRs can be approved outside of the approval flows builtin to the controller +CSRs can be approved outside of the approval flows built into the controller manager. The signing controller does not immediately sign all certificate requests. @@ -469,6 +469,6 @@ appropriately-privileged user. This flow is intended to allow for automated approval handled by an external approval controller or the approval controller implemented in the core controller-manager. However cluster administrators can also manually approve certificate requests using kubectl. An administrator can -list CSRs with `kubectl get csr` and describe one in detail with `kubectl -describe csr <name>`. An administrator can approve or deny a CSR with `kubectl -certificate approve <name>` and `kubectl certificate deny <name>`. +list CSRs with `kubectl get csr` and describe one in detail with +`kubectl describe csr <name>`. An administrator can approve or deny a CSR with +`kubectl certificate approve <name>` and `kubectl certificate deny <name>`. diff --git a/content/en/docs/reference/access-authn-authz/service-accounts-admin.md b/content/en/docs/reference/access-authn-authz/service-accounts-admin.md index 73867e3c73..332e757313 100644 --- a/content/en/docs/reference/access-authn-authz/service-accounts-admin.md +++ b/content/en/docs/reference/access-authn-authz/service-accounts-admin.md @@ -96,7 +96,7 @@ Here's an example of how that looks for a launched Pod: That manifest snippet defines a projected volume that consists of three sources. In this case, each source also represents a single path within that volume. The three sources are: -1. A `serviceAccountToken` source, that contains a token that the kubelet acquires from kube-apiserver +1. A `serviceAccountToken` source, that contains a token that the kubelet acquires from kube-apiserver. The kubelet fetches time-bound tokens using the TokenRequest API. A token served for a TokenRequest expires either when the pod is deleted or after a defined lifespan (by default, that is 1 hour). The token is bound to the specific Pod and has the kube-apiserver as its audience. @@ -105,7 +105,7 @@ each source also represents a single path within that volume. The three sources 1. A `configMap` source. The ConfigMap contains a bundle of certificate authority data. Pods can use these certificates to make sure that they are connecting to your cluster's kube-apiserver (and not to middlebox or an accidentally misconfigured peer). -1. A `downwardAPI` source that looks up the name of thhe namespace containing the Pod, and makes +1. A `downwardAPI` source that looks up the name of the namespace containing the Pod, and makes that name information available to application code running inside the Pod. Any container within the Pod that mounts this particular volume can access the above information.
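As a quick, hedged way to see those three sources from inside a workload (the Pod name below is a placeholder and the mount path is the conventional default, not something defined in this patch):

```shell
# List the files projected into a running Pod by the default service account volume.
kubectl exec example-pod -- ls /var/run/secrets/kubernetes.io/serviceaccount
# Typical entries: ca.crt  namespace  token

# Print the namespace file written by the downwardAPI source.
kubectl exec example-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
```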
@@ -232,14 +232,14 @@ Here's an example of how that looks for a launched Pod: That manifest snippet defines a projected volume that combines information from three sources: -1. A `serviceAccountToken` source, that contains a token that the kubelet acquires from kube-apiserver +1. A `serviceAccountToken` source, that contains a token that the kubelet acquires from kube-apiserver. The kubelet fetches time-bound tokens using the TokenRequest API. A token served for a TokenRequest expires either when the pod is deleted or after a defined lifespan (by default, that is 1 hour). The token is bound to the specific Pod and has the kube-apiserver as its audience. 1. A `configMap` source. The ConfigMap contains a bundle of certificate authority data. Pods can use these certificates to make sure that they are connecting to your cluster's kube-apiserver (and not to middlebox or an accidentally misconfigured peer). -1. A `downwardAPI` source. This `downwardAPI` volume makes the name of the namespace container the Pod available +1. A `downwardAPI` source. This `downwardAPI` volume makes the name of the namespace containing the Pod available to application code running inside the Pod. Any container within the Pod that mounts this volume can access the above information. @@ -262,6 +262,7 @@ Here is a sample manifest for such a Secret: {{< codenew file="secret/serviceaccount/mysecretname.yaml" >}} To create a Secret based on this example, run: + ```shell kubectl -n examplens create -f https://k8s.io/examples/secret/serviceaccount/mysecretname.yaml ``` @@ -273,6 +274,7 @@ kubectl -n examplens describe secret mysecretname ``` The output is similar to: + ``` Name: mysecretname Namespace: examplens @@ -306,7 +308,9 @@ Otherwise, first find the Secret for the ServiceAccount. # This assumes that you already have a namespace named 'examplens' kubectl -n examplens get serviceaccount/example-automated-thing -o yaml ``` + The output is similar to: + ```yaml apiVersion: v1 kind: ServiceAccount @@ -321,9 +325,11 @@ metadata: selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing uid: f23fd170-66f2-4697-b049-e1e266b7f835 secrets: -- name: example-automated-thing-token-zyxwv + - name: example-automated-thing-token-zyxwv ``` + Then, delete the Secret you now know the name of: + ```shell kubectl -n examplens delete secret/example-automated-thing-token-zyxwv ``` @@ -334,6 +340,7 @@ and creates a replacement: ```shell kubectl -n examplens get serviceaccount/example-automated-thing -o yaml ``` + ```yaml apiVersion: v1 kind: ServiceAccount @@ -348,12 +355,13 @@ metadata: selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing uid: f23fd170-66f2-4697-b049-e1e266b7f835 secrets: -- name: example-automated-thing-token-4rdrh + - name: example-automated-thing-token-4rdrh ``` ## Clean up If you created a namespace `examplens` to experiment with, you can remove it: + ```shell kubectl delete namespace examplens ``` diff --git a/content/en/docs/reference/glossary/container.md b/content/en/docs/reference/glossary/container.md index cbf1f80fba..2034d8ada3 100644 --- a/content/en/docs/reference/glossary/container.md +++ b/content/en/docs/reference/glossary/container.md @@ -16,4 +16,4 @@ tags: Containers decouple applications from underlying host infrastructure to make deployment easier in different cloud or OS environments, and for easier scaling. - +The applications that run inside containers are called containerized applications. 
The process of bundling these applications and their dependencies into a container image is called containerization. diff --git a/content/en/docs/reference/glossary/ephemeral-container.md b/content/en/docs/reference/glossary/ephemeral-container.md index 2e94ab2691..da32029f9e 100644 --- a/content/en/docs/reference/glossary/ephemeral-container.md +++ b/content/en/docs/reference/glossary/ephemeral-container.md @@ -16,3 +16,4 @@ A {{< glossary_tooltip term_id="container" >}} type that you can temporarily run If you want to investigate a Pod that's running with problems, you can add an ephemeral container to that Pod and carry out diagnostics. Ephemeral containers have no resource or scheduling guarantees, and you should not use them to run any part of the workload itself. +Ephemeral containers are not supported by {{< glossary_tooltip text="static pods" term_id="static-pod" >}}. diff --git a/content/en/docs/reference/glossary/etcd.md b/content/en/docs/reference/glossary/etcd.md index e6c281f3b9..474b923caf 100644 --- a/content/en/docs/reference/glossary/etcd.md +++ b/content/en/docs/reference/glossary/etcd.md @@ -4,7 +4,7 @@ id: etcd date: 2018-04-12 full_link: /docs/tasks/administer-cluster/configure-upgrade-etcd/ short_description: > - Consistent and highly-available key value store used as Kubernetes' backing store for all cluster data. + Consistent and highly-available key value store used as backing store of Kubernetes for all cluster data. aka: tags: diff --git a/content/en/docs/reference/glossary/static-pod.md b/content/en/docs/reference/glossary/static-pod.md index dc77a035cc..565bc02a88 100644 --- a/content/en/docs/reference/glossary/static-pod.md +++ b/content/en/docs/reference/glossary/static-pod.md @@ -15,4 +15,6 @@ A {{< glossary_tooltip text="pod" term_id="pod" >}} managed directly by the kube daemon on a specific node, -without the API server observing it. \ No newline at end of file +without the API server observing it. + +Static Pods do not support {{< glossary_tooltip text="ephemeral containers" term_id="ephemeral-container" >}}. diff --git a/content/en/docs/setup/_index.md b/content/en/docs/setup/_index.md index ba9b3c6785..f73c5c2338 100644 --- a/content/en/docs/setup/_index.md +++ b/content/en/docs/setup/_index.md @@ -27,7 +27,7 @@ control, available resources, and expertise required to operate and manage a clu You can [download Kubernetes](/releases/download/) to deploy a Kubernetes cluster on a local machine, into the cloud, or for your own datacenter. -Several [Kubernetes components](/docs/concepts/overview/components/) such as `kube-apiserver` or `kube-proxy` can also be +Several [Kubernetes components](/docs/concepts/overview/components/) such as {{< glossary_tooltip text="kube-apiserver" term_id="kube-apiserver" >}} or {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}} can also be deployed as [container images](/releases/download/#container-images) within the cluster. It is **recommended** to run Kubernetes components as container images wherever diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md index a963fb08da..0aa0046406 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md @@ -590,7 +590,7 @@ data and may need to be recreated from scratch. 
Workarounds: -* Regularly [back up etcd](https://coreos.com/etcd/docs/latest/admin_guide.html). The +* Regularly [back up etcd](https://etcd.io/docs/v3.5/op-guide/recovery/). The etcd data directory configured by kubeadm is at `/var/lib/etcd` on the control-plane node. * Use multiple control-plane nodes. You can read diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md index 4fa48c4bfb..1baa12b3b7 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md @@ -11,7 +11,7 @@ weight: 70 {{< note >}} While kubeadm is being used as the management tool for external etcd nodes in this guide, please note that kubeadm does not plan to support certificate rotation -or upgrades for such nodes. The long term plan is to empower the tool +or upgrades for such nodes. The long-term plan is to empower the tool [etcdadm](https://github.com/kubernetes-sigs/etcdadm) to manage these aspects. {{< /note >}} @@ -32,7 +32,7 @@ etcd cluster of three members that can be used by kubeadm during cluster creatio * Each host must have systemd and a bash compatible shell installed. * Each host must [have a container runtime, kubelet, and kubeadm installed](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/). * Each host should have access to the Kubernetes container image registry (`registry.k8s.io`) or list/pull the required etcd image using -`kubeadm config images list/pull`. This guide will setup etcd instances as +`kubeadm config images list/pull`. This guide will set up etcd instances as [static pods](/docs/tasks/configure-pod-container/static-pod/) managed by a kubelet. * Some infrastructure to copy files between hosts. For example `ssh` and `scp` can satisfy this requirement. @@ -98,7 +98,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set export NAME1="infra1" export NAME2="infra2" - # Create temp directories to store files that will end up on other hosts. + # Create temp directories to store files that will end up on other hosts mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/ HOSTS=(${HOST0} ${HOST1} ${HOST2}) @@ -136,7 +136,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set done ``` -1. Generate the certificate authority +1. Generate the certificate authority. If you already have a CA then the only action that is copying the CA's `crt` and `key` file to `/etc/kubernetes/pki/etcd/ca.crt` and @@ -150,12 +150,12 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set kubeadm init phase certs etcd-ca ``` - This creates two files + This creates two files: - `/etc/kubernetes/pki/etcd/ca.crt` - `/etc/kubernetes/pki/etcd/ca.key` -1. Create certificates for each member +1. Create certificates for each member. ```sh kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml @@ -184,7 +184,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set find /tmp/${HOST1} -name ca.key -type f -delete ``` -1. Copy certificates and kubeadm configs +1. Copy certificates and kubeadm configs. The certificates have been generated and now they must be moved to their respective hosts. 
@@ -199,7 +199,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set root@HOST $ mv pki /etc/kubernetes/ ``` -1. Ensure all expected files exist +1. Ensure all expected files exist. The complete list of required files on `$HOST0` is: @@ -240,7 +240,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set └── server.key ``` - On `$HOST2` + On `$HOST2`: ``` $HOME @@ -259,7 +259,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set └── server.key ``` -1. Create the static pod manifests +1. Create the static pod manifests. Now that the certificates and configs are in place it's time to create the manifests. On each host run the `kubeadm` command to generate a static manifest @@ -271,7 +271,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set root@HOST2 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml ``` -1. Optional: Check the cluster health +1. Optional: Check the cluster health. ```sh docker run --rm -it \ @@ -286,7 +286,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set https://[HOST1 IP]:2379 is healthy: successfully committed proposal: took = 19.44402ms https://[HOST2 IP]:2379 is healthy: successfully committed proposal: took = 35.926451ms ``` - - Set `${ETCD_TAG}` to the version tag of your etcd image. For example `3.4.3-0`. To see the etcd image and tag that kubeadm uses execute `kubeadm config images list --kubernetes-version ${K8S_VERSION}`, where `${K8S_VERSION}` is for example `v1.17.0` + - Set `${ETCD_TAG}` to the version tag of your etcd image. For example `3.4.3-0`. To see the etcd image and tag that kubeadm uses execute `kubeadm config images list --kubernetes-version ${K8S_VERSION}`, where `${K8S_VERSION}` is for example `v1.17.0`. - Set `${HOST0}`to the IP address of the host you are testing. @@ -294,7 +294,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set ## {{% heading "whatsnext" %}} -Once you have a working 3 member etcd cluster, you can continue setting up a -highly available control plane using the [external etcd method with -kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/). +Once you have an etcd cluster with 3 working members, you can continue setting up a +highly available control plane using the +[external etcd method with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/). 
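As a rough sketch of that next step (kept hypothetical here, since it belongs to the high-availability guide), the first control-plane node would be initialized with a kubeadm configuration that points at the external etcd members; the endpoint addresses below are placeholders.

```shell
# Illustrative ClusterConfiguration for `kubeadm init --config kubeadm-config.yaml`;
# replace the endpoint IPs with the addresses of your three etcd hosts.
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
      - https://10.0.0.10:2379
      - https://10.0.0.11:2379
      - https://10.0.0.12:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
EOF
```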
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md index de8ddce39e..fe3f08e578 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md @@ -43,12 +43,12 @@ kind: ClusterRole metadata: name: kubeadm:get-nodes rules: -- apiGroups: - - "" - resources: - - nodes - verbs: - - get + - apiGroups: + - "" + resources: + - nodes + verbs: + - get --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding @@ -59,16 +59,16 @@ roleRef: kind: ClusterRole name: kubeadm:get-nodes subjects: -- apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:bootstrappers:kubeadm:default-node-token + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:bootstrappers:kubeadm:default-node-token ``` ## `ebtables` or some similar executable not found during installation If you see the following warnings while running `kubeadm init` -```sh +```console [preflight] WARNING: ebtables not found in system path [preflight] WARNING: ethtool not found in system path ``` @@ -82,7 +82,7 @@ Then you may be missing `ebtables`, `ethtool` or a similar executable on your no If you notice that `kubeadm init` hangs after printing out the following line: -```sh +```console [apiclient] Created API client, waiting for the control plane to become ready ``` @@ -90,10 +90,10 @@ This may be caused by a number of problems. The most common are: - network connection problems. Check that your machine has full network connectivity before continuing. - the cgroup driver of the container runtime differs from that of the kubelet. To understand how to -configure it properly see [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/). + configure it properly see [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/). - control plane containers are crashlooping or hanging. You can check this by running `docker ps` -and investigating each container by running `docker logs`. For other container runtime see -[Debugging Kubernetes nodes with crictl](/docs/tasks/debug/debug-cluster/crictl/). + and investigating each container by running `docker logs`. For other container runtime see + [Debugging Kubernetes nodes with crictl](/docs/tasks/debug/debug-cluster/crictl/). ## kubeadm blocks when removing managed containers @@ -204,21 +204,21 @@ in kube-apiserver logs. To fix the issue you must follow these steps: 1. Backup and delete `/etc/kubernetes/kubelet.conf` and `/var/lib/kubelet/pki/kubelet-client*` from the failed node. 1. From a working control plane node in the cluster that has `/etc/kubernetes/pki/ca.key` execute -`kubeadm kubeconfig user --org system:nodes --client-name system:node:$NODE > kubelet.conf`. -`$NODE` must be set to the name of the existing failed node in the cluster. -Modify the resulted `kubelet.conf` manually to adjust the cluster name and server endpoint, -or pass `kubeconfig user --config` (it accepts `InitConfiguration`). If your cluster does not have -the `ca.key` you must sign the embedded certificates in the `kubelet.conf` externally. + `kubeadm kubeconfig user --org system:nodes --client-name system:node:$NODE > kubelet.conf`. + `$NODE` must be set to the name of the existing failed node in the cluster. 
+ Modify the resulted `kubelet.conf` manually to adjust the cluster name and server endpoint, + or pass `kubeconfig user --config` (it accepts `InitConfiguration`). If your cluster does not have + the `ca.key` you must sign the embedded certificates in the `kubelet.conf` externally. 1. Copy this resulted `kubelet.conf` to `/etc/kubernetes/kubelet.conf` on the failed node. 1. Restart the kubelet (`systemctl restart kubelet`) on the failed node and wait for -`/var/lib/kubelet/pki/kubelet-client-current.pem` to be recreated. + `/var/lib/kubelet/pki/kubelet-client-current.pem` to be recreated. 1. Manually edit the `kubelet.conf` to point to the rotated kubelet client certificates, by replacing -`client-certificate-data` and `client-key-data` with: + `client-certificate-data` and `client-key-data` with: - ```yaml - client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem - client-key: /var/lib/kubelet/pki/kubelet-client-current.pem - ``` + ```yaml + client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem + client-key: /var/lib/kubelet/pki/kubelet-client-current.pem + ``` 1. Restart the kubelet. 1. Make sure the node becomes `Ready`. @@ -241,7 +241,7 @@ Error from server (NotFound): the server could not find the requested resource In some situations `kubectl logs` and `kubectl run` commands may return with the following errors in an otherwise functional cluster: -```sh +```console Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc65b868-glc5m/mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host ``` @@ -306,15 +306,17 @@ This version of Docker can prevent the kubelet from executing into the etcd cont To work around the issue, choose one of these options: - Roll back to an earlier version of Docker, such as 1.13.1-75 -``` -yum downgrade docker-1.13.1-75.git8633870.el7.centos.x86_64 docker-client-1.13.1-75.git8633870.el7.centos.x86_64 docker-common-1.13.1-75.git8633870.el7.centos.x86_64 -``` + + ``` + yum downgrade docker-1.13.1-75.git8633870.el7.centos.x86_64 docker-client-1.13.1-75.git8633870.el7.centos.x86_64 docker-common-1.13.1-75.git8633870.el7.centos.x86_64 + ``` - Install one of the more recent recommended versions, such as 18.06: -```bash -sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo -yum install docker-ce-18.06.1.ce-3.el7.x86_64 -``` + + ```bash + sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo + yum install docker-ce-18.06.1.ce-3.el7.x86_64 + ``` ## Not possible to pass a comma separated list of values to arguments inside a `--component-extra-args` flag diff --git a/content/en/docs/tasks/administer-cluster/certificates.md b/content/en/docs/tasks/administer-cluster/certificates.md index d67514c64b..dcbd41a7e6 100644 --- a/content/en/docs/tasks/administer-cluster/certificates.md +++ b/content/en/docs/tasks/administer-cluster/certificates.md @@ -7,7 +7,7 @@ weight: 20 When using client certificate authentication, you can generate certificates -manually through `easyrsa`, `openssl` or `cfssl`. +manually through [`easyrsa`](https://github.com/OpenVPN/easy-rsa), [`openssl`](https://github.com/openssl/openssl) or [`cfssl`](https://github.com/cloudflare/cfssl). 
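As a hedged illustration of the `openssl` route only (easyrsa and cfssl have their own workflows, and none of the names below come from this patch), a CA and a signed client certificate can be produced like this:

```shell
# Create a CA key and a self-signed CA certificate (placeholder subject).
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=example-ca" -days 365 -out ca.crt

# Create a client key and a certificate signing request; the CN becomes the
# Kubernetes user name and the O field becomes a group.
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -subj "/CN=jane/O=app-team" -out jane.csr

# Sign the client certificate with the CA.
openssl x509 -req -in jane.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out jane.crt -days 365
```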
diff --git a/content/en/docs/tasks/administer-cluster/encrypt-data.md b/content/en/docs/tasks/administer-cluster/encrypt-data.md index a92da947c6..03a28ccd60 100644 --- a/content/en/docs/tasks/administer-cluster/encrypt-data.md +++ b/content/en/docs/tasks/administer-cluster/encrypt-data.md @@ -233,7 +233,9 @@ program to retrieve the contents of your Secret. ``` 1. Verify the stored Secret is prefixed with `k8s:enc:aescbc:v1:` which indicates - the `aescbc` provider has encrypted the resulting data. + the `aescbc` provider has encrypted the resulting data. Confirm that the key name shown in `etcd` + matches the key name specified in the `EncryptionConfiguration` mentioned above. In this example, + you can see that the encryption key named `key1` is used in `etcd` and in `EncryptionConfiguration`. 1. Verify the Secret is correctly decrypted when retrieved via the API: diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md index 74b8d2182e..9a496d39a6 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md @@ -41,43 +41,39 @@ minikube version: v1.5.2 minikube start --network-plugin=cni ``` -For minikube you can install Cilium using its CLI tool. Cilium will -automatically detect the cluster configuration and will install the appropriate -components for a successful installation: +For minikube you can install Cilium using its CLI tool. To do so, first download the latest +version of the CLI with the following command: ```shell curl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz +``` + +Then extract the downloaded file to your `/usr/local/bin` directory with the following command: + +```shell sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin rm cilium-linux-amd64.tar.gz +``` + +After running the above commands, you can now install Cilium with the following command: + +```shell cilium install ``` -``` -🔮 Auto-detected Kubernetes kind: minikube -✨ Running "minikube" validation checks -✅ Detected minikube version "1.20.0" -ℹ️ Cilium version not set, using default version "v1.10.0" -🔮 Auto-detected cluster name: minikube -🔮 Auto-detected IPAM mode: cluster-pool -🔮 Auto-detected datapath mode: tunnel -🔑 Generating CA... -2021/05/27 02:54:44 [INFO] generate received request -2021/05/27 02:54:44 [INFO] received CSR -2021/05/27 02:54:44 [INFO] generating key: ecdsa-256 -2021/05/27 02:54:44 [INFO] encoded CSR -2021/05/27 02:54:44 [INFO] signed certificate with serial number 48713764918856674401136471229482703021230538642 -🔑 Generating certificates for Hubble... -2021/05/27 02:54:44 [INFO] generate received request -2021/05/27 02:54:44 [INFO] received CSR -2021/05/27 02:54:44 [INFO] generating key: ecdsa-256 -2021/05/27 02:54:44 [INFO] encoded CSR -2021/05/27 02:54:44 [INFO] signed certificate with serial number 3514109734025784310086389188421560613333279574 -🚀 Creating Service accounts... -🚀 Creating Cluster roles... -🚀 Creating ConfigMap... -🚀 Creating Agent DaemonSet... -🚀 Creating Operator Deployment... -⌛ Waiting for Cilium to be installed... -``` + +Cilium will then automatically detect the cluster configuration and create and +install the appropriate components for a successful installation. 
+The components are: + +- Certificate Authority (CA) in Secret `cilium-ca` and certificates for Hubble (Cilium's observability layer). +- Service accounts. +- Cluster roles. +- ConfigMap. +- Agent DaemonSet and an Operator Deployment. + +After the installation, you can view the overall status of the Cilium deployment with the `cilium status` command. +See the expected output of the `status` command +[here](https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/#validate-the-installation). The remainder of the Getting Started Guide explains how to enforce both L3/L4 (i.e., IP address + port) security policies, as well as L7 (e.g., HTTP) security diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md index 443f20b1d9..7245624bf8 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md @@ -238,4 +238,3 @@ kubectl delete secret mysecret - Read more about the [Secret concept](/docs/concepts/configuration/secret/) - Learn how to [manage Secrets using kubectl](/docs/tasks/configmap-secret/managing-secret-using-kubectl/) - Learn how to [manage Secrets using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/) - diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md index 35448df260..37dda8581e 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md @@ -33,7 +33,7 @@ Run the following command: ```shell kubectl create secret generic db-user-pass \ - --from-literal=username=devuser \ + --from-literal=username=admin \ --from-literal=password='S!B\*d$zDsb=' ``` You must use single quotes `''` to escape special characters such as `$`, `\`, @@ -87,8 +87,8 @@ kubectl get secrets The output is similar to: ``` -NAME TYPE DATA AGE -db-user-pass Opaque 2 51s +NAME TYPE DATA AGE +db-user-pass Opaque 2 51s ``` View the details of the Secret: @@ -143,11 +143,13 @@ accidentally, or from being stored in a terminal log. S!B\*d$zDsb= ``` - {{}}This is an example for documentation purposes. In practice, + {{< caution >}} + This is an example for documentation purposes. In practice, this method could cause the command with the encoded data to be stored in your shell history. Anyone with access to your computer could find the command and decode the secret. A better approach is to combine the view and - decode commands.{{}} + decode commands. 
+ {{< /caution >}} ```shell kubectl get secret db-user-pass -o jsonpath='{.data.password}' | base64 --decode @@ -193,10 +195,8 @@ To delete a Secret, run the following command: kubectl delete secret db-user-pass ``` - - ## {{% heading "whatsnext" %}} - Read more about the [Secret concept](/docs/concepts/configuration/secret/) -- Learn how to [manage Secrets using config files](/docs/tasks/configmap-secret/managing-secret-using-config-file/) +- Learn how to [manage Secrets using config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/) - Learn how to [manage Secrets using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/) \ No newline at end of file diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-kustomize.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-kustomize.md index 4ec87b3e74..a896bd5cb7 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-kustomize.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-kustomize.md @@ -90,8 +90,7 @@ the Secret data and appending the hash value to the name. This ensures that a new Secret is generated each time the data is modified. To verify that the Secret was created and to decode the Secret data, refer to -[Managing Secrets using -kubectl](/docs/tasks/configmap-secret/managing-secret-using-kubectl/#verify-the-secret). +[Managing Secrets using kubectl](/docs/tasks/configmap-secret/managing-secret-using-kubectl/#verify-the-secret). ## Edit a Secret {#edit-secret} @@ -117,12 +116,11 @@ your Pods. To delete a Secret, use `kubectl`: ```shell -kubectl delete secret +kubectl delete secret db-user-pass ``` - ## {{% heading "whatsnext" %}} - Read more about the [Secret concept](/docs/concepts/configuration/secret/) -- Learn how to [manage Secrets with the `kubectl` command](/docs/tasks/configmap-secret/managing-secret-using-kubectl/) +- Learn how to [manage Secrets using kubectl](/docs/tasks/configmap-secret/managing-secret-using-kubectl/) - Learn how to [manage Secrets using config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/) \ No newline at end of file diff --git a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md index 47955a0ace..9ebd36c5ee 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md +++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md @@ -41,6 +41,7 @@ If you do not specify a ServiceAccount when you create a Pod, Kubernetes automatically assigns the ServiceAccount named `default` in that namespace. You can fetch the details for a Pod you have created. For example: + ```shell kubectl get pods/ -o yaml ``` @@ -75,6 +76,7 @@ automountServiceAccountToken: false ``` You can also opt out of automounting API credentials for a particular Pod: + ```yaml apiVersion: v1 kind: Pod @@ -92,8 +94,7 @@ If both the ServiceAccount and the Pod's `.spec` specify a value for ## Use more than one ServiceAccount {#use-multiple-service-accounts} Every namespace has at least one ServiceAccount: the default ServiceAccount -resource, called `default`. -You can list all ServiceAccount resources in your +resource, called `default`. 
You can list all ServiceAccount resources in your [current namespace](/docs/concepts/overview/working-with-objects/namespaces/#setting-the-namespace-preference) with: @@ -157,7 +158,6 @@ If you want to remove the fields from a workload resource, set both fields to em on the [pod template](/docs/concepts/workloads/pods#pod-templates). {{< /note >}} - ### Cleanup {#cleanup-use-multiple-service-accounts} If you tried creating `build-robot` ServiceAccount from the example above, @@ -185,15 +185,17 @@ token might be shorter, or could even be longer). {{< note >}} Versions of Kubernetes before v1.22 automatically created long term credentials for accessing the Kubernetes API. This older mechanism was based on creating token Secrets -that could then be mounted into running Pods. -In more recent versions, including Kubernetes v{{< skew currentVersion >}}, API credentials -are obtained directly by using the [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) API, -and are mounted into Pods using a [projected volume](/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume). +that could then be mounted into running Pods. In more recent versions, including +Kubernetes v{{< skew currentVersion >}}, API credentials are obtained directly by using the +[TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) API, +and are mounted into Pods using a +[projected volume](/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume). The tokens obtained using this method have bounded lifetimes, and are automatically invalidated when the Pod they are mounted into is deleted. -You can still manually create a service account token Secret; for example, if you need a token that never expires. -However, using the [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) +You can still manually create a service account token Secret; for example, +if you need a token that never expires. However, using the +[TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) subresource to obtain a token to access the API is recommended instead. {{< /note >}} @@ -215,6 +217,7 @@ EOF ``` If you view the Secret using: + ```shell kubectl get secret/build-robot-secret -o yaml ``` @@ -251,8 +254,7 @@ token: ... The content of `token` is elided here. Take care not to display the contents of a `kubernetes.io/service-account-token` -Secret somewhere that your terminal / computer screen could be seen by an -onlooker. +Secret somewhere that your terminal / computer screen could be seen by an onlooker. {{< /note >}} When you delete a ServiceAccount that has an associated Secret, the Kubernetes @@ -263,31 +265,32 @@ control plane automatically cleans up the long-lived token from that Secret. First, [create an imagePullSecret](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod). Next, verify it has been created. For example: -- Create an imagePullSecret, as described in [Specifying ImagePullSecrets on a Pod](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod). +- Create an imagePullSecret, as described in + [Specifying ImagePullSecrets on a Pod](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod). 
- ```shell - kubectl create secret docker-registry myregistrykey --docker-server=DUMMY_SERVER \ - --docker-username=DUMMY_USERNAME --docker-password=DUMMY_DOCKER_PASSWORD \ - --docker-email=DUMMY_DOCKER_EMAIL - ``` + ```shell + kubectl create secret docker-registry myregistrykey --docker-server=DUMMY_SERVER \ + --docker-username=DUMMY_USERNAME --docker-password=DUMMY_DOCKER_PASSWORD \ + --docker-email=DUMMY_DOCKER_EMAIL + ``` - Verify it has been created. - ```shell - kubectl get secrets myregistrykey - ``` - The output is similar to this: + ```shell + kubectl get secrets myregistrykey + ``` - ``` - NAME TYPE DATA AGE - myregistrykey   kubernetes.io/.dockerconfigjson   1       1d - ``` + The output is similar to this: + + ``` + NAME TYPE DATA AGE + myregistrykey   kubernetes.io/.dockerconfigjson   1       1d + ``` ### Add image pull secret to service account Next, modify the default service account for the namespace to use this Secret as an imagePullSecret. - ```shell kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}' ``` @@ -313,8 +316,8 @@ metadata: uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6 ``` -Using your editor, delete the line with key `resourceVersion`, add lines for `imagePullSecrets:` and save it. -Leave the `uid` value set the same as you found it. +Using your editor, delete the line with key `resourceVersion`, add lines for +`imagePullSecrets:` and save it. Leave the `uid` value set the same as you found it. After you made those changes, the edited ServiceAccount looks something like this: @@ -327,12 +330,13 @@ metadata: namespace: default uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6 imagePullSecrets: -- name: myregistrykey + - name: myregistrykey ``` ### Verify that imagePullSecrets are set for new Pods -Now, when a new Pod is created in the current namespace and using the default ServiceAccount, the new Pod has its `spec.imagePullSecrets` field set automatically: +Now, when a new Pod is created in the current namespace and using the default +ServiceAccount, the new Pod has its `spec.imagePullSecrets` field set automatically: ```shell kubectl run nginx --image=nginx --restart=Never @@ -354,13 +358,31 @@ To enable and use token request projection, you must specify each of the followi command line arguments to `kube-apiserver`: `--service-account-issuer` -: defines the Identifier of the service account token issuer. You can specify the `--service-account-issuer` argument multiple times, this can be useful to enable a non-disruptive change of the issuer. When this flag is specified multiple times, the first is used to generate tokens and all are used to determine which issuers are accepted. You must be running Kubernetes v1.22 or later to be able to specify `--service-account-issuer` multiple times. +: defines the Identifier of the service account token issuer. You can specify the + `--service-account-issuer` argument multiple times, this can be useful to enable + a non-disruptive change of the issuer. When this flag is specified multiple times, + the first is used to generate tokens and all are used to determine which issuers + are accepted. You must be running Kubernetes v1.22 or later to be able to specify + `--service-account-issuer` multiple times. + `--service-account-key-file` -: specifies the path to a file containing PEM-encoded X.509 private or public keys (RSA or ECDSA), used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. 
If specified multiple times, tokens signed by any of the specified keys are considered valid by the Kubernetes API server. +: specifies the path to a file containing PEM-encoded X.509 private or public keys + (RSA or ECDSA), used to verify ServiceAccount tokens. The specified file can contain + multiple keys, and the flag can be specified multiple times with different files. + If specified multiple times, tokens signed by any of the specified keys are considered + valid by the Kubernetes API server. + `--service-account-signing-key-file` -: specifies the path to a file that contains the current private key of the service account token issuer. The issuer signs issued ID tokens with this private key. +: specifies the path to a file that contains the current private key of the service + account token issuer. The issuer signs issued ID tokens with this private key. + `--api-audiences` (can be omitted) -: defines audiences for ServiceAccount tokens. The service account token authenticator validates that tokens used against the API are bound to at least one of these audiences. If `api-audiences` is specified multiple times, tokens for any of the specified audiences are considered valid by the Kubernetes API server. If you specify the `--service-account-issuer` command line argument but you don't set `--api-audiences`, the control plane defaults to a single element audience list that contains only the issuer URL. +: defines audiences for ServiceAccount tokens. The service account token authenticator + validates that tokens used against the API are bound to at least one of these audiences. + If `api-audiences` is specified multiple times, tokens for any of the specified audiences + are considered valid by the Kubernetes API server. If you specify the `--service-account-issuer` + command line argument but you don't set `--api-audiences`, the control plane defaults to + a single element audience list that contains only the issuer URL. {{< /note >}} @@ -452,18 +474,19 @@ to the public endpoint, rather than the API server's address, by passing the `--service-account-jwks-uri` flag to the API server. Like the issuer URL, the JWKS URI is required to use the `https` scheme. - ## {{% heading "whatsnext" %}} See also: -* Read the [Cluster Admin Guide to Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/) -* Read about [Authorization in Kubernetes](/docs/reference/access-authn-authz/authorization/) -* Read about [Secrets](/docs/concepts/configuration/secret/) - * or learn to [distribute credentials securely using Secrets](/docs/tasks/inject-data-application/distribute-credentials-secure/) - * but also bear in mind that using Secrets for authenticating as a ServiceAccount +- Read the [Cluster Admin Guide to Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/) +- Read about [Authorization in Kubernetes](/docs/reference/access-authn-authz/authorization/) +- Read about [Secrets](/docs/concepts/configuration/secret/) + - or learn to [distribute credentials securely using Secrets](/docs/tasks/inject-data-application/distribute-credentials-secure/) + - but also bear in mind that using Secrets for authenticating as a ServiceAccount is deprecated. The recommended alternative is [ServiceAccount token volume projection](#service-account-token-volume-projection). -* Read about [projected volumes](/docs/tasks/configure-pod-container/configure-projected-volume-storage/). 
-* For background on OIDC discovery, read the [ServiceAccount signing key retrieval](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1393-oidc-discovery) Kubernetes Enhancement Proposal -* Read the [OIDC Discovery Spec](https://openid.net/specs/openid-connect-discovery-1_0.html) +- Read about [projected volumes](/docs/tasks/configure-pod-container/configure-projected-volume-storage/). +- For background on OIDC discovery, read the + [ServiceAccount signing key retrieval](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1393-oidc-discovery) + Kubernetes Enhancement Proposal +- Read the [OIDC Discovery Spec](https://openid.net/specs/openid-connect-discovery-1_0.html) diff --git a/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md b/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md index 69e665b42e..3b6bec6def 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md +++ b/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md @@ -12,17 +12,12 @@ A Container's file system lives only as long as the Container does. So when a Container terminates and restarts, filesystem changes are lost. For more consistent storage that is independent of the Container, you can use a [Volume](/docs/concepts/storage/volumes/). This is especially important for stateful -applications, such as key-value stores (such as Redis) and databases. - - +applications, such as key-value stores (such as Redis) and databases. ## {{% heading "prerequisites" %}} - {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - - ## Configure a volume for a Pod @@ -37,71 +32,71 @@ restarts. Here is the configuration file for the Pod: 1. Create the Pod: - ```shell - kubectl apply -f https://k8s.io/examples/pods/storage/redis.yaml - ``` + ```shell + kubectl apply -f https://k8s.io/examples/pods/storage/redis.yaml + ``` 1. Verify that the Pod's Container is running, and then watch for changes to -the Pod: + the Pod: - ```shell - kubectl get pod redis --watch - ``` - - The output looks like this: + ```shell + kubectl get pod redis --watch + ``` - ```shell - NAME READY STATUS RESTARTS AGE - redis 1/1 Running 0 13s - ``` + The output looks like this: + + ```shell + NAME READY STATUS RESTARTS AGE + redis 1/1 Running 0 13s + ``` 1. In another terminal, get a shell to the running Container: - ```shell - kubectl exec -it redis -- /bin/bash - ``` + ```shell + kubectl exec -it redis -- /bin/bash + ``` 1. In your shell, go to `/data/redis`, and then create a file: - ```shell - root@redis:/data# cd /data/redis/ - root@redis:/data/redis# echo Hello > test-file - ``` + ```shell + root@redis:/data# cd /data/redis/ + root@redis:/data/redis# echo Hello > test-file + ``` 1. In your shell, list the running processes: - ```shell - root@redis:/data/redis# apt-get update - root@redis:/data/redis# apt-get install procps - root@redis:/data/redis# ps aux - ``` + ```shell + root@redis:/data/redis# apt-get update + root@redis:/data/redis# apt-get install procps + root@redis:/data/redis# ps aux + ``` - The output is similar to this: + The output is similar to this: - ```shell - USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND - redis 1 0.1 0.1 33308 3828 ? Ssl 00:46 0:00 redis-server *:6379 - root 12 0.0 0.0 20228 3020 ? Ss 00:47 0:00 /bin/bash - root 15 0.0 0.0 17500 2072 ? R+ 00:48 0:00 ps aux - ``` + ```shell + USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND + redis 1 0.1 0.1 33308 3828 ? 
Ssl 00:46 0:00 redis-server *:6379 + root 12 0.0 0.0 20228 3020 ? Ss 00:47 0:00 /bin/bash + root 15 0.0 0.0 17500 2072 ? R+ 00:48 0:00 ps aux + ``` 1. In your shell, kill the Redis process: - ```shell - root@redis:/data/redis# kill - ``` + ```shell + root@redis:/data/redis# kill + ``` - where `` is the Redis process ID (PID). + where `` is the Redis process ID (PID). 1. In your original terminal, watch for changes to the Redis Pod. Eventually, -you will see something like this: + you will see something like this: - ```shell - NAME READY STATUS RESTARTS AGE - redis 1/1 Running 0 13s - redis 0/1 Completed 0 6m - redis 1/1 Running 1 6m - ``` + ```shell + NAME READY STATUS RESTARTS AGE + redis 1/1 Running 0 13s + redis 0/1 Completed 0 6m + redis 1/1 Running 1 6m + ``` At this point, the Container has terminated and restarted. This is because the Redis Pod has a @@ -110,38 +105,32 @@ of `Always`. 1. Get a shell into the restarted Container: - ```shell - kubectl exec -it redis -- /bin/bash - ``` + ```shell + kubectl exec -it redis -- /bin/bash + ``` 1. In your shell, go to `/data/redis`, and verify that `test-file` is still there. - ```shell - root@redis:/data/redis# cd /data/redis/ - root@redis:/data/redis# ls - test-file - ``` + + ```shell + root@redis:/data/redis# cd /data/redis/ + root@redis:/data/redis# ls + test-file + ``` 1. Delete the Pod that you created for this exercise: - ```shell - kubectl delete pod redis - ``` - - + ```shell + kubectl delete pod redis + ``` ## {{% heading "whatsnext" %}} +- See [Volume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volume-v1-core). -* See [Volume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volume-v1-core). - -* See [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core). - -* In addition to the local disk storage provided by `emptyDir`, Kubernetes -supports many different network-attached storage solutions, including PD on -GCE and EBS on EC2, which are preferred for critical data and will handle -details such as mounting and unmounting the devices on the nodes. See -[Volumes](/docs/concepts/storage/volumes/) for more details. - - - +- See [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core). +- In addition to the local disk storage provided by `emptyDir`, Kubernetes + supports many different network-attached storage solutions, including PD on + GCE and EBS on EC2, which are preferred for critical data and will handle + details such as mounting and unmounting the devices on the nodes. See + [Volumes](/docs/concepts/storage/volumes/) for more details. diff --git a/content/en/docs/tasks/configure-pod-container/static-pod.md b/content/en/docs/tasks/configure-pod-container/static-pod.md index e2eab5088e..23191e1ffe 100644 --- a/content/en/docs/tasks/configure-pod-container/static-pod.md +++ b/content/en/docs/tasks/configure-pod-container/static-pod.md @@ -38,6 +38,10 @@ The `spec` of a static Pod cannot refer to other API objects {{< glossary_tooltip text="Secret" term_id="secret" >}}, etc). {{< /note >}} +{{< note >}} +Static pods do not support [ephemeral containers](/docs/concepts/workloads/pods/ephemeral-containers/). 
+{{< /note >}} + ## {{% heading "prerequisites" %}} {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} diff --git a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md index 4b0d63c0c5..41d64cdba0 100644 --- a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md +++ b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md @@ -244,7 +244,13 @@ So, now run the Job: kubectl apply -f ./job.yaml ``` -Now wait a bit, then check on the job. +You can wait for the Job to succeed, with a timeout: +```shell +# The check for condition name is case insensitive +kubectl wait --for=condition=complete --timeout=300s job/job-wq-1 +``` + +Next, check on the Job: ```shell kubectl describe jobs/job-wq-1 @@ -285,7 +291,9 @@ Events: 14s 14s 1 {job } Normal SuccessfulCreate Created pod: job-wq-1-p17e0 ``` -All our pods succeeded. Yay. + + +All the pods for that Job succeeded. Yay. diff --git a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md index c5d1d0fa30..9a4eb6f1fe 100644 --- a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md +++ b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md @@ -208,9 +208,18 @@ Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 33s 33s 1 {job-controller } Normal SuccessfulCreate Created pod: job-wq-2-lglf8 +``` +You can wait for the Job to succeed, with a timeout: +```shell +# The check for condition name is case insensitive +kubectl wait --for=condition=complete --timeout=300s job/job-wq-2 +``` +```shell kubectl logs pods/job-wq-2-7r7b2 +``` +``` Worker with sessionID: bbd72d0a-9e5c-4dd6-abf6-416cc267991f Initial queue state: empty=False Working on banana diff --git a/content/en/docs/tasks/job/indexed-parallel-processing-static.md b/content/en/docs/tasks/job/indexed-parallel-processing-static.md index 096fe5a9be..0ee576a46c 100644 --- a/content/en/docs/tasks/job/indexed-parallel-processing-static.md +++ b/content/en/docs/tasks/job/indexed-parallel-processing-static.md @@ -107,7 +107,14 @@ When you create this Job, the control plane creates a series of Pods, one for ea Because `.spec.parallelism` is less than `.spec.completions`, the control plane waits for some of the first Pods to complete before starting more of them. -Once you have created the Job, wait a moment then check on progress: +You can wait for the Job to succeed, with a timeout: +```shell +# The check for condition name is case insensitive +kubectl wait --for=condition=complete --timeout=300s job/indexed-job +``` + +Now, describe the Job and check that it was successful. + ```shell kubectl describe jobs/indexed-job diff --git a/content/en/docs/tutorials/hello-minikube.md b/content/en/docs/tutorials/hello-minikube.md index 96eaf037bc..c3a4810514 100644 --- a/content/en/docs/tutorials/hello-minikube.md +++ b/content/en/docs/tutorials/hello-minikube.md @@ -94,7 +94,7 @@ recommended way to manage the creation and scaling of Pods. Pod runs a Container based on the provided Docker image. ```shell - kubectl create deployment hello-node --image=registry.k8s.io/echoserver:1.4 + kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.39 -- /agnhost netexec --http-port=8080 ``` 2. 
View the Deployment: diff --git a/content/en/docs/tutorials/stateful-application/zookeeper.md b/content/en/docs/tutorials/stateful-application/zookeeper.md index 3f57ab3ad4..80b1d30fec 100644 --- a/content/en/docs/tutorials/stateful-application/zookeeper.md +++ b/content/en/docs/tutorials/stateful-application/zookeeper.md @@ -1042,7 +1042,7 @@ There are pending pods when an error occurred: Cannot evict pod as it would viol pod/zk-2 ``` -Use `CTRL-C` to terminate to kubectl. +Use `CTRL-C` to terminate kubectl. You cannot drain the third node because evicting `zk-2` would violate `zk-budget`. However, the node will remain cordoned. diff --git a/content/en/examples/application/deployment-scale.yaml b/content/en/examples/application/deployment-scale.yaml index 01fe96d845..838576375e 100644 --- a/content/en/examples/application/deployment-scale.yaml +++ b/content/en/examples/application/deployment-scale.yaml @@ -14,6 +14,6 @@ spec: spec: containers: - name: nginx - image: nginx:1.14.2 + image: nginx:1.16.1 ports: - containerPort: 80 diff --git a/content/en/releases/patch-releases.md b/content/en/releases/patch-releases.md index 919b47fd52..b5d9f308a7 100644 --- a/content/en/releases/patch-releases.md +++ b/content/en/releases/patch-releases.md @@ -78,8 +78,7 @@ releases may also occur in between these. | Monthly Patch Release | Cherry Pick Deadline | Target date | | --------------------- | -------------------- | ----------- | -| November 2022 | 2022-11-04 | 2022-11-09 | -| December 2022 | 2022-12-09 | 2022-12-14 | +| December 2022 | 2022-12-02 | 2022-12-07 | | January 2023 | 2023-01-13 | 2023-01-18 | | February 2023 | 2023-02-10 | 2023-02-15 | diff --git a/content/fr/docs/tutorials/kubernetes-basics/explore/explore-intro.html b/content/fr/docs/tutorials/kubernetes-basics/explore/explore-intro.html new file mode 100644 index 0000000000..724cfb8bcc --- /dev/null +++ b/content/fr/docs/tutorials/kubernetes-basics/explore/explore-intro.html @@ -0,0 +1,140 @@ +--- +title: Affichage des pods et des nœuds +weight: 10 +--- + + + + + + + +
### Objectifs

* En savoir plus sur les pods Kubernetes.
* En savoir plus sur les nœuds Kubernetes.
* Dépannez les applications déployées.

### Pods de Kubernetes

Lorsque vous avez créé un déploiement dans le Module 2, Kubernetes a créé un Pod pour héberger votre instance d'application. Un pod est une abstraction Kubernetes qui représente un groupe d'un ou plusieurs conteneurs d'application (tels que Docker), et certaines ressources partagées pour ces conteneurs. Ces ressources comprennent:

* Stockage partagé, en tant que Volumes
* Mise en réseau, en tant qu'adresse IP d'un unique cluster
* Informations sur l'exécution de chaque conteneur, telles que la version de l'image du conteneur ou les ports spécifiques à utiliser

Un pod modélise un "hôte logique" spécifique à l'application et peut contenir différents conteneurs d'applications qui sont relativement étroitement couplés. Par exemple, un pod peut inclure à la fois le conteneur avec votre application Node.js ainsi qu'un conteneur différent qui alimente les données à être publiées par le serveur Web Node.js. Les conteneurs d'un pod partagent une adresse IP et un espace de port, sont toujours co-localisés et co-planifiés, et exécutés dans un contexte partagé sur le même nœud.

Les pods sont l'unité atomique de la plate-forme Kubernetes. Lorsque nous créons un déploiement sur Kubernetes, ce déploiement crée des pods avec des conteneurs à l'intérieur (par opposition à la création directe de conteneurs). Chaque pod est lié au nœud où il est planifié et y reste jusqu'à la résiliation (selon la politique de redémarrage) ou la suppression. En cas de défaillance d'un nœud, des pods identiques sont programmés sur d'autres nœuds disponibles dans le cluster.

Sommaire:

* Pods
* Nœuds
* Commandes principales de Kubectl

Un pod est un groupe d'un ou plusieurs conteneurs applicatifs (tels que Docker) et comprend un stockage partagé (volumes), une adresse IP et des informations sur la façon de les exécuter.

### Aperçu des Pods

### Nœuds

Un Pod s'exécute toujours sur un Nœud. Un nœud est une machine de travail dans Kubernetes et peut être une machine virtuelle ou physique, selon le cluster. Chaque nœud est géré par le planificateur. Un nœud peut avoir plusieurs pods, et le planificateur Kubernetes gère automatiquement la planification des pods sur les nœuds du cluster. La planification automatique du planificateur tient compte des ressources disponibles sur chaque nœud.

Chaque nœud Kubernetes exécute au moins:

* Kubelet, un processus responsable de la communication entre le planificateur Kubernetes et le nœud ; il gère les Pods et les conteneurs s'exécutant sur une machine.
* Un environnement d'exécution de conteneur (comme Docker) chargé d'extraire l'image du conteneur d'un registre, de décompresser le conteneur et d'exécuter l'application.

Les conteneurs ne doivent être planifiés ensemble dans un seul pod que s'ils sont étroitement couplés et doivent partager des ressources telles que le disque.

### Aperçu des Nœuds

### Dépannage avec kubectl

Dans le module 2, vous avez utilisé l'interface de ligne de commande Kubectl. Vous continuerez à l'utiliser dans le module 3 pour obtenir des informations sur les applications déployées et leurs environnements. Les opérations les plus courantes peuvent être effectuées avec les commandes kubectl suivantes:

* kubectl get - liste les ressources
* kubectl describe - affiche des informations détaillées sur une ressource
* kubectl logs - imprime les journaux d'un conteneur dans un pod
* kubectl exec - exécute une commande sur un conteneur dans un pod

Vous pouvez utiliser ces commandes pour voir quand les applications ont été déployées, quels sont leurs statuts actuels, où elles s'exécutent et quelles sont leurs configurations.

Maintenant que nous en savons plus sur nos composants de cluster et la ligne de commande, explorons notre application.

Un nœud est une machine de travail dans Kubernetes et peut être une machine virtuelle ou une machine physique, selon le cluster. Plusieurs pods peuvent s'exécuter sur un nœud.
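As a hedged illustration of the four kubectl commands introduced in the troubleshooting section above (the pod name is a placeholder, not part of this change):

```shell
# List resources in the current namespace
kubectl get pods

# Show detailed information about one resource, including recent events
kubectl describe pod kubernetes-bootcamp-xxxxx

# Print the logs of a container running in a pod
kubectl logs kubernetes-bootcamp-xxxxx

# Execute a command in a container of the pod
kubectl exec kubernetes-bootcamp-xxxxx -- env
```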
+ + + diff --git a/content/hi/docs/contribute/style/_index.md b/content/hi/docs/contribute/style/_index.md new file mode 100644 index 0000000000..161ea0a219 --- /dev/null +++ b/content/hi/docs/contribute/style/_index.md @@ -0,0 +1,7 @@ +--- +title: प्रलेखन शैली अवलोकन +main_menu: true +weight: 80 +--- + +इस खंड के विषय लेखन शैली, सामग्री स्वरूपण, और संगठन, और कुबेरनेट्स प्रलेखन के लिए विशिष्ट Hugo अनुकूलन का उपयोग करने पर मार्गदर्शन प्रदान करते हैं। diff --git a/content/hi/docs/reference/glossary/addons.md b/content/hi/docs/reference/glossary/addons.md new file mode 100644 index 0000000000..499a12906d --- /dev/null +++ b/content/hi/docs/reference/glossary/addons.md @@ -0,0 +1,16 @@ +--- +title: ऐड-ऑन +id: addons +date: 2019-12-15 +full_link: /docs/concepts/cluster-administration/addons/ +short_description: > + संसाधन जो कुबेरनेट्स की कार्यक्षमता का विस्तार करते हैं। + +aka: +tags: +- tool +--- + संसाधन जो कुबेरनेट्स की कार्यक्षमता का विस्तार करते हैं। + + +[ऐड-ऑन इंस्टॉल करना](/docs/concepts/cluster-administration/addons/) अपने क्लस्टर के साथ ऐड-ऑन का उपयोग करने के बारे में अधिक जानकारी देता है, और कुछ लोकप्रिय ऐड-ऑन को सूचीबद्ध करता है। \ No newline at end of file diff --git a/content/hi/docs/reference/glossary/affinity.md b/content/hi/docs/reference/glossary/affinity.md new file mode 100644 index 0000000000..e5037d2e4a --- /dev/null +++ b/content/hi/docs/reference/glossary/affinity.md @@ -0,0 +1,22 @@ +--- +title: आत्मीयता +id: affinity +date: 2019-01-11 +full_link: /docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity +short_description: > + पॉड्स को कहां रखा जाए, यह निर्धारित करने के लिए शेड्यूलर द्वारा उपयोग किए जाने वाले नियम +aka: +tags: +- fundamental +--- + +कुबेरनेट्स में, _आत्मीयता_ नियमों का एक समूह है जो शेड्यूलर को संकेत देता है कि पॉड्स को कहाँ रखा जाए। + + +आत्मीयता दो प्रकार की होती है: +* [नोड आत्मीयता](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity) +* [पॉड-टू-पॉड आत्मीयता](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) + +नियमों को कुबेरनेट्स {{< glossary_tooltip term_id="label" text="लेबल">}} और {{< glossary_tooltip term_id="selector" text="सेलेक्टर">}} +का उपयोग करके परिभाषित किया गया है, जो {{< glossary_tooltip term_id="pod" text="पॉड्स" >}} में निर्दिष्ट हैं , +और उनका उपयोग इस बात पर निर्भर करता है कि आप शेड्यूलर को कितनी सख्ती से लागू करना चाहते हैं। diff --git a/content/hi/docs/reference/glossary/api-group.md b/content/hi/docs/reference/glossary/api-group.md new file mode 100644 index 0000000000..c826da33ce --- /dev/null +++ b/content/hi/docs/reference/glossary/api-group.md @@ -0,0 +1,19 @@ +--- +title: API समूह +id: api-group +date: 2019-09-02 +full_link: /docs/concepts/overview/kubernetes-api/#api-groups-and-versioning +short_description: > + कुबेरनेट्स API में संबंधित पथों का एक समूह। + +aka: +tags: +- fundamental +- architecture +--- +कुबेरनेट्स API में संबंधित पथों का एक समूह। + + +आप अपने API सर्वर के कॉन्फ़िगरेशन को बदलकर प्रत्येक API समूह को सक्षम या अक्षम कर सकते हैं। आप विशिष्ट संसाधनों के लिए पथ अक्षम या सक्षम भी कर सकते हैं। API समूह कुबेरनेट्स API का विस्तार करना आसान बनाता है। API समूह एक REST पथ में और एक क्रमबद्ध वस्तु के `apiVersion` फ़ील्ड में निर्दिष्ट है। + +* अधिक जानकारी के लिए [API समूह](/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning) पढ़ें। \ No newline at end of file diff --git a/content/hi/docs/reference/glossary/kubeadm.md b/content/hi/docs/reference/glossary/kubeadm.md new file mode 100644 index 0000000000..9791fca9fa --- /dev/null 
+++ b/content/hi/docs/reference/glossary/kubeadm.md @@ -0,0 +1,18 @@ +--- +title: क्यूबएडीएम (Kubeadm) +id: kubeadm +date: 2018-04-12 +full_link: /docs/admin/kubeadm/ +short_description: > + कुबेरनेट्स को जल्दी से इंस्टॉल करने और एक सुरक्षित क्लस्टर स्थापित करने के लिए एक उपकरण। + +aka: +tags: +- tool +- operation +--- + कुबेरनेट्स को जल्दी से इंस्टॉल करने और एक सुरक्षित क्लस्टर स्थापित करने के लिए एक उपकरण। + + + +आप कंट्रोल प्लेन और {{< glossary_tooltip text="वर्कर नोड्स" term_id="node" >}} दोनों घटकों को स्थापित करने के लिए क्यूबएडीएम का उपयोग कर सकते हैं। diff --git a/content/hi/docs/reference/glossary/limitrange.md b/content/hi/docs/reference/glossary/limitrange.md new file mode 100644 index 0000000000..53c49f0661 --- /dev/null +++ b/content/hi/docs/reference/glossary/limitrange.md @@ -0,0 +1,22 @@ +--- +title: लिमिटरेंज (LimitRange) +id: limitrange +date: 2019-04-15 +full_link: /docs/concepts/policy/limit-range/ +short_description: > + नेमस्पेस में प्रति कंटेनर या पॉड में संसाधन खपत को सीमित करने के लिए प्रतिबंध प्रदान करता है। + +aka: +tags: +- core-object +- fundamental +- architecture +related: + - pod + - container + +--- + नेमस्पेस में प्रति {{< glossary_tooltip text="कंटेनर" term_id="container" >}} या {{< glossary_tooltip text="पॉड" term_id="pod" >}} में संसाधन खपत को सीमित करने के लिए प्रतिबंध प्रदान करता है। + + +लिमिटरेंज, टाइप (type) द्वारा बनाई जा सकने वाले ऑब्जेक्ट्स और साथ ही नेमस्पेस में अलग-अलग {{< glossary_tooltip text="कंटेनर" term_id="container" >}} या {{< glossary_tooltip text="पॉड" term_id="pod" >}} द्वारा अनुरोध/उपभोग किए जा सकने वाले कंप्यूट संसाधनों की मात्रा को सीमित करता है। diff --git a/content/hi/docs/reference/glossary/pod-lifecycle.md b/content/hi/docs/reference/glossary/pod-lifecycle.md new file mode 100644 index 0000000000..afd36d316b --- /dev/null +++ b/content/hi/docs/reference/glossary/pod-lifecycle.md @@ -0,0 +1,19 @@ +--- +title: पॉड जीवनचक्र (Pod Lifecycle) +id: pod-lifecycle +date: 2019-02-17 +full-link: /docs/concepts/workloads/pods/pod-lifecycle/ +related: + - pod + - container +tags: + - fundamental +short_description: > + अवस्थाओं का क्रम जिसके माध्यम से एक पॉड अपने जीवनकाल में गुजरता है। + +--- + अवस्थाओं का क्रम जिसके माध्यम से एक पॉड अपने जीवनकाल में गुजरता है। + + + +[पॉड जीवनचक्र](/docs/concepts/workloads/pods/pod-lifecycle/) को पॉड की अवस्थाओं या चरणों द्वारा परिभाषित किया जाता है। पाँच संभावित पॉड चरण हैं: Pending, Running, Succeeded, Failed और Unknown। पॉड स्थिति का एक उच्च-स्तरीय विवरण [पॉडस्टैटस](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podstatus-v1-core) `phase` फ़ील्ड में सारांशित किया गया है। . 
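As a small, hedged illustration of reading the `phase` field described in this glossary entry (the pod name `my-pod` is a placeholder):

```shell
# Prints one of: Pending, Running, Succeeded, Failed, Unknown
kubectl get pod my-pod -o jsonpath='{.status.phase}'
```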
diff --git a/content/hi/docs/reference/glossary/statefulset.md b/content/hi/docs/reference/glossary/statefulset.md new file mode 100644 index 0000000000..52124ff6f6 --- /dev/null +++ b/content/hi/docs/reference/glossary/statefulset.md @@ -0,0 +1,22 @@ +--- +title: स्टेटफुलसेट (StatefulSet) +id: statefulset +date: 2018-04-12 +full_link: /docs/concepts/workloads/controllers/statefulset/ +short_description: > + प्रत्येक पॉड के लिए स्थायी स्टोरेज और दृढ़ पहचानकर्ता के साथ, पॉड्स के एक सेट की डिप्लॉयमेंट और स्केलिंग का प्रबंधन करता है। + +aka: +tags: +- fundamental +- core-object +- workload +- storage +--- +{{}} के एक सेट की डिप्लॉयमेंट और स्केलिंग का प्रबंधन करता है, और इन पॉड्स के *क्रम और विशिष्टता के बारे में गारंटी प्रदान करता है*। + + + +एक {{}} की तरह, एक स्टेटफुलसेट एक सदृश कंटेनर विनिर्देश पर आधारित पॉड्स का प्रबंधन करता है। डिप्लॉयमेंट के विपरीत, स्टेटफुलसेट अपने प्रत्येक पॉड के लिए एक चिपचिपा पहचान बनाए रखता है। ये पॉड एक ही विनिर्देश से बनाए गए हैं, लेकिन विनिमय करने योग्य नहीं हैं; प्रत्येक का एक स्थायी पहचानकर्ता होता है जिसे वह किसी भी पुनर्निर्धारण के दौरान बनाए रखता है। + +यदि आप अपने वर्कलोड को दृढ़ता प्रदान करने के लिए स्टोरेज वॉल्यूम का उपयोग करना चाहते हैं, तो आप समाधान के हिस्से के रूप में स्टेटफुलसेट का उपयोग कर सकते हैं। हालांकि स्टेटफुलसेट में अलग-अलग पॉड विफलता के लिए अतिसंवेदनशील होते हैं, दृढ़ पॉड पहचानकर्ता मौजूदा वॉल्यूम को नए पॉड्स से मिलाना आसान बनाते हैं जो असफल होने वाले किसी भी पॉड को प्रतिस्थापित करता है। diff --git a/content/hi/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/hi/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html index e1ef116278..3e5147b647 100644 --- a/content/hi/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html +++ b/content/hi/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html @@ -18,7 +18,7 @@ weight: 10
-Objectives
+उद्देश्य
  • जानें कुबेरनेट्स क्लस्टर क्या है।
  • diff --git a/content/it/partners/_index.html b/content/it/partners/_index.html index 270dabe569..0787dc6443 100644 --- a/content/it/partners/_index.html +++ b/content/it/partners/_index.html @@ -7,85 +7,48 @@ cid: partners ---
-Kubernetes collabora con i partner per creare per creare un codebase che supporti uno spettro di piattaforme complementari.
-Fornitori Certificati di Servizi su Kubernetes
-Fornitori di servizi riconosciuti e con grande esperienza nell'aiutare le imprese ad adottare con successo Kubernetes.
-Interessato a diventare un partner KCSP?
-Distribuzioni di Kubernetes Certificate, Certified Hosted Platforms and Software di installazione Certificati
-La conformità del software assicura che le versioni di Kubernetes prodotte da ogni fornitore supportino coerentemente le API necessarie.
-Interessato a diventare un partner certificato Kubernetes?
-Partner per la Formazione su Kubernetes
-Professionisti riconosciuti e certificati, con solida esperienza nella formazione su tecnologie Cloud Native.
-Interessato a diventare un partner KTP?
+Kubernetes collabora con i partner per creare per creare un codebase che supporti uno spettro di piattaforme complementari.
+Fornitori Certificati di Servizi su Kubernetes
+Fornitori di servizi riconosciuti e con grande esperienza nell'aiutare le imprese ad adottare con successo Kubernetes.
+Interessato a diventare un partner KCSP?
+Distribuzioni di Kubernetes Certificate, Certified Hosted Platforms and Software di installazione Certificati
+La conformità del software assicura che le versioni di Kubernetes prodotte da ogni fornitore supportino coerentemente le API necessarie.
+Interessato a diventare un partner certificato Kubernetes?
+Partner per la Formazione su Kubernetes
+Professionisti riconosciuti e certificati, con solida esperienza nella formazione su tecnologie Cloud Native.
+Interessato a diventare un partner KTP?
+{{< cncf-landscape helpers=true >}}
    - diff --git a/content/ja/docs/concepts/workloads/controllers/job.md b/content/ja/docs/concepts/workloads/controllers/job.md index 2b630b9662..7a7ef2bdc7 100644 --- a/content/ja/docs/concepts/workloads/controllers/job.md +++ b/content/ja/docs/concepts/workloads/controllers/job.md @@ -596,7 +596,7 @@ Replication Controllerは、終了することが想定されていないPod(Web * Jobのさまざまな実行方法について学ぶ: * [ワークキューを用いた粒度の粗い並列処理](/docs/tasks/job/coarse-parallel-processing-work-queue/) * [ワークキューを用いた粒度の細かい並列処理](/docs/tasks/job/fine-parallel-processing-work-queue/) - * [静的な処理の割り当てを使用した並列処理のためのインデックス付きJob](/ja/docs/tasks/job/indexed-parallel-processing-static/) を使う(beta段階) + * [静的な処理の割り当てを使用した並列処理のためのインデックス付きJob](/ja/docs/tasks/job/indexed-parallel-processing-static/) を使う * テンプレートを元に複数のJobを作成: [拡張機能を用いた並列処理](/docs/tasks/job/parallel-processing-expansion/) * [終了したJobの自動クリーンアップ](#clean-up-finished-jobs-automatically)のリンクから、クラスターが完了または失敗したJobをどのようにクリーンアップするかをご確認ください。 * `Job`はKubernetes REST APIの一部です。JobのAPIを理解するために、{{< api-reference page="workload-resources/job-v1" >}}オブジェクトの定義をお読みください。 diff --git a/content/ja/docs/contribute/_index.md b/content/ja/docs/contribute/_index.md index 25eb4f9fa2..353355ab7b 100644 --- a/content/ja/docs/contribute/_index.md +++ b/content/ja/docs/contribute/_index.md @@ -42,7 +42,7 @@ Kubernetesコミュニティで効果的に働くためには、[git](https://gi 3. [プルリクエストのオープン](/docs/contribute/new-content/open-a-pr/)と[変更レビュー](/ja/docs/contribute/review/reviewing-prs/)の基本的なプロセスを理解していることを確認してください。 一部のタスクでは、Kubernetes organizationで、より多くの信頼とアクセス権限が必要です。 -役割と権限についての詳細は、[SIG Docsへの参加](/docs/contribute/participating/)を参照してください。 +役割と権限についての詳細は、[SIG Docsへの参加](/ja/docs/contribute/participate/)を参照してください。 ## はじめての貢献 - 貢献のための複数の方法について学ぶために[貢献の概要](/ja/docs/contribute/new-content/overview/)を読んでください。 @@ -56,12 +56,12 @@ Kubernetesコミュニティで効果的に働くためには、[git](https://gi - リポジトリの[ローカルクローンでの作業](/docs/contribute/new-content/open-a-pr/#fork-the-repo)について学んでください。 - [リリース機能](/docs/contribute/new-content/new-features/)について記載してください。 -- [SIG Docs](/docs/contribute/participate/)に参加し、[memberやreviewer](/docs/contribute/participate/roles-and-responsibilities/)になってください。 +- [SIG Docs](/ja/docs/contribute/participate/)に参加し、[memberやreviewer](/docs/contribute/participate/roles-and-responsibilities/)になってください。 - [国際化](/ja/docs/contribute/localization/)を始めたり、支援したりしてください。 ## SIG Docsに参加する -[SIG Docs](/docs/contribute/participate/)はKubernetesのドキュメントとウェブサイトを公開・管理するコントリビューターのグループです。SIG Docsに参加することはKubernetesコントリビューター(機能開発でもそれ以外でも)にとってKubernetesプロジェクトに大きな影響を与える素晴らしい方法の一つです。 +[SIG Docs](/ja/docs/contribute/participate/)はKubernetesのドキュメントとウェブサイトを公開・管理するコントリビューターのグループです。SIG Docsに参加することはKubernetesコントリビューター(機能開発でもそれ以外でも)にとってKubernetesプロジェクトに大きな影響を与える素晴らしい方法の一つです。 SIG Docsは複数の方法でコミュニケーションをとっています。 diff --git a/content/ja/examples/application/mysql/mysql-statefulset.yaml b/content/ja/examples/application/mysql/mysql-statefulset.yaml index b69af02c59..bf9aa6fe35 100644 --- a/content/ja/examples/application/mysql/mysql-statefulset.yaml +++ b/content/ja/examples/application/mysql/mysql-statefulset.yaml @@ -22,7 +22,7 @@ spec: - | set -ex # Generate mysql server-id from pod ordinal index. - [[ `hostname` =~ -([0-9]+)$ ]] || exit 1 + [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1 ordinal=${BASH_REMATCH[1]} echo [mysqld] > /mnt/conf.d/server-id.cnf # Add an offset to avoid reserved server-id=0 value. 
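The hunk above changes how the init script derives the pod ordinal, reading the `$HOSTNAME` variable instead of running the `hostname` command. A standalone sketch of that logic, runnable outside the Pod (the hostname value and the `100` offset are illustrative; the offset actually used later in the manifest is not shown in this hunk):

```shell
# StatefulSet pods get ordinal-suffixed hostnames such as mysql-0, mysql-1, ...
HOSTNAME=mysql-2

# Capture the trailing ordinal with a bash regex; the capture group lands in BASH_REMATCH
[[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}

# Derive a MySQL server-id, offset so the reserved value 0 is never used
echo "server-id=$((100 + ordinal))"   # prints: server-id=102
```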
diff --git a/content/pt-br/docs/concepts/storage/persistent-volumes.md b/content/pt-br/docs/concepts/storage/persistent-volumes.md index 01d092e673..506612fde2 100644 --- a/content/pt-br/docs/concepts/storage/persistent-volumes.md +++ b/content/pt-br/docs/concepts/storage/persistent-volumes.md @@ -231,7 +231,7 @@ Para solicitar um volume maior para uma PVC, edite a PVC e especifique um tamanh #### Expansão de volume CSI -{{< feature-state for_k8s_version="v1.16" state="beta" >}} +{{< feature-state for_k8s_version="v1.24" state="stable" >}} O suporte à expansão de volumes CSI é habilitada por padrão, porém é necessário um driver CSI específico para suportar a expansão do volume. Verifique a documentação do driver CSI específico para mais informações. diff --git a/content/pt-br/docs/reference/access-authn-authz/authorization.md b/content/pt-br/docs/reference/access-authn-authz/authorization.md new file mode 100644 index 0000000000..efba8eb731 --- /dev/null +++ b/content/pt-br/docs/reference/access-authn-authz/authorization.md @@ -0,0 +1,250 @@ +--- +title: Visão Geral de Autorização +content_type: concept +weight: 60 +--- + + +Aprenda mais sobre autorização no Kubernetes, incluindo detalhes sobre +criação de políticas utilizando módulos de autorização suportados. + + + +No Kubernetes, você deve estar autenticado (conectado) antes que sua requisição possa ser +autorizada (permissão concedida para acesso). Para obter informações sobre autenticação, +visite [Controlando Acesso à API do Kubernetes](/pt-br/docs/concepts/security/controlling-access/). + +O Kubernetes espera atributos que são comuns a requisições de APIs REST. Isto significa +que autorização no Kubernetes funciona com sistemas de controle de acesso a nível de organizações +ou de provedores de nuvem que possam lidar com outras APIs além das APIs do Kubernetes. + +## Determinar se uma requisição é permitida ou negada + +O Kubernetes autoriza requisições de API utilizando o servidor de API. Ele avalia +todos os atributos de uma requisição em relação a todas as políticas disponíveis e permite ou nega a requisição. +Todas as partes de uma requisição de API deve ser permitidas por alguma política para que possa prosseguir. +Isto significa que permissões são negadas por padrão. + +(Embora o Kubernetes use o servidor de API, controles de acesso e políticas que +dependem de campos específicos de tipos específicos de objetos são tratados pelos controladores de admissão.) + +Quando múltiplos módulos de autorização são configurados, cada um será verificado em sequência. +Se qualquer dos autorizadores aprovarem ou negarem uma requisição, a decisão é imediatamente +retornada e nenhum outro autorizador é consultado. Se nenhum módulo de autorização tiver +nenhuma opinião sobre requisição, então a requisição é negada. Uma negação retorna um +código de status HTTP 403. + +## Revisão de atributos de sua requisição + +O Kubernetes revisa somente os seguintes atributos de uma requisição de API: + + * **user** - O string de `user` fornecido durante a autenticação. + * **group** - A lista de nomes de grupos aos quais o usuário autenticado pertence. + * **extra** - Um mapa de chaves de string arbitrárias para valores de string, fornecido pela camada de autenticação. + * **API** - Indica se a solicitação é para um recurso de API. + * **Caminho da requisição** - Caminho para diversos endpoints que não manipulam recursos, como `/api` ou `/healthz`. 
+ * **Verbo de requisição de API** - Verbos da API como `get`, `list`, `create`, `update`, `patch`, `watch`, `delete` e `deletecollection` que são utilizados para solicitações de recursos. Para determinar o verbo de requisição para um endpoint de recurso de API , consulte [Determine o verbo da requisição](/pt-br/docs/reference/access-authn-authz/authorization/#determine-the-request-verb). + * **Verbo de requisição HTTP** - Métodos HTTP em letras minúsculas como `get`, `post`, `put` e `delete` que são utilizados para requisições que não são de recursos. + * **Recurso** - O identificador ou nome do recurso que está sendo acessado (somente para requisições de recursos) - para requisições de recursos usando os verbos `get`, `update`, `patch` e `delete`, deve-se fornecer o nome do recurso. + * **Subrecurso** - O sub-recurso que está sendo acessado (somente para solicitações de recursos). + * **Namespace** - O namespace do objeto que está sendo acessado (somente para solicitações de recursos com namespace). + * **Grupo de API** - O {{< glossary_tooltip text="API Group" term_id="api-group" >}} sendo acessado (somente para requisições de recursos). Uma string vazia designa o [Grupo de API](/docs/reference/using-api/#api-groups) _core_. + +## Determine o verbo da requisição {#determine-the-request-verb} + +**Requisições de não-recursos** +Requisições sem recursos de `/api/v1/...` ou `/apis///...` +são considerados "requisições sem recursos" e usam o método HTTP em letras minúsculas da solicitação como o verbo. +Por exemplo, uma solicitação `GET` para endpoints como `/api` ou `/healthz` usaria `get` como o verbo. + +**Requisições de recursos** +Para determinar o verbo de requisição para um endpoint de API de recurso, revise o verbo HTTP +utilizado e se a requisição atua ou não em um recurso individual ou em uma +coleção de recursos: + +Verbo HTTP | Verbo de Requisição +---------- |--------------- +POST | create +GET, HEAD | get (para recursos individuais), list (para coleções, includindo o conteúdo do objeto inteiro), watch (para observar um recurso individual ou coleção de recursos) +PUT | update +PATCH | patch +DELETE | delete (para recursos individuais), deletecollection (para coleções) + +{{< caution >}} +Os verbos `get`, `list` e `watch` podem retornar todos os detalhes de um recurso. Eles são equivalentes em relação aos dados retornados. Por exemplo, `list` em `secrets` revelará os atributos de `data` de qualquer recurso retornado. +{{< /caution >}} + +Às vezes, o Kubernetes verifica a autorização para permissões adicionais utilizando verbos especializados. Por exemplo: + +* [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) + * Verbo `use` em recursos `podsecuritypolicies` no grupo `policy` de API. +* [RBAC](/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping) + * Verbos `bind` e `escalate` em `roles` e recursos `clusterroles` no grupo `rbac.authorization.k8s.io` de API. +* [Authentication](/pt-br/docs/reference/access-authn-authz/authentication/) + * Verbo `impersonate` em `users`, `groups`, e `serviceaccounts` no grupo de API `core`, e o `userextras` no grupo `authentication.k8s.io` de API. + +## Modos de Autorização {#authorization-modules} + +O servidor da API Kubernetes pode autorizar uma solicitação usando um dos vários modos de autorização: + + * **Node** - Um modo de autorização de finalidade especial que concede permissões a ```kubelets``` com base nos ```Pods``` que estão programados para execução. 
Para saber mais sobre como utilizar o modo de autorização do nó, consulte [Node Authorization](/docs/reference/access-authn-authz/node/). + * **ABAC** - Attribute-based access control (ABAC), ou Controle de acesso baseado em atributos, define um paradigma de controle de acesso pelo qual os direitos de acesso são concedidos aos usuários por meio do uso de políticas que combinam atributos. As políticas podem usar qualquer tipo de atributo (atributos de usuário, atributos de recurso, objeto, atributos de ambiente, etc.). Para saber mais sobre como usar o modo ABAC, consulte [ABAC Mode](/docs/reference/access-authn-authz/abac/). + * **RBAC** - Role-based access control (RBAC), ou controle de acesso baseado em função, é um método de regular o acesso a recursos computacionais ou de rede com base nas funções de usuários individuais dentro de uma empresa. Nesse contexto, acesso é a capacidade de um usuário individual realizar uma tarefa específica, como visualizar, criar ou modificar um arquivo. Para saber mais sobre como usar o modo RBAC, consulte [RBAC Mode](/docs/reference/access-authn-authz/rbac/) + * Quando especificado RBAC (Role-Based Access Control) usa o grupo de API `rbac.authorization.k8s.io` para orientar as decisões de autorização, permitindo que os administradores configurem dinamicamente as políticas de permissão por meio da API do Kubernetes. + * Para habilitar o modo RBAC, inicie o servidor de API (apiserver) com a opção `--authorization-mode=RBAC`. + * **Webhook** - Um WebHook é um retorno de chamada HTTP: um HTTP POST que ocorre quando algo acontece; uma simples notificação de evento via HTTP POST. Um aplicativo da Web que implementa WebHooks postará uma mensagem em um URL quando um determinado evento ocorrer. Para saber mais sobre como usar o modo Webhook, consulte [Webhook Mode](/docs/reference/access-authn-authz/webhook/). + +#### Verificando acesso a API + +`kubectl` fornece o subcomando `auth can-i` para consultar rapidamente a camada de autorização da API. +O comando usa a API `SelfSubjectAccessReview` para determinar se o usuário atual pode executar +uma determinada ação e funciona independentemente do modo de autorização utilizado. + + +```bash +# "can-i create" = "posso criar" +kubectl auth can-i create deployments --namespace dev +``` + +A saída é semelhante a esta: + +``` +yes +``` + +```shell +# "can-i create" = "posso criar" +kubectl auth can-i create deployments --namespace prod +``` + +A saída é semelhante a esta: + +``` +no +``` + +Os administradores podem combinar isso com [personificação de usuário](/pt-br/docs/reference/access-authn-authz/authentication/#personificação-de-usuário) +para determinar qual ação outros usuários podem executar. + +```bash +# "can-i list" = "posso listar" + +kubectl auth can-i list secrets --namespace dev --as dave +``` + +A saída é semelhante a esta: + +``` +no +``` + +Da mesma forma, para verificar se uma ServiceAccount chamada `dev-sa` no Namespace `dev` +pode listar ```Pods``` no namespace `target`: + +```bash +# "can-i list" = "posso listar" +kubectl auth can-i list pods \ + --namespace target \ + --as system:serviceaccount:dev:dev-sa +``` + +A saída é semelhante a esta: + +``` +yes +``` + +`SelfSubjectAccessReview` faz parte do grupo de API `authorization.k8s.io`, que +expõe a autorização do servidor de API para serviços externos. Outros recursos +neste grupo inclui: + +* `SubjectAccessReview` - Revisão de acesso para qualquer usuário, não apenas o atual. Útil para delegar decisões de autorização para o servidor de API. 
Por exemplo, o ```kubelet``` e extensões de servidores de API utilizam disso para determinar o acesso do usuário às suas próprias APIs. + +* `LocalSubjectAccessReview` - Similar a `SubjectAccessReview`, mas restrito a um namespace específico. + +* `SelfSubjectRulesReview` - Uma revisão que retorna o conjunto de ações que um usuário pode executar em um namespace. Útil para usuários resumirem rapidamente seu próprio acesso ou para interfaces de usuário mostrarem ações. + +Essas APIs podem ser consultadas criando recursos normais do Kubernetes, onde a resposta no campo `status` +do objeto retornado é o resultado da consulta. + +```bash +kubectl create -f - -o yaml << EOF +apiVersion: authorization.k8s.io/v1 +kind: SelfSubjectAccessReview +spec: + resourceAttributes: + group: apps + resource: deployments + verb: create + namespace: dev +EOF +``` + +A `SelfSubjectAccessReview` gerada seria: +```yaml +apiVersion: authorization.k8s.io/v1 +kind: SelfSubjectAccessReview +metadata: + creationTimestamp: null +spec: + resourceAttributes: + group: apps + resource: deployments + namespace: dev + verb: create +status: + allowed: true + denied: false +``` + +## Usando flags para seu módulo de autorização + +Você deve incluir uma flag em sua política para indicar qual módulo de autorização +suas políticas incluem: + +As seguintes flags podem ser utilizadas: + + * `--authorization-mode=ABAC` O modo de controle de acesso baseado em atributos (ABAC) permite configurar políticas usando arquivos locais. + * `--authorization-mode=RBAC` O modo de controle de acesso baseado em função (RBAC) permite que você crie e armazene políticas usando a API do Kubernetes. + * `--authorization-mode=Webhook` WebHook é um modo de retorno de chamada HTTP que permite gerenciar a autorização usando endpoint REST. + * `--authorization-mode=Node` A autorização de nó é um modo de autorização de propósito especial que autoriza especificamente requisições de API feitas por ```kubelets```. + * `--authorization-mode=AlwaysDeny` Esta flag bloqueia todas as requisições. Utilize esta flag somente para testes. + * `--authorization-mode=AlwaysAllow` Esta flag permite todas as requisições. Utilize esta flag somente se não existam requisitos de autorização para as requisições de API. + +Você pode escolher mais de um modulo de autorização. Módulos são verificados +em ordem, então, um modulo anterior tem maior prioridade para permitir ou negar uma requisição. + +## Escalonamento de privilégios através da criação ou edição da cargas de trabalho {#privilege-escalation-via-pod-creation} + +Usuários que podem criar ou editar ```pods``` em um namespace diretamente ou através de um [controlador](/pt-br/docs/concepts/architecture/controller/) +como, por exemplo, um operador, conseguiriam escalar seus próprios privilégios naquele namespace. + +{{< caution >}} +Administradores de sistemas, tenham cuidado ao permitir acesso para criar ou editar cargas de trabalho. +Detalhes de como estas permissões podem ser usadas de forma maliciosa podem ser encontradas em [caminhos para escalonamento](#escalation-paths). 
+ +{{< /caution >}} + +### Caminhos para escalonamento {#escalation-paths} + +- Montagem de Secret arbitrários nesse namespace + - Pode ser utilizado para acessar Secret destinados a outras cargas de trabalho + - Pode ser utilizado para obter um token da conta de serviço com maior privilégio +- Uso de contas de serviço arbitrárias nesse namespace + - Pode executar ações da API do Kubernetes como outra carga de trabalho (personificação) + - Pode executar quaisquer ações privilegiadas que a conta de serviço tenha acesso +- Montagem de configmaps destinados a outras cargas de trabalho nesse namespace + - Pode ser utilizado para obter informações destinadas a outras cargas de trabalho, como nomes de host de banco de dados. +- Montagem de volumes destinados a outras cargas de trabalho nesse namespace + - Pode ser utilizado para obter informações destinadas a outras cargas de trabalho e alterá-las. + +{{< caution >}} +Administradores de sistemas devem ser cuidadosos ao instalar CRDs que +promovam mudanças nas áreas mencionadas acima. Estes podem abrir caminhos para escalonamento. +Isto deve ser considerado ao decidir os controles de acesso baseado em função (RBAC). +{{< /caution >}} + +## {{% heading "whatsnext" %}} + +* Para aprender mais sobre autenticação, visite **Authentication** in [Controlando acesso a APIs do Kubernetes](/pt-br/docs/concepts/security/controlling-access/). +* Para aprender mais sobre Admission Control, visite [Utilizando Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/). \ No newline at end of file diff --git a/content/pt-br/docs/reference/glossary/aggregation-layer.md b/content/pt-br/docs/reference/glossary/aggregation-layer.md new file mode 100644 index 0000000000..d627ea1216 --- /dev/null +++ b/content/pt-br/docs/reference/glossary/aggregation-layer.md @@ -0,0 +1,19 @@ +--- +title: Camada de Agregação +id: aggregation-layer +date: 2018-10-08 +full_link: /pt-br/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/ +short_description: > + A camada de agregação permite que você instale APIs adicionais no estilo Kubernetes em seu cluster. + +aka: +tags: +- architecture +- extension +- operation +--- + A camada de agregação permite que você instale APIs adicionais no estilo Kubernetes em seu cluster. + + + +Depois de configurar o {{< glossary_tooltip text="Servidor da API do Kubernetes" term_id="kube-apiserver" >}} para [suportar APIs adicionais](/docs/tasks/extend-kubernetes/configure-aggregation-layer/), você pode adicionar objetos `APIService` para obter a URL da API adicional. diff --git a/content/pt-br/docs/reference/glossary/cluster-operations.md b/content/pt-br/docs/reference/glossary/cluster-operations.md new file mode 100644 index 0000000000..7dca9faf1f --- /dev/null +++ b/content/pt-br/docs/reference/glossary/cluster-operations.md @@ -0,0 +1,18 @@ +--- +title: Operações do Cluster +id: cluster-operations +date: 2019-05-12 +full_link: +short_description: > + O trabalho envolvido no gerenciamento de um cluster Kubernetes. + + +aka: +tags: +- operation +--- + O trabalho envolvido no gerenciamento de um cluster Kubernetes: gerenciamento das operações diárias e coordenação das atualizações. + + + + Exemplos das tarefas de operações do cluster incluem: implantação de novos nós para dimensionar o cluster; realização de atualizações de software; implementação de controles de segurança; adição ou remoção de armazenamento; configuração da rede do cluster; gerenciamento de observabilidade em todo o cluster; e resposta a eventos. 
diff --git a/content/pt-br/docs/reference/glossary/developer.md b/content/pt-br/docs/reference/glossary/developer.md new file mode 100644 index 0000000000..a1b3e7210d --- /dev/null +++ b/content/pt-br/docs/reference/glossary/developer.md @@ -0,0 +1,18 @@ +--- +title: Desenvolvedor (desambiguação) +id: developer +date: 2018-04-12 +full_link: +short_description: > + Pode se referir a: Desenvolvedor de Aplicativos, Colaborador de Código ou Desenvolvedor de Plataforma. + +aka: +tags: +- community +- user-type +--- + Pode se referir a: {{< glossary_tooltip text="Desenvolvedor de Aplicativos" term_id="application-developer" >}}, {{< glossary_tooltip text="Colaborador de Código" term_id="code-contributor" >}}, ou {{< glossary_tooltip text="Desenvolvedor de Plataforma" term_id="platform-developer" >}}. + + + +Esse termo pode ter significados diferentes, dependendo do contexto. diff --git a/content/pt-br/docs/reference/glossary/statefulset.md b/content/pt-br/docs/reference/glossary/statefulset.md new file mode 100644 index 0000000000..f032eb9a77 --- /dev/null +++ b/content/pt-br/docs/reference/glossary/statefulset.md @@ -0,0 +1,22 @@ +--- +title: StatefulSet +id: statefulset +date: 2018-04-12 +full_link: /docs/concepts/workloads/controllers/statefulset/ +short_description: > + Gerencia deployment e escalonamento de um conjunto de Pods, com armazenamento durável e identificadores persistentes para cada Pod. + +aka: +tags: +- fundamental +- core-object +- workload +- storage +--- + Gerencia o deployment e escalonamento de um conjunto de {{< glossary_tooltip text="Pods" term_id="pod" >}}, *e fornece garantias sobre a ordem e unicidade* desses Pods. + + + +Como o {{< glossary_tooltip term_id="deployment" >}}, um StatefulSet gerencia Pods que são baseados em uma especificação de container idêntica. Diferente do Deployment, um StatefulSet mantém uma identidade fixa para cada um de seus Pods. Esses pods são criados da mesma especificação, mas não são intercambiáveis: cada um tem uma identificação persistente que se mantém em qualquer reagendamento. + +Se você quiser usar volumes de armazenamento para fornecer persistência para sua carga de trabalho, você pode usar um StatefulSet como parte da sua solução. Embora os Pods individuais em um StatefulSet sejam suscetíveis a falhas, os identificadores de pods persistentes facilitam a correspondência de volumes existentes com os novos pods que substituem qualquer um que tenha falhado. diff --git a/content/pt-br/docs/reference/glossary/workload.md b/content/pt-br/docs/reference/glossary/workload.md new file mode 100644 index 0000000000..c21e1ca1bb --- /dev/null +++ b/content/pt-br/docs/reference/glossary/workload.md @@ -0,0 +1,22 @@ +--- +title: Carga de Trabalho +id: workloads +date: 2019-02-13 +full_link: /docs/concepts/workloads/ +short_description: > + Uma carga de trabalho é uma aplicação sendo executada no Kubernetes. + +aka: +tags: +- fundamental +--- + Uma carga de trabalho é uma aplicação sendo executada no Kubernetes. + + + +Vários objetos principais que representam diferentes tipos ou partes de uma carga de trabalho +incluem os objetos DaemonSet, Deployment, Job, ReplicaSet, e StatefulSet. + +Por exemplo, uma carga de trabalho que tem um servidor web e um banco de dados pode rodar o +banco de dados em um {{< glossary_tooltip term_id="StatefulSet" >}} e o servidor web +em um {{< glossary_tooltip term_id="Deployment" >}}. 
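As a hedged illustration of the split described in the workload glossary entry above (the names, image, and file name are placeholders, not part of this change): the stateless web tier can be created as a Deployment, while the database tier would normally be declared as a StatefulSet in a manifest.

```shell
# Stateless web tier as a Deployment
kubectl create deployment web --image=nginx

# Database tier: apply a StatefulSet manifest (file name is illustrative)
kubectl apply -f mysql-statefulset.yaml
```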
diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md new file mode 100644 index 0000000000..7c6a0f16b2 --- /dev/null +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md @@ -0,0 +1,267 @@ +Rode este comando para configurar a camada de gerenciamento do Kubernetes + +### Sinopse + + +Rode este comando para configurar a camada de gerenciamento do Kubernetes + +O comando "init" executa as fases abaixo: +``` +preflight Efetua as verificações pré-execução +certs Geração de certificados + /ca Gera a autoridade de certificação (CA) auto-assinada do Kubernetes para provisionamento de identidades para outros componentes do Kubernetes + /apiserver Gera o certificado para o servidor da API do Kubernetes + /apiserver-kubelet-client Gera o certificado para o servidor da API se conectar ao Kubelet + /front-proxy-ca Gera a autoridade de certificação (CA) auto-assinada para provisionamento de identidades para o front proxy + /front-proxy-client Gera o certificado para o cliente do front proxy + /etcd-ca Gera a autoridade de certificação (CA) auto-assinada para provisionamento de identidades para o etcd + /etcd-server Gera o certificado para servir o etcd + /etcd-peer Gera o certificado para comunicação entre nós do etcd + /etcd-healthcheck-client Gera o certificado para liveness probes fazerem a verificação de integridade do etcd + /apiserver-etcd-client Gera o certificado que o servidor da API utiliza para comunicar-se com o etcd + /sa Gera uma chave privada para assinatura de tokens de conta de serviço, juntamente com sua chave pública +kubeconfig Gera todos os arquivos kubeconfig necessários para estabelecer a camada de gerenciamento e o arquivo kubeconfig de administração + /admin Gera um arquivo kubeconfig para o administrador e o próprio kubeadm utilizarem + /kubelet Gera um arquivo kubeconfig para o kubelet utilizar *somente* para fins de inicialização do cluster + /controller-manager Gera um arquivo kubeconfig para o gerenciador de controladores utilizar + /scheduler Gera um arquivo kubeconfig para o escalonador do Kubernetes utilizar +kubelet-start Escreve as configurações do kubelet e (re)inicializa o kubelet +control-plane Gera todos os manifestos de Pods estáticos necessários para estabelecer a camada de gerenciamento + /apiserver Gera o manifesto do Pod estático do kube-apiserver + /controller-manager Gera o manifesto do Pod estático do kube-controller-manager + /scheduler Gera o manifesto do Pod estático do kube-scheduler +etcd Gera o manifesto do Pod estático para um etcd local + /local Gera o manifesto do Pod estático para uma instância local e de nó único do etcd +upload-config Sobe a configuração do kubeadm e do kubelet para um ConfigMap + /kubeadm Sobe a configuração ClusterConfiguration do kubeadm para um ConfigMap + /kubelet Sobe a configuração do kubelet para um ConfigMap +upload-certs Sobe os certificados para o kubeadm-certs +mark-control-plane Marca um nó como parte da camada de gerenciamento +bootstrap-token Gera tokens de autoinicialização utilizados para associar um nó a um cluster +kubelet-finalize Atualiza configurações relevantes ao kubelet após a inicialização TLS + /experimental-cert-rotation Habilita rotação de certificados do cliente do kubelet +addon Instala os addons requeridos para passar nos testes de conformidade + /coredns Instala o addon CoreDNS em um cluster Kubernetes + /kube-proxy Instala o addon kube-proxy em um cluster 
Kubernetes +``` + + +``` +kubeadm init [flags] +``` + +### Opções + + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    --apiserver-advertise-address string

    O endereço IP que o servidor da API irá divulgar que está escutando. Quando não informado, a interface de rede padrão é utilizada.

    --apiserver-bind-port int32     Padrão: 6443

    Porta à qual o servidor da API irá se vincular.

    --apiserver-cert-extra-sans strings

    Nomes alternativos (Subject Alternative Names, ou SANs) opcionais a serem adicionados ao certificado utilizado pelo servidor da API. Pode conter endereços IP ou nomes DNS.

    --cert-dir string     Padrão: "/etc/kubernetes/pki"

    O caminho para salvar e armazenar certificados.

    --certificate-key string

    Chave utilizada para encriptar os certificados da camada de gerenciamento no Secret kubeadm-certs.

    --config string

    Caminho para um arquivo de configuração do kubeadm.

    --control-plane-endpoint string

    Especifica um endereço IP estável ou nome DNS para a camada de gerenciamento.

    --cri-socket string

    Caminho para o soquete CRI se conectar. Se vazio, o kubeadm tentará autodetectar este valor; utilize esta opção somente se você possui mais que um CRI instalado ou se você possui um soquete CRI fora do padrão.

    --dry-run

    Não aplica as modificações; apenas imprime as alterações que seriam efetuadas.

    --feature-gates string

    Um conjunto de pares chave=valor que descreve feature gates para várias funcionalidades. As opções são:
    PublicKeysECDSA=true|false (ALFA - padrão=false)
    RootlessControlPlane=true|false (ALFA - padrão=false)
    UnversionedKubeletConfigMap=true|false (BETA - padrão=true)

    -h, --help

    ajuda para init

    --ignore-preflight-errors strings

    Uma lista de verificações para as quais erros serão exibidos como avisos. Exemplos: 'IsPrivilegedUser,Swap'. O valor 'all' ignora erros de todas as verificações.

    --image-repository string     Padrão: "k8s.gcr.io"

    Seleciona um registro de contêineres de onde baixar imagens.

    --kubernetes-version string     Padrão: "stable-1"

    Seleciona uma versão do Kubernetes específica para a camada de gerenciamento.

    --node-name string

    Especifica o nome do nó.

    --patches string

    Caminho para um diretório contendo arquivos nomeados no padrão "target[suffix][+patchtype].extension". Por exemplo, "kube-apiserver0+merge.yaml" ou somente "etcd.json". "target" pode ser um dos seguintes valores: "kube-apiserver", "kube-controller-manager", "kube-scheduler", "etcd". "patchtype" pode ser "strategic", "merge" ou "json" e corresponde aos formatos de patch suportados pelo kubectl. O valor padrão para "patchtype" é "strategic". "extension" deve ser "json" ou "yaml". "suffix" é uma string opcional utilizada para determinar quais patches são aplicados primeiro em ordem alfanumérica.

    --pod-network-cidr string

    Especifica um intervalo de endereços IP para a rede do Pod. Quando especificado, a camada de gerenciamento irá automaticamente alocar CIDRs para cada nó.

    --service-cidr string     Padrão: "10.96.0.0/12"

    Utiliza um intervalo alternativo de endereços IP para VIPs de serviço.

    --service-dns-domain string     Padrão: "cluster.local"

    Utiliza um domínio alternativo para os serviços. Por exemplo, "myorg.internal".

    --skip-certificate-key-print

    Não exibe a chave utilizada para encriptar os certificados da camada de gerenciamento.

    --skip-phases strings

    Lista de fases a serem ignoradas.

    --skip-token-print

    Pula a impressão do token de autoinicialização padrão gerado pelo comando 'kubeadm init'.

    --token string

    O token a ser utilizado para estabelecer confiança bidirecional entre nós de carga de trabalho e nós da camada de gerenciamento. O formato segue a expressão regular [a-z0-9]{6}\.[a-z0-9]{16} - por exemplo, abcdef.0123456789abcdef.

    --token-ttl duration     Padrão: 24h0m0s

    A duração de tempo de um token antes deste ser automaticamente apagado (por exemplo, 1s, 2m, 3h). Quando informado '0', o token não expira.

    --upload-certs

    Sobe os certificados da camada de gerenciamento para o Secret kubeadm-certs.

### Opções herdadas de comandos superiores
    --rootfs string

    [EXPERIMENTAL] O caminho para o sistema de arquivos raiz 'real' do host.

    + + + diff --git a/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-init.md b/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-init.md new file mode 100644 index 0000000000..969425de72 --- /dev/null +++ b/content/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-init.md @@ -0,0 +1,431 @@ +--- +title: kubeadm init +content_type: concept +weight: 20 +--- + + + +Este comando inicializa um nó da camada de gerenciamento do Kubernetes. + + + +{{< include "generated/kubeadm_init.md" >}} + +### Fluxo do comando Init {#init-workflow} + +O comando `kubeadm init` inicializa um nó da camada de gerenciamento do Kubernetes +através da execução dos passos abaixo: + +1. Roda uma série de verificações pré-execução para validar o estado do sistema + antes de efetuar mudanças. Algumas verificações emitem apenas avisos, outras + são consideradas erros e cancelam a execução do kubeadm até que o problema + seja corrigido ou que o usuário especifique a opção + `--ignore-preflight-errors=`. + +1. Gera uma autoridade de certificação (CA) auto-assinada para criar identidades + para cada um dos componentes do cluster. O usuário pode informar seu próprio + certificado CA e/ou chave ao instalar estes arquivos no diretório de + certificados configurado através da opção `--cert-dir` (por padrão, este + diretório é `/etc/kubernetes/pki`). + Os certificados do servidor da API terão entradas adicionais para nomes + alternativos (_subject alternative names_, ou SANs) especificados através da + opção `--apiserver-cert-extra-sans`. Estes argumentos serão modificados para + caracteres minúsculos quando necessário. + +1. Escreve arquivos kubeconfig adicionais no diretório `/etc/kubernetes` para o + kubelet, para o gerenciador de controladores e para o escalonador utilizarem + ao conectarem-se ao servidor da API, cada um com sua própria identidade, bem + como um arquivo kubeconfig adicional para administração do cluster chamado + `admin.conf`. + +1. Gera manifestos de Pods estáticos para o servidor da API, para o gerenciador + de controladores e para o escalonador. No caso de uma instância externa do + etcd não ter sido providenciada, um manifesto de Pod estático adicional é + gerado para o etcd. + + Manifestos de Pods estáticos são escritos no diretório `/etc/kubernetes/manifests`; + o kubelet lê este diretório em busca de manifestos de Pods para criar na + inicialização. + + Uma vez que os Pods da camada de gerenciamento estejam criados e rodando, + a sequência de execução do comando `kubeadm init` pode continuar. + +1. Aplica _labels_ e _taints_ ao nó da camada de gerenciamento de modo que cargas + de trabalho adicionais não sejam escalonadas para executar neste nó. + +1. Gera o token que nós adicionais podem utilizar para associarem-se a uma + camada de gerenciamento no futuro. Opcionalmente, o usuário pode fornecer um + token através da opção `--token`, conforme descrito na documentação do + comando [kubeadm token](/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-token/). + +1. Prepara todas as configurações necessárias para permitir que nós se associem + ao cluster utilizando os mecanismos de + [Tokens de Inicialização](/pt-br/docs/reference/access-authn-authz/bootstrap-tokens/) + e [Inicialização TLS](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/): + + - Escreve um ConfigMap para disponibilizar toda a informação necessária para + associar-se a um cluster e para configurar regras de controle de acesso + baseada em funções (RBAC). 
+ + - Permite o acesso dos tokens de inicialização à API de assinaturas CSR. + + - Configura a auto-aprovação de novas requisições CSR. + + Para mais informações, consulte + [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/). + +1. Instala um servidor DNS (CoreDNS) e os componentes adicionais do kube-proxy + através do servidor da API. A partir da versão 1.11 do Kubernetes, CoreDNS é + o servidor DNS padrão. Mesmo que o servidor DNS seja instalado nessa etapa, + o seu Pod não será escalonado até que um CNI seja instalado. + + {{< warning >}} + O uso do kube-dns com o kubeadm foi descontinuado na versão v1.18 e removido + na versão v1.21 do Kubernetes. + {{< /warning >}} + +### Utilizando fases de inicialização com o kubeadm {#init-phases} + +O kubeadm permite que você crie um nó da camada de gerenciamento em fases +utilizando o comando `kubeadm init phase`. + +Para visualizar a lista ordenada de fases e subfases, você pode rodar o comando +`kubeadm init --help`. A lista estará localizada no topo da ajuda e cada fase +tem sua descrição listada juntamente com o comando. Perceba que ao rodar o +comando `kubeadm init` todas as fases e subfases são executadas nesta ordem +exata. + +Algumas fases possuem flags específicas. Caso você deseje ver uma lista de todas +as opções disponíveis, utilize a flag `--help`. Por exemplo: + +```shell +sudo kubeadm init phase control-plane controller-manager --help +``` + +Você também pode utilizar a flag `--help` para ver uma lista de subfases de uma +fase superior: + +```shell +sudo kubeadm init phase control-plane --help +``` + +`kubeadm init` também expõe uma flag chamada `--skip-phases` que pode ser +utilizada para pular a execução de certas fases. Esta flag aceita uma lista de +nomes de fases. Os nomes de fases aceitos estão descritos na lista ordenada +acima. + +Um exemplo: + +```shell +sudo kubeadm init phase control-plane all --config=configfile.yaml +sudo kubeadm init phase etcd local --config=configfile.yaml +# agora você pode modificar os manifestos da camada de gerenciamento e do etcd +sudo kubeadm init --skip-phases=control-plane,etcd --config=configfile.yaml +``` + +O que este exemplo faz é escrever os manifestos da camada de gerenciamento e do +etcd no diretório `/etc/kubernetes/manifests`, baseados na configuração descrita +no arquivo `configfile.yaml`. Isto permite que você modifique os arquivos e +então pule estas fases utilizando a opção `--skip-phases`. Ao chamar o último +comando, você cria um nó da camada de gerenciamento com os manifestos +personalizados. + +{{< feature-state for_k8s_version="v1.22" state="beta" >}} + +Como alternativa, você pode também utilizar o campo `skipPhases` na configuração +`InitConfiguration`. + +### Utilizando kubeadm init com um arquivo de configuração {#config-file} + +{{< caution >}} +O arquivo de configuração ainda é considerado uma funcionalidade de estado beta +e pode mudar em versões futuras. +{{< /caution >}} + +É possível configurar o comando `kubeadm init` com um arquivo de configuração ao +invés de argumentos de linha de comando, e algumas funcionalidades mais avançadas +podem estar disponíveis apenas como opções do arquivo de configuração. Este +arquivo é fornecido utilizando a opção `--config` e deve conter uma estrutura +`ClusterConfiguration` e, opcionalmente, mais estruturas separadas por `---\n`. +Combinar a opção `--config` com outras opções de linha de comando pode não ser +permitido em alguns casos. 
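
A título de ilustração, um esboço mínimo de arquivo de configuração poderia se parecer com o exemplo abaixo (os nomes e valores utilizados são apenas suposições ilustrativas, não recomendações):

```shell
# Esboço hipotético: grava um arquivo de configuração contendo uma InitConfiguration
# e uma ClusterConfiguration separadas por "---" e o utiliza com a opção --config.
cat <<EOF > configfile.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  name: meu-no-de-exemplo           # suposição: nome de nó apenas ilustrativo
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: stable-1         # mesmo valor padrão da opção --kubernetes-version
networking:
  podSubnet: 10.244.0.0/16          # equivalente à opção --pod-network-cidr
EOF

sudo kubeadm init --config=configfile.yaml
```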
+ +A configuração padrão pode ser emitida utilizando o comando +[kubeadm config print](/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-config/). + +Se a sua configuração não estiver utilizando a última versão, é **recomendado** +que você migre utilizando o comando +[kubeadm config migrate](/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-config/). + +Para mais informações sobre os campos e utilização da configuração, você pode +consultar a +[página de referência da API](/docs/reference/config-api/kubeadm-config.v1beta3/). + +### Utilizando kubeadm init com _feature gates_ {#feature-gates} + +O kubeadm suporta um conjunto de _feature gates_ que são exclusivos do kubeadm e +podem ser utilizados somente durante a criação de um cluster com `kubeadm init`. +Estas funcionalidades podem controlar o comportamento do cluster. Os +_feature gates_ são removidos assim que uma funcionalidade atinge a disponibilidade +geral (_general availability_, ou GA). + +Para informar um _feature gate_, você pode utilizar a opção `--feature-gates` +do comando `kubeadm init`, ou pode adicioná-las no campo `featureGates` quando +um [arquivo de configuração](/docs/reference/config-api/kubeadm-config.v1beta3/#kubeadm-k8s-io-v1beta3-ClusterConfiguration) +é utilizado através da opção `--config`. + +A utilização de +[_feature gates_ dos componentes principais do Kubernetes](/docs/reference/command-line-tools-reference/feature-gates) +com o kubeadm não é suportada. Ao invés disso, é possível enviá-los através da +[personalização de componentes com a API do kubeadm](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/). + +Lista dos _feature gates_: + +{{< table caption="_feature gates_ do kubeadm" >}} +_Feature gate_ | Valor-padrão | Versão Alfa | Versão Beta +:-----------------------------|:-------------|:------------|:----------- +`PublicKeysECDSA` | `false` | 1.19 | - +`RootlessControlPlane` | `false` | 1.22 | - +`UnversionedKubeletConfigMap` | `true` | 1.22 | 1.23 +{{< /table >}} + +{{< note >}} +Assim que um _feature gate_ atinge a disponibilidade geral, ele é removido desta +lista e o seu valor fica bloqueado em `true` por padrão. Ou seja, a funcionalidade +estará sempre ativa. +{{< /note >}} + +Descrição dos _feature gates_: + +`PublicKeysECDSA` +: Pode ser utilizado para criar um cluster que utilize certificados ECDSA no +lugar do algoritmo RSA padrão. A renovação dos certificados ECDSA existentes +também é suportada utilizando o comando `kubeadm certs renew`, mas você não pode +alternar entre os algoritmos RSA e ECDSA dinamicamente ou durante atualizações. + +`RootlessControlPlane` +: Quando habilitada esta opção, os componentes da camada de gerenciamento cuja +instalação de Pods estáticos é controlada pelo kubeadm, como o `kube-apiserver`, +`kube-controller-manager`, `kube-scheduler` e `etcd`, têm seus contêineres +configurados para rodarem como usuários não-root. Se a opção não for habilitada, +estes componentes são executados como root. Você pode alterar o valor deste +_feature gate_ antes de atualizar seu cluster para uma versão mais recente do +Kubernetes. + +`UnversionedKubeletConfigMap` +: Esta opção controla o nome do {{< glossary_tooltip text="ConfigMap" term_id="configmap" >}} +onde o kubeadm armazena os dados de configuração do kubelet. Quando esta opção +não for especificada ou estiver especificada com o valor `true`, o ConfigMap +será nomeado `kubelet-config`. 
Caso esteja especificada com o valor `false`, o +nome do ConfigMap incluirá as versões maior e menor do Kubernetes instalado +(por exemplo, `kubelet-config-{{< skew currentVersion >}}`). O kubeadm garante +que as regras de RBAC para leitura e escrita deste ConfigMap serão apropriadas +para o valor escolhido. Quando o kubeadm cria este ConfigMap (durante a execução +dos comandos `kubeadm init` ou `kubeadm upgrade apply`), o kubeadm irá respeitar +o valor da opção `UnversionedKubeletConfigMap`. Quando tal ConfigMap for lido +(durante a execução dos comandos `kubeadm join`, `kubeadm reset`, +`kubeadm upgrade...`), o kubeadm tentará utilizar o nome do ConfigMap sem a +versão primeiro. Se esta operação não for bem-sucedida, então o kubeadm irá +utilizar o nome legado (versionado) para este ConfigMap. + +{{< note >}} +Informar a opção `UnversionedKubeletConfigMap` com o valor `false` é suportado, +mas está **descontinuado**. +{{< /note >}} + +### Adicionando parâmetros do kube-proxy {#kube-proxy} + +Para informações sobre como utilizar parâmetros do kube-proxy na configuração +do kubeadm, veja: +- [referência do kube-proxy](/docs/reference/config-api/kube-proxy-config.v1alpha1/) + +Para informações sobre como habilitar o modo IPVS com o kubeadm, veja: +- [IPVS](https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/ipvs/README.md) + +### Informando opções personalizadas em componentes da camada de gerenciamento {#control-plane-flags} + +Para informações sobre como passar as opções aos componentes da camada de +gerenciamento, veja: +- [opções da camada de gerenciamento](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/) + +### Executando o kubeadm sem uma conexão à internet {#without-internet-connection} + +Para executar o kubeadm sem uma conexão à internet, você precisa baixar as imagens +de contêiner requeridas pela camada de gerenciamento. + +Você pode listar e baixar as imagens utilizando o subcomando +`kubeadm config images`: + +```shell +kubeadm config images list +kubeadm config images pull +``` + +Você pode passar a opção `--config` para os comandos acima através de um +[arquivo de configuração do kubeadm](#config-file) para controlar os campos +`kubernetesVersion` e `imageRepository`. + +Todas as imagens padrão hospedadas em `k8s.gcr.io` que o kubeadm requer suportam +múltiplas arquiteturas. + +### Utilizando imagens personalizadas {#custom-images} + +Por padrão, o kubeadm baixa imagens hospedadas no repositório de contêineres +`k8s.gcr.io`. Se a versão requisitada do Kubernetes é um rótulo de integração +contínua (por exemplo, `ci/latest`), o repositório de contêineres +`gcr.io/k8s-staging-ci-images` é utilizado. + +Você pode sobrescrever este comportamento utilizando o +[kubeadm com um arquivo de configuração](#config-file). Personalizações permitidas +são: + +* Fornecer um valor para o campo `kubernetesVersion` que afeta a versão das + imagens. +* Fornecer um repositório de contêineres alternativo através do campo + `imageRepository` para ser utilizado no lugar de `k8s.gcr.io`. +* Fornecer um valor específico para os campos `imageRepository` e `imageTag`, + correspondendo ao repositório de contêineres e tag a ser utilizada, para as imagens + dos componentes etcd ou CoreDNS. + +Caminhos de imagens do repositório de contêineres padrão `k8s.gcr.io` podem diferir +dos utilizados em repositórios de contêineres personalizados através do campo +`imageRepository` devido a razões de retrocompatibilidade. 
Por exemplo, uma +imagem pode ter um subcaminho em `k8s.gcr.io/subcaminho/imagem`, mas quando +utilizado um repositório de contêineres personalizado, o valor padrão será +`meu.repositoriopersonalizado.io/imagem`. + +Para garantir que você terá as imagens no seu repositório personalizado em +caminhos que o kubeadm consiga consumir, você deve: + +* Baixar as imagens dos caminhos padrão `k8s.gcr.io` utilizando o comando + `kubeadm config images {list|pull}`. +* Subir as imagens para os caminhos listados no resultado do comando + `kubeadm config images list --config=config.yaml`, onde `config.yaml` contém + o valor customizado do campo `imageRepository`, e/ou `imageTag` para os + componentes etcd e CoreDNS. +* Utilizar o mesmo arquivo `config.yaml` quando executar o comando `kubeadm init`. + +#### Imagens personalizadas para o _sandbox_ (imagem `pause`) {#custom-pause-image} + +Para configurar uma imagem personalizada para o _sandbox_, você precisará +configurar o {{< glossary_tooltip text="agente de execução de contêineres" term_id="container-runtime" >}} +para utilizar a imagem. +Verifique a documentação para o seu agente de execução de contêineres para +mais informações sobre como modificar esta configuração; para alguns agentes de +execução de contêiner você também encontrará informações no tópico +[Agentes de Execução de Contêineres](/docs/setup/production-environment/container-runtimes/). + +### Carregando certificados da camada de gerenciamento no cluster + +Ao adicionar a opção `--upload-certs` ao comando `kubeadm init` você pode +subir temporariamente certificados da camada de gerenciamento em um Secret no +cluster. Este Secret expira automaticamente após 2 horas. Os certificados são +encriptados utilizando uma chave de 32 bytes que pode ser especificada através +da opção `--certificate-key`. A mesma chave pode ser utilizada para baixar +certificados quando nós adicionais da camada de gerenciamento estão se associando +ao cluster, utilizando as opções `--control-plane` e `--certificate-key` ao rodar +`kubeadm join`. + +O seguinte comando de fase pode ser usado para subir os certificados novamente +após a sua expiração: + +```shell +kubeadm init phase upload-certs --upload-certs --certificate-key=ALGUM_VALOR --config=ALGUM_ARQUIVO_YAML +``` + +Se a opção `--certificate-key` não for passada aos comandos `kubeadm init` +e `kubeadm init phase upload-certs`, uma nova chave será gerada automaticamente. + +O comando abaixo pode ser utilizado para gerar uma nova chave sob demanda: + +```shell +kubeadm certs certificate-key +``` + +### Gerenciamento de certificados com o kubeadm + +Para informações detalhadas sobre gerenciamento de certificados com o kubeadm, +consulte [Gerenciamento de Certificados com o kubeadm](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/). +O documento inclui informações sobre a utilização de autoridades de certificação +(CA) externas, certificados personalizados e renovação de certificados. + +### Gerenciando o arquivo _drop-in_ do kubeadm para o kubelet {#kubelet-drop-in} + +O pacote `kubeadm` é distribuído com um arquivo de configuração para rodar o +`kubelet` utilizando `systemd`. Note que o kubeadm nunca altera este arquivo. +Este arquivo _drop-in_ é parte do pacote DEB/RPM do kubeadm. + +Para mais informações, consulte +[Gerenciando o arquivo drop-in do kubeadm para o systemd](/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#the-kubelet-drop-in-file-for-systemd). 
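
Como ilustração, um nó pode ser inspecionado com os comandos abaixo para visualizar a unidade do kubelet e os arquivos drop-in carregados (o caminho indicado é uma suposição e pode variar conforme a distribuição e a versão do pacote):

```shell
# Mostra a unidade do kubelet junto com todos os arquivos drop-in carregados pelo systemd
systemctl cat kubelet

# Em muitas instalações, o drop-in distribuído pelo pacote do kubeadm fica em um caminho
# semelhante a este (caminho hipotético; confirme na saída do comando acima)
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```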
+ +### Usando o kubeadm com agentes de execução CRI + +Por padrão, o kubeadm tenta detectar seu agente de execução de contêineres. Para +mais detalhes sobre esta detecção, consulte o +[guia de instalação CRI do kubeadm](/pt-br/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#instalando-agente-de-execucao). + +### Configurando o nome do nó + +Por padrão, o `kubeadm` gera um nome para o nó baseado no endereço da máquina. +Você pode sobrescrever esta configuração utilizando a opção `--node-name`. Esta +opção passa o valor apropriado para a opção [`--hostname-override`](/docs/reference/command-line-tools-reference/kubelet/#options) +do kubelet. + +Note que sobrescrever o hostname de um nó pode +[interferir com provedores de nuvem](https://github.com/kubernetes/website/pull/8873). + +### Automatizando o kubeadm + +Ao invés de copiar o token que você obteve do comando `kubeadm init` para cada nó, +como descrito no [tutorial básico do kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/), +você pode paralelizar a distribuição do token para facilitar a automação. +Para implementar esta automação, você precisa saber o endereço IP que o nó da +camada de gerenciamento irá ter após a sua inicialização, ou utilizar um nome +DNS ou um endereço de um balanceador de carga. + +1. Gere um token. Este token deve ter a forma `.`. + Mais especificamente, o token precisa ser compatível com a expressão regular: + `[a-z0-9]{6}\.[a-z0-9]{16}`. + + O kubeadm pode gerar um token para você: + + ```shell + kubeadm token generate + ``` + +1. Inicialize o nó da camada de gerenciamento e os nós de carga de trabalho de + forma concorrente com este token. Conforme os nós forem iniciando, eles + deverão encontrar uns aos outros e formar o cluster. O mesmo argumento + `--token` pode ser utilizado em ambos os comandos `kubeadm init` e + `kubeadm join`. + +1. O mesmo procedimento pode ser feito para a opção `--certificate-key` quando + nós adicionais da camada de gerenciamento associarem-se ao cluster. A chave + pode ser gerada utilizando: + + ```shell + kubeadm certs certificate-key + ``` + +Uma vez que o cluster esteja inicializado, você pode buscar as credenciais para +a camada de gerenciamento no caminho `/etc/kubernetes/admin.conf` e utilizá-las +para conectar-se ao cluster. + +Note que este tipo de inicialização tem algumas garantias de segurança relaxadas +pois ele não permite que o hash do CA raiz seja validado com a opção +`--discovery-token-ca-cert-hash` (pois este hash não é gerado quando os nós são +provisionados). Para detalhes, veja a documentação do comando +[kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/). 
+ +## {{% heading "whatsnext" %}} + +* [kubeadm init phase](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) + para entender mais sobre as fases do comando `kubeadm init` +* [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/) para + inicializar um nó de carga de trabalho do Kubernetes e associá-lo ao cluster +* [kubeadm upgrade](/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/) + para atualizar um cluster do Kubernetes para uma versão mais recente +* [kubeadm reset](/pt-br/docs/reference/setup-tools/kubeadm/kubeadm-reset/) + para reverter quaisquer mudanças feitas neste host pelos comandos + `kubeadm init` ou `kubeadm join` diff --git a/content/pt-br/docs/setup/production-environment/tools/kops.md b/content/pt-br/docs/setup/production-environment/tools/kops.md new file mode 100644 index 0000000000..051b142cf6 --- /dev/null +++ b/content/pt-br/docs/setup/production-environment/tools/kops.md @@ -0,0 +1,202 @@ +--- +title: Instalando Kubernetes com kOps +content_type: task +weight: 20 +--- + + + +Este início rápido mostra como instalar facilmente um cluster Kubernetes na AWS usando uma ferramenta chamada [`kOps`](https://github.com/kubernetes/kops). + +`kOps` é um sistema de provisionamento automatizado: + +* Instalação totalmente automatizada +* Usa DNS para identificar clusters +* Auto-recuperação: tudo é executado em grupos de Auto-Scaling +* Suporte de vários sistemas operacionais (Amazon Linux, Debian, Flatcar, RHEL, Rocky e Ubuntu) - veja em [imagens](https://github.com/kubernetes/kops/blob/master/docs/operations/images.md) +* Suporte a alta disponibilidade - consulte a [documentação sobre alta disponibilidade](https://github.com/kubernetes/kops/blob/master/docs/operations/high_availability.md) +* Pode provisionar diretamente ou gerar manifestos do terraform - veja a [documentação sobre como fazer isso com Terraform](https://github.com/kubernetes/kops/blob/master/docs/terraform.md) + +## {{% heading "prerequisites" %}} + +* Você deve ter o [kubectl](/docs/tasks/tools/) instalado. + +* Você deve [instalar](https://github.com/kubernetes/kops#installing) `kops` em uma arquitetura de dispositivo de 64 bits (AMD64 e Intel 64). + +* Você deve ter uma [conta da AWS](https://docs.aws.amazon.com/polly/latest/dg/setting-up.html), gerar as [chaves do IAM](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) e [configurá-las](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration). O usuário do IAM precisará de [permissões adequadas](https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md#setup-iam-user). + + + +## Como criar um cluster + +### (1/5) Instalar kops + +#### Instalação + +Faça o download do kops na [página de downloads](https://github.com/kubernetes/kops/releases) (também é conveniente gerar um binário a partir do código-fonte): + +{{< tabs name="kops_installation" >}} +{{% tab name="macOS" %}} + +Baixe a versão mais recente com o comando: + +```shell +curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-darwin-amd64 +``` + +Para baixar uma versão específica, substitua a seguinte parte do comando pela versão específica do kops. 
+ +```shell +$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4) +``` + +Por exemplo, para baixar kops versão v1.20.0 digite: + +```shell +curl -LO https://github.com/kubernetes/kops/releases/download/v1.20.0/kops-darwin-amd64 +``` + +Dê a permissão de execução ao binário do kops. + +```shell +chmod +x kops-darwin-amd64 +``` + +Mova o binário do kops para o seu PATH. + +```shell +sudo mv kops-darwin-amd64 /usr/local/bin/kops +``` + +Você também pode instalar kops usando [Homebrew](https://brew.sh/). + +```shell +brew update && brew install kops +``` +{{% /tab %}} +{{% tab name="Linux" %}} + +Baixe a versão mais recente com o comando: + +```shell +curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64 +``` + +Para baixar uma versão específica do kops, substitua a seguinte parte do comando pela versão específica do kops. + +```shell +$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4) +``` + +Por exemplo, para baixar kops versão v1.20.0 digite: + +```shell +curl -LO https://github.com/kubernetes/kops/releases/download/v1.20.0/kops-linux-amd64 +``` + +Dê a permissão de execução ao binário do kops + +```shell +chmod +x kops-linux-amd64 +``` + +Mova o binário do kops para o seu PATH. + +```shell +sudo mv kops-linux-amd64 /usr/local/bin/kops +``` + +Você também pode instalar kops usando [Homebrew](https://docs.brew.sh/Homebrew-on-Linux). + +```shell +brew update && brew install kops +``` + +{{% /tab %}} +{{< /tabs >}} + +### (2/5) Crie um domínio route53 para seu cluster + +O kops usa DNS para descoberta, tanto dentro do cluster quanto fora, para que você possa acessar o servidor da API do kubernetes a partir dos clientes. + +kops tem uma opinião forte sobre o nome do cluster: deve ser um nome DNS válido. Ao fazer isso, você não confundirá mais seus clusters, poderá compartilhar clusters com seus colegas de forma inequívoca e alcançá-los sem ter de lembrar de um endereço IP. + +Você pode e provavelmente deve usar subdomínios para dividir seus clusters. Como nosso exemplo usaremos +`useast1.dev.example.com`. O endpoint do servidor de API será então `api.useast1.dev.example.com`. + +Uma zona hospedada do Route53 pode servir subdomínios. Sua zona hospedada pode ser `useast1.dev.example.com`, +mas também `dev.example.com` ou até `example.com`. kops funciona com qualquer um deles, então normalmente você escolhe por motivos de organização (por exemplo, você tem permissão para criar registros em `dev.example.com`, +mas não em `example.com`). + +Vamos supor que você esteja usando `dev.example.com` como sua zona hospedada. Você cria essa zona hospedada usando o [processo normal](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingNewSubdomain.html), ou +com um comando como `aws route53 create-hosted-zone --name dev.example.com --caller-reference 1`. + +Você deve então configurar seus registros NS no domínio principal, para que os registros no domínio sejam resolvidos. Aqui, você criaria registros NS no `example.com` para `dev`. Se for um nome de domínio raiz, você configuraria os registros NS em seu registrador de domínio (por exemplo `example.com`, precisaria ser configurado onde você comprou `example.com`). + +Verifique a configuração do seu domínio route53 (é a causa número 1 de problemas!). 
Você pode verificar novamente se seu cluster está configurado corretamente se tiver a ferramenta dig executando: + +`dig NS dev.example.com` + +Você deve ver os 4 registros NS que o Route53 atribuiu à sua zona hospedada. + +### (3/5) Crie um bucket do S3 para armazenar o estado dos clusters + +O kops permite que você gerencie seus clusters mesmo após a instalação. Para fazer isso, ele deve acompanhar os clusters que você criou, juntamente com suas configurações, as chaves que estão usando etc. Essas informações são armazenadas em um bucket do S3. As permissões do S3 são usadas para controlar o acesso ao bucket. + +Vários clusters podem usar o mesmo bucket do S3 e você pode compartilhar um bucket do S3 entre seus colegas que administram os mesmos clusters - isso é muito mais fácil do que transmitir arquivos kubecfg. Mas qualquer pessoa com acesso ao bucket do S3 terá acesso administrativo a todos os seus clusters, portanto, você não deseja compartilhá-lo além da equipe de operações. + +Portanto, normalmente você tem um bucket do S3 para cada equipe de operações (e geralmente o nome corresponderá ao nome da zona hospedada acima!) + +Em nosso exemplo, escolhemos `dev.example.com` como nossa zona hospedada, então vamos escolher `clusters.dev.example.com` como o nome do bucket do S3. + +* Exporte `AWS_PROFILE` (se precisar selecione um perfil para que a AWS CLI funcione) + +* Crie o bucket do S3 usando `aws s3 mb s3://clusters.dev.example.com` + +* Você pode rodar `export KOPS_STATE_STORE=s3://clusters.dev.example.com` e, em seguida, o kops usará esse local por padrão. Sugerimos colocar isso em seu perfil bash ou similar. + +### (4/5) Crie sua configuração de cluster + +Execute `kops create cluster` para criar sua configuração de cluster: + +`kops create cluster --zones=us-east-1c useast1.dev.example.com` + +kops criará a configuração para seu cluster. Observe que ele _apenas_ cria a configuração, na verdade não cria os recursos de nuvem - você fará isso na próxima etapa com um arquivo `kops update cluster`. Isso lhe dá a oportunidade de revisar a configuração ou alterá-la. + +Ele exibe comandos que você pode usar para explorar mais: + +* Liste seus clusters com: `kops get cluster` +* Edite este cluster com: `kops edit cluster useast1.dev.example.com` +* Edite seu grupo de instâncias de nós: `kops edit ig --name=useast1.dev.example.com nodes` +* Edite seu grupo de instâncias principal: `kops edit ig --name=useast1.dev.example.com master-us-east-1c` + +Se esta é sua primeira vez usando kops, gaste alguns minutos para experimentá-los! Um grupo de instâncias é um conjunto de instâncias que serão registradas como nós do kubernetes. Na AWS, isso é implementado por meio de grupos de auto-scaling. +Você pode ter vários grupos de instâncias, por exemplo, se quiser nós que sejam uma combinação de instâncias spot e sob demanda ou instâncias de GPU e não GPU. + +### (5/5) Crie o cluster na AWS + +Execute `kops update cluster` para criar seu cluster na AWS: + +`kops update cluster useast1.dev.example.com --yes` + +Isso leva alguns segundos para ser executado, mas seu cluster provavelmente levará alguns minutos para estar realmente pronto. +`kops update cluster` será a ferramenta que você usará sempre que alterar a configuração do seu cluster; ele aplica as alterações que você fez na configuração ao seu cluster - reconfigurando AWS ou kubernetes conforme necessário. 
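
Em forma de comandos, o ciclo de edição e aplicação descrito acima (usando o cluster de exemplo `useast1.dev.example.com`) seria semelhante ao esboço abaixo:

```shell
# Edita a definição do grupo de instâncias de nós (abre a configuração em um editor)
kops edit ig --name=useast1.dev.example.com nodes

# Aplica as alterações de configuração, reconfigurando os recursos na AWS conforme necessário
kops update cluster useast1.dev.example.com --yes

# Quando necessário, substitui as instâncias em execução para aplicar a mudança imediatamente
kops rolling-update cluster useast1.dev.example.com --yes
```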
+ +Por exemplo, depois de você executar `kops edit ig nodes`, em seguida execute `kops update cluster --yes` para aplicar sua configuração e, às vezes, você também precisará `kops rolling-update cluster` para implementar a configuração imediatamente. + +Sem `--yes`, `kops update cluster` mostrará uma prévia do que ele fará. Isso é útil para clusters de produção! + +### Explore outros complementos + +Consulte a [lista de complementos](/pt-br/docs/concepts/cluster-administration/addons/) para explorar outros complementos, incluindo ferramentas para registro, monitoramento, política de rede, visualização e controle de seu cluster Kubernetes. + +## Limpeza + +* Para excluir seu cluster: `kops delete cluster useast1.dev.example.com --yes` + +## {{% heading "whatsnext" %}} + +* Saiba mais sobre os [conceitos do Kubernetes](/pt-br/docs/concepts/) e o [`kubectl`](/docs/reference/kubectl/). +* Saiba mais sobre o [uso avançado](https://kops.sigs.k8s.io/) do `kOps` para tutoriais, práticas recomendadas e opções de configuração avançada. +* Siga as discussões da comunidade do `kOps` no Slack: [discussões da comunidade](https://github.com/kubernetes/kops#other-ways-to-communicate-with-the-contributors). +* Contribua para o `kOps` endereçando ou levantando um problema [GitHub Issues](https://github.com/kubernetes/kops/issues). diff --git a/content/ru/_index.html b/content/ru/_index.html index bcac4a6c6e..8763c548ec 100644 --- a/content/ru/_index.html +++ b/content/ru/_index.html @@ -1,14 +1,16 @@ --- title: "Оркестрация контейнеров промышленного уровня" -abstract: "Автоматизированное развёртывание, масштабирование и управление контейнерами." +abstract: "Автоматизированное развёртывание, масштабирование и управление контейнерами" cid: home +sitemap: + priority: 1.0 --- {{< blocks/section id="oceanNodes" >}} {{% blocks/feature image="flower" %}} -### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) - это открытое программное обеспечение для автоматизации развёртывания, масштабирования и управления контейнеризированными приложениями. +### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) — это открытое программное обеспечение для автоматизации развёртывания, масштабирования и управления контейнеризированными приложениями. -Kubernetes группирует контейнеры, составляющие приложение, в логические единицы для более простого управления и обнаружения. При создании Kubernetes использован [15-летний опыт эксплуатации производственных нагрузок Google](http://queue.acm.org/detail.cfm?id=2898444), совмещённый с лучшими идеями и практиками сообщества. +Kubernetes группирует контейнеры, составляющие приложение, в логические единицы для более простого управления и обнаружения. При создании Kubernetes использован [15-летний опыт эксплуатации производственных нагрузок Google](http://queue.acm.org/detail.cfm?id=2898444), который был совмещён с лучшими идеями и практиками сообщества. {{% /blocks/feature %}} {{% blocks/feature image="scalable" %}} @@ -21,7 +23,7 @@ Kubernetes группирует контейнеры, составляющие {{% blocks/feature image="blocks" %}} #### Бесконечная гибкость -Будь то локальное тестирование или работа в корпорации, гибкость Kubernetes растёт вместе с вами, обеспечивая бесперебойную и простую доставку приложений, независимо от сложности ваших потребностей. +Будь то локальное тестирование или работа в корпорации, гибкость Kubernetes растёт вместе с вами, обеспечивая бесперебойную и простую доставку приложений независимо от сложности ваших потребностей. 
{{% /blocks/feature %}} @@ -37,16 +39,16 @@ Kubernetes — это проект с открытым исходным кодо {{< blocks/section id="video" background-image="kub_video_banner_homepage" >}}

    О сложности миграции 150+ микросервисов в Kubernetes

    -

    Сара Уелльс, технический директор по эксплуатации и надёжности в Financial Times

    +

    Сара Уэллс, технический директор по эксплуатации и надёжности в Financial Times



    - Посетите KubeCon в Европе, 17-20 мая 2022 года + Посетите KubeCon в Северной Америке, 24-28 октября 2022 года



    - Посетите KubeCon в Северной Америке, 24-28 октября 2022 года + Посетите KubeCon в Европе, 17-21 апреля 2023 года
    diff --git a/content/ru/docs/concepts/cluster-administration/addons.md b/content/ru/docs/concepts/cluster-administration/addons.md index 5c6d6446b6..53be662b17 100644 --- a/content/ru/docs/concepts/cluster-administration/addons.md +++ b/content/ru/docs/concepts/cluster-administration/addons.md @@ -18,16 +18,16 @@ content_type: concept * [ACI](https://www.github.com/noironetworks/aci-containers) обеспечивает интегрированную сеть контейнеров и сетевую безопасность с помощью Cisco ACI. * [Antrea](https://antrea.io/) работает на уровне 3, обеспечивая сетевые службы и службы безопасности для Kubernetes, используя Open vSwitch в качестве уровня сетевых данных. * [Calico](https://docs.projectcalico.org/latest/introduction/) Calico поддерживает гибкий набор сетевых опций, поэтому вы можете выбрать наиболее эффективный вариант для вашей ситуации, включая сети без оверлея и оверлейные сети, с или без BGP. Calico использует тот же механизм для обеспечения соблюдения сетевой политики для хостов, модулей и (при использовании Istio и Envoy) приложений на уровне сервисной сети (mesh layer). -* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) объединяет Flannel и Calico, обеспечивая сеть и сетевую политику. +* [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel) объединяет Flannel и Calico, обеспечивая сеть и сетевую политику. * [Cilium](https://github.com/cilium/cilium) - это плагин сети L3 и сетевой политики, который может прозрачно применять политики HTTP/API/L7. Поддерживаются как режим маршрутизации, так и режим наложения/инкапсуляции, и он может работать поверх других подключаемых модулей CNI. -* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) позволяет Kubernetes легко подключаться к выбору плагинов CNI, таких как Calico, Canal, Flannel, Romana или Weave. +* [CNI-Genie](https://github.com/cni-genie/CNI-Genie) позволяет Kubernetes легко подключаться к выбору плагинов CNI, таких как Calico, Canal, Flannel, Romana или Weave. * [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), основан на [Tungsten Fabric](https://tungsten.io), представляет собой платформу для виртуализации мультиоблачных сетей с открытым исходным кодом и управления политиками. Contrail и Tungsten Fabric интегрированы с системами оркестрации, такими как Kubernetes, OpenShift, OpenStack и Mesos, и обеспечивают режимы изоляции для виртуальных машин, контейнеров/подов и рабочих нагрузок без операционной системы. * [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) - это поставщик оверлейной сети, который можно использовать с Kubernetes. * [Knitter](https://github.com/ZTE/Knitter/) - это плагин для поддержки нескольких сетевых интерфейсов Kubernetes подов. * [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) - это плагин Multi для работы с несколькими сетями в Kubernetes, который поддерживает большинство самых популярных [CNI](https://github.com/containernetworking/cni) (например: Calico, Cilium, Contiv, Flannel), в дополнение к рабочим нагрузкам основанных на SRIOV, DPDK, OVS-DPDK и VPP в Kubernetes. * [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) - это сетевой провайдер для Kubernetes основанный на [OVN (Open Virtual Network)](https://github.com/ovn-org/ovn/), реализация виртуальной сети, появившийся в результате проекта Open vSwitch (OVS). 
OVN-Kubernetes обеспечивает сетевую реализацию на основе наложения для Kubernetes, включая реализацию балансировки нагрузки и сетевой политики на основе OVS. * [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) - это подключаемый модуль контроллера CNI на основе OVN для обеспечения облачной цепочки сервисных функций (SFC), несколько наложенных сетей OVN, динамического создания подсети, динамического создания виртуальных сетей, сети поставщика VLAN, сети прямого поставщика и подключаемого к другим Multi Сетевые плагины, идеально подходящие для облачных рабочих нагрузок на периферии в сети с несколькими кластерами. -* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) плагин для контейнера (NCP) обеспечивающий интеграцию между VMware NSX-T и контейнерами оркестраторов, таких как Kubernetes, а так же интеграцию между NSX-T и контейнеров на основе платформы CaaS/PaaS, таких как Pivotal Container Service (PKS) и OpenShift. +* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) плагин для контейнера (NCP) обеспечивающий интеграцию между VMware NSX-T и контейнерами оркестраторов, таких как Kubernetes, а так же интеграцию между NSX-T и контейнеров на основе платформы CaaS/PaaS, таких как Pivotal Container Service (PKS) и OpenShift. * [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) - эта платформа SDN, которая обеспечивает сетевое взаимодействие на основе политик между Kubernetes подами и не Kubernetes окружением, с отображением и мониторингом безопасности. * [Romana](https://github.com/romana/romana) - это сетевое решение уровня 3 для сетей подов, которое также поддерживает [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Подробности установки Kubeadm доступны [здесь](https://github.com/romana/romana/tree/master/containerize). * [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) предоставляет сеть и обеспечивает сетевую политику, будет работать на обеих сторонах сетевого раздела и не требует внешней базы данных. diff --git a/content/ru/docs/tasks/tools/install-kubectl.md b/content/ru/docs/tasks/tools/install-kubectl.md index a5ab438012..56c8da12e9 100644 --- a/content/ru/docs/tasks/tools/install-kubectl.md +++ b/content/ru/docs/tasks/tools/install-kubectl.md @@ -72,7 +72,7 @@ baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 enabled=1 gpgcheck=1 repo_gpgcheck=1 -gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg +gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg EOF sudo yum install -y kubectl {{< /tab >}} diff --git a/content/ru/docs/tutorials/kubernetes-basics/_index.html b/content/ru/docs/tutorials/kubernetes-basics/_index.html index ccdaefe8b8..05cd5cc75c 100644 --- a/content/ru/docs/tutorials/kubernetes-basics/_index.html +++ b/content/ru/docs/tutorials/kubernetes-basics/_index.html @@ -26,12 +26,12 @@ card:

    В данном руководстве вы познакомитесь с основами системы оркестрации кластеров Kubernetes. Каждый модуль содержит краткую справочную информацию по основной функциональности и концепциям Kubernetes, а также включает интерактивные онлайн-уроки. С их помощью вы научитесь самостоятельно управлять простым кластером и контейнеризированными приложениями, которые были в нём развернуты.

    Пройдя интерактивные уроки, вы узнаете, как:

      -
    • развёртывать контейнеризированное приложение в кластер.
    • -
    • масштабировать развёртывание.
    • -
    • обновить контейнеризированное приложение на новую версию ПО.
    • +
    • развёртывать контейнеризированное приложение в кластер;
    • +
    • масштабировать развёртывание;
    • +
    • обновить контейнеризированное приложение на новую версию ПО;
    • отлаживать контейнеризированное приложение.
    -

    Все руководства используют сервис Katacoda, поэтому в вашем браузере будет показан виртуальный терминал с работающим Minikube, небольшой локальной средой Kubernetes, которая может работать где угодно. Вам не потребуется устанавливать дополнительное ПО или вообще что-либо настраивать. Каждый интерактивный урок запускается непосредственно в вашем браузере.

    +

    Все руководства используют сервис Katacoda, поэтому в вашем браузере будет показан виртуальный терминал с запущенным Minikube — небольшой локальной средой Kubernetes, которая может работать где угодно. Вам не потребуется устанавливать дополнительное ПО или вообще что-либо настраивать. Каждый интерактивный урок запускается непосредственно в вашем браузере.

@@ -40,7 +40,7 @@ card:

Чем может Kubernetes помочь вам?

-

От современных веб-сервисов пользователи ожидают, что приложения будут доступны 24/7, а разработчики — развёртывать новые версии приложений по нескольку раз в день. Контейнеризация направлена на достижение этой цели, упаковывая ПО и позволяя выпускать и обновлять приложения просто, быстро и без простоев. Kubernetes гарантирует вам, что ваши контейнеризованные приложения будет запущены где угодно и когда угодно, вместе со всеми необходимыми для их работы ресурсами и инструментами. Kubernetes — это готовая к промышленному использованию платформа с открытым исходным кодом, разработанная исходя из накопленного опыта Google по оркестровке контейнеров и лучшими идеями от сообщества.

+

От современных веб-сервисов пользователи ожидают, что приложения будут доступны 24/7, а разработчики — развёртывать новые версии приложений по нескольку раз в день. Контейнеризация направлена на достижение этой цели, поскольку позволяет выпускать и обновлять приложения без простоев. Kubernetes гарантирует, что ваши контейнеризованные приложения будут запущены где угодно и когда угодно, вместе со всеми необходимыми для их работы ресурсами и инструментами. Kubernetes — это готовая к промышленному использованию платформа с открытым исходным кодом, разработанная на основе накопленного опыта Google по оркестровке контейнеров и вобравшая в себя лучшие идеи от сообщества.

@@ -63,7 +63,7 @@ card:
diff --git a/content/zh-cn/blog/_posts/2020-09-03-warnings/index.md b/content/zh-cn/blog/_posts/2020-09-03-warnings/index.md new file mode 100644 index 0000000000..ddbf764f61 --- /dev/null +++ b/content/zh-cn/blog/_posts/2020-09-03-warnings/index.md @@ -0,0 +1,555 @@ +--- +layout: blog +title: "警告: 有用的预警" +date: 2020-09-03 +slug: warnings +evergreen: true +--- + + + + +**作者**: [Jordan Liggitt](https://github.com/liggitt) (Google) + + +作为 Kubernetes 维护者,我们一直在寻找在保持兼容性的同时提高可用性的方法。 +在开发功能、分类 Bug、和回答支持问题的过程中,我们积累了有助于 Kubernetes 用户了解的信息。 +过去,共享这些信息仅限于发布说明、公告电子邮件、文档和博客文章等带外方法。 +除非有人知道需要寻找这些信息并成功找到它们,否则他们不会从中受益。 + + +在 Kubernetes v1.19 中,我们添加了一个功能,允许 Kubernetes API +服务器[向 API 客户端发送警告](https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1693-warnings)。 +警告信息使用[标准 `Warning` 响应头](https://tools.ietf.org/html/rfc7234#section-5.5)发送, +因此它不会以任何方式更改状态代码或响应体。 +这一设计使得服务能够发送任何 API 客户端都可以轻松读取的警告,同时保持与以前的客户端版本兼容。 + + +警告在 `kubectl` v1.19+ 的 `stderr` 输出中和 `k8s.io/client-go` v0.19.0+ 客户端库的日志中出现。 +`k8s.io/client-go` 行为可以[在进程或客户端层面重载](#customize-client-handling)。 + + +## 弃用警告 {#deprecation-warnings} + + +我们第一次使用此新功能是针对已弃用的 API 调用发送警告。 + + +Kubernetes 是一个[大型、快速发展的项目](https://www.cncf.io/cncf-kubernetes-project-journey/#development-velocity)。 +跟上每个版本的[变更](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#changelog-since-v1180)可能是令人生畏的, +即使对于全职从事该项目的人来说也是如此。一种重要的变更是 API 弃用。 +随着 Kubernetes 中的 API 升级到 GA 版本,预发布的 API 版本会被弃用并最终被删除。 + + +即使有[延长的弃用期](/zh-cn/docs/reference/using-api/deprecation-policy/), +并且[在发布说明中](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#deprecation)也包含了弃用信息, +他们仍然很难被追踪。在弃用期内,预发布 API 仍然有效, +允许多个版本过渡到稳定的 API 版本。 +然而,我们发现用户往往甚至没有意识到他们依赖于已弃用的 API 版本, +直到升级到不再提供相应服务的新版本。 + + +从 v1.19 开始,系统每当收到针对已弃用的 REST API 的请求时,都会返回警告以及 API 响应。 +此警告包括有关 API 将不再可用的版本以及替换 API 版本的详细信息。 + + +因为警告源自服务器端,并在客户端层级被拦截,所以它适用于所有 kubectl 命令, +包括像 `kubectl apply` 这样的高级命令,以及像 `kubectl get --raw` 这样的低级命令: + +kubectl 执行一个清单文件, 然后显示警告信息 'networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress'。 + + +这有助于受弃用影响的人们知道他们所请求的API已被弃用, +他们有多长时间来解决这个问题,以及他们应该使用什么 API。 +这在用户应用不是由他们创建的清单文件时特别有用, +所以他们有时间联系作者要一个更新的版本。 + + +我们还意识到**使用**已弃用的 API 的人通常不是负责升级集群的人, +因此,我们添加了两个面向管理员的工具来帮助跟踪已弃用的 API 的使用情况并确定何时升级安全。 + + + +### 度量指标 {#metrics} + + +从 Kubernetes v1.19 开始,当向已弃用的 REST API 端点发出请求时, +在 kube-apiserver 进程中,`apiserver_requested_deprecated_apis` 度量指标会被设置为 `1`。 +该指标具有 API `group`、`version`、`resource` 和 `subresource` 的标签, +和一个 `removed_release` 标签,表明不再提供 API 的 Kubernetes 版本。 + + +下面是一个使用 `kubectl` 的查询示例,[prom2json](https://github.com/prometheus/prom2json) +和 [jq](https://stedolan.github.io/jq/) 用来确定当前 API +服务器实例上收到了哪些对已弃用的 API 请求: + +```sh +kubectl get --raw /metrics | prom2json | jq ' + .[] | select(.name=="apiserver_requested_deprecated_apis").metrics[].labels +' +``` + + +输出: + +```json +{ + "group": "extensions", + "removed_release": "1.22", + "resource": "ingresses", + "subresource": "", + "version": "v1beta1" +} +{ + "group": "rbac.authorization.k8s.io", + "removed_release": "1.22", + "resource": "clusterroles", + "subresource": "", + "version": "v1beta1" +} +``` + + +输出展示在此服务器上请求了已弃用的 `extensions/v1beta1` Ingress 和 `rbac.authorization.k8s.io/v1beta1` +ClusterRole API,这两个 API 都将在 v1.22 中被删除。 + +我们可以将该信息与 `apiserver_request_total` 指标结合起来,以获取有关这些 API 请求的更多详细信息: + +```sh +kubectl get --raw /metrics | prom2json | jq ' + # set $deprecated to a list of deprecated APIs + [ + .[] | + 
select(.name=="apiserver_requested_deprecated_apis").metrics[].labels | + {group,version,resource} + ] as $deprecated + + | + + # select apiserver_request_total metrics which are deprecated + .[] | select(.name=="apiserver_request_total").metrics[] | + select(.labels | {group,version,resource} as $key | $deprecated | index($key)) +' +``` + + +输出: + +```json +{ + "labels": { + "code": "0", + "component": "apiserver", + "contentType": "application/vnd.kubernetes.protobuf;stream=watch", + "dry_run": "", + "group": "extensions", + "resource": "ingresses", + "scope": "cluster", + "subresource": "", + "verb": "WATCH", + "version": "v1beta1" + }, + "value": "21" +} +{ + "labels": { + "code": "200", + "component": "apiserver", + "contentType": "application/vnd.kubernetes.protobuf", + "dry_run": "", + "group": "extensions", + "resource": "ingresses", + "scope": "cluster", + "subresource": "", + "verb": "LIST", + "version": "v1beta1" + }, + "value": "1" +} +{ + "labels": { + "code": "200", + "component": "apiserver", + "contentType": "application/json", + "dry_run": "", + "group": "rbac.authorization.k8s.io", + "resource": "clusterroles", + "scope": "cluster", + "subresource": "", + "verb": "LIST", + "version": "v1beta1" + }, + "value": "1" +} +``` + + +上面的输出展示,对这些 API 发出的都只是读请求,并且大多数请求都用来监测已弃用的 Ingress API。 + +你还可以通过以下 Prometheus 查询获取这一信息, +该查询返回关于已弃用的、将在 v1.22 中删除的 API 请求的信息: + +```promql +apiserver_requested_deprecated_apis{removed_release="1.22"} * on(group,version,resource,subresource) +group_right() apiserver_request_total +``` + + +### 审计注解 {#audit-annotations} + + +度量指标是检查是否正在使用已弃用的 API 以及使用率如何的快速方法, +但它们没有包含足够的信息来识别特定的客户端或 API 对象。 +从 Kubernetes v1.19 开始, +对已弃用的 API 的请求进行审计时,[审计事件](/zh-cn/docs/tasks/debug/debug-cluster/audit/)中会包括 +审计注解 `"k8s.io/deprecated":"true"`。 +管理员可以使用这些审计事件来识别需要更新的特定客户端或对象。 + + +## 自定义资源定义 {#custom-resource-definitions} + + +除了 API 服务器对已弃用的 API 使用发出警告的能力外,从 v1.19 开始,CustomResourceDefinition +可以指示[它定义的资源的特定版本已被弃用](/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-deprecation)。 +当对自定义资源的已弃用的版本发出 API 请求时,将返回一条警告消息,与内置 API 的行为相匹配。 + +CustomResourceDefinition 的作者还可以根据需要自定义每个版本的警告。 +这允许他们在需要时提供指向迁移指南的信息或其他信息。 + + +```yaml +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition + name: crontabs.example.com +spec: + versions: + - name: v1alpha1 + # 这表示 v1alpha1 版本的自定义资源已经废弃了。 + # 对此版本的 API 请求会在服务器响应中收到警告。 + deprecated: true + # 这会把返回给发出 v1alpha1 API 请求的客户端的默认警告覆盖。 + deprecationWarning: "example.com/v1alpha1 CronTab is deprecated; use example.com/v1 CronTab (see http://example.com/v1alpha1-v1)" + ... + + - name: v1beta1 + # 这表示 v1beta1 版本的自定义资源已经废弃了。 + # 对此版本的 API 请求会在服务器响应中收到警告。 + # 此版本返回默认警告消息。 + deprecated: true + ... + + - name: v1 + ... 
+``` + + +## 准入 Webhook {#admission-webhooks} + + +[准入 Webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers)是将自定义策略或验证与 +Kubernetes 集成的主要方式。 +从 v1.19 开始,Admission Webhook 可以[返回警告消息](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#response), +传递给发送请求的 API 客户端。警告可以与允许或拒绝的响应一起返回。 + +例如,允许请求但警告已知某个配置无法正常运行时,准入 Webhook 可以发送以下响应: + +```json +{ + "apiVersion": "admission.k8s.io/v1", + "kind": "AdmissionReview", + "response": { + "uid": "", + "allowed": true, + "warnings": [ + ".spec.memory: requests >1GB do not work on Fridays" + ] + } +} +``` + + +如果你在实现一个返回警告消息的 Webhook,这里有一些提示: + +* 不要在消息中包含 “Warning:” 前缀(由客户端在输出时添加) +* 使用警告消息来正确描述能被发出 API 请求的客户端纠正或了解的问题 +* 保持简洁;如果可能,将警告限制为 120 个字符以内 + + +准入 Webhook 可以通过多种方式使用这个新功能,我期待看到大家想出来的方法。 +这里有一些想法可以帮助你入门: + +* 添加 “complain” 模式的 Webhook 实现,它们返回警告而不是拒绝, + 允许在开始执行之前尝试策略以验证它是否按预期工作 +* “lint” 或 “vet” 风格的 Webhook,检查对象并在未遵循最佳实践时显示警告 + + +## 自定义客户端处理方式 {#customize-client-handling} + + +使用 `k8s.io/client-go` 库发出 API 请求的应用程序可以定制如何处理从服务器返回的警告。 +默认情况下,收到的警告会以日志形式输出到 stderr, +但[在进程层面](https://godoc.org/k8s.io/client-go/rest#SetDefaultWarningHandler)或[客户端层面] +(https://godoc.org/k8s.io/client-go/rest#Config)均可定制这一行为。 + + +这个例子展示了如何让你的应用程序表现得像 `kubectl`, +在进程层面重载整个消息处理逻辑以删除重复的警告, +并在支持的情况下使用彩色输出突出显示消息: + +```go +import ( + "os" + "k8s.io/client-go/rest" + "k8s.io/kubectl/pkg/util/term" + ... +) + +func main() { + rest.SetDefaultWarningHandler( + rest.NewWarningWriter(os.Stderr, rest.WarningWriterOptions{ + // only print a given warning the first time we receive it + Deduplicate: true, + // highlight the output with color when the output supports it + Color: term.AllowsColorOutput(os.Stderr), + }, + ), + ) + + ... +``` + + +下一个示例展示如何构建一个忽略警告的客户端。 +这对于那些操作所有资源类型(使用发现 API 在运行时动态发现) +的元数据并且不会从已弃用的特定资源的警告中受益的客户端很有用。 +对于需要使用特定 API 的客户端,不建议抑制弃用警告。 + +```go +import ( + "k8s.io/client-go/rest" + "k8s.io/client-go/kubernetes" +) + +func getClientWithoutWarnings(config *rest.Config) (kubernetes.Interface, error) { + // copy to avoid mutating the passed-in config + config = rest.CopyConfig(config) + // set the warning handler for this client to ignore warnings + config.WarningHandler = rest.NoWarnings{} + // construct and return the client + return kubernetes.NewForConfig(config) +} +``` + + +## Kubectl 强制模式 {#kubectl-strict-mode} + + +如果你想确保及时注意到弃用问题并立即着手解决它们, +`kubectl` 在 v1.19 中添加了 `--warnings-as-errors` 选项。使用此选项调用时, +`kubectl` 将从服务器收到的所有警告视为错误,并以非零码退出: + +kubectl 在设置 --warnings-as-errors 标记的情况下执行一个清单文件, 返回警告消息和非零退出码。 + +这可以在 CI 作业中用于将清单文件应用到当前服务器, +其中要求通过零退出码才能使 CI 作业成功。 + + +## 未来的可能性 {#future-possibilities} + + +现在我们有了一种在上下文中向用户传达有用信息的方法, +我们已经在考虑使用其他方法来改善人们使用 Kubernetes 的体验。 +我们接下来要研究的几个领域是关于[已知有问题的值](http://issue.k8s.io/64841#issuecomment-395141013)的警告。 +出于兼容性原因,我们不能直接拒绝,而应就使用已弃用的字段或字段值 +(例如使用 beta os/arch 节点标签的选择器, +[在 v1.14 中已弃用](/zh-cn/docs/reference/labels-annotations-taints/#beta-kubernetes-io-arch-deprecated)) +给出警告。 +我很高兴看到这方面的进展,继续让 Kubernetes 更容易使用。 diff --git a/content/zh-cn/blog/_posts/2020-09-03-warnings/kubectl-warnings-as-errors.png b/content/zh-cn/blog/_posts/2020-09-03-warnings/kubectl-warnings-as-errors.png new file mode 100644 index 0000000000..5171eca6bc Binary files /dev/null and b/content/zh-cn/blog/_posts/2020-09-03-warnings/kubectl-warnings-as-errors.png differ diff --git a/content/zh-cn/blog/_posts/2020-09-03-warnings/kubectl-warnings.png b/content/zh-cn/blog/_posts/2020-09-03-warnings/kubectl-warnings.png new file mode 100644 index 0000000000..967bc591bf Binary files /dev/null and 
b/content/zh-cn/blog/_posts/2020-09-03-warnings/kubectl-warnings.png differ diff --git a/content/zh-cn/docs/concepts/architecture/nodes.md b/content/zh-cn/docs/concepts/architecture/nodes.md index 515d9caf20..8e4670e3c4 100644 --- a/content/zh-cn/docs/concepts/architecture/nodes.md +++ b/content/zh-cn/docs/concepts/architecture/nodes.md @@ -1117,7 +1117,7 @@ the kubelet, and the `--fail-swap-on` command line flag or `failSwapOn` [configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) must be set to false. --> -要在节点上启用交换内存,必须启用kubelet 的 `NodeSwap` 特性门控, +要在节点上启用交换内存,必须启用 kubelet 的 `NodeSwap` 特性门控, 同时使用 `--fail-swap-on` 命令行参数或者将 `failSwapOn` [配置](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)设置为 false。 diff --git a/content/zh-cn/docs/concepts/cluster-administration/flow-control.md b/content/zh-cn/docs/concepts/cluster-administration/flow-control.md index 0efbecff6f..50027f7cd0 100644 --- a/content/zh-cn/docs/concepts/cluster-administration/flow-control.md +++ b/content/zh-cn/docs/concepts/cluster-administration/flow-control.md @@ -484,7 +484,7 @@ incoming request is for a resource or non-resource URL) matches the request. 当给定的请求与某个 FlowSchema 的 `rules` 的其中一条匹配,那么就认为该请求与该 FlowSchema 匹配。 判断规则与该请求是否匹配,**不仅**要求该条规则的 `subjects` 字段至少存在一个与该请求相匹配, **而且**要求该条规则的 `resourceRules` 或 `nonResourceRules` -(取决于传入请求是针对资源URL还是非资源URL)字段至少存在一个与该请求相匹配。 +(取决于传入请求是针对资源 URL 还是非资源 URL)字段至少存在一个与该请求相匹配。 * `apiserver_flowcontrol_read_vs_write_request_count_samples` 是一个直方图向量, 记录当前请求数量的观察值, - 由标签 `phase`(取值为 `waiting` 和 `executing`)和 `request_kind` - (取值 `mutating` 和 `readOnly`)拆分。定期以高速率观察该值。 + 由标签 `phase`(取值为 `waiting` 及 `executing`)和 `request_kind` + (取值 `mutating` 及 `readOnly`)拆分。定期以高速率观察该值。 每个观察到的值是一个介于 0 和 1 之间的比值,计算方式为请求数除以该请求数的对应限制 (等待的队列长度限制和执行所用的并发限制)。 -* `apiserver_flowcontrol_read_vs_write_request_count_watermarks` 是一个直方图向量, - 记录请求数量的高/低水位线, - 由标签 `phase`(取值为 `waiting` 和 `executing`)和 `request_kind` - (取值为 `mutating` 和 `readOnly`)拆分;标签 `mark` 取值为 `high` 和 `low`。 +* `apiserver_flowcontrol_read_vs_write_request_count_watermarks` + 是请求数量的高或低水位线的直方图向量(除以相应的限制,得到介于 0 至 1 的比率), + 由标签 `phase`(取值为 `waiting` 及 `executing`)和 `request_kind` + (取值为 `mutating` 及 `readOnly`)拆分;标签 `mark` 取值为 `high` 和 `low`。 `apiserver_flowcontrol_read_vs_write_request_count_samples` 向量观察到有值新增, 则该向量累积。这些水位线显示了样本值的范围。 * `apiserver_flowcontrol_current_inqueue_requests` 是一个表向量, 记录包含排队中的(未执行)请求的瞬时数量, - 由标签 `priorityLevel` 和 `flowSchema` 拆分。 + 由标签 `priority_level` 和 `flow_schema` 拆分。 * `apiserver_flowcontrol_priority_level_request_count_samples` 是一个直方图向量, - 记录当前请求的观测值,由标签 `phase`(取值为`waiting` 和 `executing`)和 + 记录当前请求的观测值,由标签 `phase`(取值为`waiting` 及 `executing`)和 `priority_level` 进一步区分。 每个直方图都会定期进行观察,直到相关类别的最后活动为止。观察频率高。 + 所观察到的值都是请求数除以相应的请求数限制(等待的队列长度限制和执行的并发限制)的比率, + 介于 0 和 1 之间。 -* `apiserver_flowcontrol_priority_level_request_count_watermarks` 是一个直方图向量, - 记录请求数的高/低水位线,由标签 `phase`(取值为 `waiting` 和 `executing`)和 +* `apiserver_flowcontrol_priority_level_request_count_watermarks` + 是请求数量的高或低水位线的直方图向量(除以相应的限制,得到 0 到 1 的范围内的比率), + 由标签 `phase`(取值为 `waiting` 及 `executing`)和 `priority_level` 拆分; 标签 `mark` 取值为 `high` 和 `low`。 `apiserver_flowcontrol_priority_level_request_count_samples` 向量观察到有值新增, @@ -1020,7 +1028,7 @@ poorly-behaved workloads that may be harming system health. @@ -1031,8 +1039,8 @@ poorly-behaved workloads that may be harming system health. 
@@ -1056,8 +1064,8 @@ poorly-behaved workloads that may be harming system health. * `apiserver_flowcontrol_request_execution_seconds` 是一个直方图向量, @@ -1065,6 +1073,39 @@ poorly-behaved workloads that may be harming system health. 由标签 `flow_schema`(表示与请求匹配的 FlowSchema)和 `priority_level`(表示分配给该请求的优先级)进一步区分。 + +* `apiserver_flowcontrol_watch_count_samples` 是一个直方图向量, + 记录给定写的相关活动 WATCH 请求数量, + 由标签 `flow_schema` 和 `priority_level` 进一步区分。 + + +* `apiserver_flowcontrol_work_estimated_seats` 是一个直方图向量, + 记录与估计席位(最初阶段和最后阶段的最多人数)相关联的请求数量, + 由标签 `flow_schema` 和 `priority_level` 进一步区分。 + + +* `apiserver_flowcontrol_request_dispatch_no_accommodation_total` + 是一个事件数量的计数器,这些事件在原则上可能导致请求被分派, + 但由于并发度不足而没有被分派, + 由标签 `flow_schema` 和 `priority_level` 进一步区分。 + 相关的事件类型是请求的到达和请求的完成。 + -## 监控计算和内存资源用量 {#monitoring-compute-memory-resource-usage} +### 监控计算和内存资源用量 {#monitoring-compute-memory-resource-usage} kubelet 会将 Pod 的资源使用情况作为 Pod [`status`](/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status) @@ -431,12 +432,11 @@ locally-attached writeable devices or, sometimes, by RAM. Pods use ephemeral local storage for scratch space, caching, and for logs. The kubelet can provide scratch space to Pods using local ephemeral storage to mount [`emptyDir`](/docs/concepts/storage/volumes/#emptydir) -{{< glossary_tooltip term_id="volume" text="volumes" >}} into containers. + {{< glossary_tooltip term_id="volume" text="volumes" >}} into containers. --> ## 本地临时存储 {#local-ephemeral-storage} - {{< feature-state for_k8s_version="v1.25" state="stable" >}} 节点通常还可以具有本地的临时性存储,由本地挂接的可写入设备或者有时也用 RAM @@ -633,12 +633,14 @@ or 400 megabytes (`400M`). In the following example, the Pod has two containers. Each container has a request of 2GiB of local ephemeral storage. Each container has a limit of 4GiB of local ephemeral storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and -a limit of 8GiB of local ephemeral storage. +a limit of 8GiB of local ephemeral storage. 500Mi of that limit could be +consumed by the `emptyDir` volume. --> 在下面的例子中,Pod 包含两个容器。每个容器请求 2 GiB 大小的本地临时性存储。 每个容器都设置了 4 GiB 作为其本地临时性存储的限制。 因此,整个 Pod 的本地临时性存储请求是 4 GiB,且其本地临时性存储的限制为 8 GiB。 +该限制值中有 500Mi 可供 `emptyDir` 卷使用。 ```yaml apiVersion: v1 @@ -669,7 +671,8 @@ spec: mountPath: "/tmp" volumes: - name: ephemeral - emptyDir: {} + emptyDir: + sizeLimit: 500Mi ``` **示例:** @@ -1235,7 +1238,7 @@ Allocated resources: In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node. -By looking at the "Pods" section, you can see which Pods are taking up space on +By looking at the “Pods” section, you can see which Pods are taking up space on the node. --> 在上面的输出中,你可以看到如果 Pod 请求超过 1.120 CPU 或者 6.23Gi 内存,节点将无法满足。 @@ -1347,7 +1350,7 @@ Events: 在上面的例子中,`Restart Count: 5` 意味着 Pod 中的 `simmemleak` diff --git a/content/zh-cn/docs/concepts/configuration/secret.md b/content/zh-cn/docs/concepts/configuration/secret.md index 9dc4b86434..b2a77a02d2 100644 --- a/content/zh-cn/docs/concepts/configuration/secret.md +++ b/content/zh-cn/docs/concepts/configuration/secret.md @@ -60,6 +60,7 @@ Kubernetes Secrets are, by default, stored unencrypted in the API server's under Additionally, anyone who is authorized to create a Pod in a namespace can use that access to read any Secret in that namespace; this includes indirect access such as the ability to create a Deployment. + In order to safely use Secrets, take at least the following steps: 1. 
[Enable Encryption at Rest](/docs/tasks/administer-cluster/encrypt-data/) for Secrets. @@ -190,17 +191,19 @@ the exact mechanisms for issuing and refreshing those session tokens. There are several options to create a Secret: -- [create Secret using `kubectl` command](/docs/tasks/configmap-secret/managing-secret-using-kubectl/) -- [create Secret from config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/) -- [create Secret using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/) +- [Use `kubectl`](/docs/tasks/configmap-secret/managing-secret-using-kubectl/) +- [Use a configuration file](/docs/tasks/configmap-secret/managing-secret-using-config-file/) +- [Use the Kustomize tool](/docs/tasks/configmap-secret/managing-secret-using-kustomize/) --> ## 使用 Secret {#working-with-secrets} ### 创建 Secret {#creating-a-secret} -- [使用 `kubectl` 命令来创建 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl/) -- [基于配置文件来创建 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-config-file/) -- [使用 kustomize 来创建 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kustomize/) +创建 Secret 有以下几种可选方式: + +- [使用 `kubectl`](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl/) +- [使用配置文件](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-config-file/) +- [使用 Kustomize 工具](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kustomize/) ### 编辑 Secret {#editing-a-secret} -你可以使用 kubectl 来编辑一个已有的 Secret: - -```shell -kubectl edit secrets mysecret -``` +你可以编辑一个已有的 Secret,除非它是[不可变更的](#secret-immutable)。 +要编辑一个 Secret,可使用以下方法之一: -这一命令会启动你的默认编辑器,允许你更新 `data` 字段中存放的 base64 编码的 Secret 值; -例如: - -```yaml -# 请编辑以下对象。以 `#` 开头的几行将被忽略, -# 且空文件将放弃编辑。如果保存此文件时出错, -# 则重新打开此文件时也会有相关故障。 -apiVersion: v1 -data: - username: YWRtaW4= - password: MWYyZDFlMmU2N2Rm -kind: Secret -metadata: - annotations: - kubectl.kubernetes.io/last-applied-configuration: { ... } - creationTimestamp: 2020-01-22T18:41:56Z - name: mysecret - namespace: default - resourceVersion: "164619" - uid: cfee02d6-c137-11e5-8d73-42010af00002 -type: Opaque -``` +* [使用 `kubectl`](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl/#edit-secret) +* [使用配置文件](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-config-file/#edit-secret) -这一示例清单定义了一个 Secret,其 `data` 字段中包含两个主键:`username` 和 `password`。 -清单中的字段值是 Base64 字符串,不过,当你在 Pod 中使用 Secret 时,kubelet 为 Pod -及其中的容器提供的是**解码**后的数据。 +你也可以使用 +[Kustomize 工具](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kustomize/#edit-secret)编辑数据。 +然而这种方法会用编辑过的数据创建新的 `Secret` 对象。 -你可以在一个 Secret 中打包多个主键和数值,也可以选择使用多个 Secret, -完全取决于哪种方式最方便。 +根据你创建 Secret 的方式以及该 Secret 在 Pod 中被使用的方式,对已有 `Secret` +对象的更新将自动扩散到使用此数据的 Pod。有关更多信息, +请参阅[自动更新挂载的 Secret](#mounted-secrets-are-updated-automatically)。 ### 以环境变量的方式使用 Secret {#using-secrets-as-environment-variables} -如果需要在 Pod 中以{{< glossary_tooltip text="环境变量" term_id="container-env-variables" >}} -的形式使用 Secret: +如果需要在 Pod +中以{{< glossary_tooltip text="环境变量" term_id="container-env-variables" >}}的形式使用 Secret: Pod 的 `imagePullSecrets` 字段是一个对 Pod 所在的名字空间中的 Secret @@ -880,7 +863,8 @@ kubelet 使用这个信息来替你的 Pod 拉取私有镜像。 The `imagePullSecrets` field is a list of references to secrets in the same namespace. You can use an `imagePullSecrets` to pass a secret that contains a Docker (or other) image registry password to the kubelet. The kubelet uses this information to pull a private image on behalf of your Pod. 
-See the [PodSpec API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core) for more information about the `imagePullSecrets` field. +See the [PodSpec API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core) +for more information about the `imagePullSecrets` field. --> #### 使用 imagePullSecrets {#using-imagepullsecrets-1} @@ -1137,6 +1121,7 @@ For example, if your actual password is `S!B\*d$zDsb=`, you should execute the c ```shell kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb=' ``` + @@ -1949,7 +1934,7 @@ A bootstrap type Secret has the following keys specified under `data`: - `token-secret`: A random 16 character string as the actual token secret. Required. - `description`: A human-readable string that describes what the token is used for. Optional. -- `expiration`: An absolute UTC time using RFC3339 specifying when the token +- `expiration`: An absolute UTC time using [RFC3339](https://datatracker.ietf.org/doc/html/rfc3339) specifying when the token should be expired. Optional. - `usage-bootstrap-`: A boolean flag indicating additional usage for the bootstrap token. @@ -1961,7 +1946,8 @@ A bootstrap type Secret has the following keys specified under `data`: - `token-id`:由 6 个随机字符组成的字符串,作为令牌的标识符。必需。 - `token-secret`:由 16 个随机字符组成的字符串,包含实际的令牌机密。必需。 - `description`:供用户阅读的字符串,描述令牌的用途。可选。 -- `expiration`:一个使用 RFC3339 来编码的 UTC 绝对时间,给出令牌要过期的时间。可选。 +- `expiration`:一个使用 [RFC3339](https://datatracker.ietf.org/doc/html/rfc3339) + 来编码的 UTC 绝对时间,给出令牌要过期的时间。可选。 - `usage-bootstrap-`:布尔类型的标志,用来标明启动引导令牌的其他用途。 - `auth-extra-groups`:用逗号分隔的组名列表,身份认证时除被认证为 `system:bootstrappers` 组之外,还会被添加到所列的用户组中。 @@ -2148,7 +2134,6 @@ Secrets used on that node. - Learn how to [manage Secrets using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/) - Read the [API reference](/docs/reference/kubernetes-api/config-and-storage-resources/secret-v1/) for `Secret` --> - - 有关管理和提升 Secret 安全性的指南,请参阅 [Kubernetes Secret 良好实践](/zh-cn/docs/concepts/security/secrets-good-practices) - 学习如何[使用 `kubectl` 管理 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl/) - 学习如何[使用配置文件管理 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-config-file/) diff --git a/content/zh-cn/docs/concepts/containers/images.md b/content/zh-cn/docs/concepts/containers/images.md index dec9281533..3625397d16 100644 --- a/content/zh-cn/docs/concepts/containers/images.md +++ b/content/zh-cn/docs/concepts/containers/images.md @@ -2,6 +2,7 @@ title: 镜像 content_type: concept weight: 10 +hide_summary: true # 在章节索引中单独列出 --- @@ -33,6 +35,16 @@ This page provides an outline of the container image concept. 
本页概要介绍容器镜像的概念。 +{{< note >}} + +如果你正在寻找 Kubernetes 某个发行版本(如最新次要版本 v{{< skew latestVersion >}}) +的容器镜像,请访问[下载 Kubernetes](/zh-cn/releases/download/)。 +{{< /note >}} + #### 默认镜像拉取策略 {#imagepullpolicy-defaulting} -当你(或控制器)向 API 服务器提交一个新的 Pod 时,你的集群会在满足特定条件时设置 `imagePullPolicy `字段: +当你(或控制器)向 API 服务器提交一个新的 Pod 时,你的集群会在满足特定条件时设置 `imagePullPolicy` 字段: -### 配置 Node 对私有仓库认证 {configuring-nodes-to-authenticate-to-a-private-registry} +### 配置 Node 对私有仓库认证 {#configuring-nodes-to-authenticate-to-a-private-registry} 设置凭据的具体说明取决于你选择使用的容器运行时和仓库。 你应该参考解决方案的文档来获取最准确的信息。 - -{{< note >}} -Kubernetes 默认仅支持 Docker 配置中的 `auths` 和 `HttpHeaders` 部分, -不支持 Docker 凭据辅助程序(`credHelpers` 或 `credsStore`)。 -{{< /note >}} - -#### 在 Pod 中引用 ImagePullSecrets {referring-to-an-imagepullsecrets-on-a-pod} +#### 在 Pod 中引用 ImagePullSecrets {#referring-to-an-imagepullsecrets-on-a-pod} 现在,在创建 Pod 时,可以在 Pod 定义中增加 `imagePullSecrets` 部分来引用该 Secret。 `imagePullSecrets` 数组中的每一项只能引用同一名字空间中的 Secret。 @@ -705,10 +708,8 @@ common use cases and suggested solutions. 如果你需要访问多个仓库,可以为每个仓库创建一个 Secret。 -`kubelet` 将所有 `imagePullSecrets` 合并为一个虚拟的 `.docker/config.json` 文件。 ## {{% heading "whatsnext" %}} diff --git a/content/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects.md index bf52f1b340..fc3a427247 100644 --- a/content/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects.md @@ -192,8 +192,8 @@ nested fields specific to that object. The [Kubernetes API Reference](/docs/refe can help you find the spec format for all of the objects you can create using Kubernetes. --> 对每个 Kubernetes 对象而言,其 `spec` 之精确格式都是不同的,包含了特定于该对象的嵌套字段。 -我们能在 [Kubernetes API 参考](/zh-cn/docs/reference/kubernetes-api/) -找到我们想要在 Kubernetes 上创建的任何对象的规约格式。 +[Kubernetes API 参考](/zh-cn/docs/reference/kubernetes-api/)可以帮助你找到想要使用 +Kubernetes 创建的所有对象的规约格式。 -* 了解最重要的 Kubernetes 基本对象,例如 [Pod](/zh-cn/docs/concepts/workloads/pods/)。 -* 了解 Kubernetes 中的[控制器](/zh-cn/docs/concepts/architecture/controller/)。 -* [使用 Kubernetes API](/zh-cn/docs/reference/using-api/) 一节解释了一些 API 概念。 +进一步了解以下信息: +* 最重要的 Kubernetes 基本对象 [Pod](/zh-cn/docs/concepts/workloads/pods/)。 +* [Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/) 对象。 +* Kubernetes 中的[控制器](/zh-cn/docs/concepts/architecture/controller/)。 +* 解释了一些 API 概念的 [Kubernetes API 概述](/zh-cn/docs/reference/using-api/)。 +* [kubectl](/zh-cn/docs/reference/kubectl/) 和 [kubectl 命令](/docs/reference/generated/kubectl/kubectl-commands)。 diff --git a/content/zh-cn/docs/concepts/overview/working-with-objects/names.md b/content/zh-cn/docs/concepts/overview/working-with-objects/names.md index 69f088c4df..2786c19e7a 100644 --- a/content/zh-cn/docs/concepts/overview/working-with-objects/names.md +++ b/content/zh-cn/docs/concepts/overview/working-with-objects/names.md @@ -26,7 +26,7 @@ For example, you can only have one Pod named `myapp-1234` within the same [names 每个 Kubernetes 对象也有一个 [**UID**](#uids) 来标识在整个集群中的唯一性。 比如,在同一个[名字空间](/zh-cn/docs/concepts/overview/working-with-objects/namespaces/) -中有一个名为 `myapp-1234` 的 Pod,但是可以命名一个 Pod 和一个 Deployment 同为 `myapp-1234`。 +中只能有一个名为 `myapp-1234` 的 Pod,但是可以命名一个 Pod 和一个 Deployment 同为 `myapp-1234`。 -* 进一步了解 Kubernetes [标签](/zh-cn/docs/concepts/overview/working-with-objects/labels/) +* 进一步了解 Kubernetes 
[标签](/zh-cn/docs/concepts/overview/working-with-objects/labels/)和[注解](/zh-cn/docs/concepts/overview/working-with-objects/annotations/)。 * 参阅 [Kubernetes 标识符和名称](https://git.k8s.io/design-proposals-archive/architecture/identifiers.md)的设计文档 diff --git a/content/zh-cn/docs/concepts/scheduling-eviction/pod-overhead.md b/content/zh-cn/docs/concepts/scheduling-eviction/pod-overhead.md index 046a27cd99..fea91a658f 100644 --- a/content/zh-cn/docs/concepts/scheduling-eviction/pod-overhead.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/pod-overhead.md @@ -154,7 +154,7 @@ map[cpu:250m memory:120Mi] If a [ResourceQuota](/docs/concepts/policy/resource-quotas/) is defined, the sum of container requests as well as the `overhead` field are counted. --> -如果定义了 [ResourceQuata](/zh-cn/docs/concepts/policy/resource-quotas/), +如果定义了 [ResourceQuota](/zh-cn/docs/concepts/policy/resource-quotas/), 则容器请求的总量以及 `overhead` 字段都将计算在内。 ## 定义 Service {#defining-a-service} @@ -143,7 +143,7 @@ Service 在 Kubernetes 中是一个 REST 对象,和 Pod 类似。 Service 对象的名称必须是合法的 [RFC 1035 标签名称](/zh-cn/docs/concepts/overview/working-with-objects/names#rfc-1035-label-names)。 -例如,假定有一组 Pod,它们对外暴露了 9376 端口,同时还被打上 `app=MyApp` 标签: +例如,假定有一组 Pod,它们对外暴露了 9376 端口,同时还被打上 `app.kubernetes.io/name=MyApp` 标签: ```yaml apiVersion: v1 @@ -582,7 +582,7 @@ thus is only available to use as-is. Note that the kube-proxy starts up in different modes, which are determined by its configuration. - The kube-proxy's configuration is done via a ConfigMap, and the ConfigMap for kube-proxy - effectively deprecates the behaviour for almost all of the flags for the kube-proxy. + effectively deprecates the behavior for almost all of the flags for the kube-proxy. - The ConfigMap for the kube-proxy does not support live reloading of configuration. - The ConfigMap parameters for the kube-proxy cannot all be validated and verified on startup. For example, if your operating system doesn't allow you to run iptables commands, @@ -603,7 +603,7 @@ Note that the kube-proxy starts up in different modes, which are determined by i ### userspace 代理模式 {#proxy-mode-userspace} -这种模式,kube-proxy 会监视 Kubernetes 控制平面对 Service 对象和 Endpoints 对象的添加和移除操作。 +在这种(遗留)模式下,kube-proxy 会监视 Kubernetes 控制平面对 Service 对象和 Endpoints 对象的添加和移除操作。 对每个 Service,它会在本地 Node 上打开一个端口(随机选择)。 任何连接到“代理端口”的请求,都会被代理到 Service 的后端 `Pods` 中的某个上面(如 `Endpoints` 所报告的一样)。 使用哪个后端 Pod,是 kube-proxy 基于 `SessionAffinity` 来确定的。 @@ -639,7 +639,7 @@ In this mode, kube-proxy watches the Kubernetes control plane for the addition a removal of Service and Endpoint objects. For each Service, it installs iptables rules, which capture traffic to the Service's `clusterIP` and `port`, and redirect that traffic to one of the Service's -backend sets. For each Endpoint object, it installs iptables rules which +backend sets. For each Endpoint object, it installs iptables rules which select a backend Pod. By default, kube-proxy in iptables mode chooses a backend at random. @@ -701,7 +701,7 @@ The IPVS proxy mode is based on netfilter hook function that is similar to iptables mode, but uses a hash table as the underlying data structure and works in the kernel space. That means kube-proxy in IPVS mode redirects traffic with lower latency than -kube-proxy in iptables mode, with much better performance when synchronising +kube-proxy in iptables mode, with much better performance when synchronizing proxy rules. Compared to the other proxy modes, IPVS mode also supports a higher throughput of network traffic. 
@@ -819,7 +819,7 @@ also start and end with an alphanumeric character. For example, the names `123-abc` and `web` are valid, but `123_abc` and `-web` are not. --> -与一般的Kubernetes名称一样,端口名称只能包含小写字母数字字符 和 `-`。 +与一般的 Kubernetes 名称一样,端口名称只能包含小写字母数字字符 和 `-`。 端口名称还必须以字母数字字符开头和结尾。 例如,名称 `123-abc` 和 `web` 有效,但是 `123_abc` 和 `-web` 无效。 @@ -874,7 +874,7 @@ endpoints, the kube-proxy does not forward any traffic for the relevant Service. 如果你启用了 kube-proxy 的 `ProxyTerminatingEndpoints` @@ -934,7 +934,11 @@ Kubernetes 支持两种基本的服务发现模式 —— 环境变量和 DNS。 ### Environment variables When a Pod is run on a Node, the kubelet adds a set of environment variables -for each active Service. It adds `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables, where the Service name is upper-cased and dashes are converted to underscores. It also supports variables (see [makeLinkVariables](https://github.com/kubernetes/kubernetes/blob/dd2d12f6dc0e654c15d5db57a5f9f6ba61192726/pkg/kubelet/envvars/envvars.go#L72)) that are compatible with Docker Engine's "_[legacy container links](https://docs.docker.com/network/links/)_" feature. +for each active Service. It adds `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables, +where the Service name is upper-cased and dashes are converted to underscores. +It also supports variables (see [makeLinkVariables](https://github.com/kubernetes/kubernetes/blob/dd2d12f6dc0e654c15d5db57a5f9f6ba61192726/pkg/kubelet/envvars/envvars.go#L72)) +that are compatible with Docker Engine's +"_[legacy container links](https://docs.docker.com/network/links/)_" feature. For example, the Service `redis-primary` which exposes TCP port 6379 and has been allocated cluster IP address 10.0.0.11, produces the following environment @@ -1002,7 +1006,7 @@ create a DNS record for `my-service.my-ns`. Pods in the `my-ns` namespace should be able to find the service by doing a name lookup for `my-service` (`my-service.my-ns` would also work). -Pods in other Namespaces must qualify the name as `my-service.my-ns`. These names +Pods in other namespaces must qualify the name as `my-service.my-ns`. These names will resolve to the cluster IP assigned for the Service. --> 例如,如果你在 Kubernetes 命名空间 `my-ns` 中有一个名为 `my-service` 的服务, @@ -1145,7 +1149,10 @@ Kubernetes `ServiceTypes` 允许指定你所需要的 Service 类型。 {{< /note >}} 你也可以使用 [Ingress](/zh-cn/docs/concepts/services-networking/ingress/) 来暴露自己的服务。 Ingress 不是一种服务类型,但它充当集群的入口点。 @@ -1260,10 +1267,6 @@ kube-proxy only selects the loopback interface for NodePort Services. The default for `--nodeport-addresses` is an empty list. This means that kube-proxy should consider all available network interfaces for NodePort. (That's also compatible with earlier Kubernetes releases.) -Note that this Service is visible as `:spec.ports[*].nodePort` -and `.spec.clusterIP:spec.ports[*].port`. -If the `--nodeport-addresses` flag for kube-proxy or the equivalent field -in the kube-proxy configuration file is set, `` would be a filtered node IP address (or possibly IP addresses). 
--> 此标志采用逗号分隔的 IP 段列表(例如 `10.0.0.0/8`、`192.0.2.0/25`)来指定 kube-proxy 应视为该节点本地的 IP 地址范围。 @@ -1273,9 +1276,17 @@ IP 地址范围。 `--nodeport-addresses` 的默认值是一个空列表。 这意味着 kube-proxy 应考虑 NodePort 的所有可用网络接口。 (这也与早期的 Kubernetes 版本兼容。) -请注意,此服务显示为 `:spec.ports[*].nodePort` 和 `.spec.clusterIP:spec.ports[*].port`。 + +{{< note >}} + +此服务呈现为 `:spec.ports[*].nodePort` 和 `.spec.clusterIP:spec.ports[*].port`。 如果设置了 kube-proxy 的 `--nodeport-addresses` 标志或 kube-proxy 配置文件中的等效字段, 则 `` 将是过滤的节点 IP 地址(或可能的 IP 地址)。 +{{< /note >}} 来自外部负载均衡器的流量将直接重定向到后端 Pod 上,不过实际它们是如何工作的,这要依赖于云提供商。 @@ -1439,13 +1451,13 @@ LoadBalancer 类型的服务继续分配节点端口。 `spec.loadBalancerClass` enables you to use a load balancer implementation other than the cloud provider default. By default, `spec.loadBalancerClass` is `nil` and a `LoadBalancer` type of Service uses the cloud provider's default load balancer implementation if the cluster is configured with -a cloud provider using the `--cloud-provider` component flag. +a cloud provider using the `--cloud-provider` component flag. If `spec.loadBalancerClass` is specified, it is assumed that a load balancer implementation that matches the specified class is watching for Services. Any default load balancer implementation (for example, the one provided by the cloud provider) will ignore Services that have this field set. `spec.loadBalancerClass` can be set on a Service of type `LoadBalancer` only. -Once set, it cannot be changed. +Once set, it cannot be changed. --> `spec.loadBalancerClass` 允许你不使用云提供商的默认负载均衡器实现,转而使用指定的负载均衡器实现。 默认情况下,`.spec.loadBalancerClass` 的取值是 `nil`,如果集群使用 `--cloud-provider` 配置了云提供商, @@ -1469,7 +1481,8 @@ Unprefixed names are reserved for end-users. In a mixed environment it is sometimes necessary to route traffic from Services inside the same (virtual) network address block. -In a split-horizon DNS environment you would need two Services to be able to route both external and internal traffic to your endpoints. +In a split-horizon DNS environment you would need two Services to be able to route both external +and internal traffic to your endpoints. To set an internal load balancer, add one of the following annotations to your Service depending on the cloud Service provider you're using. @@ -1667,7 +1680,9 @@ TCP 和 SSL 选择第4层代理:ELB 转发流量而不修改报头。 In the above example, if the Service contained three ports, `80`, `443`, and `8443`, then `443` and `8443` would use the SSL certificate, but `80` would be proxied HTTP. -From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services. +From Kubernetes v1.9 onwards you can use +[predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) +with HTTPS or SSL listeners for your Services. To see which policies are available for use, you can use the `aws` command line tool: --> 在上例中,如果服务包含 `80`、`443` 和 `8443` 三个端口, 那么 `443` 和 `8443` 将使用 SSL 证书, @@ -1777,7 +1792,8 @@ Connection draining for Classic ELBs can be managed with the annotation `service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled` set to the value of `"true"`. The annotation `service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout` can -also be used to set maximum time, in seconds, to keep the existing connections open before deregistering the instances. 
+also be used to set maximum time, in seconds, to keep the existing connections open before +deregistering the instances. --> #### AWS 上的连接排空 @@ -1879,7 +1895,8 @@ To use a Network Load Balancer on AWS, use the annotation `service.beta.kubernet {{< note >}} NLB 仅适用于某些实例类。有关受支持的实例类型的列表, @@ -1901,9 +1918,9 @@ the NLB Target Group's health check on the auto-assigned `.spec.healthCheckNodePort` and not receive any traffic. --> 与经典弹性负载均衡器不同,网络负载均衡器(NLB)将客户端的 IP 地址转发到该节点。 -如果服务的 `.spec.externalTrafficPolicy` 设置为 `Cluster` ,则客户端的IP地址不会传达到最终的 Pod。 +如果服务的 `.spec.externalTrafficPolicy` 设置为 `Cluster` ,则客户端的 IP 地址不会传达到最终的 Pod。 -通过将 `.spec.externalTrafficPolicy` 设置为 `Local`,客户端IP地址将传播到最终的 Pod, +通过将 `.spec.externalTrafficPolicy` 设置为 `Local`,客户端 IP 地址将传播到最终的 Pod, 但这可能导致流量分配不均。 没有针对特定 LoadBalancer 服务的任何 Pod 的节点将无法通过自动分配的 `.spec.healthCheckNodePort` 进行 NLB 目标组的运行状况检查,并且不会收到任何流量。 @@ -2066,7 +2083,8 @@ spec: {{< note >}} @@ -2091,9 +2109,13 @@ Service's `type`. {{< warning >}} 对于一些常见的协议,包括 HTTP 和 HTTPS,你使用 ExternalName 可能会遇到问题。 如果你使用 ExternalName,那么集群内客户端使用的主机名与 ExternalName 引用的名称不同。 @@ -2191,7 +2213,7 @@ The previous information should be sufficient for many people who want to use Services. However, there is a lot going on behind the scenes that may be worth understanding. --> -## 虚拟IP实施 {#the-gory-details-of-virtual-ips} +## 虚拟 IP 实施 {#the-gory-details-of-virtual-ips} 对很多想使用 Service 的人来说,前面的信息应该足够了。 然而,有很多内部原理性的内容,还是值去理解的。 @@ -2219,7 +2241,7 @@ fail with a message indicating an IP address could not be allocated. In the control plane, a background controller is responsible for creating that map (needed to support migrating from older versions of Kubernetes that used in-memory locking). Kubernetes also uses controllers to check for invalid -assignments (eg due to administrator intervention) and for cleaning up allocated +assignments (e.g. due to administrator intervention) and for cleaning up allocated IP addresses that are no longer used by any Services. --> ### 避免冲突 {#avoiding-collisions} @@ -2374,8 +2396,11 @@ through a load-balancer, though in those cases the client IP does get altered. #### IPVS 在大规模集群(例如 10000 个服务)中,iptables 操作会显着降低速度。 IPVS 专为负载均衡而设计,并基于内核内哈希表。 @@ -2386,14 +2411,15 @@ IPVS 专为负载均衡而设计,并基于内核内哈希表。 ## API Object Service is a top-level resource in the Kubernetes REST API. You can find more details -about the API object at: [Service API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core). +about the [Service API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core). ## Supported protocols {#protocol-support} --> ## API 对象 {#api-object} -Service 是 Kubernetes REST API 中的顶级资源。你可以在以下位置找到有关 API 对象的更多详细信息: -[Service 对象 API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core). +Service 是 Kubernetes REST API 中的顶级资源。你可以找到有关 +[Service 对象 API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core) +的更多详细信息。 ## 受支持的协议 {#protocol-support} @@ -2437,11 +2463,12 @@ provider offering this facility. (Most do not). {{< warning >}} -支持多宿主SCTP关联要求 CNI 插件能够支持为一个 Pod 分配多个接口和 IP 地址。 +支持多宿主 SCTP 关联要求 CNI 插件能够支持为一个 Pod 分配多个接口和 IP 地址。 用于多宿主 SCTP 关联的 NAT 在相应的内核模块中需要特殊的逻辑。 {{< /warning >}} @@ -2483,7 +2510,7 @@ HTTP/HTTPS 反向代理,并将其转发到该服务的 Endpoints。 {{< note >}} 你还可以使用 {{< glossary_tooltip text="Ingress" term_id="ingress" >}} 代替 Service 来公开 HTTP/HTTPS 服务。 @@ -2522,11 +2549,10 @@ followed by the data from the client. 
## {{% heading "whatsnext" %}} -* 阅读[使用服务访问应用](/zh-cn/docs/concepts/services-networking/connect-applications-service/) +* 遵循[使用 Service 连接到应用](/zh-cn/docs/tutorials/services/connect-applications-service/)教程 * 阅读了解 [Ingress](/zh-cn/docs/concepts/services-networking/ingress/) * 阅读了解[端点切片(Endpoint Slices)](/zh-cn/docs/concepts/services-networking/endpoint-slices/) - diff --git a/content/zh-cn/docs/concepts/storage/projected-volumes.md b/content/zh-cn/docs/concepts/storage/projected-volumes.md index bf61134654..286e089e24 100644 --- a/content/zh-cn/docs/concepts/storage/projected-volumes.md +++ b/content/zh-cn/docs/concepts/storage/projected-volumes.md @@ -51,8 +51,8 @@ Currently, the following types of volume sources can be projected: All sources are required to be in the same namespace as the Pod. For more details, see the [all-in-one volume](https://git.k8s.io/design-proposals-archive/node/all-in-one-volume.md) design document. --> -所有的卷源都要求处于 Pod 所在的同一个名字空间内。进一步的详细信息,可参考 -[一体化卷](https://git.k8s.io/design-proposals-archive/node/all-in-one-volume.md)设计文档。 +所有的卷源都要求处于 Pod 所在的同一个名字空间内。更多详细信息, +可参考[一体化卷](https://git.k8s.io/design-proposals-archive/node/all-in-one-volume.md)设计文档。 ## serviceAccountToken 投射卷 {#serviceaccounttoken} -当 `TokenRequestProjection` 特性被启用时,你可以将当前 -[服务账号](/zh-cn/docs/reference/access-authn-authz/authentication/#service-account-tokens) -的令牌注入到 Pod 中特定路径下。例如: +你可以将当前[服务账号](/zh-cn/docs/reference/access-authn-authz/authentication/#service-account-tokens)的令牌注入到 +Pod 中特定路径下。例如: {{< codenew file="pods/storage/projected-service-account-token.yaml" >}} @@ -159,6 +157,39 @@ ownership. 中设置了 `RunAsUser` 属性的 Linux Pod 中,投射文件具有正确的属主属性设置, 其中包含了容器用户属主。 + +当 Pod 中的所有容器在其 +[`PodSecurityContext`](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context) +或容器 +[`SecurityContext`](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context-1) +中设置了相同的 `runAsUser` 时,kubelet 将确保 `serviceAccountToken` +卷的内容归该用户所有,并且令牌文件的权限模式会被设置为 `0600`。 + +{{< note >}} + +在某 Pod 被创建后为其添加的{{< glossary_tooltip text="临时容器" term_id="ephemeral-container" >}}**不会**更改创建该 +Pod 时设置的卷权限。 + +如果 Pod 的 `serviceAccountToken` 卷权限被设为 `0600` +是因为 Pod 中的其他所有容器都具有相同的 `runAsUser`, +则临时容器必须使用相同的 `runAsUser` 才能读取令牌。 +{{< /note >}} + ### Windows 在 Kubernetes 中,**卷快照** 是一个存储系统上卷的快照,本文假设你已经熟悉了 Kubernetes 的[持久卷](/zh-cn/docs/concepts/storage/persistent-volumes/)。 @@ -23,34 +31,45 @@ In Kubernetes, a _VolumeSnapshot_ represents a snapshot of a volume on a storage - ## 介绍 {#introduction} 与 `PersistentVolume` 和 `PersistentVolumeClaim` 这两个 API 资源用于给用户和管理员制备卷类似, `VolumeSnapshotContent` 和 `VolumeSnapshot` 这两个 API 资源用于给用户和管理员创建卷快照。 `VolumeSnapshotContent` 是从一个卷获取的一种快照,该卷由管理员在集群中进行制备。 就像持久卷(PersistentVolume)是集群的资源一样,它也是集群中的资源。 `VolumeSnapshot` 是用户对于卷的快照的请求。它类似于持久卷声明(PersistentVolumeClaim)。 `VolumeSnapshotClass` 允许指定属于 `VolumeSnapshot` 的不同属性。在从存储系统的相同卷上获取的快照之间, 这些属性可能有所不同,因此不能通过使用与 `PersistentVolumeClaim` 相同的 `StorageClass` 来表示。 卷快照能力为 Kubernetes 用户提供了一种标准的方式来在指定时间点复制卷的内容,并且不需要创建全新的卷。 例如,这一功能使得数据库管理员能够在执行编辑或删除之类的修改之前对数据库执行备份。 @@ -61,34 +80,49 @@ Users need to be aware of the following when using this feature: 当使用该功能时,用户需要注意以下几点: -* API 对象 `VolumeSnapshot`,`VolumeSnapshotContent` 和 `VolumeSnapshotClass` +- API 对象 `VolumeSnapshot`,`VolumeSnapshotContent` 和 `VolumeSnapshotClass` 是 {{< glossary_tooltip term_id="CustomResourceDefinition" text="CRD" >}}, 不属于核心 API。 -* `VolumeSnapshot` 支持仅可用于 CSI 驱动。 -* 作为 `VolumeSnapshot` 部署过程的一部分,Kubernetes 团队提供了一个部署于控制平面的快照控制器, +- `VolumeSnapshot` 支持仅可用于 CSI 驱动。 +- 作为 
`VolumeSnapshot` 部署过程的一部分,Kubernetes 团队提供了一个部署于控制平面的快照控制器, 并且提供了一个叫做 `csi-snapshotter` 的边车(Sidecar)辅助容器,和 CSI 驱动程序一起部署。 快照控制器监视 `VolumeSnapshot` 和 `VolumeSnapshotContent` 对象, 并且负责创建和删除 `VolumeSnapshotContent` 对象。 边车 csi-snapshotter 监视 `VolumeSnapshotContent` 对象, 并且触发针对 CSI 端点的 `CreateSnapshot` 和 `DeleteSnapshot` 的操作。 -* 还有一个验证性质的 Webhook 服务器,可以对快照对象进行更严格的验证。 +- 还有一个验证性质的 Webhook 服务器,可以对快照对象进行更严格的验证。 Kubernetes 发行版应将其与快照控制器和 CRD(而非 CSI 驱动程序)一起安装。 此服务器应该安装在所有启用了快照功能的 Kubernetes 集群中。 -* CSI 驱动可能实现,也可能没有实现卷快照功能。CSI 驱动可能会使用 csi-snapshotter +- CSI 驱动可能实现,也可能没有实现卷快照功能。CSI 驱动可能会使用 csi-snapshotter 来提供对卷快照的支持。详见 [CSI 驱动程序文档](https://kubernetes-csi.github.io/docs/) -* Kubernetes 负责 CRD 和快照控制器的安装。 +- Kubernetes 负责 CRD 和快照控制器的安装。 ## 卷快照和卷快照内容的生命周期 {#lifecycle-of-a-volume-snapshot-and-volume-snapshot-content} @@ -106,7 +140,10 @@ There are two ways snapshots may be provisioned: pre-provisioned or dynamically #### 预制备 {#static} @@ -116,7 +153,9 @@ A cluster administrator creates a number of `VolumeSnapshotContents`. They carry #### 动态制备 {#dynamic} @@ -127,7 +166,9 @@ Instead of using a pre-existing snapshot, you can request that a snapshot to be ### 绑定 {#binding} @@ -135,7 +176,8 @@ The snapshot controller handles the binding of a `VolumeSnapshot` object with an 绑定关系是一对一的。 在预制备快照绑定场景下,`VolumeSnapshotContent` 对象创建之后,才会和 `VolumeSnapshot` 进行绑定。 @@ -144,31 +186,32 @@ In the case of pre-provisioned binding, the VolumeSnapshot will remain unbound u The purpose of this protection is to ensure that in-use {{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}} -API objects are not removed from the system while a snapshot is being taken from it (as this may result in data loss). - +API objects are not removed from the system while a snapshot is being taken from it +(as this may result in data loss). --> -### 快照源的持久性卷声明保护 +### 快照源的持久性卷声明保护 {#persistent-volume-claim-as-snapshot-source-protection} 这种保护的目的是确保在从系统中获取快照时,不会将正在使用的 - {{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}} - API 对象从系统中删除(因为这可能会导致数据丢失)。 +{{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}} +API 对象从系统中删除(因为这可能会导致数据丢失)。 -如果一个 PVC 正在被快照用来作为源进行快照创建,则该 PVC 是使用中的。如果用户删除正作为快照源的 PVC API 对象, -则 PVC 对象不会立即被删除掉。相反,PVC 对象的删除将推迟到任何快照不在主动使用它为止。 -当快照的 `Status` 中的 `ReadyToUse`值为 `true` 时,PVC 将不再用作快照源。 - -当从 `PersistentVolumeClaim` 中生成快照时,`PersistentVolumeClaim` 就在被使用了。 -如果删除一个作为快照源的 `PersistentVolumeClaim` 对象,这个 `PersistentVolumeClaim` 对象不会立即被删除的。 -相反,删除 `PersistentVolumeClaim` 对象的动作会被放弃,或者推迟到快照的 Status 为 ReadyToUse 时再执行。 +在为某 `PersistentVolumeClaim` 生成快照时,该 `PersistentVolumeClaim` 处于被使用状态。 +如果删除一个正作为快照源使用的 `PersistentVolumeClaim` API 对象,该 `PersistentVolumeClaim` 对象不会立即被移除。 +相反,移除 `PersistentVolumeClaim` 对象的动作会被推迟,直到快照状态变为 ReadyToUse 或快照操作被中止时再执行。 ### 删除 {#delete} @@ -197,11 +240,13 @@ spec: ``` `persistentVolumeClaimName` 是 `PersistentVolumeClaim` 数据源对快照的名称。 这个字段是动态制备快照中的必填字段。 @@ -210,7 +255,9 @@ using the attribute `volumeSnapshotClassName`. 
If nothing is set, then the defau 使用 `volumeSnapshotClassName` 属性来请求特定类。如果没有设置,那么使用默认类(如果有)。 如下面例子所示,对于预制备的快照,需要给快照指定 `volumeSnapshotContentName` 作为来源。 对于预制备的快照 `source` 中的`volumeSnapshotContentName` 字段是必填的。 @@ -228,9 +275,11 @@ spec: +## 卷快照内容 {#volume-snapshot-contents} + 每个 VolumeSnapshotContent 对象包含 spec 和 status。 在动态制备时,快照通用控制器创建 `VolumeSnapshotContent` 对象。下面是例子: @@ -253,11 +302,16 @@ spec: ``` -`volumeHandle` 是存储后端创建卷的唯一标识符,在卷创建期间由 CSI 驱动程序返回。动态设置快照需要此字段。它指出了快照的卷源。 +`volumeHandle` 是存储后端创建卷的唯一标识符,在卷创建期间由 CSI 驱动程序返回。 +动态设置快照需要此字段。它指出了快照的卷源。 对于预制备快照,你(作为集群管理员)要按如下命令来创建 `VolumeSnapshotContent` 对象。 @@ -276,23 +330,29 @@ spec: name: new-snapshot-test namespace: default ``` + -`snapshotHandle` 是存储后端创建卷的唯一标识符。对于预设置快照,这个字段是必须的。 +`snapshotHandle` 是存储后端创建卷的唯一标识符。对于预制备的快照,这个字段是必需的。 它指定此 `VolumeSnapshotContent` 表示的存储系统上的 CSI 快照 ID。 `sourceVolumeMode` 是创建快照的卷的模式。`sourceVolumeMode` 字段的值可以是 `Filesystem` 或 `Block`。如果没有指定源卷模式,Kubernetes 会将快照视为未知的源卷模式。 `volumeSnapshotRef` 字段是对相应的 `VolumeSnapshot` 的引用。 请注意,当 `VolumeSnapshotContent` 被创建为预配置快照时。 @@ -314,8 +374,8 @@ To check if your cluster has capability for this feature, run the following comm 要检查你的集群是否具有此特性的能力,可以运行如下命令: -```yaml -$ kubectl get crd volumesnapshotcontent -o yaml +```shell +kubectl get crd volumesnapshotcontent -o yaml ``` -你可以制备一个新卷,该卷预填充了快照中的数据,在 `持久卷声明` 对象中使用 **dataSource** 字段。 +你可以制备一个新卷,该卷预填充了快照中的数据,在 `PersistentVolumeClaim` 对象中使用 **dataSource** 字段。 -更多详细信息,请参阅 -[卷快照和从快照还原卷](/zh-cn/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support)。 +更多详细信息, +请参阅[卷快照和从快照还原卷](/zh-cn/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support)。 diff --git a/content/zh-cn/docs/concepts/workloads/controllers/replicaset.md b/content/zh-cn/docs/concepts/workloads/controllers/replicaset.md index 62aa1b9c14..ddf1c38cf5 100644 --- a/content/zh-cn/docs/concepts/workloads/controllers/replicaset.md +++ b/content/zh-cn/docs/concepts/workloads/controllers/replicaset.md @@ -1,5 +1,12 @@ --- title: ReplicaSet +feature: + title: 自我修复 + anchor: ReplicationController 如何工作 + description: > + 重新启动失败的容器,在节点死亡时替换并重新调度容器, + 杀死不响应用户定义的健康检查的容器, + 并且在它们准备好服务之前不会将它们公布给客户端。 content_type: concept weight: 20 --- @@ -9,6 +16,13 @@ reviewers: - bprashanth - madhusudancs title: ReplicaSet +feature: + title: Self-healing + anchor: How a ReplicaSet works + description: > + Restarts containers that fail, replaces and reschedules containers when nodes die, + kills containers that don't respond to your user-defined health check, + and doesn't advertise them to clients until they are ready to serve. 
content_type: concept weight: 20 --> diff --git a/content/zh-cn/docs/contribute/participate/roles-and-responsibilities.md b/content/zh-cn/docs/contribute/participate/roles-and-responsibilities.md index 8df7b4cddf..21ee1ac9d5 100644 --- a/content/zh-cn/docs/contribute/participate/roles-and-responsibilities.md +++ b/content/zh-cn/docs/contribute/participate/roles-and-responsibilities.md @@ -183,7 +183,10 @@ After submitting at least 5 substantial pull requests and meeting the other [req @@ -356,7 +360,7 @@ Approvers and SIG Docs leads are the only ones who can merge pull requests into 不小心的合并可能会破坏整个站点。在执行合并操作时,务必小心。 {{< /warning >}} -- 确保所提议的变更满足[贡献指南](/zh-cn/docs/contribute/style/content-guide/#contributing-content)要求。 +- 确保所提议的变更满足[文档内容指南](/zh-cn/docs/contribute/style/content-guide/)要求。 如果有问题或者疑惑,可以根据需要请他人帮助评审。 diff --git a/content/zh-cn/docs/reference/access-authn-authz/service-accounts-admin.md b/content/zh-cn/docs/reference/access-authn-authz/service-accounts-admin.md index ad57c17754..d9d6673592 100644 --- a/content/zh-cn/docs/reference/access-authn-authz/service-accounts-admin.md +++ b/content/zh-cn/docs/reference/access-authn-authz/service-accounts-admin.md @@ -3,34 +3,60 @@ title: 管理服务账号 content_type: concept weight: 50 --- - + -这是一篇针对服务账号的集群管理员指南。 -你应该熟悉[配置 Kubernetes 服务账号](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/)。 +**ServiceAccount** 为 Pod 中运行的进程提供了一个身份。 -对鉴权和用户账号的支持已在规划中,当前并不完备。 -为了更好地描述服务账号,有时这些不完善的特性也会被提及。 +Pod 内的进程可以使用其关联服务账号的身份,向集群的 API 服务器进行身份认证。 + + +有关服务账号的介绍, +请参阅[配置服务账号](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/)。 + +本任务指南阐述有关 ServiceAccount 的几个概念。 +本指南还讲解如何获取或撤销代表 ServiceAccount 的令牌。 +## {{% heading "prerequisites" %}} + +{{< include "task-tutorial-prereqs.md" >}} + + +为了能够准确地跟随这些步骤,确保你有一个名为 `examplens` 的名字空间。 +如果你没有,运行以下命令创建一个名字空间: + +```shell +kubectl create namespace examplens +``` + +- 用户账号是针对人而言的。而服务账号是针对运行在 Pod 中的应用进程而言的, + 在 Kubernetes 中这些进程运行在容器中,而容器是 Pod 的一部分。 +- 用户账号是全局性的。其名称在某集群中的所有名字空间中必须是唯一的。 + 无论你查看哪个名字空间,代表用户的特定用户名都代表着同一个用户。 + 在 Kubernetes 中,服务账号是名字空间作用域的。 + 两个不同的名字空间可以包含具有相同名称的 ServiceAccount。 + +- 通常情况下,集群的用户账号可能会从企业数据库进行同步,创建新用户需要特殊权限,并且涉及到复杂的业务流程。 + 服务账号创建有意做得更轻量,允许集群用户为了具体的任务按需创建服务账号。 + 将 ServiceAccount 的创建与新用户注册的步骤分离开来,使工作负载更易于遵从权限最小化原则。 + -- 用户账号是针对人而言的。而服务账号是针对运行在 Pod 中的进程而言的。 -- 用户账号是全局性的。其名称在某集群中的所有名字空间中必须是唯一的。服务账号是名字空间作用域的。 -- 通常情况下,集群的用户账号可能会从企业数据库进行同步,其创建需要特殊权限, - 并且涉及到复杂的业务流程。 - 服务账号创建有意做得更轻量,允许集群用户为了具体的任务创建服务账号以遵从权限最小化原则。 -- 对人员和服务账号审计所考虑的因素可能不同。 +- 对人员和服务账号审计所考虑的因素可能不同;这种分离更容易区分不同之处。 - 针对复杂系统的配置包可能包含系统组件相关的各种服务账号的定义。 - 因为服务账号的创建约束不多并且有名字空间域的名称,这种配置是很轻量的。 - - -## 服务账号的自动化 {#service-account-automation} - -以下三个独立组件协作完成服务账号相关的自动化: - -- `ServiceAccount` 准入控制器 -- Token 控制器 -- `ServiceAccount` 控制器 + 因为服务账号的创建约束不多并且有名字空间域的名称,所以这种配置通常是轻量的。 -### ServiceAccount 准入控制器 {#serviceaccount-admission-controller} - -对 Pod 的改动通过一个被称为[准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/)的插件来实现。 -它是 API 服务器的一部分。当 Pod 被创建或更新时,它会同步地修改 Pod。 -如果该插件处于激活状态(在大多数发行版中都是默认激活的), -当 Pod 被创建或更新时它会进行以下操作: - - -1. 如果该 Pod 没有设置 `ServiceAccount`,将其 `ServiceAccount` 设为 `default`。 -1. 保证 Pod 所引用的 `ServiceAccount` 确实存在,否则拒绝该 Pod。 -1. 如果服务账号的 `automountServiceAccountToken` 或 Pod 的 - `automountServiceAccountToken` 都未显式设置为 `false`,则为 Pod 创建一个 - `volume`,在其中包含用来访问 API 的令牌。 -1. 如果前一步中为服务账号令牌创建了卷,则为 Pod 中的每个容器添加一个 - `volumeSource`,挂载在其 `/var/run/secrets/kubernetes.io/serviceaccount` - 目录下。 -1. 
如果 Pod 不包含 `imagePullSecrets` 设置,将 `ServiceAccount` - 所引用的服务账号中的 `imagePullSecrets` 信息添加到 Pod 中。 - - -#### 绑定的服务账号令牌卷 {#bound-service-account-token-volume} +## 绑定的服务账号令牌卷机制 {#bound-service-account-token-volume} {{< feature-state for_k8s_version="v1.22" state="stable" >}} -ServiceAccount 准入控制器将添加如下投射卷, -而不是为令牌控制器所生成的不过期的服务账号令牌而创建的基于 Secret 的卷。 +默认情况下,Kubernetes 控制平面(特别是 [ServiceAccount 准入控制器](#service-account-admission-controller)) +添加一个[投射卷](/zh-cn/docs/concepts/storage/projected-volumes/)到 Pod, +此卷包括了访问 Kubernetes API 的令牌。 + +以下示例演示如何查找已启动的 Pod: ```yaml -- name: kube-api-access-<随机后缀> - projected: - defaultMode: 420 # 0644 - sources: - - serviceAccountToken: - expirationSeconds: 3607 - path: token - - configMap: - items: - - key: ca.crt - path: ca.crt - name: kube-root-ca.crt - - downwardAPI: - items: - - fieldRef: - apiVersion: v1 - fieldPath: metadata.namespace - path: namespace +... + - name: kube-api-access-<随机后缀> + projected: + sources: + - serviceAccountToken: + path: token # 必须与应用所预期的路径匹配 + - configMap: + items: + - key: ca.crt + path: ca.crt + name: kube-root-ca.crt + - downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + path: namespace ``` -此投射卷有三个数据源: +该清单片段定义了由三个数据源组成的投射卷。在当前场景中,每个数据源也代表该卷内的一条独立路径。这三个数据源是: -1. 通过 TokenRequest API 从 kube-apiserver 处获得的 `serviceAccountToken`。 - 这一令牌默认会在一个小时之后或者 Pod 被删除时过期。 - 该令牌绑定到 Pod 上,并将其 audience(受众)设置为与 `kube-apiserver` 的 audience 相匹配。 -1. 包含用来验证与 kube-apiserver 连接的 CA 证书包的 `configMap` 对象。 -1. 引用 Pod 名字空间的一个 `downwardAPI`。 +1. `serviceAccountToken` 数据源,包含 kubelet 从 kube-apiserver 获取的令牌。 + kubelet 使用 TokenRequest API 获取有时间限制的令牌。为 TokenRequest 服务的这个令牌会在 + Pod 被删除或定义的生命周期(默认为 1 小时)结束之后过期。该令牌绑定到特定的 Pod, + 并将其 audience(受众)设置为与 `kube-apiserver` 的 audience 相匹配。 + 这种机制取代了之前基于 Secret 添加卷的机制,之前 Secret 代表了针对 Pod 的 ServiceAccount 但不会过期。 +1. `configMap` 数据源。ConfigMap 包含一组证书颁发机构数据。 + Pod 可以使用这些证书来确保自己连接到集群的 kube-apiserver(而不是连接到中间件或意外配置错误的对等点上)。 +1. `downwardAPI` 数据源,用于查找包含 Pod 的名字空间的名称, + 并使该名称信息可用于在 Pod 内运行的应用程序代码。 -参阅[投射卷](/zh-cn/docs/tasks/configure-pod-container/configure-projected-volume-storage/)了解进一步的细节。 +Pod 内挂载这个特定卷的所有容器都可以访问上述信息。 + +{{< note >}} + +没有特定的机制可以使通过 TokenRequest 签发的令牌无效。如果你不再信任为某个 Pod 绑定的服务账号令牌, +你可以删除该 Pod。删除 Pod 将使其绑定的服务账号令牌过期。 +{{< /note >}} +## 手动管理 ServiceAccount 的 Secret {#manual-secret-management-for-serviceaccounts} -- watches ServiceAccount creation and creates a corresponding - ServiceAccount token Secret to allow API access. 
-- watches ServiceAccount deletion and deletes all corresponding ServiceAccount +v1.22 之前的 Kubernetes 版本会自动创建凭据访问 Kubernetes API。 +这种更老的机制基于先创建令牌 Secret,然后将其挂载到正运行的 Pod 中。 + + +在包括 Kubernetes v{{< skew currentVersion >}} 在内最近的几个版本中,使用 +[TokenRequest](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) API +[直接获得](#bound-service-account-token-volume) API 凭据,并使用投射卷挂载到 Pod 中。 +使用这种方法获得的令牌具有绑定的生命周期,当挂载的 Pod 被删除时这些令牌将自动失效。 + + +你仍然可以[手动创建](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount) +Secret 来保存服务账号令牌;例如在你需要一个永不过期的令牌的时候。 + +一旦你手动创建一个 Secret 并将其关联到 ServiceAccount,Kubernetes 控制平面就会自动将令牌填充到该 Secret 中。 + +{{< note >}} + +尽管存在手动创建长久 ServiceAccount 令牌的机制,但还是推荐使用 +[TokenRequest](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) +获得短期的 API 访问令牌。 +{{< /note >}} + + -### Token 控制器 {#token-controller} +## 控制平面细节 {#control-plane-details} -TokenController 作为 `kube-controller-manager` 的一部分运行,以异步的形式工作。 +### 令牌控制器 {#token-controller} + +服务账号令牌控制器作为 `kube-controller-manager` 的一部分运行,以异步的形式工作。 其职责包括: -- 监测 ServiceAccount 的创建并创建相应的服务账号令牌 Secret 以允许访问 API。 - 监测 ServiceAccount 的删除并删除所有相应的服务账号令牌 Secret。 - 监测服务账号令牌 Secret 的添加,保证相应的 ServiceAccount 存在,如有需要, 向 Secret 中添加令牌。 @@ -217,57 +277,374 @@ verify the tokens during authentication. kube-apiserver。公钥用于在身份认证过程中校验令牌。 -#### 创建额外的 API 令牌 {#to-create-additional-api-tokens} +### ServiceAccount 准入控制器 {#serviceaccount-admission-controller} -控制器中有专门的循环来保证每个 ServiceAccount 都存在对应的包含 API 令牌的 Secret。 -当需要为 ServiceAccount 创建额外的 API 令牌时,可以创建一个类型为 -`kubernetes.io/service-account-token` 的 Secret,并在其注解中引用对应的 -ServiceAccount。控制器会生成令牌并更新该 Secret: +对 Pod 的改动通过一个被称为[准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/)的插件来实现。 +它是 API 服务器的一部分。当 Pod 被创建时,该准入控制器会同步地修改 Pod。 +如果该插件处于激活状态(在大多数发行版中都是默认激活的),当 Pod 被创建时它会进行以下操作: -下面是这种 Secret 的一个示例配置: + +1. 如果该 Pod 没有设置 `.spec.serviceAccountName`, + 准入控制器为新来的 Pod 将 ServiceAccount 的名称设为 `default`。 +2. 准入控制器保证新来的 Pod 所引用的 ServiceAccount 确实存在。 + 如果没有 ServiceAccount 具有匹配的名称,则准入控制器拒绝新来的 Pod。 + 这个检查甚至适用于 `default` ServiceAccount。 + +3. 如果服务账号的 `automountServiceAccountToken` 字段或 Pod 的 + `automountServiceAccountToken` 字段都未显式设置为 `false`: + - 准入控制器变更新来的 Pod,添加一个包含 API + 访问令牌的额外{{< glossary_tooltip text="卷" term_id="volume" >}}。 + - 准入控制器将 `volumeMount` 添加到 Pod 中的每个容器, + 忽略已为 `/var/run/secrets/kubernetes.io/serviceaccount` 路径定义的卷挂载的所有容器。 + 对于 Linux 容器,此卷挂载在 `/var/run/secrets/kubernetes.io/serviceaccount`; + 在 Windows 节点上,此卷挂载在等价的路径上。 +4. 如果新来 Pod 的规约已包含任何 `imagePullSecrets`,则准入控制器添加 `imagePullSecrets`, + 并从 `ServiceAccount` 进行复制。 + +### TokenRequest API + +{{< feature-state for_k8s_version="v1.22" state="stable" >}} + + +你使用 ServiceAccount 的 +[TokenRequest](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) +子资源为该 ServiceAccount 获取有时间限制的令牌。 +你不需要调用它来获取在容器中使用的 API 令牌,因为 kubelet 使用 **投射卷** 对此进行了设置。 + +如果你想要从 `kubectl` 使用 TokenRequest API, +请参阅[为 ServiceAccount 手动创建 API 令牌](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount)。 + + +Kubernetes 控制平面(特别是 ServiceAccount 准入控制器)向 Pod 添加了一个投射卷, +kubelet 确保该卷包含允许容器作为正确 ServiceAccount 进行身份认证的令牌。 + +(这种机制取代了之前基于 Secret 添加卷的机制,之前 Secret 代表了 Pod 所用的 ServiceAccount 但不会过期。) + +以下示例演示如何查找已启动的 Pod: + +```yaml +... 
+ - name: kube-api-access- + projected: + defaultMode: 420 # 这个十进制数等同于八进制 0644 + sources: + - serviceAccountToken: + expirationSeconds: 3607 + path: token + - configMap: + items: + - key: ca.crt + path: ca.crt + name: kube-root-ca.crt + - downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + path: namespace +``` + + +该清单片段定义了由三个数据源信息组成的投射卷。 + +1. `serviceAccountToken` 数据源,包含 kubelet 从 kube-apiserver 获取的令牌。 + kubelet 使用 TokenRequest API 获取有时间限制的令牌。为 TokenRequest 服务的这个令牌会在 + Pod 被删除或定义的生命周期(默认为 1 小时)结束之后过期。该令牌绑定到特定的 Pod, + 并将其 audience(受众)设置为与 `kube-apiserver` 的 audience 相匹配。 +1. `configMap` 数据源。ConfigMap 包含一组证书颁发机构数据。 + Pod 可以使用这些证书来确保自己连接到集群的 kube-apiserver(而不是连接到中间件或意外配置错误的对等点上)。 +1. `downwardAPI` 数据源。这个 `downwardAPI` 卷获得包含 Pod 的名字空间的名称, + 并使该名称信息可用于在 Pod 内运行的应用程序代码。 + + +挂载此卷的 Pod 内的所有容器均可以访问上述信息。 + +## 创建额外的 API 令牌 {#create-token} + +{{< caution >}} + +只有[令牌请求](#tokenrequest-api)机制不合适,才需要创建长久的 API 令牌。 +令牌请求机制提供有时间限制的令牌;因为随着这些令牌过期,它们对信息安全方面的风险也会降低。 +{{< /caution >}} + + +要为 ServiceAccount 创建一个不过期、持久化的 API 令牌, +请创建一个类型为 `kubernetes.io/service-account-token` 的 Secret,附带引用 ServiceAccount 的注解。 +控制平面随后生成一个长久的令牌,并使用生成的令牌数据更新该 Secret。 + +以下是此类 Secret 的示例清单: + +{{< codenew file="secret/serviceaccount/mysecretname.yaml" >}} + + +若要基于此示例创建 Secret,运行以下命令: + +```shell +kubectl -n examplens create -f https://k8s.io/examples/secret/serviceaccount/mysecretname.yaml +``` + + +若要查看该 Secret 的详细信息,运行以下命令: + +```shell +kubectl -n examplens describe secret mysecretname +``` + + +输出类似于: + +``` +Name: mysecretname +Namespace: examplens +Labels: +Annotations: kubernetes.io/service-account.name=myserviceaccount + kubernetes.io/service-account.uid=8a85c4c4-8483-11e9-bc42-526af7764f64 + +Type: kubernetes.io/service-account-token + +Data +==== +ca.crt: 1362 bytes +namespace: 9 bytes +token: ... 
+``` + + +如果你在 `examplens` 名字空间中启动新的 Pod,可以使用你刚刚创建的 +`myserviceaccount` service-account-token Secret。 + + +## 删除/废止 ServiceAccount 令牌 {#delete-token} + +如果你知道 Secret 的名称且该 Secret 包含要移除的令牌: + +```shell +kubectl delete secret name-of-secret +``` + + +否则,先找到 ServiceAccount 所用的 Secret。 + +```shell +# 此处假设你已有一个名为 'examplens' 的名字空间 +kubectl -n examplens get serviceaccount/example-automated-thing -o yaml +``` + + +输出类似于: ```yaml apiVersion: v1 -kind: Secret +kind: ServiceAccount metadata: - name: mysecretname annotations: - kubernetes.io/service-account.name: myserviceaccount -type: kubernetes.io/service-account-token -``` - -```shell -kubectl create -f ./secret.yaml -kubectl describe secret mysecretname + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"example-automated-thing","namespace":"examplens"}} + creationTimestamp: "2019-07-21T07:07:07Z" + name: example-automated-thing + namespace: examplens + resourceVersion: "777" + selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing + uid: f23fd170-66f2-4697-b049-e1e266b7f835 +secrets: +- name: example-automated-thing-token-zyxwv ``` -#### 删除/废止服务账号令牌 Secret +随后删除你现在知道名称的 Secret: ```shell -kubectl delete secret mysecretname +kubectl -n examplens delete secret/example-automated-thing-token-zyxwv ``` +控制平面发现 ServiceAccount 缺少其 Secret,并创建一个替代项: + +```shell +kubectl -n examplens get serviceaccount/example-automated-thing -o yaml +``` + +```yaml +apiVersion: v1 +kind: ServiceAccount +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"example-automated-thing","namespace":"examplens"}} + creationTimestamp: "2019-07-21T07:07:07Z" + name: example-automated-thing + namespace: examplens + resourceVersion: "1026" + selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing + uid: f23fd170-66f2-4697-b049-e1e266b7f835 +secrets: +- name: example-automated-thing-token-4rdrh +``` + + +## 清理 {#clean-up} + +如果创建了一个 `examplens` 名字空间进行试验,你可以移除它: + +```shell +kubectl delete namespace examplens +``` + + -### 服务账号控制器 {#serviceaccount-controller} +## 控制平面细节 {#control-plane-details} -服务账号控制器管理各名字空间下的 ServiceAccount 对象, -并且保证每个活跃的名字空间下存在一个名为 "default" 的 ServiceAccount。 +### ServiceAccount 控制器 {#serviceaccount-controller} +ServiceAccount 控制器管理名字空间内的 ServiceAccount,并确保每个活跃的名字空间中都存在名为 +“default” 的 ServiceAccount。 + + +### 令牌控制器 + +服务账号令牌控制器作为 `kube-controller-manager` 的一部分运行,以异步的形式工作。 +其职责包括: + +- 监测 ServiceAccount 的创建并创建相应的服务账号令牌 Secret 以允许 API 访问。 +- 监测 ServiceAccount 的删除并删除所有相应的服务账号令牌 Secret。 +- 监测服务账号令牌 Secret 的添加,保证相应的 ServiceAccount 存在,如有需要, + 向 Secret 中添加令牌。 +- 监测 Secret 的删除,如有需要,从相应的 ServiceAccount 中移除引用。 + + +你必须通过 `--service-account-private-key-file` 标志为 `kube-controller-manager` +的令牌控制器传入一个服务账号私钥文件。该私钥用于为所生成的服务账号令牌签名。 +同样地,你需要通过 `--service-account-key-file` 标志将对应的公钥通知给 +kube-apiserver。公钥用于在身份认证过程中校验令牌。 + +## {{% heading "whatsnext" %}} + + +- 查阅有关[投射卷](/zh-cn/docs/concepts/storage/projected-volumes/)的更多细节。 diff --git a/content/zh-cn/docs/reference/glossary/kube-proxy.md b/content/zh-cn/docs/reference/glossary/kube-proxy.md index ef7b661715..f49d3d90c8 100644 --- a/content/zh-cn/docs/reference/glossary/kube-proxy.md +++ b/content/zh-cn/docs/reference/glossary/kube-proxy.md @@ -31,7 +31,7 @@ tags: {{< glossary_tooltip term_id="service">}} concept. 
--> [kube-proxy](/zh-cn/docs/reference/command-line-tools-reference/kube-proxy/) -是集群中每个{{< glossary_tooltip text="节点(node)" term_id="node" >}}所上运行的网络代理, +是集群中每个{{< glossary_tooltip text="节点(node)" term_id="node" >}}上所运行的网络代理, 实现 Kubernetes {{< glossary_tooltip term_id="service">}} 概念的一部分。 diff --git a/content/zh-cn/docs/reference/labels-annotations-taints/_index.md b/content/zh-cn/docs/reference/labels-annotations-taints/_index.md index 036c5ac45a..8d11ab3bdc 100644 --- a/content/zh-cn/docs/reference/labels-annotations-taints/_index.md +++ b/content/zh-cn/docs/reference/labels-annotations-taints/_index.md @@ -1,14 +1,14 @@ --- title: 众所周知的标签、注解和污点 content_type: concept -weight: 20 +weight: 40 no_list: true --- @@ -626,6 +626,24 @@ StatefulSet topic for more details. 有关详细信息,请参阅 StatefulSet 主题中的 [Pod 名称标签](/zh-cn/docs/concepts/workloads/controllers/statefulset/#pod-name-label)。 + +### scheduler.alpha.kubernetes.io/node-selector {#schedulerkubernetesnode-selector} + +例子:`scheduler.alpha.kubernetes.io/node-selector: "name-of-node-selector"` + +用于:Namespace + +[PodNodeSelector](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#podnodeselector) +使用此注解键为名字空间中的 Pod 设置节点选择算符。 + 从指定版本开始 -: 以确切的资源版本开始 **watcH**。监视事件适用于提供的资源版本之后的所有更改。 +: 以确切的资源版本开始 **watch**。监视事件适用于提供的资源版本之后的所有更改。 与 “Get State and Start at Most Recent” 和 “Get State and Start at Any” 不同, **watch** 不会以所提供资源版本的合成 “添加” 事件启动。 由于客户端提供了资源版本,因此假定客户端已经具有起始资源版本的初始状态。 diff --git a/content/zh-cn/docs/tasks/access-application-cluster/service-access-application-cluster.md b/content/zh-cn/docs/tasks/access-application-cluster/service-access-application-cluster.md index 5f966ed09e..6023ebf0f4 100644 --- a/content/zh-cn/docs/tasks/access-application-cluster/service-access-application-cluster.md +++ b/content/zh-cn/docs/tasks/access-application-cluster/service-access-application-cluster.md @@ -3,7 +3,6 @@ title: 使用服务来访问集群中的应用 content_type: tutorial weight: 60 --- - * 运行 Hello World 应用的两个实例。 -* 创建一个服务对象来暴露 node port。 +* 创建一个服务对象来暴露 NodePort。 * 使用服务对象来访问正在运行的应用。 @@ -51,9 +50,15 @@ Here is the configuration file for the application Deployment: +1. 在你的集群中运行一个 Hello World 应用。 + 使用上面的文件创建应用程序 Deployment: + ```shell kubectl apply -f https://k8s.io/examples/service/access/hello-application.yaml ``` + + -1. 在你的集群中运行一个 Hello World 应用: - 使用上面的文件创建应用程序 Deployment: - - ```shell - kubectl apply -f https://k8s.io/examples/service/access/hello-application.yaml - ``` + --> 上面的命令创建一个 {{< glossary_tooltip text="Deployment" term_id="deployment" >}} 对象 @@ -118,7 +117,7 @@ Here is the configuration file for the application Deployment: --> 输出类似于: - ```shell + ``` Name: example-service Namespace: default Labels: run=load-balancer-example @@ -138,7 +137,7 @@ Here is the configuration file for the application Deployment: Make a note of the NodePort value for the service. For example, in the preceding output, the NodePort value is 31496. --> - 注意服务中的 NodePort 值。例如在上面的输出中,NodePort 是 31496。 + 注意服务中的 NodePort 值。例如在上面的输出中,NodePort 值是 31496。 + 输出类似于: - ```shell + ``` NAME READY STATUS ... IP NODE hello-world-2895499144-bsbk5 1/1 Running ... 10.200.1.4 worker1 hello-world-2895499144-m1pwt 1/1 Running ... 
10.200.2.5 worker2 @@ -238,8 +238,9 @@ kubectl delete deployment hello-world ## {{% heading "whatsnext" %}} -进一步了解[通过服务连接应用](/zh-cn/docs/concepts/services-networking/connect-applications-service/)。 +跟随教程[使用 Service 连接到应用](/zh-cn/docs/tutorials/services/connect-applications-service/)。 diff --git a/content/zh-cn/docs/tasks/administer-cluster/enabling-service-topology.md b/content/zh-cn/docs/tasks/administer-cluster/enabling-service-topology.md index c646046c05..102fcc6adb 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/enabling-service-topology.md +++ b/content/zh-cn/docs/tasks/administer-cluster/enabling-service-topology.md @@ -3,22 +3,28 @@ title: 开启服务拓扑 content_type: task min-kubernetes-server-version: 1.17 --- + {{< feature-state for_k8s_version="v1.21" state="deprecated" >}} + -这项功能,特别是 Alpha 状态的 `topologyKeys` 字段,在 kubernetes v1.21 中已经弃用。 -在 kubernetes v1.21 加入的[拓扑感知提示](/zh-cn/docs/concepts/services-networking/topology-aware-hints/) -提供了类似的功能。 - -## {{% heading "prerequisites" %}} - - {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} +这项功能,特别是 Alpha 状态的 `topologyKeys` 字段,在 Kubernetes v1.21 中已经弃用。 +在 Kubernetes v1.21 +加入的[拓扑感知提示](/zh-cn/docs/concepts/services-networking/topology-aware-hints/)提供了类似的功能。 - -_服务拓扑(Service Topology)_ 使 {{< glossary_tooltip term_id="service" text="服务">}} +**服务拓扑(Service Topology)** 使 {{< glossary_tooltip term_id="service">}} 能够根据集群中的 Node 拓扑来路由流量。 比如,服务可以指定将流量优先路由到与客户端位于同一节点或者同一可用区域的端点上。 ## {{% heading "prerequisites" %}} - {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -需要下面列的先决条件,才能启用拓扑感知的服务路由: +需要满足下列先决条件,才能启用拓扑感知的服务路由: + +* Kubernetes 1.17 或更高版本 +* 配置 {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}} 以 iptables 或者 IPVS 模式运行 - * Kubernetes 1.17 或更新版本 - * 配置 {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}} 以 iptables 或者 IPVS 模式运行 +## 启用服务拓扑 {#enable-service-topology} {{< feature-state for_k8s_version="v1.21" state="deprecated" >}} + -## 启用服务拓扑 - -{{< feature-state for_k8s_version="v1.21" state="deprecated" >}} - -要启用服务拓扑功能,需要为所有 Kubernetes 组件启用 `ServiceTopology` +要启用服务拓扑,需要为所有 Kubernetes 组件启用 `ServiceTopology` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/): ``` --feature-gates="ServiceTopology=true` ``` - ## {{% heading "whatsnext" %}} - * 阅读[拓扑感知提示](/zh-cn/docs/concepts/services-networking/topology-aware-hints/),该技术是用来替换 `topologyKeys` 字段的。 * 阅读[端点切片](/zh-cn/docs/concepts/services-networking/endpoint-slices) * 阅读[服务拓扑](/zh-cn/docs/concepts/services-networking/service-topology)概念 -* 阅读[通过服务来连接应用](/zh-cn/docs/concepts/services-networking/connect-applications-service/) \ No newline at end of file +* 阅读[使用 Service 连接到应用](/zh-cn/docs/tutorials/services/connect-applications-service/) \ No newline at end of file diff --git a/content/zh-cn/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md b/content/zh-cn/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md index 8d4d3b13dd..4c48164322 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md +++ b/content/zh-cn/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md @@ -3,6 +3,13 @@ title: 使用 Calico 提供 NetworkPolicy content_type: task weight: 10 --- + ## 在 Google Kubernetes Engine (GKE) 上创建一个 Calico 集群 {#gke-cluster} -**先决条件**: [gcloud](https://cloud.google.com/sdk/docs/quickstarts) 
+**先决条件**:[gcloud](https://cloud.google.com/sdk/docs/quickstarts) -1. 启动一个带有 Calico 的 GKE 集群,需要加上参数 `--enable-network-policy`。 +1. 启动一个带有 Calico 的 GKE 集群,需要加上参数 `--enable-network-policy`。 - **语法** - ```shell - gcloud container clusters create [CLUSTER_NAME] --enable-network-policy - ``` + **语法** + ```shell + gcloud container clusters create [CLUSTER_NAME] --enable-network-policy + ``` - **示例** - ```shell - gcloud container clusters create my-calico-cluster --enable-network-policy - ``` + **示例** + ```shell + gcloud container clusters create my-calico-cluster --enable-network-policy + ``` -2. 使用如下命令验证部署是否正确。 +2. 使用如下命令验证部署是否正确。 - ```shell - kubectl get pods --namespace=kube-system - ``` + ```shell + kubectl get pods --namespace=kube-system + ``` - - Calico 的 pods 名以 `calico` 打头,检查确认每个 pods 状态为 `Running`。 + + + Calico 的 Pod 名以 `calico` 打头,检查确认每个 Pod 状态为 `Running`。 -## 使用 kubeadm 创建一个本地 Calico 集群 {#local-cluster} +## 使用 kubeadm 创建一个本地 Calico 集群 {#local-cluster} 使用 kubeadm 在 15 分钟内得到一个本地单主机 Calico 集群,请参考 [Calico 快速入门](https://docs.projectcalico.org/latest/getting-started/kubernetes/)。 @@ -73,6 +81,7 @@ To get a local single-host Calico cluster in fifteen minutes using kubeadm, refe -集群运行后,你可以按照[声明网络策略](/zh-cn/docs/tasks/administer-cluster/declare-network-policy/) -去尝试使用 Kubernetes NetworkPolicy。 +集群运行后, +你可以按照[声明网络策略](/zh-cn/docs/tasks/administer-cluster/declare-network-policy/)去尝试使用 +Kubernetes NetworkPolicy。 diff --git a/content/zh-cn/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md b/content/zh-cn/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md index 051ac153d1..43e057a911 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md +++ b/content/zh-cn/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md @@ -39,11 +39,11 @@ to perform a basic DaemonSet installation of Cilium in minikube. To start minikube, minimal version required is >= v1.5.2, run the with the following arguments: --> -## 在 Minikube 上部署 Cilium 用于基本测试 +## 在 Minikube 上部署 Cilium 用于基本测试 {#deploying-cilium-on-minikube-for-basic-testing} -为了轻松熟悉 Cilium 你可以根据 +为了轻松熟悉 Cilium,你可以根据 [Cilium Kubernetes 入门指南](https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/s) -在 minikube 中执行一个 cilium 的基本 DaemonSet 安装。 +在 minikube 中执行一个 Cilium 的基本 DaemonSet 安装。 要启动 minikube,需要的最低版本为 1.5.2,使用下面的参数运行: @@ -55,58 +55,75 @@ minikube version: v1.5.2 ``` ```shell -minikube start --network-plugin=cni --memory=4096 +minikube start --network-plugin=cni ``` 对于 minikube 你可以使用 Cilium 的 CLI 工具安装它。 -Cilium 将自动检测集群配置并为成功的集群部署选择合适的组件。 +为此,先用以下命令下载最新版本的 CLI: ```shell curl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz +``` + + +然后用以下命令将下载的文件解压缩到你的 `/usr/local/bin` 目录: + +```shell sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin rm cilium-linux-amd64.tar.gz +``` + + +运行上述命令后,你现在可以用以下命令安装 Cilium: + +```shell cilium install ``` -``` -🔮 Auto-detected Kubernetes kind: minikube -✨ Running "minikube" validation checks -✅ Detected minikube version "1.20.0" -ℹ️ Cilium version not set, using default version "v1.10.0" -🔮 Auto-detected cluster name: minikube -🔮 Auto-detected IPAM mode: cluster-pool -🔮 Auto-detected datapath mode: tunnel -🔑 Generating CA... 
-2021/05/27 02:54:44 [INFO] generate received request -2021/05/27 02:54:44 [INFO] received CSR -2021/05/27 02:54:44 [INFO] generating key: ecdsa-256 -2021/05/27 02:54:44 [INFO] encoded CSR -2021/05/27 02:54:44 [INFO] signed certificate with serial number 48713764918856674401136471229482703021230538642 -🔑 Generating certificates for Hubble... -2021/05/27 02:54:44 [INFO] generate received request -2021/05/27 02:54:44 [INFO] received CSR -2021/05/27 02:54:44 [INFO] generating key: ecdsa-256 -2021/05/27 02:54:44 [INFO] encoded CSR -2021/05/27 02:54:44 [INFO] signed certificate with serial number 3514109734025784310086389188421560613333279574 -🚀 Creating Service accounts... -🚀 Creating Cluster roles... -🚀 Creating ConfigMap... -🚀 Creating Agent DaemonSet... -🚀 Creating Operator Deployment... -⌛ Waiting for Cilium to be installed... -``` + + +随后 Cilium 将自动检测集群配置,并创建和安装合适的组件以成功完成安装。 +这些组件为: + +- Secret `cilium-ca` 中的证书机构 (CA) 和 Hubble(Cilium 的可观测层)所用的证书。 +- 服务账号。 +- 集群角色。 +- ConfigMap。 +- Agent DaemonSet 和 Operator Deployment。 + + +安装之后,你可以用 `cilium status` 命令查看 Cilium Deployment 的整体状态。 +[在此处](https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/#validate-the-installation)查看 +`status` 命令的预期输出。 -入门指南其余的部分用一个示例应用说明了如何强制执行 L3/L4(即 IP 地址+端口)的安全策略 -以及L7 (如 HTTP)的安全策略。 +入门指南其余的部分用一个示例应用说明了如何强制执行 L3/L4(即 IP 地址 + 端口)的安全策略以及 +L7 (如 HTTP)的安全策略。 -## 部署 Cilium 用于生产用途 +## 部署 Cilium 用于生产用途 {#deployment-cilium-for-production-use} -关于部署 Cilium 用于生产的详细说明,请见 -[Cilium Kubernetes 安装指南](https://docs.cilium.io/en/stable/concepts/kubernetes/intro/) +关于部署 Cilium 用于生产的详细说明,请参见 +[Cilium Kubernetes 安装指南](https://docs.cilium.io/en/stable/concepts/kubernetes/intro/)。 此文档包括详细的需求、说明和生产用途 DaemonSet 文件示例。 @@ -129,17 +146,19 @@ production DaemonSet files. Deploying a cluster with Cilium adds Pods to the `kube-system` namespace. To see this list of Pods run: - --> -## 了解 Cilium 组件 +--> +## 了解 Cilium 组件 {#understanding-cilium-components} -部署使用 Cilium 的集群会添加 Pods 到 `kube-system` 命名空间。要查看 Pod 列表,运行: +部署使用 Cilium 的集群会添加 Pod 到 `kube-system` 命名空间。要查看 Pod 列表,运行: ```shell kubectl get pods --namespace=kube-system -l k8s-app=cilium ``` - -你将看到像这样的 Pods 列表: + +你将看到像这样的 Pod 列表: ```console NAME READY STATUS RESTARTS AGE @@ -163,9 +182,8 @@ to try out Kubernetes NetworkPolicy with Cilium. Have fun, and if you have questions, contact us using the [Cilium Slack Channel](https://cilium.herokuapp.com/). --> -集群运行后,你可以按照 -[声明网络策略](/zh-cn/docs/tasks/administer-cluster/declare-network-policy/) -试用基于 Cilium 的 Kubernetes NetworkPolicy。 -玩得开心,如果你有任何疑问,请到 [Cilium Slack 频道](https://cilium.herokuapp.com/) -联系我们。 +集群运行后, +你可以按照[声明网络策略](/zh-cn/docs/tasks/administer-cluster/declare-network-policy/)试用基于 +Cilium 的 Kubernetes NetworkPolicy。玩得开心,如果你有任何疑问,请到 +[Cilium Slack 频道](https://cilium.herokuapp.com/)联系我们。 diff --git a/content/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl.md b/content/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl.md index be100e17d8..7bf9c0c2b7 100644 --- a/content/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl.md +++ b/content/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl.md @@ -13,6 +13,14 @@ description: Creating Secret objects using kubectl command line. + +本页向你展示如何使用 `kubectl` 命令行工具来创建、编辑、管理和删除。 +Kubernetes {{}} + ## {{% heading "prerequisites" %}} {{< include "task-tutorial-prereqs.md" >}} @@ -23,118 +31,139 @@ description: Creating Secret objects using kubectl command line. 
## 创建 Secret {#create-a-secret} -一个 `Secret` 可以包含 Pod 访问数据库所需的用户凭证。 -例如,由用户名和密码组成的数据库连接字符串。 -你可以在本地计算机上,将用户名存储在文件 `./username.txt` 中,将密码存储在文件 `./password.txt` 中。 - -```shell -echo -n 'admin' > ./username.txt -echo -n '1f2d1e2e67df' > ./password.txt -``` - - -在这些命令中,`-n` 标志确保生成的文件在文本末尾不包含额外的换行符。 -这一点很重要,因为当 `kubectl` 读取文件并将内容编码为 base64 字符串时,多余的换行符也会被编码。 +`Secret` 对象用来存储敏感数据,如 Pod 用于访问服务的凭据。例如,为访问数据库,你可能需要一个 +Secret 来存储所需的用户名及密码。 - -`kubectl create secret` 命令将这些文件打包成一个 Secret 并在 API 服务器上创建对象。 +你可以通过在命令中传递原始数据,或将凭据存储文件中,然后再在命令行中创建 Secret。以下命令 +将创建一个存储用户名 `admin` 和密码 `S!B\*d$zDsb=` 的 Secret。 + + +### 使用原始数据 + + +执行以下命令: ```shell kubectl create secret generic db-user-pass \ - --from-file=./username.txt \ - --from-file=./password.txt + --from-literal=username=admin \ + --from-literal=password='S!B\*d$zDsb=' ``` - -输出类似于: + +你必须使用单引号 `''` 转义字符串中的特殊字符,如 `$`、`\`、`*`、`=`和`!` 。否则,你的 shell +将会解析这些字符。 + + +### 使用源文件 + + +1. 对凭证的取值作 base64 编码后保存到文件中: + + ```shell + echo -n 'admin' | base64 > ./username.txt + echo -n 'S!B\*d$zDsb=' | base64 > ./password.txt + ``` + + + `-n` 标志用来确保生成文件的文末没有多余的换行符。这很重要,因为当 `kubectl` + 读取文件并将内容编码为 base64 字符串时,额外的换行符也会被编码。 + 你不需要对文件中包含的字符串中的特殊字符进行转义。 + + +2. 在 `kubectl` 命令中传递文件路径: + + ```shell + kubectl create secret generic db-user-pass \ + --from-file=./username.txt \ + --from-file=./password.txt + ``` + + + 默认键名为文件名。你也可以通过 `--from-file=[key=]source` 设置键名,例如: + + ```shell + kubectl create secret generic db-user-pass \ + --from-file=username=./username.txt \ + --from-file=password=./password.txt + ``` + + +无论使用哪种方法,输出都类似于: ``` secret/db-user-pass created ``` - -默认密钥名称是文件名。 你可以选择使用 `--from-file=[key=]source` 来设置密钥名称。例如: +## 验证 Secret {#verify-the-secret} -```shell -kubectl create secret generic db-user-pass \ - --from-file=username=./username.txt \ - --from-file=password=./password.txt -``` - - -你不需要对文件中包含的密码字符串中的特殊字符进行转义。 - - -你还可以使用 `--from-literal==` 标签提供 Secret 数据。 -可以多次使用此标签,提供多个键值对。 -请注意,特殊字符(例如:`$`,`\`,`*`,`=` 和 `!`)由你的 [shell](https://en.wikipedia.org/wiki/Shell_(computing)) -解释执行,而且需要转义。 - -在大多数 shell 中,转义密码最简便的方法是用单引号括起来。 -比如,如果你的密码是 `S!B\*d$zDsb=`, -可以像下面一样执行命令: - -```shell -kubectl create secret generic db-user-pass \ - --from-literal=username=devuser \ - --from-literal=password='S!B\*d$zDsb=' -``` - - -## 验证 Secret {#verify-the-secret} - - -检查 secret 是否已创建: +检查 Secret 是否已创建: ```shell kubectl get secrets ``` - + 输出类似于: ``` -NAME TYPE DATA AGE -db-user-pass Opaque 2 51s +NAME TYPE DATA AGE +db-user-pass Opaque 2 51s ``` - -你可以查看 `Secret` 的描述: + +查看 Secret 的细节: ```shell -kubectl describe secrets/db-user-pass +kubectl describe secret db-user-pass ``` - + 输出类似于: ``` @@ -159,84 +188,133 @@ accidentally, or from being stored in a terminal log. `kubectl get` 和 `kubectl describe` 命令默认不显示 `Secret` 的内容。 这是为了防止 `Secret` 被意外暴露或存储在终端日志中。 - -查看编码数据的实际内容,请参考[解码 Secret](#decoding-secret)。 +### 解码 Secret {#decoding-secret} - -## 解码 Secret {#decoding-secret} + +1. 查看你所创建的 Secret 内容 + + ```shell + kubectl get secret db-user-pass -o jsonpath='{.data}' + ``` + + + 输出类似于: + + ```json + {"password":"UyFCXCpkJHpEc2I9","username":"YWRtaW4="} + ``` -要查看创建的 Secret 的内容,运行以下命令: +2. 
解码 `password` 数据: + + ```shell + echo 'UyFCXCpkJHpEc2I9' | base64 --decode + ``` + + + 输出类似于: + + ``` + S!B\*d$zDsb= + ``` + + {{< caution >}} + + 这是一个出于文档编制目的的示例。实际上,该方法可能会导致包含编码数据的命令存储在 + Shell 的历史记录中。任何可以访问你的计算机的人都可以找到该命令并对 Secret 进行解码。 + 更好的办法是将查看和解码命令一同使用。 + {{< /caution >}} + + ```shell + kubectl get secret db-user-pass -o jsonpath='{.data.password}' | base64 --decode + ``` + + +## 编辑 Secret {#edit-secret} + + +你可以编辑一个现存的 `Secret` 对象,除非它是[不可改变的](/zh-cn/docs/concepts/configuration/secret/#secret-immutable)。 +要想编辑一个 Secret,请执行以下命令: ```shell -kubectl get secret db-user-pass -o jsonpath='{.data}' -``` - - -输出类似于: - -```json -{"password":"MWYyZDFlMmU2N2Rm","username":"YWRtaW4="} +kubectl edit secrets ``` -现在你可以解码 `password` 的数据: - -```shell -# 这是一个用于文档说明的示例。 -# 如果你这样做,数据 'MWYyZDFlMmU2N2Rm' 可以存储在你的 shell 历史中。 -# 可以进入你电脑的人可以找到那个记住的命令并可以在你不知情的情况下 base-64 解码这个 Secret。 -# 通常最好将这些步骤结合起来,如页面后面所示。 -echo 'MWYyZDFlMmU2N2Rm' | base64 --decode -``` - - -输出类似于: - -``` -1f2d1e2e67df -``` +这将打开默认编辑器,并允许你更新 `data` 字段中的 base64 编码的 Secret 值,示例如下: -为了避免在 shell 历史记录中存储 Secret 的编码值,可以执行如下命令: -```shell -kubectl get secret db-user-pass -o jsonpath='{.data.password}' | base64 --decode +```yaml +#请编辑下面的对象。以“#”开头的行将被忽略, +#空文件将中止编辑。如果在保存此文件时发生错误, +#则将重新打开该文件并显示相关的失败。 +apiVersion: v1 +data: + password: UyFCXCpkJHpEc2I9 + username: YWRtaW4= +kind: Secret +metadata: + creationTimestamp: "2022-06-28T17:44:13Z" + name: db-user-pass + namespace: default + resourceVersion: "12708504" + uid: 91becd59-78fa-4c85-823f-6d44436242ac +type: Opaque ``` - -输出应与上述类似。 - - ## 清理 {#clean-up} - -删除创建的 Secret: + +要想删除一个 Secret,请执行以下命令: ```shell kubectl delete secret db-user-pass ``` - - ## {{% heading "whatsnext" %}} - 进一步阅读 [Secret 概念](/zh-cn/docs/concepts/configuration/secret/) - 了解如何[使用配置文件管理 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-config-file/) -- 了解如何[使用 kustomize 管理 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kustomize/) +- 了解如何[使用 Kustomize 管理 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kustomize/) diff --git a/content/zh-cn/docs/tasks/configure-pod-container/configure-volume-storage.md b/content/zh-cn/docs/tasks/configure-pod-container/configure-volume-storage.md index b468a6943e..a0f6fa6b0e 100644 --- a/content/zh-cn/docs/tasks/configure-pod-container/configure-volume-storage.md +++ b/content/zh-cn/docs/tasks/configure-pod-container/configure-volume-storage.md @@ -31,7 +31,6 @@ applications, such as key-value stores (such as Redis) and databases. - 2. 验证 Pod 中的容器是否正在运行,然后留意 Pod 的更改: @@ -67,17 +67,21 @@ restarts. Here is the configuration file for the Pod: kubectl get pod redis --watch ``` + + 输出如下: - ```shell + ```console NAME READY STATUS RESTARTS AGE redis 1/1 Running 0 13s ``` -3. 在另一个终端,用 shell 连接正在运行的容器: +3. 在另一个终端,用 Shell 连接正在运行的容器: ```shell kubectl exec -it redis -- /bin/bash @@ -86,7 +90,7 @@ restarts. Here is the configuration file for the Pod: -4. 在你的 Shell中,切换到 `/data/redis` 目录下,然后创建一个文件: +4. 在你的 Shell 中,切换到 `/data/redis` 目录下,然后创建一个文件: ```shell root@redis:/data# cd /data/redis/ @@ -94,7 +98,7 @@ restarts. Here is the configuration file for the Pod: ``` 5. 在你的 Shell 中,列出正在运行的进程: @@ -104,9 +108,13 @@ restarts. Here is the configuration file for the Pod: root@redis:/data/redis# ps aux ``` + + 输出类似于: - ```shell + ```console USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND redis 1 0.1 0.1 33308 3828 ? Ssl 00:46 0:00 redis-server *:6379 root 12 0.0 0.0 20228 3020 ? Ss 00:47 0:00 /bin/bash @@ -114,7 +122,7 @@ restarts. 
Here is the configuration file for the Pod: ``` 6. 在你的 Shell 中,结束 Redis 进程: @@ -122,15 +130,19 @@ restarts. Here is the configuration file for the Pod: root@redis:/data/redis# kill ``` + + 其中 `` 是 Redis 进程的 ID (PID)。 7. 在你原先终端中,留意 Redis Pod 的更改。最终你将会看到和下面类似的输出: - ```shell + ```console NAME READY STATUS RESTARTS AGE redis 1/1 Running 0 13s redis 0/1 Completed 0 6m @@ -148,7 +160,7 @@ of `Always`. 为 `Always`。 1. 用 Shell 进入重新启动的容器中: @@ -157,7 +169,7 @@ of `Always`. ``` 2. 在你的 Shell 中,进入到 `/data/redis` 目录下,并确认 `test-file` 文件是否仍然存在。 @@ -168,7 +180,7 @@ of `Always`. ``` 3. 删除为此练习所创建的 Pod: @@ -179,19 +191,19 @@ of `Always`. ## {{% heading "whatsnext" %}} -* 参阅 [Volume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volume-v1-core)。 -* 参阅 [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)。 -* 除了 `emptyDir` 提供的本地磁盘存储外,Kubernetes 还支持许多不同的网络附加存储解决方案, +- 参阅 [Volume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volume-v1-core)。 +- 参阅 [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)。 +- 除了 `emptyDir` 提供的本地磁盘存储外,Kubernetes 还支持许多不同的网络附加存储解决方案, 包括 GCE 上的 PD 和 EC2 上的 EBS,它们是关键数据的首选,并将处理节点上的一些细节, 例如安装和卸载设备。了解更多详情请参阅[卷](/zh-cn/docs/concepts/storage/volumes/)。 diff --git a/content/zh-cn/docs/tasks/job/coarse-parallel-processing-work-queue.md b/content/zh-cn/docs/tasks/job/coarse-parallel-processing-work-queue.md index a22279e659..5982367c60 100644 --- a/content/zh-cn/docs/tasks/job/coarse-parallel-processing-work-queue.md +++ b/content/zh-cn/docs/tasks/job/coarse-parallel-processing-work-queue.md @@ -6,12 +6,10 @@ weight: 20 --- @@ -26,11 +24,11 @@ from a task queue, completes it, deletes it from the queue, and exits. Here is an overview of the steps in this example: 1. **Start a message queue service.** In this example, we use RabbitMQ, but you could use another - one. In practice you would set up a message queue service once and reuse it for many jobs. + one. In practice you would set up a message queue service once and reuse it for many jobs. 1. **Create a queue, and fill it with messages.** Each message represents one task to be done. In this example, a message is an integer that we will do a lengthy computation on. 1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes - one task from the message queue, processes it, and repeats until the end of the queue is reached. + one task from the message queue, processes it, and repeats until the end of the queue is reached. --> 本例中,我们会运行包含多个并行工作进程的 Kubernetes Job。 @@ -38,12 +36,17 @@ Here is an overview of the steps in this example: 下面是本次示例的主要步骤: -1. **启动一个消息队列服务** 本例中,我们使用 RabbitMQ,你也可以用其他的消息队列服务。在实际工作环境中,你可以创建一次消息队列服务然后在多个任务中重复使用。 +1. **启动一个消息队列服务**。 + 本例中,我们使用 RabbitMQ,你也可以用其他的消息队列服务。 + 在实际工作环境中,你可以创建一次消息队列服务然后在多个任务中重复使用。 -1. **创建一个队列,放上消息数据** 每个消息表示一个要执行的任务。本例中,每个消息是一个整数值。我们将基于这个整数值执行很长的计算操作。 - -1. **启动一个在队列中执行这些任务的 Job**。该 Job 启动多个 Pod。每个 Pod 从消息队列中取走一个任务,处理它,然后重复执行,直到队列的队尾。 +1. **创建一个队列,放上消息数据**。 + 每个消息表示一个要执行的任务。本例中,每个消息是一个整数值。 + 我们将基于这个整数值执行很长的计算操作。 +1. 
**启动一个在队列中执行这些任务的 Job**。 + 该 Job 启动多个 Pod。每个 Pod 从消息队列中取走一个任务,处理它, + 然后重复执行,直到队列的队尾。 ## {{% heading "prerequisites" %}} @@ -96,8 +99,8 @@ replicationcontroller "rabbitmq-controller" created - -我们仅用到 [celery-rabbitmq 示例](https://github.com/kubernetes/kubernetes/tree/release-1.3/examples/celery-rabbitmq) 中描述的部分功能。 +我们仅用到 +[celery-rabbitmq 示例](https://github.com/kubernetes/kubernetes/tree/release-1.3/examples/celery-rabbitmq)中描述的部分功能。 ## 测试消息队列服务 {#testing-the-message-queue-service} -现在,我们可以试着访问消息队列。我们将会创建一个临时的可交互的 Pod,在它上面安装一些工具,然后用队列做实验。 +现在,我们可以试着访问消息队列。我们将会创建一个临时的可交互的 Pod, +在它上面安装一些工具,然后用队列做实验。 首先创建一个临时的可交互的 Pod: ```shell # 创建一个临时的可交互的 Pod -kubectl run -i --tty temp --image ubuntu:14.04 +kubectl run -i --tty temp --image ubuntu:18.04 ``` ``` Waiting for pod default/temp-loe07 to be running, status is Pending, pod ready: false @@ -130,7 +134,7 @@ Next install the `amqp-tools` so we can work with message queues. --> 请注意你的 Pod 名称和命令提示符将会不同。 -接下来安装 `amqp-tools` ,这样我们就能用消息队列了。 +接下来安装 `amqp-tools`,这样我们就能用消息队列了。 ```shell # 安装一些工具 @@ -145,10 +149,9 @@ Later, we will make a docker image that includes these packages. Next, we will check that we can discover the rabbitmq service: --> - 后续,我们将制作一个包含这些包的 Docker 镜像。 -接着,我们将要验证我们发现 RabbitMQ 服务: +接着,我们将要验证可以发现 RabbitMQ 服务: 如果 Kube-DNS 没有正确安装,上一步可能会出错。 @@ -227,7 +230,7 @@ from the queue, and passes that message to the standard input of an arbitrary co return so the example is readable. --> -最后一个命令中, `amqp-consume` 工具从队列中取走了一个消息,并把该消息传递给了随机命令的标准输出。 +最后一个命令中,`amqp-consume` 工具从队列中取走了一个消息,并把该消息传递给了随机命令的标准输出。 在这种情况下,`cat` 会打印它从标准输入中读取的字符,echo 会添加回车符以便示例可读。 -这样,我们给队列中填充了8个消息。 +这样,我们给队列中填充了 8 个消息。 ## 创建镜像 {#create-an-image} 现在我们可以创建一个做为 Job 来运行的镜像。 -我们将用 `amqp-consume` 来从队列中读取消息并实际运行我们的程序。这里给出一个非常简单的示例程序: +我们将用 `amqp-consume` 实用程序从队列中读取消息并运行实际的程序。 +这里给出一个非常简单的示例程序: {{< codenew language="python" file="application/job/rabbitmq/worker.py" >}} @@ -323,9 +326,9 @@ build the image with this command: 现在,编译镜像。如果你在用源代码树,那么切换到目录 `examples/job/work-queue-1`。 否则的话,创建一个临时目录,切换到这个目录。下载 -[Dockerfile](/examples/application/job/rabbitmq/Dockerfile),和 +[Dockerfile](/examples/application/job/rabbitmq/Dockerfile) 和 [worker.py](/examples/application/job/rabbitmq/worker.py)。 -无论哪种情况,都可以用下面的命令编译镜像 +无论哪种情况,都可以用下面的命令编译镜像: ```shell docker build -t job-wq-1 . @@ -367,7 +370,7 @@ image to match the name you used, and call it `./job.yaml`. --> ## 定义 Job {#defining-a-job} -这里给出一个 Job 定义 yaml文件。你需要拷贝一份并编辑镜像以匹配你使用的名称,保存为 `./job.yaml`。 +这里给出一个 Job 定义 YAML 文件。你将需要拷贝一份 Job 并编辑该镜像以匹配你使用的名称,保存为 `./job.yaml`。 {{< codenew file="application/job/rabbitmq/job.yaml" >}} @@ -380,7 +383,9 @@ done. So we set, `.spec.completions: 8` for the example, since we put 8 items i So, now run the Job: --> -本例中,每个 Pod 使用队列中的一个消息然后退出。这样,Job 的完成计数就代表了完成的工作项的数量。本例中我们设置 `.spec.completions: 8`,因为我们放了8项内容在队列中。 +本例中,每个 Pod 使用队列中的一个消息然后退出。 +这样,Job 的完成计数就代表了完成的工作项的数量。 +本例中我们设置 `.spec.completions: 8`,因为我们放了 8 项内容在队列中。 ## 运行 Job {#running-the-job} @@ -391,14 +396,23 @@ kubectl apply -f ./job.yaml ``` -稍等片刻,然后检查 Job。 +你可以等待 Job 在某个超时时间后成功: + +```shell +# 状况名称的检查不区分大小写 +kubectl wait --for=condition=complete --timeout=300s job/job-wq-1 +``` + + +接下来查看 Job: ```shell kubectl describe jobs/job-wq-1 ``` - ``` Name: job-wq-1 Namespace: default @@ -436,9 +450,9 @@ Events: ``` -我们所有的 Pod 都成功了。耶! +该 Job 的所有 Pod 都已成功。耶! 
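+如果想逐一核对每个工作 Pod 的处理结果,可以查看这些 Pod 的日志。
+下面是一个补充性的小示例(假设 Job 名称为 `job-wq-1`;由 Job 创建的 Pod 会自动带上 `job-name` 标签):
+
+```shell
+# 列出该 Job 创建的所有 Pod
+kubectl get pods -l job-name=job-wq-1
+
+# 按标签汇总查看这些 Pod 的日志,确认每条消息都已被处理
+kubectl logs -l job-name=job-wq-1
+```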
@@ -456,8 +470,8 @@ want to consider one of the other [job patterns](/docs/concepts/workloads/contro 本文所讲述的处理方法的好处是你不需要修改你的 "worker" 程序使其知道工作队列的存在。 -本文所描述的方法需要你运行一个消息队列服务。如果不方便运行消息队列服务,你也许会考虑另外一种 -[任务模式](/zh-cn/docs/concepts/workloads/controllers/job/#job-patterns)。 +本文所描述的方法需要你运行一个消息队列服务。如果不方便运行消息队列服务, +你也许会考虑另外一种[任务模式](/zh-cn/docs/concepts/workloads/controllers/job/#job-patterns)。 - 本文所述的方法为每个工作项创建了一个 Pod。 -如果你的工作项仅需数秒钟,为每个工作项创建 Pod会增加很多的常规消耗。 +如果你的工作项仅需数秒钟,为每个工作项创建 Pod 会增加很多的常规消耗。 可以考虑另外的方案请参考[示例](/zh-cn/docs/tasks/job/fine-parallel-processing-work-queue/), 这种方案可以实现每个 Pod 执行多个工作项。 示例中,我们使用 `amqp-consume` 从消息队列读取消息并执行我们真正的程序。 这样的好处是你不需要修改你的程序使其知道队列的存在。 -要了解怎样使用客户端库和工作队列通信,请参考 -[不同的示例](/zh-cn/docs/tasks/job/fine-parallel-processing-work-queue/)。 +要了解怎样使用客户端库和工作队列通信, +请参考[不同的示例](/zh-cn/docs/tasks/job/fine-parallel-processing-work-queue/)。 -本教程提供了容器镜像,使用 NGINX 来对所有请求做出回应: +本教程提供了容器镜像,使用 NGINX 来对所有请求做出回应。 @@ -157,7 +157,6 @@ tutorial has only one Container. A Kubernetes Pod and restarts the Pod's Container if it terminates. Deployments are the recommended way to manage the creation and scaling of Pods. --> - ## 创建 Deployment {#create-a-deployment} Kubernetes [**Pod**](/zh-cn/docs/concepts/workloads/pods/) @@ -171,16 +170,15 @@ Deployment 是管理 Pod 创建和扩展的推荐方法。 Pod runs a Container based on the provided Docker image. --> 1. 使用 `kubectl create` 命令创建管理 Pod 的 Deployment。该 Pod 根据提供的 Docker - 镜像运行 Container。 + 镜像运行容器。 ```shell - kubectl create deployment hello-node --image=registry.k8s.io/echoserver:1.4 + kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.39 -- /agnhost netexec --http-port=8080 ``` - 2. 查看 Deployment: ```shell @@ -268,11 +266,11 @@ Kubernetes [*Service*](/docs/concepts/services-networking/service/). ``` 这里的 `--type=LoadBalancer` 参数表明你希望将你的 Service 暴露到集群外部。 @@ -344,9 +342,9 @@ The minikube tool includes a set of built-in {{< glossary_tooltip text="addons" 1. List the currently supported addons: --> -## 启用插件 +## 启用插件 {#enable-addons} -Minikube 有一组内置的 {{< glossary_tooltip text="插件" term_id="addons" >}}, +Minikube 有一组内置的{{< glossary_tooltip text="插件" term_id="addons" >}}, 可以在本地 Kubernetes 环境中启用、禁用和打开。 1. 
列出当前支持的插件: diff --git a/content/zh-cn/docs/tutorials/services/connect-applications-service.md b/content/zh-cn/docs/tutorials/services/connect-applications-service.md new file mode 100644 index 0000000000..861c3a838c --- /dev/null +++ b/content/zh-cn/docs/tutorials/services/connect-applications-service.md @@ -0,0 +1,634 @@ +--- +title: 使用 Service 连接到应用 +content_type: tutorial +weight: 20 +--- + + + + + +## Kubernetes 连接容器的模型 {#the-kubernetes-model-for-connecting-containers} + +既然有了一个持续运行、可复制的应用,我们就能够将它暴露到网络上。 + +Kubernetes 假设 Pod 可与其它 Pod 通信,不管它们在哪个主机上。 +Kubernetes 给每一个 Pod 分配一个集群私有 IP 地址,所以没必要在 +Pod 与 Pod 之间创建连接或将容器的端口映射到主机端口。 +这意味着同一个 Pod 内的所有容器能通过 localhost 上的端口互相连通,集群中的所有 Pod +也不需要通过 NAT 转换就能够互相看到。 +本文档的剩余部分详述如何在上述网络模型之上运行可靠的服务。 + +本教程使用一个简单的 Nginx 服务器来演示概念验证原型。 + + + + +## 在集群中暴露 Pod {#exposing-pods-to-the-cluster} + +我们在之前的示例中已经做过,然而让我们以网络连接的视角再重做一遍。 +创建一个 Nginx Pod,注意其中包含一个容器端口的规约: + +{{< codenew file="service/networking/run-my-nginx.yaml" >}} + + +这使得可以从集群中任何一个节点来访问它。检查节点,该 Pod 正在运行: + +```shell +kubectl apply -f ./run-my-nginx.yaml +kubectl get pods -l run=my-nginx -o wide +``` +``` +NAME READY STATUS RESTARTS AGE IP NODE +my-nginx-3800858182-jr4a2 1/1 Running 0 13s 10.244.3.4 kubernetes-minion-905m +my-nginx-3800858182-kna2y 1/1 Running 0 13s 10.244.2.5 kubernetes-minion-ljyd +``` + + +检查 Pod 的 IP 地址: + +``` +kubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs + POD_IP + [map[ip:10.244.3.4]] + [map[ip:10.244.2.5]] +``` + + +你应该能够通过 ssh 登录到集群中的任何一个节点上,并使用诸如 `curl` 之类的工具向这两个 IP 地址发出查询请求。 +需要注意的是,容器 **不会** 使用该节点上的 80 端口,也不会使用任何特定的 NAT 规则去路由流量到 Pod 上。 +这意味着可以在同一个节点上运行多个 Nginx Pod,使用相同的 `containerPort`,并且可以从集群中任何其他的 +Pod 或节点上使用 IP 的方式访问到它们。 +如果你想的话,你依然可以将宿主节点的某个端口的流量转发到 Pod 中,但是出于网络模型的原因,你不必这么做。 + +如果对此好奇,请参考 [Kubernetes 网络模型](/zh-cn/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model)。 + + +## 创建 Service {#creating-a-service} + +我们有一组在一个扁平的、集群范围的地址空间中运行 Nginx 服务的 Pod。 +理论上,你可以直接连接到这些 Pod,但如果某个节点死掉了会发生什么呢? 
+Pod 会终止,Deployment 将创建新的 Pod,且使用不同的 IP。这正是 Service 要解决的问题。 + +Kubernetes Service 是集群中提供相同功能的一组 Pod 的抽象表达。 +当每个 Service 创建时,会被分配一个唯一的 IP 地址(也称为 clusterIP)。 +这个 IP 地址与 Service 的生命周期绑定在一起,只要 Service 存在,它就不会改变。 +可以配置 Pod 使它与 Service 进行通信,Pod 知道与 Service 通信将被自动地负载均衡到该 +Service 中的某些 Pod 上。 + +可以使用 `kubectl expose` 命令为 2个 Nginx 副本创建一个 Service: + +```shell +kubectl expose deployment/my-nginx +``` +``` +service/my-nginx exposed +``` + + +这等价于使用 `kubectl create -f` 命令及如下的 yaml 文件创建: + +{{< codenew file="service/networking/nginx-svc.yaml" >}} + + +上述规约将创建一个 Service,该 Service 会将所有具有标签 `run: my-nginx` 的 Pod 的 TCP +80 端口暴露到一个抽象的 Service 端口上(`targetPort`:容器接收流量的端口;`port`: +可任意取值的抽象的 Service 端口,其他 Pod 通过该端口访问 Service)。 +查看 [Service](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core) +API 对象以了解 Service 所能接受的字段列表。 +查看你的 Service 资源: + +```shell +kubectl get svc my-nginx +``` +``` +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +my-nginx ClusterIP 10.0.162.149 80/TCP 21s +``` + + +正如前面所提到的,一个 Service 由一组 Pod 提供支撑。这些 Pod 通过 +{{}} 暴露出来。 +Service Selector 将持续评估,结果被 POST +到使用{{< glossary_tooltip text="标签" term_id="label" >}}与该 Service 连接的一个 EndpointSlice。 +当 Pod 终止后,它会自动从包含该 Pod 的 EndpointSlices 中移除。 +新的能够匹配上 Service Selector 的 Pod 将自动地被为该 Service 添加到 EndpointSlice 中。 +检查 Endpoint,注意到 IP 地址与在第一步创建的 Pod 是相同的。 + +```shell +kubectl describe svc my-nginx +``` +``` +Name: my-nginx +Namespace: default +Labels: run=my-nginx +Annotations: +Selector: run=my-nginx +Type: ClusterIP +IP: 10.0.162.149 +Port: 80/TCP +Endpoints: 10.244.2.5:80,10.244.3.4:80 +Session Affinity: None +Events: +``` +```shell +kubectl get endpointslices -l kubernetes.io/service-name=my-nginx +``` +``` +NAME ADDRESSTYPE PORTS ENDPOINTS AGE +my-nginx-7vzhx IPv4 80 10.244.2.5,10.244.3.4 21s +``` + + +现在,你应该能够从集群中任意节点上使用 curl 命令向 `:` 发送请求以访问 Nginx Service。 +注意 Service IP 完全是虚拟的,它从来没有走过网络,如果对它如何工作的原理感到好奇, +可以进一步阅读[服务代理](/zh-cn/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)的内容。 + + +## 访问 Service {#accessing-the-service} + +Kubernetes 支持两种查找服务的主要模式:环境变量和 DNS。前者开箱即用,而后者则需要 +[CoreDNS 集群插件](https://releases.k8s.io/{{< param "fullversion" >}}/cluster/addons/dns/coredns)。 + +{{< note >}} + +如果不需要服务环境变量(因为可能与预期的程序冲突,可能要处理的变量太多,或者仅使用DNS等),则可以通过在 +[pod spec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core) +上将 `enableServiceLinks` 标志设置为 `false` 来禁用此模式。 +{{< /note >}} + + +### 环境变量 {#environment-variables} + +当 Pod 在节点上运行时,kubelet 会针对每个活跃的 Service 为 Pod 添加一组环境变量。 +这就引入了一个顺序的问题。为解释这个问题,让我们先检查正在运行的 Nginx Pod +的环境变量(你的环境中的 Pod 名称将会与下面示例命令中的不同): + +```shell +kubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE +``` +``` +KUBERNETES_SERVICE_HOST=10.0.0.1 +KUBERNETES_SERVICE_PORT=443 +KUBERNETES_SERVICE_PORT_HTTPS=443 +``` + + +能看到环境变量中并没有你创建的 Service 相关的值。这是因为副本的创建先于 Service。 +这样做的另一个缺点是,调度器可能会将所有 Pod 部署到同一台机器上,如果该机器宕机则整个 Service 都会离线。 +要改正的话,我们可以先终止这 2 个 Pod,然后等待 Deployment 去重新创建它们。 +这次 Service 会 **先于** 副本存在。这将实现调度器级别的 Pod 按 Service +分布(假定所有的节点都具有同样的容量),并提供正确的环境变量: + +```shell +kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2; + +kubectl get pods -l run=my-nginx -o wide +``` +``` +NAME READY STATUS RESTARTS AGE IP NODE +my-nginx-3800858182-e9ihh 1/1 Running 0 5s 10.244.2.7 kubernetes-minion-ljyd +my-nginx-3800858182-j4rm4 1/1 Running 0 5s 10.244.3.8 kubernetes-minion-905m +``` + + +你可能注意到,Pod 具有不同的名称,这是因为它们是被重新创建的。 + +```shell +kubectl exec my-nginx-3800858182-e9ihh -- printenv | grep SERVICE +``` +``` 
+KUBERNETES_SERVICE_PORT=443 +MY_NGINX_SERVICE_HOST=10.0.162.149 +KUBERNETES_SERVICE_HOST=10.0.0.1 +MY_NGINX_SERVICE_PORT=80 +KUBERNETES_SERVICE_PORT_HTTPS=443 +``` + +### DNS + + +Kubernetes 提供了一个自动为其它 Service 分配 DNS 名字的 DNS 插件 Service。 +你可以通过如下命令检查它是否在工作: + +```shell +kubectl get services kube-dns --namespace=kube-system +``` +``` +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +kube-dns ClusterIP 10.0.0.10 53/UDP,53/TCP 8m +``` + + +本段剩余的内容假设你已经有一个拥有持久 IP 地址的 Service(my-nginx),以及一个为其 +IP 分配名称的 DNS 服务器。 这里我们使用 CoreDNS 集群插件(应用名为 `kube-dns`), +所以在集群中的任何 Pod 中,你都可以使用标准方法(例如:`gethostbyname()`)与该 Service 通信。 +如果 CoreDNS 没有在运行,你可以参照 +[CoreDNS README](https://github.com/coredns/deployment/tree/master/kubernetes) +或者[安装 CoreDNS](/zh-cn/docs/tasks/administer-cluster/coredns/#installing-coredns) 来启用它。 +让我们运行另一个 curl 应用来进行测试: + +```shell +kubectl run curl --image=radial/busyboxplus:curl -i --tty +``` +``` +Waiting for pod default/curl-131556218-9fnch to be running, status is Pending, pod ready: false +Hit enter for command prompt +``` + + +然后,按回车并执行命令 `nslookup my-nginx`: + +```shell +[ root@curl-131556218-9fnch:/ ]$ nslookup my-nginx +Server: 10.0.0.10 +Address 1: 10.0.0.10 + +Name: my-nginx +Address 1: 10.0.162.149 +``` + + +## 保护 Service {#securing-the-service} + +到现在为止,我们只在集群内部访问了 Nginx 服务器。在将 Service 暴露到因特网之前,我们希望确保通信信道是安全的。 +为实现这一目的,需要: + +* 用于 HTTPS 的自签名证书(除非已经有了一个身份证书) +* 使用证书配置的 Nginx 服务器 +* 使 Pod 可以访问证书的 [Secret](/zh-cn/docs/concepts/configuration/secret/) + +你可以从 +[Nginx https 示例](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/)获取所有上述内容。 +你需要安装 go 和 make 工具。如果你不想安装这些软件,可以按照后文所述的手动执行步骤执行操作。简要过程如下: + +```shell +make keys KEY=/tmp/nginx.key CERT=/tmp/nginx.crt +kubectl create secret tls nginxsecret --key /tmp/nginx.key --cert /tmp/nginx.crt +``` +``` +secret/nginxsecret created +``` +```shell +kubectl get secrets +``` +``` +NAME TYPE DATA AGE +nginxsecret kubernetes.io/tls 2 1m +``` + + +以下是 configmap: + +```shell +kubectl create configmap nginxconfigmap --from-file=default.conf +``` +``` +configmap/nginxconfigmap created +``` +```shell +kubectl get configmaps +``` +``` +NAME DATA AGE +nginxconfigmap 1 114s +``` + + +以下是你在运行 make 时遇到问题时要遵循的手动步骤(例如,在 Windows 上): + +```shell +# 创建公钥和相对应的私钥 +openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -out /d/tmp/nginx.crt -subj "/CN=my-nginx/O=my-nginx" +# 对密钥实施 base64 编码 +cat /d/tmp/nginx.crt | base64 +cat /d/tmp/nginx.key | base64 +``` + + +使用前面命令的输出来创建 yaml 文件,如下所示。 base64 编码的值应全部放在一行上。 + +```yaml +apiVersion: "v1" +kind: "Secret" +metadata: + name: "nginxsecret" + namespace: "default" +type: kubernetes.io/tls +data: + tls.crt: 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURIekNDQWdlZ0F3SUJBZ0lKQUp5M3lQK0pzMlpJTUEwR0NTcUdTSWIzRFFFQkJRVUFNQ1l4RVRBUEJnTlYKQkFNVENHNW5hVzU0YzNaak1SRXdEd1lEVlFRS0V3aHVaMmx1ZUhOMll6QWVGdzB4TnpFd01qWXdOekEzTVRKYQpGdzB4T0RFd01qWXdOekEzTVRKYU1DWXhFVEFQQmdOVkJBTVRDRzVuYVc1NGMzWmpNUkV3RHdZRFZRUUtFd2h1CloybHVlSE4yWXpDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBSjFxSU1SOVdWM0IKMlZIQlRMRmtobDRONXljMEJxYUhIQktMSnJMcy8vdzZhU3hRS29GbHlJSU94NGUrMlN5ajBFcndCLzlYTnBwbQppeW1CL3JkRldkOXg5UWhBQUxCZkVaTmNiV3NsTVFVcnhBZW50VWt1dk1vLzgvMHRpbGhjc3paenJEYVJ4NEo5Ci82UVRtVVI3a0ZTWUpOWTVQZkR3cGc3dlVvaDZmZ1Voam92VG42eHNVR0M2QURVODBpNXFlZWhNeVI1N2lmU2YKNHZpaXdIY3hnL3lZR1JBRS9mRTRqakxCdmdONjc2SU90S01rZXV3R0ljNDFhd05tNnNTSzRqYUNGeGpYSnZaZQp2by9kTlEybHhHWCtKT2l3SEhXbXNhdGp4WTRaNVk3R1ZoK0QrWnYvcW1mMFgvbVY0Rmo1NzV3ajFMWVBocWtsCmdhSXZYRyt4U1FVQ0F3RUFBYU5RTUU0d0hRWURWUjBPQkJZRUZPNG9OWkI3YXc1OUlsYkROMzhIYkduYnhFVjcKTUI4R0ExVWRJd1FZTUJhQUZPNG9OWkI3YXc1OUlsYkROMzhIYkduYnhFVjdNQXdHQTFVZEV3UUZNQU1CQWY4dwpEUVlKS29aSWh2Y05BUUVGQlFBRGdnRUJBRVhTMW9FU0lFaXdyMDhWcVA0K2NwTHI3TW5FMTducDBvMm14alFvCjRGb0RvRjdRZnZqeE04Tzd2TjB0clcxb2pGSW0vWDE4ZnZaL3k4ZzVaWG40Vm8zc3hKVmRBcStNZC9jTStzUGEKNmJjTkNUekZqeFpUV0UrKzE5NS9zb2dmOUZ3VDVDK3U2Q3B5N0M3MTZvUXRUakViV05VdEt4cXI0Nk1OZWNCMApwRFhWZmdWQTRadkR4NFo3S2RiZDY5eXM3OVFHYmg5ZW1PZ05NZFlsSUswSGt0ejF5WU4vbVpmK3FqTkJqbWZjCkNnMnlwbGQ0Wi8rUUNQZjl3SkoybFIrY2FnT0R4elBWcGxNSEcybzgvTHFDdnh6elZPUDUxeXdLZEtxaUMwSVEKQ0I5T2wwWW5scE9UNEh1b2hSUzBPOStlMm9KdFZsNUIyczRpbDlhZ3RTVXFxUlU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K" + tls.key: "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2RhaURFZlZsZHdkbFIKd1V5eFpJWmVEZWNuTkFhbWh4d1NpeWF5N1AvOE9ta3NVQ3FCWmNpQ0RzZUh2dGtzbzlCSzhBZi9WemFhWm9zcApnZjYzUlZuZmNmVUlRQUN3WHhHVFhHMXJKVEVGSzhRSHA3VkpMcnpLUC9QOUxZcFlYTE0yYzZ3MmtjZUNmZitrCkU1bEVlNUJVbUNUV09UM3c4S1lPNzFLSWVuNEZJWTZMMDUrc2JGQmd1Z0ExUE5JdWFubm9UTWtlZTRuMG4rTDQKb3NCM01ZUDhtQmtRQlAzeE9JNHl3YjREZXUraURyU2pKSHJzQmlIT05Xc0RadXJFaXVJMmdoY1kxeWIyWHI2UAozVFVOcGNSbC9pVG9zQngxcHJHclk4V09HZVdPeGxZZmcvbWIvNnBuOUYvNWxlQlkrZStjSTlTMkQ0YXBKWUdpCkwxeHZzVWtGQWdNQkFBRUNnZ0VBZFhCK0xkbk8ySElOTGo5bWRsb25IUGlHWWVzZ294RGQwci9hQ1Zkank4dlEKTjIwL3FQWkUxek1yall6Ry9kVGhTMmMwc0QxaTBXSjdwR1lGb0xtdXlWTjltY0FXUTM5SjM0VHZaU2FFSWZWNgo5TE1jUHhNTmFsNjRLMFRVbUFQZytGam9QSFlhUUxLOERLOUtnNXNrSE5pOWNzMlY5ckd6VWlVZWtBL0RBUlBTClI3L2ZjUFBacDRuRWVBZmI3WTk1R1llb1p5V21SU3VKdlNyblBESGtUdW1vVlVWdkxMRHRzaG9reUxiTWVtN3oKMmJzVmpwSW1GTHJqbGtmQXlpNHg0WjJrV3YyMFRrdWtsZU1jaVlMbjk4QWxiRi9DSmRLM3QraTRoMTVlR2ZQegpoTnh3bk9QdlVTaDR2Q0o3c2Q5TmtEUGJvS2JneVVHOXBYamZhRGR2UVFLQmdRRFFLM01nUkhkQ1pKNVFqZWFKClFGdXF4cHdnNzhZTjQyL1NwenlUYmtGcVFoQWtyczJxWGx1MDZBRzhrZzIzQkswaHkzaE9zSGgxcXRVK3NHZVAKOWRERHBsUWV0ODZsY2FlR3hoc0V0L1R6cEdtNGFKSm5oNzVVaTVGZk9QTDhPTm1FZ3MxMVRhUldhNzZxelRyMgphRlpjQ2pWV1g0YnRSTHVwSkgrMjZnY0FhUUtCZ1FEQmxVSUUzTnNVOFBBZEYvL25sQVB5VWs1T3lDdWc3dmVyClUycXlrdXFzYnBkSi9hODViT1JhM05IVmpVM25uRGpHVHBWaE9JeXg5TEFrc2RwZEFjVmxvcG9HODhXYk9lMTAKMUdqbnkySmdDK3JVWUZiRGtpUGx1K09IYnRnOXFYcGJMSHBzUVpsMGhucDBYSFNYVm9CMUliQndnMGEyOFVadApCbFBtWmc2d1BRS0JnRHVIUVV2SDZHYTNDVUsxNFdmOFhIcFFnMU16M2VvWTBPQm5iSDRvZUZKZmcraEppSXlnCm9RN3hqWldVR3BIc3AyblRtcHErQWlSNzdyRVhsdlhtOElVU2FsbkNiRGlKY01Pc29RdFBZNS9NczJMRm5LQTQKaENmL0pWb2FtZm1nZEN0ZGtFMXNINE9MR2lJVHdEbTRpb0dWZGIwMllnbzFyb2htNUpLMUI3MkpBb0dBUW01UQpHNDhXOTVhL0w1eSt5dCsyZ3YvUHM2VnBvMjZlTzRNQ3lJazJVem9ZWE9IYnNkODJkaC8xT2sybGdHZlI2K3VuCnc1YytZUXRSTHlhQmd3MUtpbGhFZDBKTWU3cGpUSVpnQWJ0LzVPbnlDak9OVXN2aDJjS2lrQ1Z2dTZsZlBjNkQKckliT2ZIaHhxV0RZK2Q1TGN1YSt2NzJ0RkxhenJsSlBsRzlOZHhrQ2dZRUF5elIzT3UyMDNR
VVV6bUlCRkwzZAp4Wm5XZ0JLSEo3TnNxcGFWb2RjL0d5aGVycjFDZzE2MmJaSjJDV2RsZkI0VEdtUjZZdmxTZEFOOFRwUWhFbUtKCnFBLzVzdHdxNWd0WGVLOVJmMWxXK29xNThRNTBxMmk1NVdUTThoSDZhTjlaMTltZ0FGdE5VdGNqQUx2dFYxdEYKWSs4WFJkSHJaRnBIWll2NWkwVW1VbGc9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K" +``` + + +现在使用文件创建 Secret: + +```shell +kubectl apply -f nginxsecrets.yaml +kubectl get secrets +``` +``` +NAME TYPE DATA AGE +nginxsecret kubernetes.io/tls 2 1m +``` + + +现在修改 Nginx 副本以启动一个使用 Secret 中的证书的 HTTPS 服务器以及相应的用于暴露其端口(80 和 443)的 Service: + +{{< codenew file="service/networking/nginx-secure-app.yaml" >}} + + +关于 nginx-secure-app 清单,值得注意的几点如下: + +- 它将 Deployment 和 Service 的规约放在了同一个文件中。 +- [Nginx 服务器](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/default.conf)通过 + 80 端口处理 HTTP 流量,通过 443 端口处理 HTTPS 流量,而 Nginx Service 则暴露了这两个端口。 +- 每个容器能通过挂载在 `/etc/nginx/ssl` 的卷访问秘钥。卷和密钥需要在 Nginx 服务器启动 **之前** 配置好。 + +```shell +kubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml +``` + + +这时,你可以从任何节点访问到 Nginx 服务器。 + +``` +kubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs + POD_IP + [map[ip:10.244.3.5]] +``` + +``` +node $ curl -k https://10.244.3.5 +... +
+<title>Welcome to nginx!</title>
+``` + + +注意最后一步我们是如何提供 `-k` 参数执行 curl 命令的,这是因为在证书生成时, +我们不知道任何关于运行 nginx 的 Pod 的信息,所以不得不在执行 curl 命令时忽略 CName 不匹配的情况。 +通过创建 Service,我们连接了在证书中的 CName 与在 Service 查询时被 Pod 使用的实际 DNS 名字。 +让我们从一个 Pod 来测试(为了方便,这里使用同一个 Secret,Pod 仅需要使用 nginx.crt 去访问 Service): + +{{< codenew file="service/networking/curlpod.yaml" >}} + +```shell +kubectl apply -f ./curlpod.yaml +kubectl get pods -l app=curlpod +``` +``` +NAME READY STATUS RESTARTS AGE +curl-deployment-1515033274-1410r 1/1 Running 0 1m +``` +```shell +kubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/tls.crt +... +Welcome to nginx! +... +``` + + +## 暴露 Service {#exposing-the-service} + +对应用的某些部分,你可能希望将 Service 暴露在一个外部 IP 地址上。 +Kubernetes 支持两种实现方式:NodePort 和 LoadBalancer。 +在上一段创建的 Service 使用了 `NodePort`,因此,如果你的节点有一个公网 +IP,那么 Nginx HTTPS 副本已经能够处理因特网上的流量。 + +```shell +kubectl get svc my-nginx -o yaml | grep nodePort -C 5 +``` + +``` + uid: 07191fb3-f61a-11e5-8ae5-42010af00002 +spec: + clusterIP: 10.0.162.149 + ports: + - name: http + nodePort: 31704 + port: 8080 + protocol: TCP + targetPort: 80 + - name: https + nodePort: 32453 + port: 443 + protocol: TCP + targetPort: 443 + selector: + run: my-nginx +``` + +```shell +kubectl get nodes -o yaml | grep ExternalIP -C 1 +``` + +``` + - address: 104.197.41.11 + type: ExternalIP + allocatable: +-- + - address: 23.251.152.56 + type: ExternalIP + allocatable: +... + +$ curl https://: -k +... +
+<title>Welcome to nginx!</title>
+``` + + +让我们重新创建一个 Service 以使用云负载均衡器。 +将 `my-nginx` Service 的 `Type` 由 `NodePort` 改成 `LoadBalancer`: + +```shell +kubectl edit svc my-nginx +kubectl get svc my-nginx +``` +``` +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +my-nginx LoadBalancer 10.0.162.149 xx.xxx.xxx.xxx 8080:30163/TCP 21s +``` +``` +curl https:// -k +... +Welcome to nginx! +``` + + +在 `EXTERNAL-IP` 列中的 IP 地址能在公网上被访问到。`CLUSTER-IP` 只能从集群/私有云网络中访问。 + +注意,在 AWS 上,类型 `LoadBalancer` 的服务会创建一个 ELB,且 ELB 使用主机名(比较长),而不是 IP。 +ELB 的主机名太长以至于不能适配标准 `kubectl get svc` 的输出,所以需要通过执行 +`kubectl describe service my-nginx` 命令来查看它。 +可以看到类似如下内容: + +```shell +kubectl describe service my-nginx +... +LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com +... +``` + +## {{% heading "whatsnext" %}} + + +* 进一步了解如何[使用 Service 访问集群中的应用](/zh-cn/docs/tasks/access-application-cluster/service-access-application-cluster/) +* 进一步了解如何[使用 Service 将前端连接到后端](/zh-cn/docs/tasks/access-application-cluster/connecting-frontend-backend/) +* 进一步了解如何[创建外部负载均衡器](/zh-cn/docs/tasks/access-application-cluster/create-external-load-balancer/) diff --git a/content/zh-cn/docs/tutorials/stateless-application/expose-external-ip-address.md b/content/zh-cn/docs/tutorials/stateless-application/expose-external-ip-address.md index cd3989c3e8..a17fcc3605 100644 --- a/content/zh-cn/docs/tutorials/stateless-application/expose-external-ip-address.md +++ b/content/zh-cn/docs/tutorials/stateless-application/expose-external-ip-address.md @@ -50,7 +50,7 @@ external IP address. -## 为一个在五个 pod 中运行的应用程序创建服务 +## 为一个在五个 pod 中运行的应用程序创建服务 {#creating-a-service-for-an-app-running-in-five-pods} + 前面的命令创建一个 {{< glossary_tooltip text="Deployment" term_id="deployment" >}} 对象和一个关联的 @@ -119,6 +121,7 @@ external IP address. + 输出类似于: ```console @@ -126,20 +129,20 @@ external IP address. my-service LoadBalancer 10.3.245.137 104.198.205.71 8080/TCP 54s ``` - - 提示:`type=LoadBalancer` 服务由外部云服务提供商提供支持,本例中不包含此部分, + `type=LoadBalancer` 服务由外部云服务提供商提供支持,本例中不包含此部分, 详细信息请参考[此页](/zh-cn/docs/concepts/services-networking/service/#loadbalancer) - - - 提示:如果外部 IP 地址显示为 \,请等待一分钟再次输入相同的命令。 + 如果外部 IP 地址显示为 \,请等待一分钟再次输入相同的命令。 + {{< /note >}} + 输出类似于: ```console @@ -170,12 +174,14 @@ external IP address. Session Affinity: None Events: ``` + + 记下服务公开的外部 IP 地址(`LoadBalancer Ingress`)。 在本例中,外部 IP 地址是 104.198.205.71。还要注意 `Port` 和 `NodePort` 的值。 在本例中,`Port` 是 8080,`NodePort` 是 32377。 @@ -198,6 +204,7 @@ external IP address. + 输出类似于: ```console @@ -225,13 +232,16 @@ external IP address. If you are using minikube, typing `minikube service my-service` will automatically open the Hello World application in a browser. 
--> + 其中 `` 是你的服务的外部 IP 地址(`LoadBalancer Ingress`), `` 是你的服务描述中的 `port` 的值。 - 如果你正在使用 minikube,输入 `minikube service my-service` 将在浏览器中自动打开 Hello World 应用程序。 + 如果你正在使用 minikube,输入 `minikube service my-service` + 将在浏览器中自动打开 Hello World 应用程序。 + 成功请求的响应是一条问候消息: ```shell @@ -253,7 +263,7 @@ kubectl delete services my-service To delete the Deployment, the ReplicaSet, and the Pods that are running the Hello World application, enter this command: --> -要删除正在运行 Hello World 应用程序的 Deployment,ReplicaSet 和 Pod,请输入以下命令: +要删除正在运行 Hello World 应用程序的 Deployment、ReplicaSet 和 Pod,请输入以下命令: ```shell kubectl delete deployment hello-world @@ -263,7 +273,7 @@ kubectl delete deployment hello-world -进一步了解[将应用程序与服务连接](/zh-cn/docs/concepts/services-networking/connect-applications-service/)。 +进一步了解[使用 Service 连接到应用](/zh-cn/docs/tutorials/services/connect-applications-service/)。 diff --git a/content/zh-cn/docs/tutorials/stateless-application/guestbook.md b/content/zh-cn/docs/tutorials/stateless-application/guestbook.md index 25274ca7ed..7f10de59eb 100644 --- a/content/zh-cn/docs/tutorials/stateless-application/guestbook.md +++ b/content/zh-cn/docs/tutorials/stateless-application/guestbook.md @@ -14,6 +14,7 @@ source: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook title: "Example: Deploying PHP Guestbook application with Redis" reviewers: - ahmetb +- jimangel content_type: tutorial weight: 20 card: @@ -21,23 +22,26 @@ card: weight: 30 title: "Stateless Example: PHP Guestbook with Redis" min-kubernetes-server-version: v1.14 +source: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook --> 本教程向你展示如何使用 Kubernetes 和 [Docker](https://www.docker.com/) -构建和部署一个简单的 **(非面向生产的)** 多层 web 应用程序。本例由以下组件组成: +构建和部署一个简单的 **(非面向生产的)** 多层 Web 应用程序。本例由以下组件组成: - * 单实例 [Redis](https://www.redis.io/) 以保存留言板条目 -* 多个 web 前端实例 +* 多个 Web 前端实例 ## {{% heading "objectives" %}} @@ -64,7 +68,7 @@ This tutorial shows you how to build and deploy a simple _(not production ready) -## 启动 Redis 数据库 +## 启动 Redis 数据库 {#start-up-the-redis-database} -### 创建 Redis Deployment +### 创建 Redis Deployment {#creating-the-redis-deployment} -### 创建 Redis 领导者服务 +### 创建 Redis 领导者服务 {#creating-the-redis-leader-service} 留言板应用程序需要往 Redis 中写数据。因此,需要创建 [Service](/zh-cn/docs/concepts/services-networking/service/) 来转发 Redis Pod @@ -169,16 +176,18 @@ The guestbook application needs to communicate to the Redis to write its data. Y --> 响应应该与此类似: - ```shell + ``` NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.0.0.1 443/TCP 1m redis-leader ClusterIP 10.103.78.24 6379/TCP 16s ``` - {{< note >}} + 这个清单文件创建了一个名为 `redis-leader` 的 Service,其中包含一组 与前面定义的标签匹配的标签,因此服务将网络流量路由到 Redis Pod 上。 {{< /note >}} @@ -186,9 +195,10 @@ This manifest file creates a Service named `redis-leader` with a set of labels t -### 设置 Redis 跟随者 +### 设置 Redis 跟随者 {#set-up-redis-followers} 尽管 Redis 领导者只有一个 Pod,你可以通过添加若干 Redis 跟随者来将其配置为高可用状态, 以满足流量需求。 @@ -196,7 +206,7 @@ Although the Redis leader is a single Pod, you can make it highly available and {{< codenew file="application/guestbook/redis-follower-deployment.yaml" >}} 1. 
应用下面的 `redis-follower-deployment.yaml` 文件创建 Redis Deployment: @@ -233,9 +243,11 @@ Although the Redis leader is a single Pod, you can make it highly available and -### 创建 Redis 跟随者服务 +### 创建 Redis 跟随者服务 {#creating-the-redis-follower-service} Guestbook 应用需要与 Redis 跟随者通信以读取数据。 为了让 Redis 跟随者可被发现,你必须创建另一个 @@ -280,23 +292,30 @@ Guestbook 应用需要与 Redis 跟随者通信以读取数据。 {{< note >}} 清单文件创建了一个名为 `redis-follower` 的 Service,该 Service -具有一些与之前所定义的标签相匹配的标签,因此该 Service 能够将网络流量 -路由到 Redis Pod 之上。 +具有一些与之前所定义的标签相匹配的标签,因此该 Service 能够将网络流量路由到 +Redis Pod 之上。 {{< /note >}} -## 设置并公开留言板前端 +## 设置并公开留言板前端 {#set-up-and-expose-the-guestbook-frontend} 现在你有了一个为 Guestbook 应用配置的 Redis 存储处于运行状态, 接下来可以启动 Guestbook 的 Web 服务器了。 @@ -309,7 +328,7 @@ Guestbook 应用使用 PHP 前端。该前端被配置成与后端的 Redis 跟 -### 创建 Guestbook 前端 Deployment +### 创建 Guestbook 前端 Deployment {#creating-the-guestbook-frontend-deployment} {{< codenew file="application/guestbook/frontend-deployment.yaml" >}} @@ -351,20 +370,24 @@ Guestbook 应用使用 PHP 前端。该前端被配置成与后端的 Redis 跟 -### 创建前端服务 +### 创建前端服务 {#creating-the-frontend-service} 应用的 `Redis` 服务只能在 Kubernetes 集群中访问,因为服务的默认类型是 [ClusterIP](/zh-cn/docs/concepts/services-networking/service/#publishing-services-service-types)。 `ClusterIP` 为服务指向的 Pod 集提供一个 IP 地址。这个 IP 地址只能在集群中访问。 如果你希望访客能够访问你的 Guestbook,你必须将前端服务配置为外部可见的, @@ -372,10 +395,12 @@ from outside the Kubernetes cluster. However a Kubernetes user can use 然而即便使用了 `ClusterIP`,Kubernetes 用户仍可以通过 `kubectl port-forward` 访问服务。 - {{< note >}} + 一些云提供商,如 Google Compute Engine 或 Google Kubernetes Engine, 支持外部负载均衡器。如果你的云提供商支持负载均衡器,并且你希望使用它, 只需取消注释 `type: LoadBalancer`。 @@ -422,7 +447,7 @@ Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, su -### 通过 `kubectl port-forward` 查看前端服务 +### 通过 `kubectl port-forward` 查看前端服务 {#viewing-the-frontend-service-via-kubectl-port-forward} -2. 在浏览器中加载 [http://localhost:8080](http://localhost:8080) -页面以查看 Guestbook。 +2. 在浏览器中加载 [http://localhost:8080](http://localhost:8080) 页面以查看 Guestbook。 -### 通过 `LoadBalancer` 查看前端服务 +### 通过 `LoadBalancer` 查看前端服务 {#viewing-the-frontend-service-via-loadbalancer} -如果你部署了 `frontend-service.yaml`,需要找到用来查看 Guestbook 的 -IP 地址。 +如果你部署了 `frontend-service.yaml`,需要找到用来查看 Guestbook 的 IP 地址。 尝试通过输入消息并点击 Submit 来添加一些留言板条目。 -你所输入的消息会在前端显示。这一消息表明数据被通过你 -之前所创建的 Service 添加到 Redis 存储中。 +你所输入的消息会在前端显示。这一消息表明数据被通过你之前所创建的 +Service 添加到 Redis 存储中。 {{< /note >}} -## 扩展 Web 前端 +## 扩展 Web 前端 {#scale-the-web-frontend} 你可以根据需要执行伸缩操作,这是因为服务器本身被定义为使用一个 Deployment 控制器的 Service。 @@ -574,7 +601,8 @@ Deployment 控制器的 Service。 ## {{% heading "cleanup" %}} 删除 Deployments 和服务还会删除正在运行的 Pod。 使用标签用一个命令删除多个资源。 @@ -582,7 +610,7 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels -1. 运行以下命令以删除所有 Pod,Deployments 和 Services。 +1. 运行以下命令以删除所有 Pod、Deployment 和 Service。 ```shell kubectl delete deployment -l app=redis @@ -602,6 +630,7 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels deployment.apps "frontend" deleted service "frontend" deleted ``` + @@ -617,7 +646,6 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels 响应应该是: ``` - No resources found in default namespace. ``` @@ -626,11 +654,11 @@ Deleting the Deployments and Services also deletes any running Pods. 
Use labels * 完成 [Kubernetes 基础](/zh-cn/docs/tutorials/kubernetes-basics/) 交互式教程 * 使用 Kubernetes 创建一个博客,使用 [MySQL 和 Wordpress 的持久卷](/zh-cn/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog) -* 进一步阅读[连接应用程序](/zh-cn/docs/concepts/services-networking/connect-applications-service/) +* 进一步阅读[使用 Service 连接到应用](/zh-cn/docs/tutorials/services/connect-applications-service/) * 进一步阅读[管理资源](/zh-cn/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively) diff --git a/content/zh-cn/examples/secret/serviceaccount/mysecretname.yaml b/content/zh-cn/examples/secret/serviceaccount/mysecretname.yaml new file mode 100644 index 0000000000..e50fe72d71 --- /dev/null +++ b/content/zh-cn/examples/secret/serviceaccount/mysecretname.yaml @@ -0,0 +1,7 @@ +apiVersion: v1 +kind: Secret +type: kubernetes.io/service-account-token +metadata: + name: mysecretname + annotations: + kubernetes.io/service-account.name: myserviceaccount diff --git a/content/zh-cn/releases/patch-releases.md b/content/zh-cn/releases/patch-releases.md index 3169a1d568..479a4cb89e 100644 --- a/content/zh-cn/releases/patch-releases.md +++ b/content/zh-cn/releases/patch-releases.md @@ -2,12 +2,12 @@ title: 补丁版本 type: docs --- - - - ## Cherry Picks 请遵循 [Cherry Pick 流程](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-release/cherry-picks.md)。 @@ -83,7 +82,7 @@ PR 越早准备好越好,因为在实际发布之前,合并了你的 Cherry 不符合合并标准的 Cherry Pick PR 将被带入下一个补丁版本中跟踪。 - + | 月度补丁发布 | Cherry Pick 截止日期 | 目标日期 | | -------------- | -------------------- | ----------- | -| 2022 年 11 月 | 2022-11-04 | 2022-11-09 | -| 2022 年 12 月 | 2022-12-09 | 2022-12-14 | +| 2022 年 12 月 | 2022-12-02 | 2022-12-07 | | 2023 年 1 月 | 2023-01-13 | 2023-01-18 | | 2023 年 2 月 | 2023-02-10 | 2023-02-15 | - ## 活动分支的详细发布历史 {#detailed-release-history-for-active-branches} {{< release-branches >}} -