Merge branch 'dev-1.21' into new-annotations

pull/27095/head
Paco Xu 2021-03-19 13:43:05 +08:00 committed by GitHub
commit 4c5130519c
133 changed files with 4024 additions and 1353 deletions


@ -176,14 +176,20 @@ aliases:
# zhangxiaoyu-zidif
sig-docs-pt-owners: # Admins for Portuguese content
- femrtnz
- jailton
- jcjesus
- devlware
- jhonmike
- rikatz
- yagonobre
sig-docs-pt-reviews: # PR reviews for Portuguese content
- femrtnz
- jailton
- jcjesus
- devlware
- jhonmike
- rikatz
- yagonobre
sig-docs-vi-owners: # Admins for Vietnamese content
- huynguyennovem
- ngtuna


@ -91,7 +91,7 @@ blog = "/:section/:year/:month/:day/:slug/"
[outputs]
home = [ "HTML", "RSS", "HEADERS" ]
page = [ "HTML"]
section = [ "HTML"]
section = [ "HTML", "print" ]
# Add a "text/netlify" media type for auto-generating the _headers file
[mediaTypes]


@ -176,7 +176,7 @@ Cluster-distributed stateful services (e.g., Cassandra) can benefit from splitti
[Logs](/docs/concepts/cluster-administration/logging/) and [metrics](/docs/tasks/debug-application-cluster/resource-usage-monitoring/) (if collected and persistently retained) are valuable to diagnose outages, but given the variety of technologies available it will not be addressed in this blog. If Internet connectivity is available, it may be desirable to retain logs and metrics externally at a central location.
Your production deployment should utilize an automated installation, configuration and update tool (e.g., [Ansible](https://github.com/kubernetes-incubator/kubespray), [BOSH](https://github.com/cloudfoundry-incubator/kubo-deployment), [Chef](https://github.com/chef-cookbooks/kubernetes), [Juju](/docs/getting-started-guides/ubuntu/installation/), [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), [Puppet](https://forge.puppet.com/puppetlabs/kubernetes), etc.). A manual process will have repeatability issues, be labor intensive, error prone, and difficult to scale. [Certified distributions](https://www.cncf.io/certification/software-conformance/#logos) are likely to include a facility for retaining configuration settings across updates, but if you implement your own install and config toolchain, then retention, backup and recovery of the configuration artifacts is essential. Consider keeping your deployment components and settings under a version control system such as Git.
Your production deployment should utilize an automated installation, configuration and update tool (e.g., [Ansible](https://github.com/kubernetes-incubator/kubespray), [BOSH](https://github.com/cloudfoundry-incubator/kubo-deployment), [Chef](https://github.com/chef-cookbooks/kubernetes), [Juju](/docs/getting-started-guides/ubuntu/installation/), [kubeadm](/docs/reference/setup-tools/kubeadm/), [Puppet](https://forge.puppet.com/puppetlabs/kubernetes), etc.). A manual process will have repeatability issues, be labor intensive, error prone, and difficult to scale. [Certified distributions](https://www.cncf.io/certification/software-conformance/#logos) are likely to include a facility for retaining configuration settings across updates, but if you implement your own install and config toolchain, then retention, backup and recovery of the configuration artifacts is essential. Consider keeping your deployment components and settings under a version control system such as Git.
## Outage recovery


@ -17,7 +17,7 @@ Let's dive into the key features of this release:
## Simplified Kubernetes Cluster Management with kubeadm in GA
Most people who have gotten hands-on with Kubernetes have at some point been hands-on with kubeadm. It's an essential tool for managing the cluster lifecycle, from creation to configuration to upgrade; and now kubeadm is officially GA. [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) handles the bootstrapping of production clusters on existing hardware, configuring the core Kubernetes components in a best-practice manner, providing a secure yet easy joining flow for new nodes, and supporting easy upgrades. What's notable about this GA release are the now graduated advanced features, specifically around pluggability and configurability. The scope of kubeadm is to be a toolbox for both admins and automated, higher-level systems, and this release is a significant step in that direction.
Most people who have gotten hands-on with Kubernetes have at some point been hands-on with kubeadm. It's an essential tool for managing the cluster lifecycle, from creation to configuration to upgrade; and now kubeadm is officially GA. [kubeadm](/docs/reference/setup-tools/kubeadm/) handles the bootstrapping of production clusters on existing hardware, configuring the core Kubernetes components in a best-practice manner, providing a secure yet easy joining flow for new nodes, and supporting easy upgrades. What's notable about this GA release are the now graduated advanced features, specifically around pluggability and configurability. The scope of kubeadm is to be a toolbox for both admins and automated, higher-level systems, and this release is a significant step in that direction.
## Container Storage Interface (CSI) Goes GA

(Three binary image files added, not shown: 181 KiB, 152 KiB, and 43 KiB.)


@ -0,0 +1,63 @@
---
layout: blog
title: "The Evolution of Kubernetes Dashboard"
date: 2021-03-09
slug: the-evolution-of-kubernetes-dashboard
---
Authors: Marcin Maciaszczyk, Kubermatic & Sebastian Florek, Kubermatic
In October 2020, the Kubernetes Dashboard officially turned five. As main project maintainers, we can barely believe that so much time has passed since our very first commits to the project. However, looking back with a bit of nostalgia, we realize that quite a lot has happened since then. Now it's due time to celebrate “our baby” with a short recap.
## How It All Began
The initial idea behind the Kubernetes Dashboard project was to provide a web interface for Kubernetes. We wanted to reflect the kubectl functionality through an intuitive web UI. The main benefit from using the UI is to be able to quickly see things that do not work as expected (monitoring and troubleshooting). Also, the Kubernetes Dashboard is a great starting point for users that are new to the Kubernetes ecosystem.
The very [first commit](https://github.com/kubernetes/dashboard/commit/5861187fa807ac1cc2d9b2ac786afeced065076c) to the Kubernetes Dashboard was made by Filip Grządkowski from Google on 16th October 2015, just a few months after the initial commit to the Kubernetes repository. Our initial commits go back to November 2015 ([Sebastian committed on 16 November 2015](https://github.com/kubernetes/dashboard/commit/09e65b6bb08c49b926253de3621a73da05e400fd); [Marcin committed on 23 November 2015](https://github.com/kubernetes/dashboard/commit/1da4b1c25ef040818072c734f71333f9b4733f55)). Since that time, we've become regular contributors to the project. For the next two years, we worked closely with the Googlers, eventually becoming main project maintainers ourselves.
{{< figure src="first-ui.png" caption="The First Version of the User Interface" >}}
{{< figure src="along-the-way-ui.png" caption="Prototype of the New User Interface" >}}
{{< figure src="current-ui.png" caption="The Current User Interface" >}}
As you can see, the initial look and feel of the project were completely different from the current one. We have changed the design multiple times. The same has happened with the code itself.
## Growing Up - The Big Migration
At [the beginning of 2018](https://github.com/kubernetes/dashboard/pull/2727), we reached a point where AngularJS was getting closer to the end of its life, while the new Angular versions were published quite often. A lot of the libraries and the modules that we were using were following the trend. That forced us to spend a lot of the time rewriting the frontend part of the project to make it work with newer technologies.
The migration came with many benefits like being able to refactor a lot of the code, introduce design patterns, reduce code complexity, and benefit from the new modules. However, you can imagine that the scale of the migration was huge. Luckily, there were a number of contributions from the community helping us with the resource support, new Kubernetes version support, i18n, and much more. After many long days and nights, we finally released the [first beta version](https://github.com/kubernetes/dashboard/releases/tag/v2.0.0-beta1) in July 2019, followed by the [2.0 release](https://github.com/kubernetes/dashboard/releases/tag/v2.0.0) in April 2020 — our baby had grown up.
## Where Are We Standing in 2021?
Due to limited resources, unfortunately, we were not able to offer extensive support for many different Kubernetes versions. So, we've decided to always try and support the latest Kubernetes version available at the time of the Kubernetes Dashboard release. The latest release, [Dashboard v2.2.0](https://github.com/kubernetes/dashboard/releases/tag/v2.2.0), provides support for Kubernetes v1.20.
On top of that, we put a great deal of effort into [improving resource support](https://github.com/kubernetes/dashboard/issues/5232). Meanwhile, we do offer support for most of the Kubernetes resources. Also, the Kubernetes Dashboard supports multiple languages: English, German, French, Japanese, Korean, Chinese (Traditional, Simplified, Traditional Hong Kong). Persian and Russian localizations are currently in progress. Moreover, we are working on support for third-party themes and the design of the app in general. As you can see, quite a lot of things are going on.
Luckily, we do have regular contributors with domain knowledge who are taking care of the project, updating the Helm charts, translations, Go modules, and more. But as always, there could be many more hands on deck. So if you are thinking about contributing to Kubernetes, keep us in mind ;)
## What's Next
The Kubernetes Dashboard has been growing and prospering for more than 5 years now. It provides the community with an intuitive Web UI, thereby decreasing the complexity of Kubernetes and increasing its accessibility to new community members. We are proud of what the project has achieved so far, but this is far from the end. These are our priorities for the future:
* Keep providing support for the new Kubernetes versions
* Keep improving the support for the existing resources
* Keep working on auth system improvements
* [Rewrite the API to use gRPC and shared informers](https://github.com/kubernetes/dashboard/pull/5449): This will allow us to improve the performance of the application but, most importantly, to support live updates coming from the Kubernetes project. It is one of the most requested features from the community.
* Split the application into two containers, one with the UI and the second with the API running inside.
## The Kubernetes Dashboard in Numbers
* Initial commit made on October 16, 2015
* Over 100 million pulls from Docker Hub since the v2 release
* 8 supported languages and the next 2 in progress
* Over 3360 closed PRs
* Over 2260 closed issues
* 100% coverage of the supported core Kubernetes resources
* Over 9000 stars on GitHub
* Over 237,000 lines of code
## Join Us
As mentioned earlier, we are currently looking for more people to help us further develop and grow the project. We are open to contributions in multiple areas, i.e., [issues with help wanted label](https://github.com/kubernetes/dashboard/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22). Please feel free to reach out via GitHub or the #sig-ui channel in the [Kubernetes Slack](https://slack.k8s.io/).


@ -31,7 +31,7 @@ The [components](/docs/concepts/overview/components/#node-components) on a node
There are two main ways to have Nodes added to the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}:
1. The kubelet on a node self-registers to the control plane
2. You, or another human user, manually add a Node object
2. You (or another human user) manually add a Node object
After you create a Node object, or the kubelet on a node self-registers, the
control plane checks whether the new Node object is valid. For example, if you
@ -52,8 +52,8 @@ try to create a Node from the following JSON manifest:
Kubernetes creates a Node object internally (the representation). Kubernetes checks
that a kubelet has registered to the API server that matches the `metadata.name`
field of the Node. If the node is healthy (if all necessary services are running),
it is eligible to run a Pod. Otherwise, that node is ignored for any cluster activity
field of the Node. If the node is healthy (i.e. all necessary services are running),
then it is eligible to run a Pod. Otherwise, that node is ignored for any cluster activity
until it becomes healthy.
{{< note >}}
@ -96,14 +96,14 @@ You can create and modify Node objects using
When you want to create Node objects manually, set the kubelet flag `--register-node=false`.
You can modify Node objects regardless of the setting of `--register-node`.
For example, you can set labels on an existing Node, or mark it unschedulable.
For example, you can set labels on an existing Node or mark it unschedulable.
You can use labels on Nodes in conjunction with node selectors on Pods to control
scheduling. For example, you can constrain a Pod to only be eligible to run on
a subset of the available nodes.
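As a minimal sketch of that pattern (the `disktype: ssd` label and all names here are illustrative assumptions, not values taken from this page), a Pod that may only run on Nodes carrying a particular label could look like:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-ssd            # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx:1.20           # illustrative image tag
  nodeSelector:
    disktype: ssd               # Pod is only eligible for Nodes labeled disktype=ssd
```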
Marking a node as unschedulable prevents the scheduler from placing new pods onto
that Node, but does not affect existing Pods on the Node. This is useful as a
that Node but does not affect existing Pods on the Node. This is useful as a
preparatory step before a node reboot or other maintenance.
To mark a Node unschedulable, run:
@ -179,14 +179,14 @@ The node condition is represented as a JSON object. For example, the following s
]
```
If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout` (an argument passed to the {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}), all the Pods on the node are scheduled for deletion by the node controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the API server is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the API server is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node.
If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout` (an argument passed to the {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}), then all the Pods on the node are scheduled for deletion by the node controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the API server is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the API server is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node.
The node controller does not force delete pods until it is confirmed that they have stopped
running in the cluster. You can see the pods that might be running on an unreachable node as
being in the `Terminating` or `Unknown` state. In cases where Kubernetes cannot deduce from the
underlying infrastructure if a node has permanently left a cluster, the cluster administrator
may need to delete the node object by hand. Deleting the node object from Kubernetes causes
all the Pod objects running on the node to be deleted from the API server, and frees up their
may need to delete the node object by hand. Deleting the node object from Kubernetes causes
all the Pod objects running on the node to be deleted from the API server and frees up their
names.
The node lifecycle controller automatically creates
@ -199,7 +199,7 @@ for more details.
### Capacity and Allocatable {#capacity}
Describes the resources available on the node: CPU, memory and the maximum
Describes the resources available on the node: CPU, memory, and the maximum
number of pods that can be scheduled onto the node.
The fields in the capacity block indicate the total amount of resources that a
@ -225,18 +225,19 @@ CIDR block to the node when it is registered (if CIDR assignment is turned on).
The second is keeping the node controller's internal list of nodes up to date with
the cloud provider's list of available machines. When running in a cloud
environment, whenever a node is unhealthy, the node controller asks the cloud
environment and whenever a node is unhealthy, the node controller asks the cloud
provider if the VM for that node is still available. If not, the node
controller deletes the node from its list of nodes.
The third is monitoring the nodes' health. The node controller is
responsible for updating the NodeReady condition of NodeStatus to
ConditionUnknown when a node becomes unreachable (i.e. the node controller stops
receiving heartbeats for some reason, for example due to the node being down), and then later evicting
all the pods from the node (using graceful termination) if the node continues
to be unreachable. (The default timeouts are 40s to start reporting
ConditionUnknown and 5m after that to start evicting pods.) The node controller
checks the state of each node every `--node-monitor-period` seconds.
responsible for:
- Updating the NodeReady condition of NodeStatus to ConditionUnknown when a node
becomes unreachable, as the node controller stops receiving heartbeats for some
reason such as the node being down.
- Evicting all the pods from the node using graceful termination if
the node continues to be unreachable. The default timeouts are 40s to start
reporting ConditionUnknown and 5m after that to start evicting pods.
The node controller checks the state of each node every `--node-monitor-period` seconds.
#### Heartbeats
@ -252,13 +253,14 @@ of the node heartbeats as the cluster scales.
The kubelet is responsible for creating and updating the `NodeStatus` and
a Lease object.
- The kubelet updates the `NodeStatus` either when there is change in status,
- The kubelet updates the `NodeStatus` either when there is change in status
or if there has been no update for a configured interval. The default interval
for `NodeStatus` updates is 5 minutes (much longer than the 40 second default
timeout for unreachable nodes).
for `NodeStatus` updates is 5 minutes, which is much longer than the 40 second default
timeout for unreachable nodes.
- The kubelet creates and then updates its Lease object every 10 seconds
(the default update interval). Lease updates occur independently from the
`NodeStatus` updates. If the Lease update fails, the kubelet retries with exponential backoff starting at 200 milliseconds and capped at 7 seconds.
`NodeStatus` updates. If the Lease update fails, the kubelet retries with
exponential backoff starting at 200 milliseconds and capped at 7 seconds.
#### Reliability
@ -269,23 +271,24 @@ from more than 1 node per 10 seconds.
The node eviction behavior changes when a node in a given availability zone
becomes unhealthy. The node controller checks what percentage of nodes in the zone
are unhealthy (NodeReady condition is ConditionUnknown or ConditionFalse) at
the same time. If the fraction of unhealthy nodes is at least
`--unhealthy-zone-threshold` (default 0.55) then the eviction rate is reduced:
if the cluster is small (i.e. has less than or equal to
`--large-cluster-size-threshold` nodes - default 50) then evictions are
stopped, otherwise the eviction rate is reduced to
`--secondary-node-eviction-rate` (default 0.01) per second. The reason these
policies are implemented per availability zone is because one availability zone
might become partitioned from the master while the others remain connected. If
your cluster does not span multiple cloud provider availability zones, then
there is only one availability zone (the whole cluster).
the same time:
- If the fraction of unhealthy nodes is at least `--unhealthy-zone-threshold`
(default 0.55), then the eviction rate is reduced.
- If the cluster is small (i.e. has less than or equal to
`--large-cluster-size-threshold` nodes - default 50), then evictions are stopped.
- Otherwise, the eviction rate is reduced to `--secondary-node-eviction-rate`
(default 0.01) per second.
The reason these policies are implemented per availability zone is because one
availability zone might become partitioned from the master while the others remain
connected. If your cluster does not span multiple cloud provider availability zones,
then there is only one availability zone (i.e. the whole cluster).
A key reason for spreading your nodes across availability zones is so that the
workload can be shifted to healthy zones when one entire zone goes down.
Therefore, if all nodes in a zone are unhealthy then the node controller evicts at
Therefore, if all nodes in a zone are unhealthy, then the node controller evicts at
the normal rate of `--node-eviction-rate`. The corner case is when all zones are
completely unhealthy (i.e. there are no healthy nodes in the cluster). In such a
case, the node controller assumes that there's some problem with master
case, the node controller assumes that there is some problem with master
connectivity and stops all evictions until some connectivity is restored.
The node controller is also responsible for evicting pods running on nodes with
@ -303,8 +306,8 @@ eligible for, effectively removing incoming load balancer traffic from the cordo
### Node capacity
Node objects track information about the Node's resource capacity (for example: the amount
of memory available, and the number of CPUs).
Node objects track information about the Node's resource capacity: for example, the amount
of memory available and the number of CPUs.
Nodes that [self register](#self-registration-of-nodes) report their capacity during
registration. If you [manually](#manual-node-administration) add a Node, then
you need to set the node's capacity information when you add it.
@ -338,7 +341,7 @@ for more information.
If you have enabled the `GracefulNodeShutdown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/), then the kubelet attempts to detect the node system shutdown and terminates pods running on the node.
Kubelet ensures that pods follow the normal [pod termination process](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) during the node shutdown.
When the `GracefulNodeShutdown` feature gate is enabled, kubelet uses [systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to delay the node shutdown with a given duration. During a shutdown kubelet terminates pods in two phases:
When the `GracefulNodeShutdown` feature gate is enabled, kubelet uses [systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to delay the node shutdown with a given duration. During a shutdown, kubelet terminates pods in two phases:
1. Terminate regular pods running on the node.
2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) running on the node.
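As a rough sketch, both the feature gate and the grace periods for the two phases above can be set in the kubelet configuration file; the field names (`shutdownGracePeriod`, `shutdownGracePeriodCriticalPods`) and the durations below are assumptions, not values stated on this page:
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  GracefulNodeShutdown: true          # enable graceful node shutdown handling
shutdownGracePeriod: 30s              # total time the node shutdown is delayed
shutdownGracePeriodCriticalPods: 10s  # portion of the above reserved for critical pods
```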


@ -83,12 +83,15 @@ As an example, you can find detailed information about how `kube-up.sh` sets
up logging for COS image on GCP in the corresponding
[`configure-helper` script](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh).
When using a **CRI container runtime**, the kubelet is responsible for rotating the logs and managing the logging directory structure. The kubelet
sends this information to the CRI container runtime and the runtime writes the container logs to the given location. The two kubelet flags `container-log-max-size` and `container-log-max-files` can be used to configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
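For illustration, the same limits can be expressed in a kubelet configuration file; the `containerLogMaxSize` and `containerLogMaxFiles` field names are assumptions here, and the values shown are the usual defaults:
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi   # rotate a container's log file once it reaches this size
containerLogMaxFiles: 5     # keep at most this many log files per container
```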
When you run [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) as in
the basic logging example, the kubelet on the node handles the request and
reads directly from the log file. The kubelet returns the content of the log file.
{{< note >}}
If an external system has performed the rotation,
If an external system has performed the rotation or a CRI container runtime is used,
only the contents of the latest log file will be available through
`kubectl logs`. For example, if there's a 10MB file, `logrotate` performs
the rotation and there are two files: one file that is 10MB in size and a second file that is empty.


@ -134,7 +134,7 @@ cloudprovider_gce_api_request_duration_seconds { request = "list_disk"}
### kube-scheduler metrics
{{< feature-state for_k8s_version="v1.20" state="alpha" >}}
{{< feature-state for_k8s_version="v1.21" state="beta" >}}
The scheduler exposes optional metrics that report the requested resources and the desired limits of all running pods. These metrics can be used to build capacity planning dashboards, assess current or historical scheduling limits, quickly identify workloads that cannot schedule due to lack of resources, and compare actual usage to the pods' requests.


@ -43,7 +43,7 @@ Kubernetes objects that have a `spec`, a ConfigMap has `data` and `binaryData`
fields. These fields accept key-value pairs as their values. Both the `data`
field and the `binaryData` are optional. The `data` field is designed to
contain UTF-8 byte sequences while the `binaryData` field is designed to
contain binary data.
contain binary data as base64-encoded strings.
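A minimal sketch of a ConfigMap that uses both fields (the name, keys, and values are illustrative):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config            # illustrative name
data:
  player_initial_lives: "3"       # plain UTF-8 value
binaryData:
  logo.png: iVBORw0KGgo=          # binary payload, stored as a base64-encoded string
```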
The name of a ConfigMap must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).


@ -81,9 +81,9 @@ The [imagePullPolicy](/docs/concepts/containers/images/#updating-images) and the
- `imagePullPolicy: Always`: every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image digest. If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image; otherwise, the kubelet downloads (pulls) the image with the resolved digest, and uses that image to launch the container.
- `imagePullPolicy` is omitted and either the image tag is `:latest` or it is omitted: `Always` is applied.
- `imagePullPolicy` is omitted and either the image tag is `:latest` or it is omitted: `imagePullPolicy` is automatically set to `Always`. Note that this will _not_ be updated to `IfNotPresent` if the tag changes value.
- `imagePullPolicy` is omitted and the image tag is present but not `:latest`: `IfNotPresent` is applied.
- `imagePullPolicy` is omitted and the image tag is present but not `:latest`: `imagePullPolicy` is automatically set to `IfNotPresent`. Note that this will _not_ be updated to `Always` if the tag is later removed or changed to `:latest`.
- `imagePullPolicy: Never`: the image is assumed to exist locally. No attempt is made to pull the image.
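For example, a Pod that opts into the digest-resolution behaviour described in the first bullet sets the policy explicitly (the Pod name and image are illustrative):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: always-pull-example                   # illustrative name
spec:
  containers:
  - name: app
    image: registry.example.com/app:v1.2.3    # illustrative image reference
    imagePullPolicy: Always                   # explicit, so the tag-based defaults above do not apply
```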
@ -96,7 +96,7 @@ You should avoid using the `:latest` tag when deploying containers in production
{{< /note >}}
{{< note >}}
The caching semantics of the underlying image provider make even `imagePullPolicy: Always` efficient. With Docker, for example, if the image already exists, the pull attempt is fast because all image layers are cached and no image download is needed.
The caching semantics of the underlying image provider make even `imagePullPolicy: Always` efficient, as long as the registry is reliably accessible. With Docker, for example, if the image already exists, the pull attempt is fast because all image layers are cached and no image download is needed.
{{< /note >}}
## Using kubectl


@ -49,16 +49,32 @@ Instead, specify a meaningful tag such as `v1.42.0`.
## Updating images
The default pull policy is `IfNotPresent` which causes the
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} to skip
pulling an image if it already exists. If you would like to always force a pull,
you can do one of the following:
When you first create a {{< glossary_tooltip text="Deployment" term_id="deployment" >}},
{{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}}, Pod, or other
object that includes a Pod template, then by default the pull policy of all
containers in that pod will be set to `IfNotPresent` if it is not explicitly
specified. This policy causes the
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} to skip pulling an
image if it already exists.
If you would like to always force a pull, you can do one of the following:
- set the `imagePullPolicy` of the container to `Always`.
- omit the `imagePullPolicy` and use `:latest` as the tag for the image to use.
- omit the `imagePullPolicy` and use `:latest` as the tag for the image to use;
Kubernetes will set the policy to `Always`.
- omit the `imagePullPolicy` and the tag for the image to use.
- enable the [AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) admission controller.
{{< note >}}
The value of `imagePullPolicy` of the container is always set when the object is
first _created_, and is not updated if the image's tag later changes.
For example, if you create a Deployment with an image whose tag is _not_
`:latest`, and later update that Deployment's image to a `:latest` tag, the
`imagePullPolicy` field will _not_ change to `Always`. You must manually change
the pull policy of any object after its initial creation.
{{< /note >}}
When `imagePullPolicy` is defined without a specific value, it is also set to `Always`.
## Multi-architecture images with image indexes


@ -103,26 +103,27 @@ as well as keeping the existing service in good shape.
## Writing your own Operator {#writing-operator}
If there isn't an Operator in the ecosystem that implements the behavior you
want, you can code your own. In [What's next](#what-s-next) you'll find a few
links to libraries and tools you can use to write your own cloud native
Operator.
want, you can code your own.
You can also implement an Operator (that is, a Controller) using any language / runtime
that can act as a [client for the Kubernetes API](/docs/reference/using-api/client-libraries/).
Following are a few libraries and tools you can use to write your own cloud native
Operator.
{{% thirdparty-content %}}
* [kubebuilder](https://book.kubebuilder.io/)
* [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator)
* [Metacontroller](https://metacontroller.app/) along with WebHooks that
you implement yourself
* [Operator Framework](https://operatorframework.io)
## {{% heading "whatsnext" %}}
* Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
* Find ready-made operators on [OperatorHub.io](https://operatorhub.io/) to suit your use case
* Use existing tools to write your own operator, eg:
* using [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator)
* using [kubebuilder](https://book.kubebuilder.io/)
* using [Metacontroller](https://metacontroller.app/) along with WebHooks that
you implement yourself
* using the [Operator Framework](https://operatorframework.io)
* [Publish](https://operatorhub.io/) your operator for other people to use
* Read [CoreOS' original article](https://web.archive.org/web/20170129131616/https://coreos.com/blog/introducing-operators.html) that introduced the Operator pattern (this is an archived version of the original article).
* Read an [article](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) from Google Cloud about best practices for building Operators


@ -124,6 +124,10 @@ In release 1.8, quota support for local ephemeral storage is added as an alpha f
| `limits.ephemeral-storage` | Across all pods in the namespace, the sum of local ephemeral storage limits cannot exceed this value. |
| `ephemeral-storage` | Same as `requests.ephemeral-storage`. |
{{< note >}}
When using a CRI container runtime, container logs will count against the ephemeral storage quota. This can result in the unexpected eviction of pods that have exhausted their storage quotas. Refer to [Logging Architecture](/docs/concepts/cluster-administration/logging/) for details.
{{< /note >}}
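A sketch of a ResourceQuota that uses these keys (the quota name, namespace, and amounts are illustrative):
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ephemeral-storage-quota       # illustrative name
  namespace: dev                      # illustrative namespace
spec:
  hard:
    requests.ephemeral-storage: 2Gi   # sum of local ephemeral storage requests in the namespace
    limits.ephemeral-storage: 4Gi     # sum of local ephemeral storage limits in the namespace
```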
## Object Count Quota
You can set quota for the total number of certain resources of all standard,


@ -5,7 +5,7 @@ reviewers:
- bsalamat
title: Assigning Pods to Nodes
content_type: concept
weight: 50
weight: 20
---


@ -5,7 +5,7 @@ reviewers:
- ahg-g
title: Resource Bin Packing for Extended Resources
content_type: concept
weight: 50
weight: 30
---
<!-- overview -->


@ -80,7 +80,7 @@ parameters:
Users request dynamically provisioned storage by including a storage class in
their `PersistentVolumeClaim`. Before Kubernetes v1.6, this was done via the
`volume.beta.kubernetes.io/storage-class` annotation. However, this annotation
is deprecated since v1.6. Users now can and should instead use the
is deprecated since v1.9. Users now can and should instead use the
`storageClassName` field of the `PersistentVolumeClaim` object. The value of
this field must match the name of a `StorageClass` configured by the
administrator (see [below](#enabling-dynamic-provisioning)).
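For example, a claim that requests dynamically provisioned storage from a class named `fast` could look like this (the class name, claim name, and size are illustrative):
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim          # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: fast    # must match the name of a StorageClass configured by the administrator
  resources:
    requests:
      storage: 10Gi         # illustrative size
```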


@ -17,6 +17,7 @@ which a pod runs: network-attached storage might not be accessible by
all nodes, or storage is local to a node to begin with.
{{< feature-state for_k8s_version="v1.19" state="alpha" >}}
{{< feature-state for_k8s_version="v1.21" state="beta" >}}
This page describes how Kubernetes keeps track of storage capacity and
how the scheduler uses that information to schedule Pods onto nodes
@ -103,34 +104,10 @@ to handle this automatically.
## Enabling storage capacity tracking
Storage capacity tracking is an *alpha feature* and only enabled when
the `CSIStorageCapacity` [feature
gate](/docs/reference/command-line-tools-reference/feature-gates/) and
the `storage.k8s.io/v1alpha1` {{< glossary_tooltip text="API group" term_id="api-group" >}} are enabled. For details on
that, see the `--feature-gates` and `--runtime-config` [kube-apiserver
parameters](/docs/reference/command-line-tools-reference/kube-apiserver/).
A quick check
whether a Kubernetes cluster supports the feature is to list
CSIStorageCapacity objects with:
```shell
kubectl get csistoragecapacities --all-namespaces
```
If your cluster supports CSIStorageCapacity, the response is either a list of CSIStorageCapacity objects or:
```
No resources found
```
If not supported, this error is printed instead:
```
error: the server doesn't have a resource type "csistoragecapacities"
```
In addition to enabling the feature in the cluster, a CSI
driver also has to
support it. Please refer to the driver's documentation for
details.
Storage capacity tracking is a beta feature and enabled by default in
a Kubernetes cluster since Kubernetes 1.21. In addition to having the
feature enabled in the cluster, a CSI driver also has to support
it. Please refer to the driver's documentation for details.
## {{% heading "whatsnext" %}}


@ -34,8 +34,9 @@ Kubernetes supports many types of volumes. A {{< glossary_tooltip term_id="pod"
can use any number of volume types simultaneously.
Ephemeral volume types have a lifetime of a pod, but persistent volumes exist beyond
the lifetime of a pod. Consequently, a volume outlives any containers
that run within the pod, and data is preserved across container restarts. When a
pod ceases to exist, the volume is destroyed.
that run within the pod, and data is preserved across container restarts. When a pod
ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not
destroy persistent volumes.
At its core, a volume is just a directory, possibly with some data in it, which
is accessible to the containers in a pod. How that directory comes to be, the
@ -152,14 +153,16 @@ For more details, see the [`azureFile` volume plugin](https://github.com/kuberne
#### azureFile CSI migration
{{< feature-state for_k8s_version="v1.15" state="alpha" >}}
{{< feature-state for_k8s_version="v1.21" state="beta" >}}
The `CSIMigration` feature for `azureFile`, when enabled, redirects all plugin operations
from the existing in-tree plugin to the `file.csi.azure.com` Container
Storage Interface (CSI) Driver. In order to use this feature, the [Azure File CSI
Driver](https://github.com/kubernetes-sigs/azurefile-csi-driver)
must be installed on the cluster and the `CSIMigration` and `CSIMigrationAzureFile`
alpha features must be enabled.
[feature gates](/docs/reference/command-line-tools-reference/feature-gates/) must be enabled.
The Azure File CSI driver does not support using the same volume with different fsgroups. If Azure File CSI migration is enabled, using the same volume with different fsgroups won't be supported at all.
### cephfs


@ -54,7 +54,7 @@ In this example:
{{< note >}}
The `.spec.selector.matchLabels` field is a map of {key,value} pairs.
A single {key,value} in the `matchLabels` map is equivalent to an element of `matchExpressions`,
whose key field is "key" the operator is "In", and the values array contains only "value".
whose `key` field is "key", the `operator` is "In", and the `values` array contains only "value".
All of the requirements, from both `matchLabels` and `matchExpressions`, must be satisfied in order to match.
{{< /note >}}
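As an illustration of that equivalence (the label key and value are arbitrary), the following two selectors match the same Pods:
```yaml
selector:
  matchLabels:
    app: nginx
---
selector:
  matchExpressions:
  - key: app          # the map key becomes the expression key
    operator: In
    values:
    - nginx           # the map value is the sole element of the values array
```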


@ -145,8 +145,8 @@ There are three main types of task suitable to run as a Job:
- the Job is complete as soon as its Pod terminates successfully.
1. Parallel Jobs with a *fixed completion count*:
- specify a non-zero positive value for `.spec.completions`.
- the Job represents the overall task, and is complete when there is one successful Pod for each value in the range 1 to `.spec.completions`.
- **not implemented yet:** Each Pod is passed a different index in the range 1 to `.spec.completions`.
- the Job represents the overall task, and is complete when there are `.spec.completions` successful Pods.
- when using `.spec.completionMode="Indexed"`, each Pod gets a different index in the range 0 to `.spec.completions-1`.
1. Parallel Jobs with a *work queue*:
- do not specify `.spec.completions`, default to `.spec.parallelism`.
- the Pods must coordinate amongst themselves or an external service to determine what each should work on. For example, a Pod might fetch a batch of up to N items from the work queue.
@ -166,7 +166,6 @@ a non-negative integer.
For more information about how to make use of the different types of job, see the [job patterns](#job-patterns) section.
#### Controlling parallelism
The requested parallelism (`.spec.parallelism`) can be set to any non-negative value.
@ -185,6 +184,33 @@ parallelism, for a variety of reasons:
- The Job controller may throttle new Pod creation due to excessive previous pod failures in the same Job.
- When a Pod is gracefully shut down, it takes time to stop.
### Completion mode
{{< feature-state for_k8s_version="v1.21" state="alpha" >}}
{{< note >}}
To be able to create Indexed Jobs, make sure to enable the `IndexedJob`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
on the [API server](/docs/reference/command-line-tools-reference/kube-apiserver/)
and the [controller manager](/docs/reference/command-line-tools-reference/kube-controller-manager/).
{{< /note >}}
Jobs with _fixed completion count_ - that is, jobs that have a non-null
`.spec.completions` - can have a completion mode that is specified in `.spec.completionMode`:
- `NonIndexed` (default): the Job is considered complete when there have been
`.spec.completions` successfully completed Pods. In other words, each Pod
completion is homologous to each other. Note that Jobs that have null
`.spec.completions` are implicitly `NonIndexed`.
- `Indexed`: the Pods of a Job get an associated completion index from 0 to
`.spec.completions-1`, available in the annotation `batch.kubernetes.io/job-completion-index`.
The Job is considered complete when there is one successfully completed Pod
for each index. For more information about how to use this mode, see
[Indexed Job for Parallel Processing with Static Work Assignment](/docs/tasks/job/indexed-parallel-processing-static/).
Note that, although rare, more than one Pod could be started for the same
index, but only one of them will count towards the completion count.
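A sketch of an Indexed Job (the name, image, and command are illustrative; the completion index is read here from the Pod annotation via the downward API):
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-job-example            # illustrative name
spec:
  completions: 5                       # one successful Pod required for each index 0..4
  parallelism: 3
  completionMode: Indexed              # each Pod gets a distinct completion index
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox                 # illustrative image
        command: ["sh", "-c", "echo processing item $JOB_COMPLETION_INDEX"]
        env:
        - name: JOB_COMPLETION_INDEX   # exposed from the Pod annotation via the downward API
          valueFrom:
            fieldRef:
              fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']
```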
## Handling Pod and container failures
A container in a Pod may fail for a number of reasons, such as because the process in it exited with
@ -348,12 +374,12 @@ The tradeoffs are:
The tradeoffs are summarized here, with columns 2 to 4 corresponding to the above tradeoffs.
The pattern names are also links to examples and more detailed description.
| Pattern | Single Job object | Fewer pods than work items? | Use app unmodified? | Works in Kube 1.1? |
| -------------------------------------------------------------------- |:-----------------:|:---------------------------:|:-------------------:|:-------------------:|
| [Job Template Expansion](/docs/tasks/job/parallel-processing-expansion/) | | | ✓ | ✓ |
| [Queue with Pod Per Work Item](/docs/tasks/job/coarse-parallel-processing-work-queue/) | ✓ | | sometimes | ✓ |
| [Queue with Variable Pod Count](/docs/tasks/job/fine-parallel-processing-work-queue/) | ✓ | ✓ | | ✓ |
| Single Job with Static Work Assignment | ✓ | | ✓ | |
| Pattern | Single Job object | Fewer pods than work items? | Use app unmodified? |
| ----------------------------------------- |:-----------------:|:---------------------------:|:-------------------:|
| [Queue with Pod Per Work Item] | ✓ | | sometimes |
| [Queue with Variable Pod Count] | ✓ | ✓ | |
| [Indexed Job with Static Work Assignment] | ✓ | | ✓ |
| [Job Template Expansion] | | | ✓ |
When you specify completions with `.spec.completions`, each Pod created by the Job controller
has an identical [`spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). This means that
@ -364,13 +390,17 @@ are different ways to arrange for pods to work on different things.
This table shows the required settings for `.spec.parallelism` and `.spec.completions` for each of the patterns.
Here, `W` is the number of work items.
| Pattern | `.spec.completions` | `.spec.parallelism` |
| -------------------------------------------------------------------- |:-------------------:|:--------------------:|
| [Job Template Expansion](/docs/tasks/job/parallel-processing-expansion/) | 1 | should be 1 |
| [Queue with Pod Per Work Item](/docs/tasks/job/coarse-parallel-processing-work-queue/) | W | any |
| [Queue with Variable Pod Count](/docs/tasks/job/fine-parallel-processing-work-queue/) | 1 | any |
| Single Job with Static Work Assignment | W | any |
| Pattern | `.spec.completions` | `.spec.parallelism` |
| ----------------------------------------- |:-------------------:|:--------------------:|
| [Queue with Pod Per Work Item] | W | any |
| [Queue with Variable Pod Count] | null | any |
| [Indexed Job with Static Work Assignment] | W | any |
| [Job Template Expansion] | 1 | should be 1 |
[Queue with Pod Per Work Item]: /docs/tasks/job/coarse-parallel-processing-work-queue/
[Queue with Variable Pod Count]: /docs/tasks/job/fine-parallel-processing-work-queue/
[Indexed Job with Static Work Assignment]: /docs/tasks/job/indexed-parallel-processing-static/
[Job Template Expansion]: /docs/tasks/job/parallel-processing-expansion/
## Advanced usage


@ -62,8 +62,6 @@ different Kubernetes components.
| `BoundServiceAccountTokenVolume` | `false` | Alpha | 1.13 | |
| `CPUManager` | `false` | Alpha | 1.8 | 1.9 |
| `CPUManager` | `true` | Beta | 1.10 | |
| `CRIContainerLogRotation` | `false` | Alpha | 1.10 | 1.10 |
| `CRIContainerLogRotation` | `true` | Beta| 1.11 | |
| `CSIInlineVolume` | `false` | Alpha | 1.15 | 1.15 |
| `CSIInlineVolume` | `true` | Beta | 1.16 | - |
| `CSIMigration` | `false` | Alpha | 1.14 | 1.16 |
@ -74,7 +72,8 @@ different Kubernetes components.
| `CSIMigrationAzureDisk` | `false` | Alpha | 1.15 | 1.18 |
| `CSIMigrationAzureDisk` | `false` | Beta | 1.19 | |
| `CSIMigrationAzureDiskComplete` | `false` | Alpha | 1.17 | |
| `CSIMigrationAzureFile` | `false` | Alpha | 1.15 | |
| `CSIMigrationAzureFile` | `false` | Alpha | 1.15 | 1.19 |
| `CSIMigrationAzureFile` | `false` | Beta | 1.21 | |
| `CSIMigrationAzureFileComplete` | `false` | Alpha | 1.17 | |
| `CSIMigrationGCE` | `false` | Alpha | 1.14 | 1.16 |
| `CSIMigrationGCE` | `false` | Beta | 1.17 | |
@ -85,7 +84,8 @@ different Kubernetes components.
| `CSIMigrationvSphere` | `false` | Beta | 1.19 | |
| `CSIMigrationvSphereComplete` | `false` | Beta | 1.19 | |
| `CSIServiceAccountToken` | `false` | Alpha | 1.20 | |
| `CSIStorageCapacity` | `false` | Alpha | 1.19 | |
| `CSIStorageCapacity` | `false` | Alpha | 1.19 | 1.20 |
| `CSIStorageCapacity` | `true` | Beta | 1.21 | |
| `CSIVolumeFSGroupPolicy` | `false` | Alpha | 1.19 | 1.19 |
| `CSIVolumeFSGroupPolicy` | `true` | Beta | 1.20 | |
| `ConfigurableFSGroupPolicy` | `false` | Alpha | 1.18 | 1.19 |
@ -199,6 +199,9 @@ different Kubernetes components.
| `BlockVolume` | `false` | Alpha | 1.9 | 1.12 |
| `BlockVolume` | `true` | Beta | 1.13 | 1.17 |
| `BlockVolume` | `true` | GA | 1.18 | - |
| `CRIContainerLogRotation` | `false` | Alpha | 1.10 | 1.10 |
| `CRIContainerLogRotation` | `true` | Beta | 1.11 | 1.20 |
| `CRIContainerLogRotation` | `true` | GA | 1.21 | - |
| `CSIBlockVolume` | `false` | Alpha | 1.11 | 1.13 |
| `CSIBlockVolume` | `true` | Beta | 1.14 | 1.17 |
| `CSIBlockVolume` | `true` | GA | 1.18 | - |
@ -260,6 +263,7 @@ different Kubernetes components.
| `ImmutableEphemeralVolumes` | `false` | Alpha | 1.18 | 1.18 |
| `ImmutableEphemeralVolumes` | `true` | Beta | 1.19 | 1.20 |
| `ImmutableEphemeralVolumes` | `true` | GA | 1.21 | |
| `IndexedJob` | `false` | Alpha | 1.21 | |
| `Initializers` | `false` | Alpha | 1.7 | 1.13 |
| `Initializers` | - | Deprecated | 1.14 | - |
| `KubeletConfigFile` | `false` | Alpha | 1.8 | 1.9 |
@ -449,7 +453,9 @@ Each feature gate is designed for enabling/disabling a specific feature:
for more details.
- `CPUManager`: Enable container level CPU affinity support, see
[CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/).
- `CRIContainerLogRotation`: Enable container log rotation for cri container runtime.
- `CRIContainerLogRotation`: Enable container log rotation for CRI container runtime. The default max size of a log file is 10MB and the
default max number of log files allowed for a container is 5. These values can be configured in the kubelet config.
See the [logging at node level](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level) documentation for more details.
- `CSIBlockVolume`: Enable external CSI volume drivers to support block storage.
See the [`csi` raw block volume support](/docs/concepts/storage/volumes/#csi-raw-block-volume-support)
documentation for more details.
@ -629,10 +635,12 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `HyperVContainer`: Enable
[Hyper-V isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container)
for Windows containers.
- `IPv6DualStack`: Enable [dual stack](/docs/concepts/services-networking/dual-stack/)
support for IPv6.
- `ImmutableEphemeralVolumes`: Allows for marking individual Secrets and ConfigMaps as
immutable for better safety and performance.
- `IndexedJob`: Allows the [Job](/docs/concepts/workloads/controllers/job/)
controller to manage Pod completions per completion index.
- `IPv6DualStack`: Enable [dual stack](/docs/concepts/services-networking/dual-stack/)
support for IPv6.
- `KubeletConfigFile` (*deprecated*): Enable loading kubelet configuration from
a file specified using a config file.
See [setting kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file/)
@ -737,7 +745,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
[ServiceTopology](/docs/concepts/services-networking/service-topology/)
for more details.
- `SizeMemoryBackedVolumes`: Enables kubelet support to size memory backed volumes.
See [volumes](docs/concepts/storage/volumes) for more details.
See [volumes](/docs/concepts/storage/volumes) for more details.
- `SetHostnameAsFQDN`: Enable the ability of setting Fully Qualified Domain
Name(FQDN) as the hostname of a pod. See
[Pod's `setHostnameAsFQDN` field](/docs/concepts/services-networking/dns-pod-service/#pod-sethostnameasfqdn-field).

(Four file diffs suppressed because one or more lines are too long.)


@ -224,14 +224,14 @@ kubelet [flags]
<td colspan="2">--container-log-max-files int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">&lt;Warning: Beta feature&gt; Set the maximum number of container log files that can be present for a container. The number must be &ge; 2. This flag can only be used with `--container-runtime=remote`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Set the maximum number of container log files that can be present for a container. The number must be &ge; 2. This flag can only be used with `--container-runtime=remote`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
<td colspan="2">--container-log-max-size string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `10Mi`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">&lt;Warning: Beta feature&gt; Set the maximum size (e.g. 10Mi) of container log file before it is rotated. This flag can only be used with `--container-runtime=remote`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Set the maximum size (e.g. 10Mi) of container log file before it is rotated. This flag can only be used with `--container-runtime=remote`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
@ -298,13 +298,6 @@ kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;">The Kubelet will use this directory for checkpointing downloaded configurations and tracking configuration health. The Kubelet will create this directory if it does not already exist. The path may be absolute or relative; relative paths start at the Kubelet's current working directory. Providing this flag enables dynamic Kubelet configuration. The `DynamicKubeletConfig` feature gate must be enabled to pass this flag; this gate currently defaults to `true` because the feature is beta.</td>
</tr>
<tr>
<td colspan="2">--enable-cadvisor-json-endpoints&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `false`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Enable cAdvisor json `/spec` and `/stats/*` endpoints. This flag has no effect on the /stats/summary endpoint. (DEPRECATED: will be removed in a future version)</td>
</tr>
<tr>
<td colspan="2">--enable-controller-attach-detach&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: `true`</td>
</tr>
@ -462,7 +455,6 @@ AppArmor=true|false (BETA - default=true)<br/>
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)<br/>
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)<br/>
CPUManager=true|false (BETA - default=true)<br/>
CRIContainerLogRotation=true|false (BETA - default=true)<br/>
CSIInlineVolume=true|false (BETA - default=true)<br/>
CSIMigration=true|false (BETA - default=true)<br/>
CSIMigrationAWS=true|false (BETA - default=false)<br/>


@ -19,7 +19,7 @@ files by setting the KUBECONFIG environment variable or by setting the
This overview covers `kubectl` syntax, describes the command operations, and provides common examples.
For details about each command, including all the supported flags and subcommands, see the
[kubectl](/docs/reference/generated/kubectl/kubectl-commands/) reference documentation.
For installation instructions see [installing kubectl](/docs/tasks/tools/install-kubectl/).
For installation instructions see [installing kubectl](/docs/tasks/tools/).
<!-- body -->


@ -198,6 +198,15 @@ The kubelet can set this annotation on a Node to denote its configured IPv4 addr
When kubelet is started with the "external" cloud provider, it sets this annotation on the Node to denote an IP address set from the command line flag (`--node-ip`). This IP is verified with the cloud provider as valid by the cloud-controller-manager.
## batch.kubernetes.io/job-completion-index
Example: `batch.kubernetes.io/job-completion-index: "3"`
Used on: Pod
The Job controller in the kube-controller-manager sets this annotation for Pods
created with Indexed [completion mode](/docs/concepts/workloads/controllers/job/#completion-mode).
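For instance, a Pod created for index 3 of such a Job carries metadata roughly like the following (the Pod and Job names are illustrative):
```yaml
metadata:
  name: sample-job-3-xk9qz      # illustrative generated Pod name
  labels:
    job-name: sample-job        # illustrative Job name
  annotations:
    batch.kubernetes.io/job-completion-index: "3"
```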
## kubectl.kubernetes.io/default-container
Example: `kubectl.kubernetes.io/default-container: "front-end-app"`


@ -181,8 +181,6 @@ that are not enabled by default:
- `RequestedToCapacityRatio`: Favor nodes according to a configured function of
the allocated resources.
Extension points: `Score`.
- `NodeResourceLimits`: Favors nodes that satisfy the Pod resource limits.
Extension points: `PreScore`, `Score`.
- `CinderVolume`: Checks that OpenStack Cinder volume limits can be satisfied
for the node.
Extension points: `Filter`.


@ -9,7 +9,7 @@ weight: 40
<!-- overview -->
Kubernetes requires PKI certificates for authentication over TLS.
If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), the certificates that your cluster requires are automatically generated.
If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/), the certificates that your cluster requires are automatically generated.
You can also generate your own certificates -- for example, to keep your private keys more secure by not storing them on the API server.
This page explains the certificates that your cluster requires.
@ -74,7 +74,7 @@ Required certificates:
| kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | |
| front-proxy-client | kubernetes-front-proxy-ca | | client | |
[1]: any other IP or DNS name you contact your cluster on (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/)
[1]: any other IP or DNS name you contact your cluster on (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/)
the load balancer stable IP and/or DNS name, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`,
`kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`)
@ -100,7 +100,7 @@ For kubeadm users only:
### Certificate paths
Certificates should be placed in a recommended path (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/)).
Certificates should be placed in a recommended path (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/)).
Paths should be specified using the given argument regardless of location.
| Default CN | recommended key path | recommended cert path | command | key argument | cert argument |


@ -59,7 +59,7 @@ When nodes start up, the kubelet on each node automatically adds
{{< glossary_tooltip text="labels" term_id="label" >}} to the Node object
that represents that specific kubelet in the Kubernetes API.
These labels can include
[zone information](/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone).
[zone information](/docs/reference/labels-annotations-taints/#topologykubernetesiozone).
If your cluster spans multiple zones or regions, you can use node labels
in conjunction with

View File

@ -63,7 +63,7 @@ configuration, or reinstall it using automation.
### containerd
This section contains the necessary steps to use `containerd` as CRI runtime.
This section contains the necessary steps to use containerd as CRI runtime.
Use the following commands to install Containerd on your system:
@ -92,170 +92,62 @@ sudo sysctl --system
Install containerd:
{{< tabs name="tab-cri-containerd-installation" >}}
{{% tab name="Ubuntu 16.04" %}}
{{% tab name="Linux" %}}
```shell
# (Install containerd)
## Set up the repository
### Install packages to allow apt to use a repository over HTTPS
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
```
1. Install the `containerd.io` package from the official Docker repositories. Instructions for setting up the Docker repository for your respective Linux distribution and installing the `containerd.io` package can be found at [Install Docker Engine](https://docs.docker.com/engine/install/#server).
```shell
## Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add -
```
2. Configure containerd:
```shell
## Add Docker apt repository.
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
```
```shell
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
```
```shell
## Install containerd
sudo apt-get update && sudo apt-get install -y containerd.io
```
3. Restart containerd:
```shell
# Configure containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
```
```shell
sudo systemctl restart containerd
```
```shell
# Restart containerd
sudo systemctl restart containerd
```
{{% /tab %}}
{{% tab name="Ubuntu 18.04/20.04" %}}
```shell
# (Install containerd)
sudo apt-get update && sudo apt-get install -y containerd
```
```shell
# Configure containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
```
```shell
# Restart containerd
sudo systemctl restart containerd
```
{{% /tab %}}
{{% tab name="Debian 9+" %}}
```shell
# (Install containerd)
## Set up the repository
### Install packages to allow apt to use a repository over HTTPS
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
```
```shell
## Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add -
```
```shell
## Add Docker apt repository.
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
```
```shell
## Install containerd
sudo apt-get update && sudo apt-get install -y containerd.io
```
```shell
# Set default containerd configuration
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
```
```shell
# Restart containerd
sudo systemctl restart containerd
```
{{% /tab %}}
{{% tab name="CentOS/RHEL 7.4+" %}}
```shell
# (Install containerd)
## Set up the repository
### Install required packages
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
```
```shell
## Add docker repository
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
```
```shell
## Install containerd
sudo yum update -y && sudo yum install -y containerd.io
```
```shell
## Configure containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
```
```shell
# Restart containerd
sudo systemctl restart containerd
```
{{% /tab %}}
{{% tab name="Windows (PowerShell)" %}}
<br />
Start a PowerShell session, set `$Version` to the desired version (for example: `$Version=1.4.3`), and then run the following commands:
<br />
```powershell
# (Install containerd)
# Download containerd
curl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz
tar.exe xvf .\containerd-windows-amd64.tar.gz
```
1. Download containerd:
```powershell
# Extract and configure
Copy-Item -Path ".\bin\" -Destination "$Env:ProgramFiles\containerd" -Recurse -Force
cd $Env:ProgramFiles\containerd\
.\containerd.exe config default | Out-File config.toml -Encoding ascii
```powershell
curl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz
tar.exe xvf .\containerd-windows-amd64.tar.gz
```
# Review the configuration. Depending on setup you may want to adjust:
# - the sandbox_image (Kubernetes pause image)
# - cni bin_dir and conf_dir locations
Get-Content config.toml
2. Extract and configure:
# (Optional - but highly recommended) Exclude containerd from Windows Defender Scans
Add-MpPreference -ExclusionProcess "$Env:ProgramFiles\containerd\containerd.exe"
```
```powershell
Copy-Item -Path ".\bin\" -Destination "$Env:ProgramFiles\containerd" -Recurse -Force
cd $Env:ProgramFiles\containerd\
.\containerd.exe config default | Out-File config.toml -Encoding ascii
```powershell
# Start containerd
.\containerd.exe --register-service
Start-Service containerd
```
# Review the configuration. Depending on setup you may want to adjust:
# - the sandbox_image (Kubernetes pause image)
# - cni bin_dir and conf_dir locations
Get-Content config.toml
# (Optional - but highly recommended) Exclude containerd from Windows Defender Scans
Add-MpPreference -ExclusionProcess "$Env:ProgramFiles\containerd\containerd.exe"
```
3. Start containerd:
```powershell
.\containerd.exe --register-service
Start-Service containerd
```
{{% /tab %}}
{{< /tabs >}}
#### systemd {#containerd-systemd}
#### Using the `systemd` cgroup driver {#containerd-systemd}
To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`, set
@ -266,6 +158,12 @@ To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`,
SystemdCgroup = true
```
If you apply this change make sure to restart containerd again:
```shell
sudo systemctl restart containerd
```
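To confirm that the option is in place, you can inspect the relevant section of the file. This is a minimal sketch that assumes the default layout generated by `containerd config default`; the section name may differ on other containerd versions:

```shell
# Show the runc options section of the containerd configuration and check
# that SystemdCgroup is set to true.
grep -A 2 'runtimes.runc.options' /etc/containerd/config.toml
```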
When using kubeadm, manually configure the
[cgroup driver for kubelet](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-control-plane-node).
@ -455,138 +353,38 @@ in sync.
### Docker
On each of your nodes, install Docker CE.
1. On each of your nodes, install Docker for your Linux distribution as per [Install Docker Engine](https://docs.docker.com/engine/install/#server). You can find the latest validated version of Docker in this [dependencies](https://git.k8s.io/kubernetes/build/dependencies.yaml) file.
The Kubernetes release notes list which versions of Docker are compatible
with that version of Kubernetes.
2. Configure the Docker daemon, in particular to use systemd for the management of the container's cgroups.
Use the following commands to install Docker on your system:
```shell
sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
```
{{< tabs name="tab-cri-docker-installation" >}}
{{% tab name="Ubuntu 16.04+" %}}
{{< note >}}
`overlay2` is the preferred storage driver for systems running Linux kernel version 4.0 or higher, or RHEL or CentOS using version 3.10.0-514 and above.
{{< /note >}}
```shell
# (Install Docker CE)
## Set up the repository:
### Install packages to allow apt to use a repository over HTTPS
sudo apt-get update && sudo apt-get install -y \
apt-transport-https ca-certificates curl software-properties-common gnupg2
```
3. Restart Docker and enable on boot:
```shell
# Add Docker's official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add -
```
```shell
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
```
```shell
# Add the Docker apt repository:
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
```
```shell
# Install Docker CE
sudo apt-get update && sudo apt-get install -y \
containerd.io=1.2.13-2 \
docker-ce=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) \
docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs)
```
```shell
## Create /etc/docker
sudo mkdir /etc/docker
```
```shell
# Set up the Docker daemon
cat <<EOF | sudo tee /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
```
```shell
# Create /etc/systemd/system/docker.service.d
sudo mkdir -p /etc/systemd/system/docker.service.d
```
```shell
# Restart Docker
sudo systemctl daemon-reload
sudo systemctl restart docker
```
{{% /tab %}}
{{% tab name="CentOS/RHEL 7.4+" %}}
```shell
# (Install Docker CE)
## Set up the repository
### Install required packages
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
```
```shell
## Add the Docker repository
sudo yum-config-manager --add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
```
```shell
# Install Docker CE
sudo yum update -y && sudo yum install -y \
containerd.io-1.2.13 \
docker-ce-19.03.11 \
docker-ce-cli-19.03.11
```
```shell
## Create /etc/docker
sudo mkdir /etc/docker
```
```shell
# Set up the Docker daemon
cat <<EOF | sudo tee /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
EOF
```
```shell
# Create /etc/systemd/system/docker.service.d
sudo mkdir -p /etc/systemd/system/docker.service.d
```
```shell
# Restart Docker
sudo systemctl daemon-reload
sudo systemctl restart docker
```
{{% /tab %}}
{{< /tabs >}}
If you want the `docker` service to start on boot, run the following command:
```shell
sudo systemctl enable docker
```
Refer to the [official Docker installation guides](https://docs.docker.com/engine/installation/)
for more information.
{{< note >}}
For more information refer to
- [Configure the Docker daemon](https://docs.docker.com/config/daemon/)
- [Control Docker with systemd](https://docs.docker.com/config/daemon/systemd/)
{{< /note >}}

View File

@ -23,7 +23,7 @@ kops is an automated provisioning system:
## {{% heading "prerequisites" %}}
* You must have [kubectl](/docs/tasks/tools/install-kubectl/) installed.
* You must have [kubectl](/docs/tasks/tools/) installed.
* You must [install](https://github.com/kubernetes/kops#installing) `kops` on a 64-bit (AMD64 and Intel 64) device architecture.

View File

@ -137,7 +137,7 @@ is not supported by kubeadm.
### More information
For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm/).
For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/).
To configure `kubeadm init` with a configuration file see [Using kubeadm init with a configuration file](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file).

View File

@ -160,7 +160,7 @@ kubelet and the control plane is supported, but the kubelet version may never ex
server version. For example, the kubelet running 1.7.0 should be fully compatible with a 1.8.0 API server,
but not vice versa.
For information about installing `kubectl`, see [Install and set up kubectl](/docs/tasks/tools/install-kubectl/).
For information about installing `kubectl`, see [Install and set up kubectl](/docs/tasks/tools/).
{{< warning >}}
These instructions exclude all Kubernetes packages from any system upgrades.
@ -175,16 +175,34 @@ For more information on version skews, see:
{{< tabs name="k8s_install" >}}
{{% tab name="Debian-based distributions" %}}
```bash
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
1. Update the `apt` package index and install packages needed to use the Kubernetes `apt` repository:
```shell
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
```
2. Download the Google Cloud public signing key:
```shell
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
```
3. Add the Kubernetes `apt` repository:
```shell
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
```
4. Update `apt` package index, install kubelet, kubeadm and kubectl, and pin their version:
```shell
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
{{% /tab %}}
{{% tab name="Red Hat-based distributions" %}}
```bash

View File

@ -23,7 +23,7 @@ Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [i
* continuous integration tests
To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to
[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).
[kubeadm](/docs/reference/setup-tools/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).
<!-- body -->

View File

@ -221,7 +221,7 @@ On Windows, you can use the following settings to configure Services and load ba
#### IPv4/IPv6 dual-stack
You can enable IPv4/IPv6 dual-stack networking for `l2bridge` networks using the `IPv6DualStack` [feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/). See [enable IPv4/IPv6 dual stack](/docs/concepts/services-networking/dual-stack#enable-ipv4ipv6-dual-stack) for more details.
You can enable IPv4/IPv6 dual-stack networking for `l2bridge` networks using the `IPv6DualStack` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/). See [enable IPv4/IPv6 dual stack](/docs/concepts/services-networking/dual-stack#enable-ipv4ipv6-dual-stack) for more details.
{{< note >}}
On Windows, using IPv6 with Kubernetes requires Windows Server, version 2004 (kernel version 10.0.19041.610) or later.
@ -237,7 +237,7 @@ Overlay (VXLAN) networks on Windows do not support dual-stack networking today.
Windows is only supported as a worker node in the Kubernetes architecture and component matrix. This means that a Kubernetes cluster must always include Linux master nodes, zero or more Linux worker nodes, and zero or more Windows worker nodes.
#### Compute {compute-limitations}
#### Compute {#compute-limitations}
##### Resource management and process isolation
@ -297,7 +297,7 @@ As a result, the following storage functionality is not supported on Windows nod
* NFS based storage/volume support
* Expanding the mounted volume (resizefs)
#### Networking {networking-limitations}
#### Networking {#networking-limitations}
Windows Container Networking differs in some important ways from Linux networking. The [Microsoft documentation for Windows Container Networking](https://docs.microsoft.com/en-us/virtualization/windowscontainers/container-networking/architecture) contains additional details and background.

View File

@ -1906,7 +1906,7 @@ filename | sha512 hash
- Promote SupportNodePidsLimit to GA to provide node to pod pid isolation
Promote SupportPodPidsLimit to GA to provide ability to limit pids per pod ([#94140](https://github.com/kubernetes/kubernetes/pull/94140), [@derekwaynecarr](https://github.com/derekwaynecarr)) [SIG Node and Testing]
- Rename pod_preemption_metrics to preemption_metrics. ([#93256](https://github.com/kubernetes/kubernetes/pull/93256), [@ahg-g](https://github.com/ahg-g)) [SIG Instrumentation and Scheduling]
- Server-side apply behavior has been regularized in the case where a field is removed from the applied configuration. Removed fields which have no other owners are deleted from the live object, or reset to their default value if they have one. Safe ownership transfers, such as the transfer of a `replicas` field from a user to an HPA without resetting to the default value are documented in [Transferring Ownership](https://kubernetes.io/docs/reference/using-api/api-concepts/&#35;transferring-ownership) ([#92661](https://github.com/kubernetes/kubernetes/pull/92661), [@jpbetz](https://github.com/jpbetz)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Testing]
- Server-side apply behavior has been regularized in the case where a field is removed from the applied configuration. Removed fields which have no other owners are deleted from the live object, or reset to their default value if they have one. Safe ownership transfers, such as the transfer of a `replicas` field from a user to an HPA without resetting to the default value are documented in [Transferring Ownership](/docs/reference/using-api/server-side-apply/#transferring-ownership) ([#92661](https://github.com/kubernetes/kubernetes/pull/92661), [@jpbetz](https://github.com/jpbetz)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Testing]
- Set CSIMigrationvSphere feature gates to beta.
Users should enable CSIMigration + CSIMigrationvSphere features and install the vSphere CSI Driver (https://github.com/kubernetes-sigs/vsphere-csi-driver) to move workload from the in-tree vSphere plugin "kubernetes.io/vsphere-volume" to vSphere CSI Driver.

View File

@ -192,7 +192,7 @@ func main() {
}
```
If the application is deployed as a Pod in the cluster, see [Accessing the API from within a Pod](#accessing-the-api-from-within-a-pod).
If the application is deployed as a Pod in the cluster, see [Accessing the API from within a Pod](/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod).
#### Python client

View File

@ -34,7 +34,7 @@ If your cluster was deployed using the `kubeadm` tool, refer to
for detailed information on how to upgrade the cluster.
Once you have upgraded the cluster, remember to
[install the latest version of `kubectl`](/docs/tasks/tools/install-kubectl/).
[install the latest version of `kubectl`](/docs/tasks/tools/).
### Manual deployments
@ -52,7 +52,7 @@ You should manually update the control plane following this sequence:
- cloud controller manager, if you use one
At this point you should
[install the latest version of `kubectl`](/docs/tasks/tools/install-kubectl/).
[install the latest version of `kubectl`](/docs/tasks/tools/).
For each node in your cluster, [drain](/docs/tasks/administer-cluster/safely-drain-node/)
that node and then either replace it with a new node that uses the {{< skew latestVersion >}}

View File

@ -170,36 +170,7 @@ controllerManager:
### Create certificate signing requests (CSR)
You can create the certificate signing requests for the Kubernetes certificates API with `kubeadm certs renew --use-api`.
If you set up an external signer such as [cert-manager](https://github.com/jetstack/cert-manager), certificate signing requests (CSRs) are automatically approved.
Otherwise, you must manually approve certificates with the [`kubectl certificate`](/docs/setup/best-practices/certificates/) command.
The following kubeadm command outputs the name of the certificate to approve, then blocks and waits for approval to occur:
```shell
sudo kubeadm certs renew apiserver --use-api &
```
The output is similar to this:
```
[1] 2890
[certs] certificate request "kubeadm-cert-kube-apiserver-ld526" created
```
### Approve certificate signing requests (CSR)
If you set up an external signer, certificate signing requests (CSRs) are automatically approved.
Otherwise, you must manually approve certificates with the [`kubectl certificate`](/docs/setup/best-practices/certificates/) command. e.g.
```shell
kubectl certificate approve kubeadm-cert-kube-apiserver-ld526
```
The output is similar to this:
```shell
certificatesigningrequest.certificates.k8s.io/kubeadm-cert-kube-apiserver-ld526 approved
```
You can view a list of pending certificates with `kubectl get csr`.
See [Create CertificateSigningRequest](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/#create-certificatesigningrequest) for creating CSRs with the Kubernetes API.
## Renew certificates with external CA

View File

@ -202,4 +202,7 @@ verify that the pods were scheduled by the desired schedulers.
```shell
kubectl get events
```
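To narrow the output down to scheduling decisions, you can filter the events by reason. This is a sketch; the scheduler names in the comment are only examples:

```shell
# List only scheduling events; with -o wide, the SOURCE column shows which
# scheduler (for example, default-scheduler or my-scheduler) bound each Pod.
kubectl get events -o wide --field-selector reason=Scheduled
```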
You can also use a [custom scheduler configuration](/docs/reference/scheduling/config/#multiple-profiles)
or a custom container image for the cluster's main scheduler by modifying its static pod manifest
on the relevant control plane nodes.

View File

@ -1129,8 +1129,6 @@ resources that have the scale subresource enabled.
### Categories
{{< feature-state state="beta" for_k8s_version="v1.10" >}}
Categories is a list of grouped resources the custom resource belongs to (for example, `all`).
You can use `kubectl get <category-name>` to list the resources belonging to the category.
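For example, the built-in `all` category groups common workload resources, and custom resources that declare it are listed alongside them:

```shell
# List every resource in the "all" category, including custom resources
# whose CustomResourceDefinition declares "all" in its categories list.
kubectl get all
```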

View File

@ -2,7 +2,7 @@
title: Coarse Parallel Processing Using a Work Queue
min-kubernetes-server-version: v1.8
content_type: task
weight: 30
weight: 20
---

View File

@ -2,7 +2,7 @@
title: Fine Parallel Processing Using a Work Queue
content_type: task
min-kubernetes-server-version: v1.8
weight: 40
weight: 30
---
<!-- overview -->

View File

@ -0,0 +1,176 @@
---
title: Indexed Job for Parallel Processing with Static Work Assignment
content_type: task
min-kubernetes-server-version: v1.21
weight: 30
---
{{< feature-state for_k8s_version="v1.21" state="alpha" >}}
<!-- overview -->
In this example, you will run a Kubernetes Job that uses multiple parallel
worker processes.
Each worker is a different container running in its own Pod. The Pods have an
_index number_ that the control plane sets automatically, which allows each Pod
to identify which part of the overall task to work on.
The pod index is available in the {{< glossary_tooltip text="annotation" term_id="annotation" >}}
`batch.kubernetes.io/job-completion-index` as a string representing its
decimal value. In order for the containerized task process to obtain this index,
you can publish the value of the annotation using the [downward API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#the-downward-api)
mechanism.
For convenience, the control plane automatically sets the downward API to
expose the index in the `JOB_COMPLETION_INDEX` environment variable.
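Once the Job described later on this page is running, one quick way to see the assigned indexes is to list the annotation for each Pod. This is a sketch that assumes the Job is named `indexed-job`, as in the manifests below:

```shell
# Show each Pod of the Job together with its completion index annotation.
kubectl get pods -l job-name=indexed-job \
  -o custom-columns='NAME:.metadata.name,INDEX:.metadata.annotations.batch\.kubernetes\.io/job-completion-index'
```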
Here is an overview of the steps in this example:
1. **Create an image that can read the pod index**. You might modify the worker
program or add a script wrapper.
2. **Start an Indexed Job**. The downward API allows you to pass the annotation
as an environment variable or file to the container.
## {{% heading "prerequisites" %}}
Be familiar with the basic, non-parallel use of [Job](/docs/concepts/workloads/controllers/job/).
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
To be able to create Indexed Jobs, make sure to enable the `IndexedJob`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
on the [API server](/docs/reference/command-line-tools-reference/kube-apiserver/)
and the [controller manager](/docs/reference/command-line-tools-reference/kube-controller-manager/).
<!-- steps -->
## Choose an approach
To access the work item from the worker program, you have a few options:
1. Read the `JOB_COMPLETION_INDEX` environment variable. The Job
{{< glossary_tooltip text="controller" term_id="controller" >}}
automatically links this variable to the annotation containing the completion
index.
1. Read a file that contains the completion index.
1. Assuming that you can't modify the program, you can wrap it with a script
   that reads the index using any of the methods above and converts it into
   something that the program can use as input (see the sketch after this list).
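As an illustration of the third option, here is a minimal, hypothetical wrapper script; the wrapped program and the input file layout are placeholders for this sketch, not part of the manifests used later on this page:

```shell
#!/bin/sh
# Hypothetical wrapper: /usr/local/bin/do-work and the /work/input-<index>.txt
# layout are placeholders for illustration only.
# Read the completion index from the environment variable that the downward
# API exposes by default; fail clearly if it is not set.
INDEX="${JOB_COMPLETION_INDEX:?completion index not set}"
# Convert the index into an input that the wrapped program understands.
exec /usr/local/bin/do-work "/work/input-${INDEX}.txt"
```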
For this example, imagine that you chose option 3 and you want to run the
[rev](https://man7.org/linux/man-pages/man1/rev.1.html) utility. This
program accepts a file as an argument and prints its content reversed.
```shell
rev data.txt
```
For this example, you'll use the `rev` tool from the
[`busybox`](https://hub.docker.com/_/busybox) container image.
## Define an Indexed Job
Here is a job definition. You'll need to edit the container image to match your
preferred registry.
{{< codenew language="yaml" file="application/job/indexed-job.yaml" >}}
In the example above, you use the built-in `JOB_COMPLETION_INDEX` environment
variable set by the Job controller for all containers. An [init container](/docs/concepts/workloads/pods/init-containers/)
maps the index to a static value and writes it to a file that is shared with the
container running the worker through an [emptyDir volume](/docs/concepts/storage/volumes/#emptydir).
Optionally, you can [define your own environment variable through the downward
API](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/)
to publish the index to containers. You can also choose to load a list of values
from a [ConfigMap as an environment variable or file](/docs/tasks/configure-pod-container/configure-pod-configmap/).
Alternatively, you can directly [use the downward API to pass the annotation
value as a volume file](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#store-pod-fields),
as shown in the following example:
{{< codenew language="yaml" file="application/job/indexed-job-vol.yaml" >}}
## Running the Job
Now run the Job:
```shell
kubectl apply -f ./indexed-job.yaml
```
Wait a bit, then check on the job:
```shell
kubectl describe jobs/indexed-job
```
The output is similar to:
```
Name: indexed-job
Namespace: default
Selector: controller-uid=bf865e04-0b67-483b-9a90-74cfc4c3e756
Labels: controller-uid=bf865e04-0b67-483b-9a90-74cfc4c3e756
job-name=indexed-job
Annotations: <none>
Parallelism: 3
Completions: 5
Start Time: Thu, 11 Mar 2021 15:47:34 +0000
Pods Statuses: 2 Running / 3 Succeeded / 0 Failed
Completed Indexes: 0-2
Pod Template:
Labels: controller-uid=bf865e04-0b67-483b-9a90-74cfc4c3e756
job-name=indexed-job
Init Containers:
input:
Image: docker.io/library/bash
Port: <none>
Host Port: <none>
Command:
bash
-c
items=(foo bar baz qux xyz)
echo ${items[$JOB_COMPLETION_INDEX]} > /input/data.txt
Environment: <none>
Mounts:
/input from input (rw)
Containers:
worker:
Image: docker.io/library/busybox
Port: <none>
Host Port: <none>
Command:
rev
/input/data.txt
Environment: <none>
Mounts:
/input from input (rw)
Volumes:
input:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 4s job-controller Created pod: indexed-job-njkjj
Normal SuccessfulCreate 4s job-controller Created pod: indexed-job-9kd4h
Normal SuccessfulCreate 4s job-controller Created pod: indexed-job-qjwsz
Normal SuccessfulCreate 1s job-controller Created pod: indexed-job-fdhq5
Normal SuccessfulCreate 1s job-controller Created pod: indexed-job-ncslj
```
In this example, the Job runs with a custom input value for each index. You can
inspect the output of one of the pods:
```shell
kubectl logs indexed-job-fdhq5 # Change this to match the name of a Pod in your cluster.
```
The output is similar to:
```
xuq
```

View File

@ -2,7 +2,7 @@
title: Parallel Processing using Expansions
content_type: task
min-kubernetes-server-version: v1.8
weight: 20
weight: 50
---
<!-- overview -->

View File

@ -16,7 +16,7 @@ preview of what changes `apply` will make.
## {{% heading "prerequisites" %}}
Install [`kubectl`](/docs/tasks/tools/install-kubectl/).
Install [`kubectl`](/docs/tasks/tools/).
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

View File

@ -12,7 +12,7 @@ explains how those commands are organized and how to use them to manage live obj
## {{% heading "prerequisites" %}}
Install [`kubectl`](/docs/tasks/tools/install-kubectl/).
Install [`kubectl`](/docs/tasks/tools/).
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

View File

@ -13,7 +13,7 @@ This document explains how to define and manage objects using configuration file
## {{% heading "prerequisites" %}}
Install [`kubectl`](/docs/tasks/tools/install-kubectl/).
Install [`kubectl`](/docs/tasks/tools/).
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

View File

@ -29,7 +29,7 @@ kubectl apply -k <kustomization_directory>
## {{% heading "prerequisites" %}}
Install [`kubectl`](/docs/tasks/tools/install-kubectl/).
Install [`kubectl`](/docs/tasks/tools/).
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

View File

@ -19,7 +19,7 @@ Up to date information on this process can be found at the
* You must have a Kubernetes cluster with cluster DNS enabled.
* If you are using a cloud-based Kubernetes cluster or {{< glossary_tooltip text="Minikube" term_id="minikube" >}}, you may already have cluster DNS enabled.
* If you are using `hack/local-up-cluster.sh`, ensure that the `KUBE_ENABLE_CLUSTER_DNS` environment variable is set, then run the install script.
* [Install and setup kubectl](/docs/tasks/tools/install-kubectl/) v1.7 or higher. Make sure it is configured to connect to the Kubernetes cluster.
* [Install and setup kubectl](/docs/tasks/tools/) v1.7 or higher. Make sure it is configured to connect to the Kubernetes cluster.
* Install [Helm](https://helm.sh/) v2.7.0 or newer.
* Follow the [Helm install instructions](https://helm.sh/docs/intro/install/).
* If you already have an appropriate version of Helm installed, execute `helm init` to install Tiller, the server-side component of Helm.

View File

@ -23,7 +23,7 @@ Service Catalog itself can work with any kind of managed service, not just Googl
* Install [Go 1.6+](https://golang.org/dl/) and set the `GOPATH`.
* Install the [cfssl](https://github.com/cloudflare/cfssl) tool needed for generating SSL artifacts.
* Service Catalog requires Kubernetes version 1.7+.
* [Install and setup kubectl](/docs/tasks/tools/install-kubectl/) so that it is configured to connect to a Kubernetes v1.7+ cluster.
* [Install and setup kubectl](/docs/tasks/tools/) so that it is configured to connect to a Kubernetes v1.7+ cluster.
* The kubectl user must be bound to the *cluster-admin* role for it to install Service Catalog. To ensure that this is true, run the following command:
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<user-name>

View File

@ -80,10 +80,10 @@ You now have to ensure that the kubectl completion script gets sourced in all yo
echo 'complete -F __start_kubectl k' >>~/.bash_profile
```
- If you installed kubectl with Homebrew (as explained [above](#install-with-homebrew-on-macos)), then the kubectl completion script should already be in `/usr/local/etc/bash_completion.d/kubectl`. In that case, you don't need to do anything.
- If you installed kubectl with Homebrew (as explained [here](/docs/tasks/tools/install-kubectl-macos/#install-with-homebrew-on-macos)), then the kubectl completion script should already be in `/usr/local/etc/bash_completion.d/kubectl`. In that case, you don't need to do anything.
{{< note >}}
The Homebrew installation of bash-completion v2 sources all the files in the `BASH_COMPLETION_COMPAT_DIR` directory, that's why the latter two methods work.
{{< /note >}}
In any case, after reloading your shell, kubectl completion should be working.
In any case, after reloading your shell, kubectl completion should be working.

View File

@ -100,15 +100,38 @@ For example, to download version {{< param "fullversion" >}} on Linux, type:
### Install using native package management
{{< tabs name="kubectl_install" >}}
{{< tab name="Ubuntu, Debian or HypriotOS" codelang="bash" >}}
sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
{{< /tab >}}
{{% tab name="Debian-based distributions" %}}
{{< tab name="CentOS, RHEL or Fedora" codelang="bash" >}}cat <<EOF > /etc/yum.repos.d/kubernetes.repo
1. Update the `apt` package index and install packages needed to use the Kubernetes `apt` repository:
```shell
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
```
2. Download the Google Cloud public signing key:
```shell
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
```
3. Add the Kubernetes `apt` repository:
```shell
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
```
4. Update `apt` package index with the new repository and install kubectl:
```shell
sudo apt-get update
sudo apt-get install -y kubectl
```
{{% /tab %}}
{{< tab name="Red Hat-based distributions" codelang="bash" >}}
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

View File

@ -23,7 +23,7 @@ The following methods exist for installing kubectl on macOS:
- [Install kubectl binary with curl on macOS](#install-kubectl-binary-with-curl-on-macos)
- [Install with Homebrew on macOS](#install-with-homebrew-on-macos)
- [Install with Macports on macOS](#install-with-macports-on-macos)
- [Install on Linux as part of the Google Cloud SDK](#install-on-linux-as-part-of-the-google-cloud-sdk)
- [Install on macOS as part of the Google Cloud SDK](#install-on-macos-as-part-of-the-google-cloud-sdk)
### Install kubectl binary with curl on macOS
@ -157,4 +157,4 @@ Below are the procedures to set up autocompletion for Bash and Zsh.
## {{% heading "whatsnext" %}}
{{< include "included/kubectl-whats-next.md" >}}
{{< include "included/kubectl-whats-next.md" >}}

View File

@ -37,7 +37,7 @@ profiles that give only the necessary privileges to your container processes.
In order to complete all steps in this tutorial, you must install
[kind](https://kind.sigs.k8s.io/docs/user/quick-start/) and
[kubectl](/docs/tasks/tools/install-kubectl/). This tutorial will show examples
[kubectl](/docs/tasks/tools/). This tutorial will show examples
with both alpha (pre-v1.19) and generally available seccomp functionality, so
make sure that your cluster is [configured
correctly](https://kind.sigs.k8s.io/docs/user/quick-start/#setting-kubernetes-version)

View File

@ -15,10 +15,8 @@ This page provides a real world example of how to configure Redis using a Config
## {{% heading "objectives" %}}
* Create a `kustomization.yaml` file containing:
* a ConfigMap generator
* a Pod resource config using the ConfigMap
* Apply the directory by running `kubectl apply -k ./`
* Create a ConfigMap with Redis configuration values
* Create a Redis Pod that mounts and uses the created ConfigMap
* Verify that the configuration was correctly applied.
@ -38,82 +36,218 @@ This page provides a real world example of how to configure Redis using a Config
## Real World Example: Configuring Redis using a ConfigMap
You can follow the steps below to configure a Redis cache using data stored in a ConfigMap.
Follow the steps below to configure a Redis cache using data stored in a ConfigMap.
First create a `kustomization.yaml` containing a ConfigMap from the `redis-config` file:
{{< codenew file="pods/config/redis-config" >}}
First create a ConfigMap with an empty configuration block:
```shell
curl -OL https://k8s.io/examples/pods/config/redis-config
cat <<EOF >./kustomization.yaml
configMapGenerator:
- name: example-redis-config
files:
- redis-config
cat <<EOF >./example-redis-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: example-redis-config
data:
redis-config: ""
EOF
```
Add the pod resource config to the `kustomization.yaml`:
Apply the ConfigMap created above, along with a Redis pod manifest:
```shell
kubectl apply -f example-redis-config.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/config/redis-pod.yaml
```
Examine the contents of the Redis pod manifest and note the following:
* A volume named `config` is created by `spec.volumes[1]`
* The `key` and `path` under `spec.volumes[1].items[0]` exposes the `redis-config` key from the
`example-redis-config` ConfigMap as a file named `redis.conf` on the `config` volume.
* The `config` volume is then mounted at `/redis-master` by `spec.containers[0].volumeMounts[1]`.
This has the net effect of exposing the data in `data.redis-config` from the `example-redis-config`
ConfigMap above as `/redis-master/redis.conf` inside the Pod.
{{< codenew file="pods/config/redis-pod.yaml" >}}
```shell
curl -OL https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/config/redis-pod.yaml
Examine the created objects:
cat <<EOF >>./kustomization.yaml
resources:
- redis-pod.yaml
EOF
```shell
kubectl get pod/redis configmap/example-redis-config
```
Apply the kustomization directory to create both the ConfigMap and Pod objects:
You should see the following output:
```shell
kubectl apply -k .
```
Examine the created objects by
```shell
> kubectl get -k .
NAME DATA AGE
configmap/example-redis-config-dgh9dg555m 1 52s
NAME READY STATUS RESTARTS AGE
pod/redis 1/1 Running 0 52s
pod/redis 1/1 Running 0 8s
NAME DATA AGE
configmap/example-redis-config 1 14s
```
In the example, the config volume is mounted at `/redis-master`.
It uses `path` to add the `redis-config` key to a file named `redis.conf`.
The file path for the redis config, therefore, is `/redis-master/redis.conf`.
This is where the image will look for the config file for the redis master.
Recall that we left `redis-config` key in the `example-redis-config` ConfigMap blank:
Use `kubectl exec` to enter the pod and run the `redis-cli` tool to verify that
the configuration was correctly applied:
```shell
kubectl describe configmap/example-redis-config
```
You should see an empty `redis-config` key:
```shell
Name: example-redis-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
redis-config:
```
Use `kubectl exec` to enter the pod and run the `redis-cli` tool to check the current configuration:
```shell
kubectl exec -it redis -- redis-cli
```
Check `maxmemory`:
```shell
127.0.0.1:6379> CONFIG GET maxmemory
```
It should show the default value of 0:
```shell
1) "maxmemory"
2) "0"
```
Similarly, check `maxmemory-policy`:
```shell
127.0.0.1:6379> CONFIG GET maxmemory-policy
```
Which should also yield its default value of `noeviction`:
```shell
1) "maxmemory-policy"
2) "noeviction"
```
Now add some configuration values to the `example-redis-config` ConfigMap:
{{< codenew file="pods/config/example-redis-config.yaml" >}}
Apply the updated ConfigMap:
```shell
kubectl apply -f example-redis-config.yaml
```
Confirm that the ConfigMap was updated:
```shell
kubectl describe configmap/example-redis-config
```
You should see the configuration values you just added:
```shell
Name: example-redis-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
redis-config:
----
maxmemory 2mb
maxmemory-policy allkeys-lru
```
Check the Redis Pod again using `redis-cli` via `kubectl exec` to see if the configuration was applied:
```shell
kubectl exec -it redis -- redis-cli
```
Check `maxmemory`:
```shell
127.0.0.1:6379> CONFIG GET maxmemory
```
It remains at the default value of 0:
```shell
1) "maxmemory"
2) "0"
```
Similarly, `maxmemory-policy` remains at the `noeviction` default setting:
```shell
127.0.0.1:6379> CONFIG GET maxmemory-policy
```
Returns:
```shell
1) "maxmemory-policy"
2) "noeviction"
```
The configuration values have not changed because the Pod needs to be restarted to pick up
updated values from associated ConfigMaps. Delete and recreate the Pod:
```shell
kubectl delete pod redis
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/config/redis-pod.yaml
```
Now re-check the configuration values one last time:
```shell
kubectl exec -it redis -- redis-cli
```
Check `maxmemory`:
```shell
127.0.0.1:6379> CONFIG GET maxmemory
```
It should now return the updated value of 2097152:
```shell
1) "maxmemory"
2) "2097152"
```
Similarly, `maxmemory-policy` has also been updated:
```shell
127.0.0.1:6379> CONFIG GET maxmemory-policy
```
It now reflects the desired value of `allkeys-lru`:
```shell
1) "maxmemory-policy"
2) "allkeys-lru"
```
Delete the created pod:
Clean up your work by deleting the created resources:
```shell
kubectl delete pod redis
kubectl delete pod/redis configmap/example-redis-config
```
## {{% heading "whatsnext" %}}
* Learn more about [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/).

View File

@ -37,7 +37,7 @@ weight: 10
<li><i>ClusterIP</i> (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.</li>
<li><i>NodePort</i> - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>. Superset of ClusterIP.</li>
<li><i>LoadBalancer</i> - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.</li>
<li><i>ExternalName</i> - Exposes the Service using an arbitrary name (specified by <code>externalName</code> in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of <code>kube-dns</code>.</li>
<li><i>ExternalName</i> - Maps the Service to the contents of the <code>externalName</code> field (e.g. <code>foo.bar.example.com</code>), by returning a <code>CNAME</code> record with its value. No proxying of any kind is set up. This type requires v1.7 or higher of <code>kube-dns</code>, or CoreDNS version 0.0.8 or higher (see the example below).</li>
</ul>
<p>More information about the different types of Services can be found in the <a href="/docs/tutorials/services/source-ip/">Using Source IP</a> tutorial. Also see <a href="/docs/concepts/services-networking/connect-applications-service">Connecting Applications with Services</a>.</p>
<p>Additionally, note that there are some use cases with Services that involve not defining <code>selector</code> in the spec. A Service created without <code>selector</code> will also not create the corresponding Endpoints object. This allows users to manually map a Service to specific endpoints. Another possibility why there may be no selector is you are strictly using <code>type: ExternalName</code>.</p>
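For example, the following is a minimal sketch of an ExternalName Service; the Service name and the external DNS name are illustrative:

```shell
# Create an ExternalName Service that resolves to an external DNS name
# by returning a CNAME record. No proxying is configured.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-external-service
spec:
  type: ExternalName
  externalName: foo.bar.example.com
EOF
```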

View File

@ -11,7 +11,7 @@ external IP address.
## {{% heading "prerequisites" %}}
* Install [kubectl](/docs/tasks/tools/install-kubectl/).
* Install [kubectl](/docs/tasks/tools/).
* Use a cloud provider like Google Kubernetes Engine or Amazon Web Services to
create a Kubernetes cluster. This tutorial creates an
[external load balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/),

View File

@ -104,7 +104,7 @@ kubectl apply -f ./content/en/examples/application/guestbook/mongo-service.yaml
```shell
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1m
mongo ClusterIP 10.0.0.151 <none> 6379/TCP 8s
mongo ClusterIP 10.0.0.151 <none> 27017/TCP 8s
```
{{< note >}}

View File

@ -0,0 +1,27 @@
apiVersion: batch/v1
kind: Job
metadata:
name: 'indexed-job'
spec:
completions: 5
parallelism: 3
completionMode: Indexed
template:
spec:
restartPolicy: Never
containers:
- name: 'worker'
image: 'docker.io/library/busybox'
command:
- "rev"
- "/input/data.txt"
volumeMounts:
- mountPath: /input
name: input
volumes:
- name: input
downwardAPI:
items:
- path: "data.txt"
fieldRef:
fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']

View File

@ -0,0 +1,35 @@
apiVersion: batch/v1
kind: Job
metadata:
name: 'indexed-job'
spec:
completions: 5
parallelism: 3
completionMode: Indexed
template:
spec:
restartPolicy: Never
initContainers:
- name: 'input'
image: 'docker.io/library/bash'
command:
- "bash"
- "-c"
- |
items=(foo bar baz qux xyz)
echo ${items[$JOB_COMPLETION_INDEX]} > /input/data.txt
volumeMounts:
- mountPath: /input
name: input
containers:
- name: 'worker'
image: 'docker.io/library/busybox'
command:
- "rev"
- "/input/data.txt"
volumeMounts:
- mountPath: /input
name: input
volumes:
- name: input
emptyDir: {}

View File

@ -0,0 +1,8 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: example-redis-config
data:
redis-config: |
maxmemory 2mb
maxmemory-policy allkeys-lru

View File

@ -0,0 +1,205 @@
---
reviewers:
title: RuntimeClass
content_type: concept
weight: 20
---
<!-- overview -->
{{< feature-state for_k8s_version="v1.20" state="stable" >}}
This page describes the RuntimeClass resource and the runtime selection mechanism.

RuntimeClass is a feature for selecting the container runtime configuration. The container runtime configuration is used to run a Pod's containers.

<!-- body -->

## Motivation

You can set a different RuntimeClass between different Pods to provide a balance of performance versus security. For example, if part of your workload deserves a high level of security assurance, you might choose to schedule those Pods so that they run in a container runtime that uses hardware virtualization. You would then benefit from the extra isolation of the alternative runtime, at the cost of some additional overhead.

You can also use RuntimeClass to run different Pods with the same container runtime but with different settings.

## Setup

1. Configure the CRI implementation on the nodes (runtime dependent).
2. Create the corresponding RuntimeClass resources.

### 1. Configure the CRI implementation on the nodes

The configurations available through RuntimeClass depend on the Container Runtime Interface (CRI) implementation. See the [CRI Configuration](#cri-configuration) section for details on how to configure your CRI implementation.

{{< note >}}
RuntimeClass assumes a homogeneous node configuration across the cluster by default (which means that all nodes are configured the same way with respect to container runtimes). To support heterogeneous node configurations, see [Scheduling](#scheduling) below.
{{< /note >}}

The configurations have a corresponding `handler` name, referenced by the RuntimeClass. The handler must be a valid DNS 1123 label (alphanumeric + `-` characters).
### 2. Create the corresponding RuntimeClass resources

Each configuration set up in step 1 has an associated `handler` name, which identifies that configuration. For each handler, create a corresponding RuntimeClass object.

The RuntimeClass resource currently has only two significant fields: the RuntimeClass name (`metadata.name`) and the handler (`handler`). The object definition looks like this:

```yaml
apiVersion: node.k8s.io/v1  # RuntimeClass is defined in the node.k8s.io API group
kind: RuntimeClass
metadata:
  name: myclass  # The name the RuntimeClass will be referenced by
  # RuntimeClass is a non-namespaced resource
handler: myconfiguration  # The name of the corresponding CRI configuration
```

The name of a RuntimeClass object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).

{{< note >}}
It is recommended that RuntimeClass write operations (create/update/patch/delete) be restricted to the cluster administrator. This is typically the default. See [Authorization Overview](/docs/reference/access-authn-authz/authorization/) for more details.
{{< /note >}}
## Usage

Once RuntimeClasses are configured for the cluster, using them is very simple. Specify a `runtimeClassName` in the Pod spec. For example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  runtimeClassName: myclass
  # ...
```

This instructs the kubelet to use the named RuntimeClass to run this pod. If the named RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the pod enters the `Failed` terminal [phase](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase). Look for a corresponding [event](/docs/tasks/debug-application-cluster/debug-application-introspection/) with the error message.

If no `runtimeClassName` is specified, the default RuntimeHandler is used, which is equivalent to the behavior when the RuntimeClass feature is disabled.
### CRI Configuration

For more details on setting up CRI runtimes, see [CRI installation](/docs/setup/production-environment/container-runtimes/).

#### dockershim

Kubernetes' built-in dockershim CRI does not support runtime handlers.

#### {{< glossary_tooltip term_id="containerd" >}}

Runtime handlers are configured through containerd's configuration at `/etc/containerd/config.toml`. Valid handlers are configured under the runtimes section:

```
[plugins.cri.containerd.runtimes.${HANDLER_NAME}]
```

See containerd's config documentation for more details:
https://github.com/containerd/cri/blob/master/docs/config.md

#### {{< glossary_tooltip term_id="cri-o" >}}

Runtime handlers are configured through CRI-O's configuration at `/etc/crio/crio.conf`. Valid handlers are configured under the [crio.runtime table](https://github.com/cri-o/cri-o/blob/master/docs/crio.conf.5.md#crioruntime-table):

```
[crio.runtime.runtimes.${HANDLER_NAME}]
  runtime_path = "${PATH_TO_BINARY}"
```

See CRI-O's [config documentation](https://raw.githubusercontent.com/cri-o/cri-o/9f11d1d/docs/crio.conf.5.md) for more details.
## Scheduling

{{< feature-state for_k8s_version="v1.16" state="beta" >}}

By specifying the `scheduling` field for a RuntimeClass, you can set constraints to ensure that Pods running with this RuntimeClass are scheduled to nodes that support it.

To ensure that pods land on nodes supporting a specific RuntimeClass, that set of nodes should have a common label which is then selected by the `runtimeclass.scheduling.nodeSelector` field. The RuntimeClass's nodeSelector is merged with the pod's nodeSelector during admission, effectively taking the intersection of the set of nodes selected by each. If there is a conflict, the pod is rejected.

If the supported nodes are tainted to prevent pods with other RuntimeClasses from running on them, you can add `tolerations` to the RuntimeClass. As with the `nodeSelector`, the tolerations are merged with the pod's tolerations during admission, effectively taking the union of the set of nodes tolerated by each.

To learn more about configuring the node selector and tolerations, see [Assigning Pods to Nodes](/docs/concepts/scheduling-eviction/assign-pod-node/).
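As a minimal sketch (the RuntimeClass name, handler, and node label are illustrative and not tied to any particular runtime), a RuntimeClass that restricts its Pods to labeled nodes could look like this:

```shell
# Create a RuntimeClass whose Pods are only scheduled to nodes that carry
# the (illustrative) label runtime=myconfiguration.
cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: myclass-scheduled
handler: myconfiguration
scheduling:
  nodeSelector:
    runtime: myconfiguration
EOF
```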
### Pod Overhead

{{< feature-state for_k8s_version="v1.18" state="beta" >}}

You can specify _overhead_ resources that are associated with running a Pod. Declaring overhead allows the cluster (including the scheduler) to account for it when making decisions about Pods and resources. To use Pod overhead, the PodOverhead [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) must be enabled (it is enabled by default).

Pod overhead is defined in the RuntimeClass through the `overhead` field. Through this field, you can specify the overhead of running pods that use this RuntimeClass and ensure these overheads are accounted for in Kubernetes.
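For example, here is a minimal sketch of a RuntimeClass that declares a fixed per-Pod overhead; the handler name and the amounts are illustrative:

```shell
# Declare that every Pod using this RuntimeClass consumes an extra 120Mi of
# memory and 250m of CPU on top of its containers' requests.
cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: myclass-with-overhead
handler: myconfiguration
overhead:
  podFixed:
    memory: "120Mi"
    cpu: "250m"
EOF
```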
## {{% heading "whatsnext" %}}
- [RuntimeClass design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md)
- [RuntimeClass scheduling design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md#runtimeclass-scheduling)
- Read about the [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/) concept
- [PodOverhead feature design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)

View File

@ -1,4 +1,8 @@
---
title: "Políticas"
title: Políticas
weight: 90
---
description: >
Políticas configurables que se aplican a grupos de recursos.
---
La sección de Políticas describe las diferentes políticas configurables que se aplican a grupos de recursos:

View File

@ -0,0 +1,70 @@
---
reviewers:
- raelga
title: Limit Ranges
description: >
  Apply resource limits to a Namespace to restrict and guarantee compute resource allocation and consumption.
content_type: concept
weight: 10
---
<!-- overview -->
### Contexto
Por defecto, los contenedores se ejecutan sin restricciones sobre los [recursos informáticos disponibles en un clúster de Kubernetes](/docs/concepts/configuration/manage-resources-containers/).
Si el {{< glossary_tooltip text="Nodo" term_id="node" >}} dispone de los recursos informáticos, un {{< glossary_tooltip text="Pod" term_id="pod" >}} o sus {{< glossary_tooltip text="Contenedores" term_id="container" >}} tienen permitido consumir por encima de la cuota solicitada si no superan el límite establecido en su especificación.
Existe la preocupación de que un Pod o Contenedor pueda monopolizar todos los recursos disponibles.
### Utilidad
Aplicando restricciones de asignación de recursos, los administradores de clústeres se aseguran del cumplimiento del consumo de recursos por espacio de nombre ({{< glossary_tooltip text="Namespace" term_id="namespace" >}}).
Un **{{< glossary_tooltip text="LimitRange" term_id="limitrange" >}}** es la política que permite:
- Imponer restricciones de requisitos de recursos a {{< glossary_tooltip text="Pods" term_id="pod" >}} o {{< glossary_tooltip text="Contenedores" term_id="container" >}} por Namespace.
- Imponer las limitaciones de recursos mínimas/máximas para Pods o Contenedores dentro de un Namespace.
- Especificar requisitos y límites de recursos predeterminados para Pods o Contenedores de un Namespace.
- Imponer una relación de proporción entre los requisitos y el límite de un recurso.
- Imponer el cumplimiento de las demandas de almacenamiento mínimo/máximo para {{< glossary_tooltip text="Solicitudes de Volúmenes Persistentes" term_id="persistent-volume-claim" >}}.
### Habilitar el LimitRange
La compatibilidad con LimitRange está habilitada por defecto en Kubernetes desde la versión 1.10.
Para que un LimitRange se active en un {{< glossary_tooltip text="Namespace" term_id="namespace" >}} en particular, el LimitRange debe definirse con el Namespace, o aplicarse a éste.
El nombre de recurso de un objeto LimitRange debe ser un
[nombre de subdominio DNS](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido.
### Aplicando LimitRanges
- El administrador crea un LimitRange en un {{< glossary_tooltip text="Namespace" term_id="namespace" >}}.
- Los usuarios crean recursos como {{< glossary_tooltip text="Pods" term_id="pod" >}}, {{< glossary_tooltip text="Contenedores" term_id="container" >}} o {{< glossary_tooltip text="Solicitudes de Volúmenes Persistentes" term_id="persistent-volume-claim" >}} en el Namespace.
- El controlador de admisión `LimitRanger` aplicará valores predeterminados y límites, para todos los Pods o Contenedores que no establezcan requisitos de recursos informáticos. Y realizará un seguimiento del uso para garantizar que no excedan el mínimo, el máximo, y la proporción de ningún LimitRange definido en el Namespace.
- Si al crear o actualizar un recurso del ejemplo (Pods, Contenedores, {{< glossary_tooltip text="Solicitudes de Volúmenes Persistentes" term_id="persistent-volume-claim" >}}) se viola una restricción al LimitRange, la solicitud al servidor API fallará con un código de estado HTTP "403 FORBIDDEN" y un mensaje que explica la restricción que se ha violado.
- En caso de que se active un LimitRange para recursos de cómputo como `cpu` y `memory`, los usuarios deberán especificar los requisitos y/o límites para dichos recursos. De lo contrario, el sistema puede rechazar la creación del Pod.
- Las validaciones de LimitRange ocurren solo en la etapa de Admisión de Pod, no en Pods que ya se han iniciado (Running {{< glossary_tooltip text="Pods" term_id="pod" >}}).
Algunos ejemplos de políticas que se pueden crear utilizando rangos de límites son:
- En un clúster de 2 nodos con una capacidad de 8 GiB de RAM y 16 núcleos, se podría restringir que los {{< glossary_tooltip text="Pods" term_id="pod" >}} de un {{< glossary_tooltip text="Namespace" term_id="namespace" >}} requieran `100m` de CPU con un límite máximo de `500m` de CPU, y que requieran `200Mi` de memoria con un límite máximo de `600Mi` de memoria.
- Definir el valor por defecto del límite y de los requisitos de CPU a `150m`, y el valor por defecto del requisito de memoria a `300Mi`, para los {{< glossary_tooltip text="Contenedores" term_id="container" >}} que se inician sin requisitos de CPU y memoria en sus especificaciones.
En el caso de que los límites totales del {{< glossary_tooltip text="Namespace" term_id="namespace" >}} sean menores que la suma de los límites de los {{< glossary_tooltip text="Pods" term_id="pod" >}},
puede haber contención por los recursos. En este caso, los Contenedores o Pods no serán creados.
Ni la contención ni los cambios en un LimitRange afectarán a los recursos ya creados.
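A modo de ilustración, el siguiente manifiesto (un esbozo con nombres y valores hipotéticos) muestra cómo podría expresarse una política como las anteriores mediante un LimitRange:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limites-ejemplo        # nombre hipotético
  namespace: desarrollo        # Namespace de ejemplo
spec:
  limits:
  - type: Container
    defaultRequest:            # requisitos aplicados si el Contenedor no los define
      cpu: 100m
      memory: 200Mi
    default:                   # límites aplicados si el Contenedor no los define
      cpu: 150m
      memory: 300Mi
    min:                       # valores mínimos permitidos
      cpu: 50m
      memory: 100Mi
    max:                       # valores máximos permitidos
      cpu: 500m
      memory: 600Mi
```

Al aplicar un manifiesto como este con `kubectl apply -f <archivo>`, el controlador de admisión `LimitRanger` completará y validará los Pods que se creen posteriormente en ese Namespace.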
## {{% heading "whatsnext" %}}
Consulte el [documento de diseño del LimitRanger](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md) para más información.
Los siguientes ejemplos utilizan límites y están pendientes de su traducción:
- [how to configure minimum and maximum CPU constraints per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/).
- [how to configure minimum and maximum Memory constraints per namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/).
- [how to configure default CPU Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/).
- [how to configure default Memory Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/).
- [how to configure minimum and maximum Storage consumption per namespace](/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage).
- [a detailed example on configuring quota per namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/).

View File

@ -0,0 +1,23 @@
---
title: LimitRange
id: limitrange
date: 2019-04-15
full_link: /docs/concepts/policy/limit-range/
short_description: >
Proporciona restricciones para limitar el consumo de recursos por Contenedores o Pods en un espacio de nombres
aka:
tags:
- core-object
- fundamental
- architecture
related:
- pod
- container
---
Proporciona restricciones para limitar el consumo de recursos por {{< glossary_tooltip text="Contenedores" term_id="container" >}} o {{< glossary_tooltip text="Pods" term_id="pod" >}} en un espacio de nombres ({{< glossary_tooltip text="Namespace" term_id="namespace" >}})
<!--more-->
LimitRange limita la cantidad de objetos que se pueden crear por tipo, así como la cantidad de recursos informáticos que pueden ser requeridos/consumidos por {{< glossary_tooltip text="Pods" term_id="pod" >}} o {{< glossary_tooltip text="Contenedores" term_id="container" >}} individuales en un {{< glossary_tooltip text="Namespace" term_id="namespace" >}}.

View File

@ -35,7 +35,7 @@ Vous devriez choisir une solution locale si vous souhaitez :
* Essayer ou commencer à apprendre Kubernetes
* Développer et réaliser des tests sur des clusters locaux
Choisissez une [solution locale] (/fr/docs/setup/pick-right-solution/#solutions-locales).
Choisissez une [solution locale](/fr/docs/setup/pick-right-solution/#solutions-locales).
## Solutions hébergées
@ -49,7 +49,7 @@ Vous devriez choisir une solution hébergée si vous :
* N'avez pas d'équipe de Site Reliability Engineering (SRE) dédiée, mais que vous souhaitez une haute disponibilité.
* Vous n'avez pas les ressources pour héberger et surveiller vos clusters
Choisissez une [solution hébergée] (/fr/docs/setup/pick-right-solution/#solutions-hebergées).
Choisissez une [solution hébergée](/fr/docs/setup/pick-right-solution/#solutions-hebergées).
## Solutions cloud clés en main
@ -63,7 +63,7 @@ Vous devriez choisir une solution cloud clés en main si vous :
* Voulez plus de contrôle sur vos clusters que ne le permettent les solutions hébergées
* Voulez réaliser vous même un plus grand nombre d'operations
Choisissez une [solution clé en main] (/fr/docs/setup/pick-right-solution/#solutions-clés-en-main)
Choisissez une [solution clé en main](/fr/docs/setup/pick-right-solution/#solutions-clés-en-main)
## Solutions clés en main sur site
@ -76,7 +76,7 @@ Vous devriez choisir une solution de cloud clé en main sur site si vous :
* Disposez d'une équipe SRE dédiée
* Avez les ressources pour héberger et surveiller vos clusters
Choisissez une [solution clé en main sur site] (/fr/docs/setup/pick-right-solution/#solutions-on-premises-clés-en-main).
Choisissez une [solution clé en main sur site](/fr/docs/setup/pick-right-solution/#solutions-on-premises-clés-en-main).
## Solutions personnalisées
@ -84,11 +84,11 @@ Les solutions personnalisées vous offrent le maximum de liberté sur vos cluste
d'expertise. Ces solutions vont du bare-metal aux fournisseurs de cloud sur
différents systèmes d'exploitation.
Choisissez une [solution personnalisée] (/fr/docs/setup/pick-right-solution/#solutions-personnalisées).
Choisissez une [solution personnalisée](/fr/docs/setup/pick-right-solution/#solutions-personnalisées).
## {{% heading "whatsnext" %}}
Allez à [Choisir la bonne solution] (/fr/docs/setup/pick-right-solution/) pour une liste complète de solutions.
Allez à [Choisir la bonne solution](/fr/docs/setup/pick-right-solution/) pour une liste complète de solutions.

View File

@ -237,7 +237,7 @@ guestbook-redis-slave-qgazl 1/1 Running 0 3m
image: gb-frontend:v3
```
そして2つの異なるPodのセットを上書きしないようにするため、`track`ラベルに異なる値を持つ(例: `canary`)ようなguestbookフロントエンドの新しいリリースを作成できます。
そして2つの異なるPodのセットを上書きしないようにするため、`track`ラベルに異なる値を持つ(例: `canary`)ようなguestbookフロントエンドの新しいリリースを作成できます。
```yaml
name: frontend-canary

View File

@ -17,7 +17,7 @@ card:
<!-- body -->
Kubernetesは、宣言的な構成管理と自動化を促進し、コンテナ化されたワークロードやサービスを管理するための、ポータブルで拡張性のあるオープンソースのプラットフォームです。Kubernetesは巨大で急速に成長しているエコシステムを備えており、それらのサービス、サポート、ツールは幅広い形で利用可能です。
Kubernetesの名称は、ギリシャ語に由来し、操舵手やパイロットを意味しています。Googleは2014年にKubernetesプロジェクトをオープンソース化しました。Kubernetesは、本番環境で大規模なワークロードを稼働させた[Googleの15年以上の経験](/blog/2015/04/borg-predecessor-to-kubernetes/)と、コミュニティからの最高のアイディアや実践を組み合わせています。
Kubernetesの名称は、ギリシャ語に由来し、操舵手やパイロットを意味しています。Googleは2014年にKubernetesプロジェクトをオープンソース化しました。Kubernetesは、本番環境で大規模なワークロードを稼働させた[Googleの15年以上の経験](/blog/2015/04/borg-predecessor-to-kubernetes/)と、コミュニティからの最高のアイディアや実践を組み合わせています。
## 過去を振り返ってみると

View File

@ -22,7 +22,7 @@ weight: 10
- 異なる名前空間で異なるチームが存在するとき。現時点ではこれは自主的なものですが、将来的にはACLsを介してリソースクォータの設定を強制するように計画されています。
- 管理者は各名前空間で1つの`ResourceQuota`を作成します。
- ユーザーが名前空間内でリソース(Pod、Serviceなど)を作成し、クォータシステムが`ResourceQuota`によって定義されたハードリソースリミットを超えないことを保証するために、リソースの使用量をトラッキングします。
- リソースの作成や更新がクォータの制約に違反しているとき、そのリクエストはHTTPステータスコード`403 FORBIDDEN`で失敗し、違反した制約を説明するメッセージが表示されます。
- リソースの作成や更新がクォータの制約に違反しているとき、そのリクエストはHTTPステータスコード`403 FORBIDDEN`で失敗し、違反した制約を説明するメッセージが表示されます。
- `cpu`や`memory`といったコンピューターリソースに対するクォータが名前空間内で有効になっているとき、ユーザーはそれらの値に対する`requests`や`limits`を設定する必要があります。設定しないとクォータシステムがPodの作成を拒否します。 ヒント: コンピュートリソースの要求を設定しないPodに対してデフォルト値を強制するために、`LimitRanger`アドミッションコントローラーを使用してください。この問題を解決する例は[walkthrough](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)で参照できます。
`ResourceQuota`のオブジェクト名は、有効な[DNSサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)である必要があります.

View File

@ -77,7 +77,7 @@ Kubernetesを保護する為には4つの懸念事項があります。
### クラスター内のコンポーネント(アプリケーション) {#cluster-applications}
アプリケーションを対象にした攻撃に応じて、セキュリティの特定側面に焦点をあてたい場合があります。例:他のリソースとの連携で重要なサービス(サービスA)と、リソース枯渇攻撃に対して脆弱な別のワークロード(サービスB)が実行されている場合、サービスBのリソースを制限していないとサービスAが危険にさらされるリスクが高くなります。次の表はセキュリティの懸念事項とKubernetesで実行されるワークロードを保護するための推奨事項を示しています。
アプリケーションを対象にした攻撃に応じて、セキュリティの特定側面に焦点をあてたい場合があります。例:他のリソースとの連携で重要なサービス(サービスA)と、リソース枯渇攻撃に対して脆弱な別のワークロード(サービスB)が実行されている場合、サービスBのリソースを制限していないとサービスAが危険にさらされるリスクが高くなります。次の表はセキュリティの懸念事項とKubernetesで実行されるワークロードを保護するための推奨事項を示しています。
ワークロードセキュリティに関する懸念事項 | 推奨事項 |

View File

@ -42,7 +42,7 @@ weight: 80
エフェメラルコンテナを利用する場合には、他のコンテナ内のプロセスにアクセスできるように、[プロセス名前空間の共有](/ja/docs/tasks/configure-pod-container/share-process-namespace/)を有効にすると便利です。
エフェメラルコンテナを利用してトラブルシューティングを行う例については、[デバッグ用のエフェメラルコンテナを使用してデバッグする](/docs/tasks/debug-application-cluster/debug-running-pod/#debugging-with-ephemeral-debug-container)を参照してください。
エフェメラルコンテナを利用してトラブルシューティングを行う例については、[デバッグ用のエフェメラルコンテナを使用してデバッグする](/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container)を参照してください。
## Ephemeral containers API

View File

@ -0,0 +1,86 @@
---
title: プルリクエストのレビュー
content_type: concept
main_menu: true
weight: 10
---
<!-- overview -->
ドキュメントのプルリクエストは誰でもレビューすることができます。Kubernetesのwebsiteリポジトリで[pull requests](https://github.com/kubernetes/website/pulls)のセクションに移動し、open状態のプルリクエストを確認してください。
ドキュメントのプルリクエストのレビューは、Kubernetesコミュニティに自分を知ってもらうためのよい方法の1つです。コードベースについて学んだり、他のコントリビューターとの信頼関係を築く助けともなるはずです。
レビューを行う前には、以下のことを理解しておくとよいでしょう。
- [コンテンツガイド](/docs/contribute/style/content-guide/)と[スタイルガイド](/docs/contribute/style/style-guide/)を読んで、有益なコメントを残せるようにする。
- Kubernetesのドキュメントコミュニティにおける[役割と責任](/docs/contribute/participate/roles-and-responsibilities/)の違いを理解する。
<!-- body -->
## はじめる前に
レビューを始める前に、以下のことを心に留めてください。
- [CNCFの行動規範](https://github.com/cncf/foundation/blob/master/code-of-conduct.md)を読み、いかなる時にも行動規範にしたがって行動するようにする。
- 礼儀正しく、思いやりを持ち、助け合う気持ちを持つ。
- 変更点だけでなく、PRのポジティブな側面についてもコメントする。
- 相手の気持ちに共感して、自分のレビューが相手にどのように受け取られるのかをよく意識する。
- 相手の善意を前提として、疑問点を明確にする質問をする。
- 経験を積んだコントリビューターの場合、コンテンツに大幅な変更が必要な新規のコントリビューターとペアを組んで作業に取り組むことを考える。
## レビューのプロセス
一般に、コンテンツや文体に対するプルリクエストは、英語でレビューを行います。
1. [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls)に移動します。Kubernetesのウェブサイトとドキュメントに対するopen状態のプルリクエスト一覧が表示されます。
2. open状態のPRに、以下に示すラベルを1つ以上使って絞り込みます。
- `cncf-cla: yes` (推奨): CLAにサインしていないコントリビューターが提出したPRはマージできません。詳しい情報は、[CLAの署名](/docs/contribute/new-content/overview/#sign-the-cla)を読んでください。
- `language/en` (推奨): 英語のPRだけに絞り込みます。
- `size/<size>`: 特定の大きさのPRだけに絞り込みます。レビューを始めたばかりの人は、小さなPRから始めてください。
さらに、PRがwork in progressとしてマークされていないことも確認してください。`work in progress`ラベルの付いたPRは、まだレビューの準備ができていない状態です。
3. レビューするPRを選んだら、以下のことを行い、変更点について理解します。
- PRの説明を読み、行われた変更について理解し、関連するissueがあればそれも読みます。
- 他のレビュアのコメントがあれば読みます。
- **Files changed**タブをクリックし、変更されたファイルと行を確認します。
- **Conversation**タブの下にあるPRのbuild checkセクションまでスクロールし、**deploy/netlify**の行の**Details**リンクをクリックして、Netlifyのプレビュービルドで変更点をプレビューします。
4. **Files changed**タブに移動してレビューを始めます。
1. コメントしたい場合は行の横の`+`マークをクリックします。
2. その行に関するコメントを書き、**Add single comment**(1つのコメントだけを残したい場合)または**Start a review**(複数のコメントを行いたい場合)のいずれかをクリックします。
3. コメントをすべて書いたら、ページ上部の**Review changes**をクリックします。ここでは、レビューの要約を追加できます(コントリビューターにポジティブなコメントも書きましょう!)。必要に応じて、PRを承認したり、コメントしたり、変更をリクエストします。新しいコントリビューターの場合は**Comment**だけが行えます。
## レビューのチェックリスト
レビューするときは、最初に以下の点を確認してみてください。
### 言語と文法
- 言語や文法に明らかな間違いはないですか? もっとよい言い方はないですか?
- もっと簡単な単語に置き換えられる複雑な単語や古い単語はありませんか?
- 使われている単語や専門用語や言い回しで差別的ではない別の言葉に置き換えられるものはありませんか?
- 言葉選びや大文字の使い方は[style guide](/docs/contribute/style/style-guide/)に従っていますか?
- もっと短くしたり単純な文に書き換えられる長い文はありませんか?
- 箇条書きやテーブルでもっとわかりやすく表現できる長いパラグラフはありませんか?
### コンテンツ
- 同様のコンテンツがKubernetesのサイト上のどこかに存在しませんか?
- コンテンツが外部サイト、特定のベンダー、オープンソースではないドキュメントなどに過剰にリンクを張っていませんか?
### ウェブサイト
- PRはページ名、slug/alias、アンカーリンクの変更や削除をしていますか? その場合、このPRの変更の結果、リンク切れは発生しませんか? ページ名を変更してslugはそのままにするなど、他の選択肢はありませんか?
- PRは新しいページを作成するものですか? その場合、次の点に注意してください。
- ページは正しい[page content type](/docs/contribute/style/page-content-types/)と関係するHugoのshortcodeを使用していますか?
- セクションの横のナビゲーション(または全体)にページは正しく表示されますか?
- ページは[Docs Home](/docs/home/)に一覧されますか?
- Netlifyのプレビューで変更は確認できますか? 特にリスト、コードブロック、テーブル、備考、画像などに注意してください。
### その他
PRに関して誤字や空白などの小さな問題を指摘する場合は、コメントの前に`nit:`と書いてください。こうすることで、PRの作者は問題が深刻なものではないことが分かります。

View File

@ -96,7 +96,7 @@ spec:
* ネットワークを介したードとPod間通信、LinuxマスターからのPod IPのポート80に向けて`curl`して、ウェブサーバーの応答をチェックします
* docker execまたはkubectl execを使用したPod間通信、Pod間(および複数のWindowsードがある場合はホスト間)へのpingします
* ServiceからPodへの通信、Linuxマスターおよび個々のPodからの仮想Service IP(`kubectl get services`で表示される)に`curl`します
* サービスディスカバリ、Kuberntesの[default DNS suffix](/ja/docs/concepts/services-networking/dns-pod-service/#services)と共にService名に`curl`します
* サービスディスカバリ、Kubernetesの[default DNS suffix](/ja/docs/concepts/services-networking/dns-pod-service/#services)と共にService名に`curl`します
* Inbound connectivity, `curl` the NodePort from the Linux master or machines outside of the cluster
* インバウンド接続、Linuxマスターまたはクラスター外のマシンからNodePortに`curl`します
* アウトバウンド接続、kubectl execを使用したPod内からの外部IPに`curl`します

View File

@ -31,9 +31,9 @@ card:
## クラスター、ユーザー、コンテキストを設定する
例として、開発用のクラスターが一つ、実験用のクラスターが一つ、計二つのクラスターが存在する場合を考えます。`development`と呼ばれる開発用のクラスター内では、フロントエンドの開発者は`frontend`というnamespace内で、ストレージの開発者は`storage`というnamespace内で作業をします。`scratch`と呼ばれる実験用のクラスター内では、開発者はデフォルトのnamespaceで作業をするか、状況に応じて追加のnamespaceを作成します。開発用のクラスターは証明書を通しての認証を必要とします。実験用のクラスターはユーザーネームとパスワードを通しての認証を必要とします。
例として、開発用のクラスターが一つ、実験用のクラスターが一つ、計二つのクラスターが存在する場合を考えます。`development`と呼ばれる開発用のクラスター内では、フロントエンドの開発者は`frontend`というnamespace内で、ストレージの開発者は`storage`というnamespace内で作業をします。`scratch`と呼ばれる実験用のクラスター内では、開発者はデフォルトのnamespaceで作業をするか、状況に応じて追加のnamespaceを作成します。開発用のクラスターは証明書を通しての認証を必要とします。実験用のクラスターはユーザーネームとパスワードを通しての認証を必要とします。
`config-exercise`というディレクトリを作成してください。`config-exercise`ディレクトリ内に、以下を含む`config-demo`というファイルを作成してください:
`config-exercise`というディレクトリを作成してください。`config-exercise`ディレクトリ内に、以下を含む`config-demo`というファイルを作成してください:
```shell
apiVersion: v1
@ -61,7 +61,7 @@ contexts:
設定ファイルには、クラスター、ユーザー、コンテキストの情報が含まれています。上記の`config-demo`設定ファイルには、二つのクラスター、二人のユーザー、三つのコンテキストの情報が含まれています。
`config-exercise`ディレクトリに移動してください。クラスター情報を設定ファイルに追加するために、以下のコマンドを実行してください:
`config-exercise`ディレクトリに移動してください。クラスター情報を設定ファイルに追加するために、以下のコマンドを実行してください:
```shell
kubectl config --kubeconfig=config-demo set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
@ -89,7 +89,7 @@ kubectl config --kubeconfig=config-demo set-context dev-storage --cluster=develo
kubectl config --kubeconfig=config-demo set-context exp-scratch --cluster=scratch --namespace=default --user=experimenter
```
追加した情報を確認するために、`config-demo`ファイルを開いてください。`config-demo`ファイルを開く代わりに、`config view`のコマンドを使うこともできます。
追加した情報を確認するために、`config-demo`ファイルを開いてください。`config-demo`ファイルを開く代わりに、`config view`のコマンドを使うこともできます。
```shell
kubectl config --kubeconfig=config-demo view

View File

@ -134,28 +134,12 @@ weight: 100
1. 以下の内容で`example-ingress.yaml`を作成します。
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: example-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: hello-world.info
http:
paths:
- path: /
backend:
serviceName: web
servicePort: 8080
```
{{< codenew file="service/networking/example-ingress.yaml" >}}
1. 次のコマンドを実行して、Ingressリソースを作成します。
```shell
kubectl apply -f example-ingress.yaml
kubectl apply -f https://kubernetes.io/examples/service/networking/example-ingress.yaml
```
出力は次のようになります。
@ -175,8 +159,8 @@ weight: 100
{{< /note >}}
```shell
NAME HOSTS ADDRESS PORTS AGE
example-ingress hello-world.info 172.17.0.15 80 38s
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress <none> hello-world.info 172.17.0.15 80 38s
```
1. 次の行を`/etc/hosts`ファイルの最後に書きます。
@ -241,9 +225,12 @@ weight: 100
```yaml
- path: /v2
pathType: Prefix
backend:
serviceName: web2
servicePort: 8080
service:
name: web2
port:
number: 8080
```
1. 次のコマンドで変更を適用します。
@ -300,6 +287,3 @@ weight: 100
* [Ingress](/ja/docs/concepts/services-networking/ingress/)についてさらに学ぶ。
* [Ingressコントローラー](/ja/docs/concepts/services-networking/ingress-controllers/)についてさらに学ぶ。
* [Service](/ja/docs/concepts/services-networking/service/)についてさらに学ぶ。

View File

@ -0,0 +1,6 @@
---
title: "Secretの管理"
weight: 28
description: Secretを使用した機密設定データの管理
---

View File

@ -0,0 +1,146 @@
---
title: kubectlを使用してSecretを管理する
content_type: task
weight: 10
description: kubectlコマンドラインを使用してSecretを作成する
---
<!-- overview -->
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}}
<!-- steps -->
## Secretを作成する
`Secret`はデータベースにアクセスするためにPodが必要とするユーザー資格情報を含めることができます。
たとえば、データベース接続文字列はユーザー名とパスワードで構成されます。
ユーザー名はローカルマシンの`./username.txt`に、パスワードは`./password.txt`に保存します。
```shell
echo -n 'admin' > ./username.txt
echo -n '1f2d1e2e67df' > ./password.txt
```
上記の2つのコマンドの`-n`フラグは、生成されたファイルにテキスト末尾の余分な改行文字が含まれないようにします。
`kubectl`がファイルを読み取り、内容をbase64文字列にエンコードすると、余分な改行文字もエンコードされるため、これは重要です。
`kubectl create secret`コマンドはこれらのファイルをSecretにパッケージ化し、APIサーバー上にオブジェクトを作成します。
```shell
kubectl create secret generic db-user-pass \
--from-file=./username.txt \
--from-file=./password.txt
```
出力は次のようになります:
```
secret/db-user-pass created
```
ファイル名がデフォルトのキー名になります。オプションで`--from-file=[key=]source`を使用してキー名を設定できます。たとえば:
```shell
kubectl create secret generic db-user-pass \
--from-file=username=./username.txt \
--from-file=password=./password.txt
```
`--from-file`に指定したファイルに含まれるパスワードの特殊文字をエスケープする必要はありません。
また、`--from-literal=<key>=<value>`タグを使用してSecretデータを提供することもできます。
このタグは、複数のキーと値のペアを提供するために複数回指定することができます。
`$`、`\`、`*`、`=`、`!`などの特殊文字は[シェル](https://en.wikipedia.org/wiki/Shell_(computing))によって解釈されるため、エスケープを必要とすることに注意してください。
ほとんどのシェルでは、パスワードをエスケープする最も簡単な方法は、シングルクォート(`'`)で囲むことです。
たとえば、実際のパスワードが`S!B\*d$zDsb=`の場合、次のようにコマンドを実行します:
```shell
kubectl create secret generic dev-db-secret \
--from-literal=username=devuser \
--from-literal=password='S!B\*d$zDsb='
```
## Secretを検証する
Secretが作成されたことを確認できます:
```shell
kubectl get secrets
```
出力は次のようになります:
```
NAME TYPE DATA AGE
db-user-pass Opaque 2 51s
```
`Secret`の説明を参照できます:
```shell
kubectl describe secrets/db-user-pass
```
出力は次のようになります:
```
Name: db-user-pass
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
password: 12 bytes
username: 5 bytes
```
`kubectl get`と`kubectl describe`コマンドはデフォルトでは`Secret`の内容を表示しません。
これは、`Secret`が不用意に他人にさらされたり、ターミナルログに保存されたりしないようにするためです。
## Secretをデコードする {#decoding-secret}
先ほど作成したSecretの内容を見るには、以下のコマンドを実行します:
```shell
kubectl get secret db-user-pass -o jsonpath='{.data}'
```
出力は次のようになります:
```json
{"password.txt":"MWYyZDFlMmU2N2Rm","username.txt":"YWRtaW4="}
```
`password.txt`のデータをデコードします:
```shell
echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
```
出力は次のようになります:
```
1f2d1e2e67df
```
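必要であれば、特定のキーの値だけを取り出してデコードすることもできます(一例です。キー名にドットが含まれるため、jsonpathの式内ではバックスラッシュによるエスケープが必要です):

```shell
kubectl get secret db-user-pass -o jsonpath='{.data.password\.txt}' | base64 --decode
```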
## クリーンアップ
作成したSecretを削除するには次のコマンドを実行します:
```shell
kubectl delete secret db-user-pass
```
<!-- discussion -->
## {{% heading "whatsnext" %}}
- [Secretのコンセプト](/ja/docs/concepts/configuration/secret/)を読む
- [設定ファイルを使用してSecretを管理する](/docs/tasks/configmap-secret/managing-secret-using-config-file/)方法を知る
- [kustomizeを使用してSecretを管理する](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)方法を知る

View File

@ -33,7 +33,7 @@ kubectl delete pods <pod>
上記がグレースフルターミネーションにつながるためには、`pod.Spec.TerminationGracePeriodSeconds`に0を指定しては**いけません**。`pod.Spec.TerminationGracePeriodSeconds`を0秒に設定することは安全ではなく、StatefulSet Podには強くお勧めできません。グレースフル削除は安全で、kubeletがapiserverから名前を削除する前にPodが[適切にシャットダウンする](/ja/docs/concepts/workloads/pods/pod-lifecycle/#termination-of-pods)ことを保証します。
Kubernetes(バージョン1.5以降)は、Nodeにアクセスできないという理由だけでPodを削除しません。到達不能なNodeで実行されているPodは、[タイムアウト](/docs/concepts/architecture/nodes/#node-condition)の後に`Terminating`または`Unknown`状態になります。到達不能なNode上のPodをユーザーが適切に削除しようとすると、Podはこれらの状態に入ることもあります。そのような状態のPodをapiserverから削除することができる唯一の方法は以下の通りです:
Kubernetes(バージョン1.5以降)は、Nodeにアクセスできないという理由だけでPodを削除しません。到達不能なNodeで実行されているPodは、[タイムアウト](/ja/docs/concepts/architecture/nodes/#condition)の後に`Terminating`または`Unknown`状態になります。到達不能なNode上のPodをユーザーが適切に削除しようとすると、Podはこれらの状態に入ることもあります。そのような状態のPodをapiserverから削除することができる唯一の方法は以下の通りです:
* (ユーザーまたは[Node Controller](/ja/docs/concepts/architecture/nodes/)によって)Nodeオブジェクトが削除されます。
* 応答していないNodeのkubeletが応答を開始し、Podを終了してapiserverからエントリーを削除します。
@ -76,4 +76,3 @@ StatefulSet Podの強制削除は、常に慎重に、関連するリスクを
[StatefulSetのデバッグ](/docs/tasks/debug-application-cluster/debug-stateful-set/)の詳細

View File

@ -0,0 +1,403 @@
---
title: Horizontal Pod Autoscalerウォークスルー
content_type: task
weight: 100
---
<!-- overview -->
Horizontal Pod Autoscalerは、Deployment、ReplicaSetまたはStatefulSetといったレプリケーションコントローラ内のPodの数を、観測されたCPU使用率もしくはベータサポートの、アプリケーションによって提供されるその他のメトリクスに基づいて自動的にスケールさせます。
このドキュメントはphp-apacheサーバーに対しHorizontal Pod Autoscalerを有効化するという例に沿ってウォークスルーで説明していきます。Horizontal Pod Autoscalerの動作についてのより詳細な情報を知りたい場合は、[Horizontal Pod Autoscalerユーザーガイド](/docs/tasks/run-application/horizontal-pod-autoscale/)をご覧ください。
## {{% heading "前提条件" %}}
この例ではバージョン1.2以上の動作するKubernetesクラスターおよびkubectlが必要です。
[Metrics API](https://github.com/kubernetes/metrics)を介してメトリクスを提供するために、[Metrics server](https://github.com/kubernetes-sigs/metrics-server)によるモニタリングがクラスター内にデプロイされている必要があります。
Horizontal Pod Autoscalerはメトリクスを収集するためにこのAPIを利用します。metrics-serverをデプロイする方法を知りたい場合は[metrics-server ドキュメント](https://github.com/kubernetes-sigs/metrics-server#deployment)をご覧ください。
Horizontal Pod Autoscalerで複数のリソースメトリクスを利用するためには、バージョン1.6以上のKubernetesクラスターおよびkubectlが必要です。カスタムメトリクスを使えるようにするためには、あなたのクラスターがカスタムメトリクスAPIを提供するAPIサーバーと通信できる必要があります。
最後に、Kubernetesオブジェクトと関係のないメトリクスを使うにはバージョン1.10以上のKubernetesクラスターおよびkubectlが必要で、さらにあなたのクラスターが外部メトリクスAPIを提供するAPIサーバーと通信できる必要があります。
詳細については[Horizontal Pod Autoscaler user guide](/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics)をご覧ください。
<!-- steps -->
## php-apacheの起動と公開
Horizontal Pod Autoscalerのデモンストレーションのために、php-apacheイメージをもとにしたカスタムのDockerイメージを使います。
このDockerfileは下記のようになっています。
```dockerfile
FROM php:5-apache
COPY index.php /var/www/html/index.php
RUN chmod a+rx index.php
```
これはCPU負荷の高い演算を行うindex.phpを定義しています。
```php
<?php
$x = 0.0001;
for ($i = 0; $i <= 1000000; $i++) {
$x += sqrt($x);
}
echo "OK!";
?>
```
まず最初に、イメージを動かすDeploymentを起動し、Serviceとして公開しましょう。
下記の設定を使います。
{{< codenew file="application/php-apache.yaml" >}}
以下のコマンドを実行してください。
```shell
kubectl apply -f https://k8s.io/examples/application/php-apache.yaml
```
```
deployment.apps/php-apache created
service/php-apache created
```
## Horizontal Pod Autoscalerを作成する
サーバーが起動したら、[kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands#autoscale)を使ってautoscalerを作成しましょう。以下のコマンドで、最初のステップで作成したphp-apache deploymentによって制御されるPodレプリカ数を1から10の間に維持するHorizontal Pod Autoscalerを作成します。
簡単に言うと、HPAはDeploymentを通じてレプリカ数を増減させ、すべてのPodにおける平均CPU使用率を50%(それぞれのPodはDeploymentの定義で200 milli-coresを要求しているため、平均CPU使用率100 milli-coresを意味します)に保とうとします。
このアルゴリズムについての詳細は[こちら](/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details)をご覧ください。
```shell
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
```
```
horizontalpodautoscaler.autoscaling/php-apache autoscaled
```
以下を実行して現在のAutoscalerの状況を確認できます。
```shell
kubectl get hpa
```
```
NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 18s
```
現在はサーバーにリクエストを送っていないため、CPU使用率が0%になっていることに注意してください(`TARGET`カラムは対応するDeploymentによって制御される全てのPodの平均値を示しています)。
## 負荷の増加
Autoscalerがどのように負荷の増加に反応するか見てみましょう。
コンテナを作成し、クエリの無限ループをphp-apacheサーバーに送ってみますこれは別のターミナルで実行してください
```shell
kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
```
数分以内に、下記を実行することでCPU負荷が高まっていることを確認できます。
```shell
kubectl get hpa
```
```
NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache/scale 305% / 50% 1 10 1 3m
```
ここでは、CPU使用率はrequestの305%にまで高まっています。
結果として、Deploymentはレプリカ数7にリサイズされました。
```shell
kubectl get deployment php-apache
```
```
NAME READY UP-TO-DATE AVAILABLE AGE
php-apache 7/7 7 7 19m
```
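参考までに、[アルゴリズムの詳細](/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details)にあるとおり、望ましいレプリカ数はおおよそ `desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue)` で計算されます。この例では `ceil(1 × 305% / 50%) = ceil(6.1) = 7` となります(実際の負荷は変動するため、手元の結果はこの値と異なる場合があります)。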
{{< note >}}
レプリカ数が安定するまでは数分かかることがあります。負荷量は何らかの方法で制御されているわけではないので、最終的なレプリカ数はこの例とは異なる場合があります。
{{< /note >}}
## 負荷の停止
ユーザー負荷を止めてこの例を終わらせましょう。
私たちが`busybox`イメージを使って作成したコンテナ内のターミナルで、`<Ctrl> + C`を入力して負荷生成を終了させます。
そして結果の状態を確認します(数分後)。
```shell
kubectl get hpa
```
```
NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 11m
```
```shell
kubectl get deployment php-apache
```
```
NAME READY UP-TO-DATE AVAILABLE AGE
php-apache 1/1 1 1 27m
```
ここでCPU使用率は0に下がり、HPAによってオートスケールされたレプリカ数は1に戻ります。
{{< note >}}
レプリカのオートスケールには数分かかることがあります。
{{< /note >}}
<!-- discussion -->
## 複数のメトリクスやカスタムメトリクスを基にオートスケーリングする
`autoscaling/v2beta2` APIバージョンを使うと、`php-apache` Deploymentをオートスケーリングする際に使う追加のメトリクスを導入することが出来ます。
まず、`autoscaling/v2beta2`内のHorizontalPodAutoscalerのYAMLファイルを入手します。
```shell
kubectl get hpa.v2beta2.autoscaling -o yaml > /tmp/hpa-v2.yaml
```
`/tmp/hpa-v2.yaml`ファイルをエディタで開くと、以下のようなYAMLファイルが見えるはずです。
```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: php-apache
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: php-apache
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
status:
observedGeneration: 1
lastScaleTime: <some-time>
currentReplicas: 1
desiredReplicas: 1
currentMetrics:
- type: Resource
resource:
name: cpu
current:
averageUtilization: 0
averageValue: 0
```
`targetCPUUtilizationPercentage`フィールドは`metrics`と呼ばれる配列に置換されています。
CPU使用率メトリクスは、Podコンテナで定められたリソースの割合として表されるため、*リソースメトリクス*です。CPU以外のリソースメトリクスを指定することもできます。デフォルトでは、他にメモリだけがリソースメトリクスとしてサポートされています。これらのリソースはクラスター間で名前が変わることはなく、そして`metrics.k8s.io` APIが利用可能である限り常に利用可能です。
さらに`target.type`において`Utilization`の代わりに`AverageValue`を使い、`target.averageUtilization`フィールドの代わりに対応する`target.averageValue`フィールドを設定することで、リソースメトリクスをrequest値に対する割合に代わり、直接的な値に設定することも可能です。
PodメトリクスとObjectメトリクスという2つの異なる種類のメトリクスが存在し、どちらも*カスタムメトリクス*とみなされます。これらのメトリクスはクラスター特有の名前を持ち、利用するにはより発展的なクラスター監視設定が必要となります。
これらの代替メトリクスタイプのうち、最初のものが*Podメトリクス*です。これらのメトリクスはPodを説明し、Podを渡って平均され、レプリカ数を決定するためにターゲット値と比較されます。
これらはほとんどリソースメトリクス同様に機能しますが、`target`の種類としては`AverageValue`*のみ*をサポートしている点が異なります。
Podメトリクスはmetricブロックを使って以下のように指定されます。
```yaml
type: Pods
pods:
metric:
name: packets-per-second
target:
type: AverageValue
averageValue: 1k
```
2つ目のメトリクスタイプは*Objectメトリクス*です。これらのメトリクスはPodを説明するかわりに、同一Namespace内の異なったオブジェクトを説明します。このメトリクスはオブジェクトから取得される必要はありません。単に説明するだけです。Objectメトリクスは`target`の種類として`Value`と`AverageValue`をサポートします。`Value`では、ターゲットはAPIから返ってきたメトリクスと直接比較されます。`AverageValue`では、カスタムメトリクスAPIから返ってきた値はターゲットと比較される前にPodの数で除算されます。以下の例は`requests-per-second`メトリクスのYAML表現です。
```yaml
type: Object
object:
metric:
name: requests-per-second
describedObject:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
name: main-route
target:
type: Value
value: 2k
```
もしこのようなmetricブロックを複数提供した場合、HorizontalPodAutoscalerはこれらのメトリクスを順番に処理します。
HorizontalPodAutoscalerはそれぞれのメトリクスについて推奨レプリカ数を算出し、その中で最も多いレプリカ数を採用します。
例えば、もしあなたがネットワークトラフィックについてのメトリクスを収集する監視システムを持っているなら、`kubectl edit`を使って指定を次のように更新することができます。
```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: php-apache
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: php-apache
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
- type: Pods
pods:
metric:
name: packets-per-second
target:
type: AverageValue
averageValue: 1k
- type: Object
object:
metric:
name: requests-per-second
describedObject:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
name: main-route
target:
type: Value
value: 10k
status:
observedGeneration: 1
lastScaleTime: <some-time>
currentReplicas: 1
desiredReplicas: 1
currentMetrics:
- type: Resource
resource:
name: cpu
current:
averageUtilization: 0
averageValue: 0
- type: Object
object:
metric:
name: requests-per-second
describedObject:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
name: main-route
current:
value: 10k
```
この時、HorizontalPodAutoscalerはそれぞれのPodがCPU requestの50%を使い、1秒当たり1000パケットを送信し、そしてmain-route
Ingressの裏にあるすべてのPodが合計で1秒当たり10000パケットを送信する状態を保持しようとします。
### より詳細なメトリクスをもとにオートスケーリングする
多くのメトリクスパイプラインは、名前もしくは _labels_ と呼ばれる追加の記述子の組み合わせによって説明することができます。全てのリソースメトリクス以外のメトリクスタイプPod、Object、そして下で説明されている外部メトリクスにおいて、メトリクスパイプラインに渡す追加のラベルセレクターを指定することができます。例えば、もしあなたが`http_requests`メトリクスを`verb`ラベルとともに収集しているなら、下記のmetricブロックを指定してGETリクエストにのみ基づいてスケールさせることができます。
```yaml
type: Object
object:
metric:
name: http_requests
selector: {matchLabels: {verb: GET}}
```
このセレクターは完全なKubernetesラベルセレクターと同じ文法を利用します。もし名前とセレクターが複数の系列に一致した場合、この監視パイプラインはどのようにして複数の系列を一つの値にまとめるかを決定します。このセレクターは付加的なもので、ターゲットオブジェクト`Pods`タイプの場合は対象Pod、`Object`タイプの場合は説明されるオブジェクト)では**ない**オブジェクトを説明するメトリクスを選択することは出来ません。
### Kubernetesオブジェクトと関係ないメトリクスに基づいたオートスケーリング
Kubernetes上で動いているアプリケーションを、Kubernetes Namespaceと直接的な関係がないサービスを説明するメトリクスのような、Kubernetesクラスター内のオブジェクトと明確な関係が無いメトリクスを基にオートスケールする必要があるかもしれません。Kubernetes 1.10以降では、このようなユースケースを*外部メトリクス*によって解決できます。
外部メトリクスを使うにはあなたの監視システムについての知識が必要となります。この設定はカスタムメトリクスを使うときのものに似ています。外部メトリクスを使うとあなたの監視システムのあらゆる利用可能なメトリクスに基づいてクラスターをオートスケールできるようになります。上記のように`metric`ブロックで`name`と`selector`を設定し、`Object`のかわりに`External`メトリクスタイプを使います。
もし複数の時系列が`metricSelector`により一致した場合は、それらの値の合計がHorizontalPodAutoscalerに使われます。
外部メトリクスは`Value`と`AverageValue`の両方のターゲットタイプをサポートしています。これらの機能は`Object`タイプを利用するときとまったく同じです。
例えばもしあなたのアプリケーションがホストされたキューサービスからのタスクを処理している場合、あなたは下記のセクションをHorizontalPodAutoscalerマニフェストに追記し、未処理のタスク30個あたり1つのワーカーを必要とすることを指定します。
```yaml
- type: External
external:
metric:
name: queue_messages_ready
selector: "queue=worker_tasks"
target:
type: AverageValue
averageValue: 30
```
可能なら、クラスター管理者がカスタムメトリクスAPIを保護することを簡単にするため、外部メトリクスのかわりにカスタムメトリクスを用いることが望ましいです。外部メトリクスAPIは潜在的に全てのメトリクスへのアクセスを許可するため、クラスター管理者はこれを公開する際には注意が必要です。
## 付録: Horizontal Pod Autoscaler status conditions
`autoscaling/v2beta2`形式のHorizontalPodAutoscalerを使っている場合は、KubernetesによるHorizontalPodAutoscaler上の*status conditions*セットを見ることができます。status conditionsはHorizontalPodAutoscalerがスケール可能かどうか、そして現時点でそれが何らかの方法で制限されているかどうかを示しています。
このconditionsは`status.conditions`フィールドに現れます。HorizontalPodAutoscalerに影響しているconditionsを確認するために、`kubectl describe hpa`を利用できます。
```shell
kubectl describe hpa cm-test
```
```
Name: cm-test
Namespace: prom
Labels: <none>
Annotations: <none>
CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000
Reference: ReplicationController/cm-test
Metrics: ( current / target )
"http_requests" on pods: 66m / 500m
Min replicas: 1
Max replicas: 4
ReplicationController pods: 1 current / 1 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_requests
ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range
Events:
```
このHorizontalPodAutoscalerにおいて、いくつかの正常な状態のconditionsを見ることができます。まず最初に、`AbleToScale`は、HPAがスケール状況を取得し、更新させることが出来るかどうかだけでなく、何らかのbackoffに関連した状況がスケーリングを妨げていないかを示しています。2番目に、`ScalingActive`は、HPAが有効化されているかどうか例えば、レプリカ数のターゲットがゼロでないことや、望ましいスケールを算出できるかどうかを示します。もしこれが`False`の場合、大体はメトリクスの取得において問題があることを示しています。最後に、一番最後の状況である`ScalingLimited`は、HorizontalPodAutoscalerの最大値や最小値によって望ましいスケールがキャップされていることを示しています。この指標を見てHorizontalPodAutoscaler上の最大・最小レプリカ数制限を増やす、もしくは減らす検討ができます。
## 付録: 数量
全てのHorizontalPodAutoscalerおよびメトリクスAPIにおけるメトリクスは{{< glossary_tooltip term_id="quantity" text="quantity">}}として知られる特殊な整数表記によって指定されます。例えば、`10500m`という数量は10進数表記で`10.5`と書くことができます。メトリクスAPIは可能であれば接尾辞を用いない整数を返し、そうでない場合は基本的にミリ単位での数量を返します。これはメトリクス値が`1`と`1500m`の間で、もしくは10進法表記で書かれた場合は`1`と`1.5`の間で変動するということを意味します。
## 付録: その他の起きうるシナリオ
### Autoscalerを宣言的に作成する
`kubectl autoscale`コマンドを使って命令的にHorizontalPodAutoscalerを作るかわりに、下記のファイルを使って宣言的に作成することができます。
{{< codenew file="application/hpa/php-apache.yaml" >}}
下記のコマンドを実行してAutoscalerを作成します。
```shell
kubectl create -f https://k8s.io/examples/application/hpa/php-apache.yaml
```
```
horizontalpodautoscaler.autoscaling/php-apache created
```

View File

@ -272,7 +272,7 @@ graph TD;
## `Type=LoadBalancer`を使用したServiceでの送信元IP
[`Type=LoadBalancer`](/ja/docs/concepts/services-networking/service/#loadbalancer)を使用したServiceに送られたパケットは、デフォルトでは送信元のNATは行われません。`Ready`状態にあるすべてのスケジュール可能なKubernetesのNodeは、ロードバランサーからのトラフィックを受付可能であるためです。そのため、エンドポイントが存在しないードにパケットが到達した場合、システムはエンドポイントが*存在する*ノードにパケットをプロシキーします。このとき、(前のセクションで説明したように)パケットの送信元IPがードのIPに置換されます。
[`Type=LoadBalancer`](/ja/docs/concepts/services-networking/service/#loadbalancer)を使用したServiceに送られたパケットは、デフォルトで送信元のNATが行われます。`Ready`状態にあるすべてのスケジュール可能なKubernetesのNodeは、ロードバランサーからのトラフィックを受付可能であるためです。そのため、エンドポイントが存在しないードにパケットが到達した場合、システムはエンドポイントが*存在する*ノードにパケットをプロシキーします。このとき、(前のセクションで説明したように)パケットの送信元IPがードのIPに置換されます。
ロードバランサー経由でsource-ip-appを公開することで、これをテストできます。

View File

@ -0,0 +1,36 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: php-apache
spec:
selector:
matchLabels:
run: php-apache
replicas: 1
template:
metadata:
labels:
run: php-apache
spec:
containers:
- name: php-apache
image: k8s.gcr.io/hpa-example
ports:
- containerPort: 80
resources:
limits:
cpu: 500m
requests:
cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
name: php-apache
labels:
run: php-apache
spec:
ports:
- port: 80
selector:
run: php-apache

View File

@ -0,0 +1,18 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: hello-world.info
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web
port:
number: 8080

View File

@ -47,7 +47,7 @@ O Kubernetes é Open Source, o que te oferece a liberdade de utilizá-lo em seu
<br>
<br>
<br>
<a href="https://www.lfasiallc.com/events/kubecon-cloudnativecon-china-2019" button id="desktopKCButton">KubeCon em Shanghai em June 24-26, 2019</a>
<a href="https://www.lfasiallc.com/events/kubecon-cloudnativecon-china-2019" button id="desktopKCButton">KubeCon em Shanghai em Junho, 24-26 de 2019</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>
@ -57,4 +57,4 @@ O Kubernetes é Open Source, o que te oferece a liberdade de utilizá-lo em seu
{{< blocks/kubernetes-features >}}
{{< blocks/case-studies >}}
{{< blocks/case-studies >}}

View File

@ -0,0 +1,45 @@
---
layout: blog
title: "Não entre em pânico: Kubernetes e Docker"
date: 2020-12-02
slug: dont-panic-kubernetes-and-docker
---
**Autores / Autoras**: Jorge Castro, Duffie Cooley, Kat Cosgrove, Justin Garrison, Noah Kantrowitz, Bob Killen, Rey Lejano, Dan “POP” Papandrea, Jeffrey Sica, Davanum “Dims” Srinivas
**Tradução:** João Brito
Kubernetes está [deixando de usar Docker](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation) como seu agente de execução após a versão v1.20.
**Não entre em pânico. Não é tão dramático quanto parece.**
TL;DR Docker como um agente de execução primário está sendo deixado de lado em favor de agentes de execução que utilizam a Interface de Agente de Execução de Containers (Container Runtime Interface "CRI") criada para o Kubernetes. As imagens criadas com o Docker continuarão a funcionar em seu cluster com os agentes atuais, como sempre estiveram.
Se você é um usuário final de Kubernetes, quase nada mudará para você. Isso não significa a morte do Docker, e também não significa que você não pode, ou não deve, continuar usando ferramentas Docker em desenvolvimento. Docker ainda é uma ferramenta útil para a construção de containers, e as imagens resultantes de executar `docker build` ainda rodarão em seu cluster Kubernetes.
Se você está usando um Kubernetes gerenciado como GKE, EKS, ou AKS (que usa como [padrão containerd](https://github.com/Azure/AKS/releases/tag/2020-11-16)) você precisará ter certeza que seus nós estão usando um agente de execução de container suportado antes que o suporte ao Docker seja removido nas versões futuras do Kubernetes. Se você tem mudanças em seus nós, talvez você precise atualizá-los baseado em seu ambiente e necessidades do agente de execução.
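Uma forma simples de verificar qual agente de execução de container os seus nós estão usando hoje (apenas como referência; a saída varia conforme o cluster) é:

```shell
kubectl get nodes -o wide
```

A coluna `CONTAINER-RUNTIME` da saída mostra, para cada nó, o agente de execução e a sua versão (por exemplo, `docker://19.3.x` ou `containerd://1.4.x`).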
Se você está rodando seus próprios clusters, você também precisa fazer mudanças para evitar quebras em seu cluster. Na versão v1.20, você terá o aviso de alerta da perda de suporte ao Docker. Quando o suporte ao agente de execução do Docker for removido em uma versão futura (atualmente planejado para a versão 1.22, no final de 2021) do Kubernetes, ele não será mais suportado e você precisará trocar para um dos outros agentes de execução de container compatíveis, como o containerd ou o CRI-O. Mas tenha certeza de que esse agente de execução escolhido tenha suporte às configurações do daemon do Docker usadas atualmente (ex.: logs).
## Então por que a confusão e toda essa turma surtando?
Estamos falando aqui de dois ambientes diferentes, e isso está criando essa confusão. Dentro do seu cluster Kubernetes, existe uma coisa chamada de agente de execução de container que é responsável por baixar e executar as imagens de seu container. Docker é a escolha popular para esse agente de execução (outras escolhas comuns incluem containerd e CRI-O), mas Docker não foi projetado para ser embutido no Kubernetes, e isso causa problemas.
Se liga, o que chamamos de "Docker" não é exatamente uma coisa - é uma stack tecnológica inteira, e uma parte disso é chamado de "containerd", que é o agente de execução de container de alto-nível por si só. Docker é legal e útil porque ele possui muitas melhorias de experiência do usuário e isso o torna realmente fácil para humanos interagirem com ele enquanto estão desenvolvendo, mas essas melhorias para o usuário não são necessárias para o Kubernetes, pois ele não é humano.
Como resultado dessa camada de abstração amigável aos humanos, seu cluster Kubernetes precisa usar outra ferramenta chamada Dockershim para ter o que ele realmente precisa, que é o containerd. Isso não é muito bom, porque adiciona outra coisa a ser mantida e que pode quebrar. O que está atualmente acontecendo aqui é que o Dockershim está sendo removido do Kubelet assim que a versão v1.23 for lançada, o que remove o suporte ao Docker como agente de execução de container como resultado. Você deve estar pensando: mas se o containerd está incluso na stack do Docker, por que o Kubernetes precisa do Dockershim?
Docker não é compatível com CRI, a [Container Runtime Interface](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/) (interface do agente de execução de container). Se fosse, nós não precisaríamos do shim, e isso não seria nenhum problema. Mas isso não é o fim do mundo, e você não precisa entrar em pânico - você só precisa mudar seu agente de execução de container do Docker para um outro suportado.
Uma coisa a ser notada: Se você está contando com o socket do Docker (`/var/run/docker.sock`) como parte do seu fluxo de trabalho em seu cluster hoje, mover para um agente de execução diferente acaba com sua habilidade de usá-lo. Esse modelo é conhecido como Docker em Docker. Existem diversas opções por aí para esse caso específico como o [kaniko](https://github.com/GoogleContainerTools/kaniko), [img](https://github.com/genuinetools/img), e [buildah](https://github.com/containers/buildah).
## O que essa mudança representa para os desenvolvedores? Ainda escrevemos Dockerfiles? Ainda vamos fazer build com Docker?
Essa mudança aborda um ambiente diferente do que a maioria das pessoas usa para interagir com Docker. A instalação do Docker que você está usando em desenvolvimento não tem relação com o agente de execução de Docker dentro de seu cluster Kubernetes. É confuso, dá pra entender.
Como desenvolvedor, Docker ainda é útil para você em todas as formas que era antes dessa mudança ser anunciada. A imagem que o Docker cria não é uma imagem específica para Docker e sim uma imagem que segue o padrão OCI ([Open Container Initiative](https://opencontainers.org/)).
Qualquer imagem compatível com OCI, independente da ferramenta usada para construí-la, será vista da mesma forma pelo Kubernetes. Tanto o [containerd](https://containerd.io/) quanto o [CRI-O](https://cri-o.io/) sabem como baixar e executá-las. É por isso que temos um padrão para containers.
Então, essa mudança está chegando. Isso irá causar problemas para alguns, mas nada catastrófico, no geral é uma boa coisa. Dependendo de como você interage com o Kubernetes, isso tornará as coisas mais fáceis. Se isso ainda é confuso para você, tudo bem, tem muita coisa rolando aqui; Kubernetes tem um monte de partes móveis, e ninguém é 100% especialista nisso. Nós encorajamos todo e qualquer tipo de questão, independentemente do nível de experiência ou de complexidade! Nosso objetivo é ter certeza que todos estão entendendo o máximo possível as mudanças que estão chegando. Esperamos que isso tenha respondido a maioria de suas questões e acalmado algumas ansiedades! ❤️
Procurando mais respostas? Dê uma olhada em nosso apanhado de [questões quanto ao desuso do Dockershim](/blog/2020/12/02/dockershim-faq/).

View File

@ -0,0 +1,34 @@
---
title: Contêineres
weight: 40
description: Tecnologia para empacotar aplicações com suas dependências em tempo de execução
content_type: concept
no_list: true
---
<!-- overview -->
Cada contêiner executado é repetível; a padronização de ter
dependências incluídas significa que você obtém o mesmo comportamento onde quer que você execute.
Os contêineres separam os aplicativos da infraestrutura de _host_ subjacente.
Isso torna a implantação mais fácil em diferentes ambientes de nuvem ou sistema operacional.
<!-- body -->
## Imagem de contêiner
Uma [imagem de contêiner](/docs/concepts/containers/images/) é um pacote de software pronto para executar, contendo tudo que é preciso para executar uma aplicação:
o código e o agente de execução necessário, as bibliotecas da aplicação e do sistema, e os valores padrão para qualquer configuração essencial.
Por _design_, um contêiner é imutável: você não pode mudar o código de um contêiner que já está executando. Se você tem uma aplicação conteinerizada e quer fazer mudanças, você precisa construir uma nova imagem que inclui a mudança, e recriar o contêiner para iniciar a partir da imagem atualizada.
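Por exemplo, um fluxo comum (esboçado aqui com nomes e registro hipotéticos) é construir uma nova imagem com a mudança e atualizar a carga de trabalho para usá-la:

```shell
# constrói uma nova imagem contendo a mudança (nome e tag hipotéticos)
docker build -t registro.exemplo.com/minha-app:v2 .

# envia a imagem para o registro
docker push registro.exemplo.com/minha-app:v2

# atualiza o Deployment para recriar os contêineres a partir da imagem nova
kubectl set image deployment/minha-app minha-app=registro.exemplo.com/minha-app:v2
```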
## Agente de execução de contêiner
{{< glossary_definition term_id="container-runtime" length="all" >}}
## {{% heading "whatsnext" %}}
* [Imagens de contêineres](/docs/concepts/containers/images/)
* [Pods](/docs/concepts/workloads/pods/)

View File

@ -0,0 +1,56 @@
---
title: Ambiente de Contêiner
content_type: concept
weight: 20
---
<!-- overview -->
Essa página descreve os recursos disponíveis para contêineres no ambiente de contêiner.
<!-- body -->
## Ambiente de contêiner
O ambiente de contêiner do Kubernetes fornece recursos importantes para contêineres:
* Um sistema de arquivos, que é a combinação de uma [imagem](/docs/concepts/containers/images/) e um ou mais [volumes](/docs/concepts/storage/volumes/).
* Informação sobre o contêiner propriamente.
* Informação sobre outros objetos no cluster.
### Informação de contêiner
O _hostname_ de um contêiner é o nome do Pod em que o contêiner está executando.
Isso é disponibilizado através do comando `hostname` ou da função [`gethostname`](https://man7.org/linux/man-pages/man2/gethostname.2.html) chamada na libc.
O nome do Pod e o Namespace são expostos como variáveis de ambiente através de um mecanismo chamado [downward API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/).
Variáveis de ambiente definidas pelo usuário a partir da definição do Pod também estão disponíveis para o contêiner, assim como qualquer variável de ambiente especificada estaticamente na imagem Docker.
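Como ilustração (com nomes hipotéticos), um contêiner pode expor o nome do Pod e o Namespace como variáveis de ambiente usando a downward API:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: exemplo-downward-api   # nome hipotético
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo $MEU_POD em $MEU_NAMESPACE && sleep 3600"]
    env:
    - name: MEU_POD
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MEU_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
```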
### Informação do cluster
Uma lista de todos os serviços que estão executando quando um contêiner foi criado é disponibilizada para o contêiner como variáveis de ambiente.
Essas variáveis de ambiente são compatíveis com a funcionalidade _docker link_ do Docker.
Para um serviço nomeado *foo* que mapeia para um contêiner nomeado *bar*, as seguintes variáveis são definidas:
```shell
FOO_SERVICE_HOST=<o host em que o serviço está executando>
FOO_SERVICE_PORT=<a porta em que o serviço está executando>
```
Serviços possuem endereço IP dedicado e são disponibilizados para o contêiner via DNS,
caso o [complemento de DNS](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/) esteja habilitado no cluster.
## {{% heading "whatsnext" %}}
* Aprenda mais sobre [hooks de ciclo de vida do contêiner](/docs/concepts/containers/container-lifecycle-hooks/).
* Obtenha experiência prática
[anexando manipuladores a eventos de ciclo de vida do contêiner](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).

View File

@ -0,0 +1,290 @@
---
reviewers:
- femrtnz
- jcjesus
- hugopfeffer
title: Imagens
content_type: concept
weight: 10
---
<!-- overview -->
Uma imagem de contêiner representa dados binários que encapsulam uma aplicação e todas as suas dependências de software. As imagens de contêiner são pacotes de software executáveis que podem ser executados de forma autônoma e que fazem suposições muito bem definidas sobre seu ambiente de execução.
Normalmente, você cria uma imagem de contêiner da sua aplicação e a envia para um registro antes de fazer referência a ela em um {{< glossary_tooltip text="Pod" term_id="pod" >}}.
Esta página fornece um resumo sobre o conceito de imagem de contêiner.
<!-- body -->
## Nomes das imagens
As imagens de contêiner geralmente recebem um nome como `pause`, `exemplo/meuconteiner`, ou `kube-apiserver`.
As imagens também podem incluir um hostname de algum registro; por exemplo: `exemplo.registro.ficticio/nomeimagem`,
e um possível número de porta; por exemplo: `exemplo.registro.ficticio:10443/nomeimagem`.
Se você não especificar um hostname de registro, o Kubernetes presumirá que você se refere ao registro público do Docker.
Após a parte do nome da imagem, você pode adicionar uma _tag_ (da mesma forma que você faria com comandos como `docker` e `podman`).
As tags permitem identificar diferentes versões da mesma série de imagens.
Tags de imagem consistem em letras maiúsculas e minúsculas, dígitos, sublinhados (`_`),
pontos (`.`) e hífens (`-`).
Existem regras adicionais sobre onde você pode colocar os caracteres separadores
(`_`, `-` e `.`) dentro de uma tag de imagem.
Se você não especificar uma tag, o Kubernetes presumirá que você se refere à tag `latest` (mais recente).
{{< caution >}}
Você deve evitar usar a tag `latest` quando estiver realizando o deploy de contêineres em produção,
pois é mais difícil rastrear qual versão da imagem está sendo executada, além de tornar mais difícil o processo de reversão para uma versão funcional.
Em vez disso, especifique uma tag significativa, como `v1.42.0`.
{{< /caution >}}
## Atualizando imagens
A política padrão de pull é `IfNotPresent` a qual faz com que o
{{<glossary_tooltip text = "kubelet" term_id = "kubelet">}} ignore
o processo de *pull* da imagem, caso a mesma já exista. Se você prefere sempre forçar o processo de *pull*,
você pode seguir uma das opções abaixo:
- defina a `imagePullPolicy` do contêiner para` Always`.
- omita `imagePullPolicy` e use `:latest` como a tag para a imagem a ser usada.
- omita o `imagePullPolicy` e a tag da imagem a ser usada.
- habilite o [AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) controlador de admissão.
Quando `imagePullPolicy` é definido sem um valor específico, ele também é definido como` Always`.
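Como exemplo (com nomes hipotéticos), a política de pull pode ser definida explicitamente na especificação do contêiner:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: exemplo-pull-policy    # nome hipotético
spec:
  containers:
  - name: app
    image: registro.exemplo.com/minha-app:v1.42.0
    imagePullPolicy: Always    # sempre tenta obter a imagem do registro antes de iniciar o contêiner
```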
## Multiarquitetura de imagens com índice de imagens
Além de fornecer o binário das imagens, um registro de contêiner também pode servir um [índice de imagem do contêiner](https://github.com/opencontainers/image-spec/blob/master/image-index.md). Um índice de imagem pode apontar para múltiplos [manifestos da imagem](https://github.com/opencontainers/image-spec/blob/master/manifest.md) para versões específicas de arquitetura de um contêiner. A ideia é que você possa ter um nome para uma imagem (por exemplo: `pause`, `exemplo/meuconteiner`, `kube-apiserver`) e permitir que diferentes sistemas busquem o binário da imagem correta para a arquitetura de máquina que estão usando.
O próprio Kubernetes normalmente nomeia as imagens de contêiner com o sufixo `-$(ARCH)`. Para retrocompatibilidade, gere as imagens mais antigas com sufixos. A ideia é gerar a imagem `pause` que tem o manifesto para todas as arquiteturas e `pause-amd64` que é retrocompatível com as configurações anteriores ou arquivos YAML que podem ter codificado as imagens com sufixos.
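Por exemplo, é possível inspecionar o índice de imagens de uma imagem multiarquitetura com o Docker CLI (assumindo que o suporte a `docker manifest` esteja habilitado na sua instalação):

```shell
docker manifest inspect k8s.gcr.io/pause:3.2
```

A saída lista um manifesto por combinação de sistema operacional e arquitetura (por exemplo, `linux/amd64`, `linux/arm64`).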
## Usando um registro privado
Os registros privados podem exigir chaves para acessar as imagens deles.
As credenciais podem ser fornecidas de várias maneiras:
- Configurando nós para autenticação em um registro privado
- todos os pods podem ler qualquer registro privado configurado
- requer configuração de nó pelo administrador do cluster
- Imagens pré-obtidas
- todos os pods podem usar qualquer imagem armazenada em cache em um nó
- requer acesso root a todos os nós para configurar
- Especificando ImagePullSecrets em um Pod
- apenas pods que fornecem chaves próprias podem acessar o registro privado
- Extensões locais ou específicas do fornecedor
- se estiver usando uma configuração de nó personalizado, você (ou seu provedor de nuvem) pode implementar seu mecanismo para autenticar o nó ao registro do contêiner.
Essas opções são explicadas com mais detalhes abaixo.
### Configurando nós para autenticação em um registro privado
Se você executar o Docker em seus nós, poderá configurar o agente de execução de contêiner do Docker
para autenticação em um registro de contêiner privado.
Essa abordagem é adequada se você puder controlar a configuração do nó.
{{< note >}}
O Kubernetes padrão é compatível apenas com as seções `auths` e `HttpHeaders` na configuração do Docker.
Auxiliares de credencial do Docker (`credHelpers` ou `credsStore`) não são suportados.
{{< /note >}}
Docker armazena chaves de registros privados no arquivo `$HOME/.dockercfg` ou `$HOME/.docker/config.json`. Se você colocar o mesmo arquivo na lista de caminhos de pesquisa abaixo, o kubelet o usa como provedor de credenciais ao obter imagens.
* `{--root-dir:-/var/lib/kubelet}/config.json`
* `{cwd of kubelet}/config.json`
* `${HOME}/.docker/config.json`
* `/.docker/config.json`
* `{--root-dir:-/var/lib/kubelet}/.dockercfg`
* `{cwd of kubelet}/.dockercfg`
* `${HOME}/.dockercfg`
* `/.dockercfg`
{{< note >}}
Você talvez tenha que definir `HOME=/root` explicitamente no ambiente do processo kubelet.
{{< /note >}}
Aqui estão as etapas recomendadas para configurar seus nós para usar um registro privado. Neste
exemplo, execute-os em seu desktop/laptop:
1. Execute `docker login [servidor]` para cada conjunto de credenciais que deseja usar. Isso atualiza o `$HOME/.docker/config.json` em seu PC.
1. Visualize `$HOME/.docker/config.json` em um editor para garantir que contém apenas as credenciais que você deseja usar.
1. Obtenha uma lista de seus nós; por exemplo:
- se você quiser os nomes: `nodes=$( kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}' )`
- se você deseja obter os endereços IP: `nodes=$( kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}' )`
1. Copie seu `.docker/config.json` local para uma das listas de caminhos de busca acima.
- por exemplo, para testar isso: `for n in $nodes; do scp ~/.docker/config.json root@"$n":/var/lib/kubelet/config.json; done`
{{< note >}}
Para clusters de produção, use uma ferramenta de gerenciamento de configuração para que você possa aplicar esta
configuração em todos os nós que você precisar.
{{< /note >}}
Verifique se está funcionando criando um pod que usa uma imagem privada; por exemplo:
```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: private-image-test-1
spec:
containers:
- name: uses-private-image
image: $PRIVATE_IMAGE_NAME
imagePullPolicy: Always
command: [ "echo", "SUCCESS" ]
EOF
```
```
pod/private-image-test-1 created
```
Se tudo estiver funcionando, então, após algum tempo, você pode executar:
```shell
kubectl logs private-image-test-1
```
e veja o resultado do comando:
```
SUCCESS
```
Se você suspeitar que o comando falhou, você pode executar:
```shell
kubectl describe pods/private-image-test-1 | grep 'Failed'
```
Em caso de falha, a saída é semelhante a:
```
Fri, 26 Jun 2015 15:36:13 -0700 Fri, 26 Jun 2015 15:39:13 -0700 19 {kubelet node-i2hq} spec.containers{uses-private-image} failed Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
```
Você deve garantir que todos os nós no cluster tenham o mesmo `.docker/config.json`. Caso contrário, os pods serão executados com sucesso em alguns nós e falharão em outros. Por exemplo, se você usar o escalonamento automático de nós, cada modelo de instância precisa incluir o `.docker/config.json` ou montar um drive que o contenha.
Todos os pods terão permissão de leitura às imagens em qualquer registro privado, uma vez que
as chaves privadas do registro são adicionadas ao `.docker/config.json`.
### Imagens pré-obtidas
{{< note >}}
Essa abordagem é adequada se você puder controlar a configuração do nó. Isto
não funcionará de forma confiável se o seu provedor de nuvem for responsável pelo gerenciamento de nós e os substituir
automaticamente.
{{< /note >}}
Por padrão, o kubelet tenta fazer o _pull_ de cada imagem a partir do registro especificado.
No entanto, se a propriedade `imagePullPolicy` do contêiner for definida como `IfNotPresent` ou `Never`,
uma imagem local será usada (preferencialmente ou exclusivamente, respectivamente).
Se você quiser usar imagens pré-obtidas como um substituto para a autenticação do registro,
você deve garantir que todos os nós no cluster tenham as mesmas imagens pré-obtidas.
Isso pode ser usado para pré-carregar certas imagens com o intuito de aumentar a velocidade, ou como uma alternativa à autenticação em um registro privado.
Todos os pods terão permissão de leitura a quaisquer imagens pré-obtidas.
### Especificando imagePullSecrets em um pod
{{< note >}}
Esta é a abordagem recomendada para executar contêineres com base em imagens
de registros privados.
{{< /note >}}
O Kubernetes oferece suporte à especificação de chaves de registro de imagem de contêiner em um pod.
#### Criando um segredo com Docker config
Execute o seguinte comando, substituindo as palavras em maiúsculas com os valores apropriados:
```shell
kubectl create secret docker-registry <name> --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
```
Se você já tem um arquivo de credenciais do Docker, em vez de usar o
comando acima, você pode importar o arquivo de credenciais como um Kubernetes
{{< glossary_tooltip text="Secrets" term_id="secret" >}}.
[Criar um segredo com base nas credenciais Docker existentes](/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials) explica como configurar isso.
Isso é particularmente útil se você estiver usando vários registros privados de contêineres, pois `kubectl create secret docker-registry` cria um Segredo que
funciona apenas com um único registro privado.
{{< note >}}
Os pods só podem fazer referência a *pull secrets* de imagem em seu próprio namespace,
portanto, esse processo precisa ser feito uma vez por namespace.
{{< /note >}}
#### Referenciando um imagePullSecrets em um pod
Agora, você pode criar pods que fazem referência a esse segredo adicionando uma seção `imagePullSecrets`
na definição de Pod.
Por exemplo:
```shell
cat <<EOF > pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: foo
namespace: awesomeapps
spec:
containers:
- name: foo
image: janedoe/awesomeapp:v1
imagePullSecrets:
- name: myregistrykey
EOF
cat <<EOF >> ./kustomization.yaml
resources:
- pod.yaml
EOF
```
Isso precisa ser feito para cada pod que está usando um registro privado.
No entanto, a configuração deste campo pode ser automatizada definindo o imagePullSecrets
em um recurso de [ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/).
Verifique [Adicionar ImagePullSecrets a uma conta de serviço](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) para obter instruções detalhadas.
Você pode usar isso em conjunto com um `.docker/config.json` por nó. As credenciais
serão mescladas.
## Casos de uso
Existem várias soluções para configurar registros privados. Aqui estão alguns
casos de uso comuns e soluções sugeridas.
1. Cluster executando apenas imagens não proprietárias (por exemplo, código aberto). Não há necessidade de ocultar imagens.
- Use imagens públicas no Docker hub.
- Nenhuma configuração necessária.
- Alguns provedores de nuvem armazenam em cache ou espelham automaticamente imagens públicas, o que melhora a disponibilidade e reduz o tempo para extrair imagens.
1. Cluster executando algumas imagens proprietárias que devem ser ocultadas para quem está fora da empresa, mas
visível para todos os usuários do cluster.
- Use um [registro Docker](https://docs.docker.com/registry/) privado hospedado.
- Pode ser hospedado no [Docker Hub](https://hub.docker.com/signup) ou em outro lugar.
- Configure manualmente .docker/config.json em cada nó conforme descrito acima.
- Ou execute um registro privado interno atrás de seu firewall com permissão de leitura.
- Nenhuma configuração do Kubernetes é necessária.
- Use um serviço de registro de imagem de contêiner que controla o acesso à imagem
- Funcionará melhor com o escalonamento automático do cluster do que com a configuração manual de nós.
- Ou, em um cluster onde alterar a configuração do nó é inconveniente, use `imagePullSecrets`.
1. Cluster com imagens proprietárias, algumas das quais requerem controle de acesso mais rígido.
- Certifique-se de que o [controlador de admissão AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) está ativo. Caso contrário, todos os pods têm potencialmente acesso a todas as imagens.
- Mova dados confidenciais para um recurso "secreto", em vez de empacotá-los em uma imagem.
1. Um cluster multilocatário em que cada locatário precisa de seu próprio registro privado.
- Certifique-se de que o [controlador de admissão AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) está ativo. Caso contrário, todos os Pods de todos os locatários terão potencialmente acesso a todas as imagens.
- Execute um registro privado com autorização necessária.
- Gere credenciais de registro para cada locatário, coloque em segredo e preencha o segredo para cada namespace de locatário.
- O locatário adiciona esse segredo a imagePullSecrets de cada namespace.
Se precisar de acesso a vários registros, você pode criar um segredo para cada registro.
O kubelet mesclará quaisquer `imagePullSecrets` em um único `.docker/config.json` virtual.
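Como ilustração, um Pod que precise de imagens de dois registros distintos pode referenciar os dois Segredos; os nomes abaixo são hipotéticos:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-registry-pod
  namespace: tenant-a
spec:
  containers:
  - name: app
    image: registro-a.exemplo.com/time/app:v1
  - name: auxiliar
    image: registro-b.exemplo.com/time/auxiliar:v1
  imagePullSecrets:
  - name: segredo-registro-a
  - name: segredo-registro-b
```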
## {{% heading "whatsnext" %}}
* Leia a [OCI Image Manifest Specification](https://github.com/opencontainers/image-spec/blob/master/manifest.md)

View File

@ -0,0 +1,7 @@
---
title: "Visão Geral"
weight: 20
description: Obtenha uma visão em alto-nível do Kubernetes e dos componentes a partir dos quais ele é construído.
sitemap:
priority: 0.9
---

View File

@ -0,0 +1,117 @@
---
reviewers:
title: Componentes do Kubernetes
content_type: concept
description: >
Um cluster Kubernetes consiste em componentes que representam a camada de gerenciamento e um conjunto de máquinas chamadas nós.
weight: 20
card:
name: concepts
weight: 20
---
<!-- overview -->
Ao implantar o Kubernetes, você obtém um cluster.
{{< glossary_definition term_id="cluster" length="all" prepend="Um cluster Kubernetes consiste em">}}
Este documento descreve os vários componentes que você precisa ter para implantar um cluster Kubernetes completo e funcional.
Esse é o diagrama de um cluster Kubernetes com todos os componentes interligados.
![Componentes do Kubernetes](/images/docs/components-of-kubernetes.svg)
<!-- body -->
## Componentes da camada de gerenciamento
Os componentes da camada de gerenciamento tomam decisões globais sobre o cluster (por exemplo, agendamento de _pods_), bem como detectam e respondem aos eventos do cluster (por exemplo, iniciando um novo _{{< glossary_tooltip text="pod" term_id="pod" >}}_ quando o campo `replicas` de um _Deployment_ não está atendido).
Os componentes da camada de gerenciamento podem ser executados em qualquer máquina do cluster. Contudo, para simplificar, os _scripts_ de configuração normalmente iniciam todos os componentes da camada de gerenciamento na mesma máquina, e não executam contêineres de usuário nesta máquina. Veja [Construindo clusters de alta disponibilidade](/docs/admin/high-availability/) para um exemplo de configuração de múltiplas VMs para camada de gerenciamento (_multi-main-VM_).
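Em clusters criados com o kubeadm, por exemplo, esses componentes são executados como Pods estáticos no namespace `kube-system`; um esboço de como inspecioná-los (a saída varia conforme a instalação):
```shell
kubectl get pods -n kube-system
```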
### kube-apiserver
{{< glossary_definition term_id="kube-apiserver" length="all" >}}
### etcd
{{< glossary_definition term_id="etcd" length="all" >}}
### kube-scheduler
{{< glossary_definition term_id="kube-scheduler" length="all" >}}
### kube-controller-manager
{{< glossary_definition term_id="kube-controller-manager" length="all" >}}
Alguns tipos desses controladores são:
* Controlador de nó: responsável por perceber e responder quando os nós caem.
* Controlador de _Job_: Observa os objetos _Job_ que representam tarefas únicas e, em seguida, cria _pods_ para executar essas tarefas até a conclusão.
* Controlador de _endpoints_: preenche o objeto _Endpoints_ (ou seja, junta os Serviços e os _pods_).
* Controladores de conta de serviço e de _token_: criam contas padrão e _tokens_ de acesso de API para novos _namespaces_.
### cloud-controller-manager
{{< glossary_definition term_id="cloud-controller-manager" length="short" >}}
O cloud-controller-manager executa apenas controladores que são específicos para seu provedor de nuvem.
Se você estiver executando o Kubernetes em suas próprias instalações ou em um ambiente de aprendizagem dentro de seu
próprio PC, o cluster não possui um gerenciador de controlador de nuvem.
Tal como acontece com o kube-controller-manager, o cloud-controller-manager combina vários ciclos de controle logicamente independentes em um binário único que você executa como um processo único. Você pode escalar horizontalmente (executar mais de uma cópia) para melhorar o desempenho ou para auxiliar na tolerância a falhas.
Os seguintes controladores podem ter dependências de provedor de nuvem:
* Controlador de nó: para verificar junto ao provedor de nuvem para determinar se um nó foi excluído da nuvem após parar de responder.
* Controlador de rota: para configurar rotas na infraestrutura de nuvem subjacente.
* Controlador de serviço: Para criar, atualizar e excluir balanceadores de carga do provedor de nuvem.
## Componentes de nó
Os componentes de nó são executados em todos os nós, mantendo os _pods_ em execução e fornecendo o ambiente de execução do Kubernetes.
### kubelet
{{< glossary_definition term_id="kubelet" length="all" >}}
### kube-proxy
{{< glossary_definition term_id="kube-proxy" length="all" >}}
### Container runtime
{{< glossary_definition term_id="container-runtime" length="all" >}}
## Addons
Complementos (_addons_) usam recursos do Kubernetes ({{< glossary_tooltip term_id="daemonset" >}}, {{< glossary_tooltip term_id="deployment" >}}, etc) para implementar funcionalidades do cluster. Como fornecem funcionalidades em nível do cluster, recursos de _addons_ que necessitem ser criados dentro de um _namespace_ pertencem ao _namespace_ `kube-system`.
Alguns _addons_ selecionados são descritos abaixo; para uma lista estendida dos _addons_ disponíveis, por favor consulte [Addons](/docs/concepts/cluster-administration/addons/).
### DNS
Embora os outros complementos não sejam estritamente necessários, todos os clusters do Kubernetes devem ter um [DNS do cluster](/docs/concepts/services-networking/dns-pod-service/), já que muitos exemplos dependem disso.
O DNS do cluster é um servidor DNS, além de outros servidores DNS em seu ambiente, que fornece registros DNS para serviços do Kubernetes.
Os contêineres iniciados pelo Kubernetes incluem automaticamente esse servidor DNS em suas pesquisas DNS.
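Uma forma possível de verificar o DNS do cluster, assumindo o rótulo `k8s-app=kube-dns` usado por implantações comuns como o CoreDNS:
```shell
kubectl get deployment,service -n kube-system -l k8s-app=kube-dns
```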
### Web UI (Dashboard)
[Dashboard](/docs/tasks/access-application-cluster/web-ui-dashboard/) é uma interface de usuário Web, de uso geral, para clusters do Kubernetes. Ele permite que os usuários gerenciem e solucionem problemas de aplicações em execução no cluster, bem como o próprio cluster.
### Monitoramento de recursos do contêiner
[Monitoramento de recursos do contêiner](/docs/tasks/debug-application-cluster/resource-usage-monitoring/) registra métricas de série temporal genéricas sobre os contêineres em um banco de dados central e fornece uma interface de usuário para navegar por esses dados.
### Logging a nível do cluster
Um mecanismo de [_logging_ a nível do cluster](/docs/concepts/cluster-administration/logging/) é responsável por guardar os _logs_ dos contêineres em um armazenamento central de _logs_ com uma interface para navegação/pesquisa.
## {{% heading "whatsnext" %}}
* Aprenda sobre [Nós](/docs/concepts/architecture/nodes/).
* Aprenda sobre [Controladores](/docs/concepts/architecture/controller/).
* Aprenda sobre [kube-scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/).
* Leia a [documentação](https://etcd.io/docs/) oficial do **etcd**.

View File

@ -0,0 +1,94 @@
---
reviewers:
title: O que é Kubernetes?
description: >
Kubernetes é uma plataforma de código aberto, portável e extensível para o gerenciamento de cargas de trabalho e serviços distribuídos em contêineres, que facilita tanto a configuração declarativa quanto a automação. Ele possui um ecossistema grande e de rápido crescimento. Serviços, suporte e ferramentas para Kubernetes estão amplamente disponíveis.
content_type: concept
weight: 10
card:
name: concepts
weight: 10
sitemap:
priority: 0.9
---
<!-- overview -->
Essa página é uma visão geral do Kubernetes.
<!-- body -->
Kubernetes é uma plataforma de código aberto, portável e extensível para o gerenciamento de cargas de trabalho e serviços distribuídos em contêineres, que facilita tanto a configuração declarativa quanto a automação. Ele possui um ecossistema grande e de rápido crescimento. Serviços, suporte e ferramentas para Kubernetes estão amplamente disponíveis.
O Google tornou Kubernetes um projeto de código-aberto em 2014. O Kubernetes combina [mais de 15 anos de experiência do Google](/blog/2015/04/borg-predecessor-to-kubernetes/) executando cargas de trabalho de produção em escala, com as melhores ideias e práticas da comunidade.
O nome **Kubernetes** tem origem no grego, significando _timoneiro_ ou _piloto_. **K8s** é a abreviação derivada pela troca das oito letras "ubernete" por "8", se tornando _K8s_.
## Voltando no tempo
Vamos dar uma olhada no porquê de o Kubernetes ser tão útil, voltando no tempo.
![Evolução das implantações](/images/docs/Container_Evolution.svg)
**Era da implantação tradicional:** No início, as organizações executavam aplicações em servidores físicos. Não havia como definir limites de recursos para aplicações em um mesmo servidor físico, e isso causava problemas de alocação de recursos. Por exemplo, se várias aplicações fossem executadas em um mesmo servidor físico, poderia haver situações em que uma aplicação ocupasse a maior parte dos recursos e, como resultado, o desempenho das outras aplicações seria inferior. Uma solução para isso seria executar cada aplicação em um servidor físico diferente. Mas isso não escalava, pois os recursos eram subutilizados, e se tornava custoso para as organizações manter muitos servidores físicos.
**Era da implantação virtualizada:** Como solução, a virtualização foi introduzida. Esse modelo permite que você execute várias máquinas virtuais (VMs) em uma única CPU de um servidor físico. A virtualização permite que as aplicações sejam isoladas entre as VMs, e ainda fornece um nível de segurança, pois as informações de uma aplicação não podem ser acessadas livremente por outras aplicações.
A virtualização permite melhor utilização de recursos em um servidor físico, e permite melhor escalabilidade porque uma aplicação pode ser adicionada ou atualizada facilmente, reduz os custos de hardware e muito mais. Com a virtualização, você pode apresentar um conjunto de recursos físicos como um cluster de máquinas virtuais descartáveis.
Cada VM é uma máquina completa que executa todos os componentes, incluindo seu próprio sistema operacional, além do hardware virtualizado.
**Era da implantação em contêineres:** Contêineres são semelhantes às VMs, mas têm propriedades de isolamento flexibilizadas para compartilhar o sistema operacional (SO) entre as aplicações. Portanto, os contêineres são considerados leves. Semelhante a uma VM, um contêiner tem seu próprio sistema de arquivos, compartilhamento de CPU, memória, espaço de processo e muito mais. Como eles estão separados da infraestrutura subjacente, eles são portáveis entre nuvens e distribuições de sistema operacional.
Contêineres se tornaram populares porque eles fornecem benefícios extra, tais como:
* Criação e implantação ágil de aplicações: aumento da facilidade e eficiência na criação de imagem de contêiner comparado ao uso de imagem de VM.
* Desenvolvimento, integração e implantação contínuos: fornece capacidade de criação e de implantação de imagens de contêiner de forma confiável e frequente, com a funcionalidade de efetuar reversões rápidas e eficientes (devido à imutabilidade da imagem).
* Separação de interesses entre Desenvolvimento e Operações: crie imagens de contêineres de aplicações no momento de construção/liberação em vez de no momento de implantação, desacoplando as aplicações da infraestrutura.
* A capacidade de observação (Observabilidade) não apenas apresenta informações e métricas no nível do sistema operacional, mas também a integridade da aplicação e outros sinais.
* Consistência ambiental entre desenvolvimento, teste e produção: funciona da mesma forma em um laptop e na nuvem.
* Portabilidade de distribuição de nuvem e sistema operacional: executa no Ubuntu, RHEL, CoreOS, localmente, nas principais nuvens públicas e em qualquer outro lugar.
* Gerenciamento centrado em aplicações: eleva o nível de abstração da execução em um sistema operacional em hardware virtualizado à execução de uma aplicação em um sistema operacional usando recursos lógicos.
* Microsserviços fracamente acoplados, distribuídos, elásticos e livres: as aplicações são divididas em partes menores e independentes que podem ser implantadas e gerenciadas dinamicamente - não uma pilha monolítica em execução em uma grande máquina de propósito único.
* Isolamento de recursos: desempenho previsível de aplicações.
* Utilização de recursos: alta eficiência e densidade.
## Por que você precisa do Kubernetes e o que ele pode fazer{#why-you-need-kubernetes-and-what-can-it-do}
Os contêineres são uma boa maneira de agrupar e executar suas aplicações. Em um ambiente de produção, você precisa gerenciar os contêineres que executam as aplicações e garantir que não haja tempo de inatividade. Por exemplo, se um contêiner cair, outro contêiner precisa ser iniciado. Não seria mais fácil se esse comportamento fosse controlado por um sistema?
É assim que o Kubernetes vem ao resgate! O Kubernetes oferece uma estrutura para executar sistemas distribuídos de forma resiliente. Ele cuida do escalonamento e da recuperação de falhas de sua aplicação, fornece padrões de implantação e muito mais. Por exemplo, o Kubernetes pode gerenciar facilmente uma implantação no método canário para seu sistema.
O Kubernetes oferece a você:
* **Descoberta de serviço e balanceamento de carga**
O Kubernetes pode expor um contêiner usando o nome DNS ou seu próprio endereço IP. Se o tráfego para um contêiner for alto, o Kubernetes pode balancear a carga e distribuir o tráfego de rede para que a implantação seja estável.
* **Orquestração de armazenamento**
O Kubernetes permite que você monte automaticamente um sistema de armazenamento de sua escolha, como armazenamentos locais, provedores de nuvem pública e muito mais.
* **Lançamentos e reversões automatizadas**
Você pode descrever o estado desejado para seus contêineres implantados usando o Kubernetes, e ele pode alterar o estado real para o estado desejado em um ritmo controlado (veja um esboço de manifesto após esta lista). Por exemplo, você pode automatizar o Kubernetes para criar novos contêineres para sua implantação, remover os contêineres existentes e adotar todos os seus recursos para o novo contêiner.
* **Empacotamento binário automático**
Você fornece ao Kubernetes um cluster de nós que pode ser usado para executar tarefas nos contêineres. Você informa ao Kubernetes de quanta CPU e memória (RAM) cada contêiner precisa. O Kubernetes pode encaixar contêineres em seus nós para fazer o melhor uso de seus recursos.
* **Autocorreção**
O Kubernetes reinicia os contêineres que falham, substitui os contêineres, elimina os contêineres que não respondem à verificação de integridade definida pelo usuário e não os anuncia aos clientes até que estejam prontos para servir.
* **Gerenciamento de configuração e de segredos**
O Kubernetes permite armazenar e gerenciar informações confidenciais, como senhas, tokens OAuth e chaves SSH. Você pode implantar e atualizar segredos e configuração de aplicações sem reconstruir suas imagens de contêiner e sem expor segredos em sua pilha de configuração.
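Como ilustração do modelo declarativo de estado desejado mencionado nesta lista, segue um esboço mínimo de um Deployment; o nome e a imagem são hipotéticos:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minha-aplicacao
spec:
  replicas: 3            # estado desejado: três réplicas em execução
  selector:
    matchLabels:
      app: minha-aplicacao
  template:
    metadata:
      labels:
        app: minha-aplicacao
    spec:
      containers:
      - name: app
        image: exemplo/minha-aplicacao:v1   # imagem hipotética
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
```
A partir desse manifesto, o Kubernetes conduz continuamente o estado atual (número de réplicas em execução, versão da imagem) em direção ao estado desejado descrito.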
## O que o Kubernetes não é
O Kubernetes não é um sistema PaaS (plataforma como serviço) tradicional e completo. Como o Kubernetes opera no nível do contêiner, e não no nível do hardware, ele fornece alguns recursos geralmente aplicáveis comuns às ofertas de PaaS, como implantação, escalonamento, balanceamento de carga, e permite que os usuários integrem suas soluções de _logging_, monitoramento e alerta. No entanto, o Kubernetes não é monolítico, e essas soluções padrão são opcionais e conectáveis. O Kubernetes fornece os blocos de construção para a construção de plataformas de desenvolvimento, mas preserva a escolha e flexibilidade do usuário onde é importante.
Kubernetes:
* Não limita os tipos de aplicações suportadas. O Kubernetes visa oferecer suporte a uma variedade extremamente diversa de cargas de trabalho, incluindo cargas de trabalho sem estado, com estado e de processamento de dados. Se uma aplicação puder ser executada em um contêiner, ela deve ser executada perfeitamente no Kubernetes.
* Não implanta código-fonte e não constrói sua aplicação. Os fluxos de trabalho de integração contínua, entrega e implantação (CI/CD) são determinados pelas culturas e preferências da organização, bem como pelos requisitos técnicos.
* Não fornece serviços em nível de aplicação, tais como middleware (por exemplo, barramentos de mensagem), estruturas de processamento de dados (por exemplo, Spark), bancos de dados (por exemplo, MySQL), caches, nem sistemas de armazenamento em cluster (por exemplo, Ceph), como serviços integrados. Esses componentes podem ser executados no Kubernetes e/ou podem ser acessados por aplicações executadas no Kubernetes por meio de mecanismos portáteis, como o [Open Service Broker](https://openservicebrokerapi.org/).
* Não dita soluções de _logging_, monitoramento ou alerta. Ele fornece algumas integrações como prova de conceito e mecanismos para coletar e exportar métricas.
* Não fornece nem exige um sistema/linguagem de configuração (por exemplo, Jsonnet). Ele fornece uma API declarativa que pode ser direcionada por formas arbitrárias de especificações declarativas.
* Não fornece nem adota sistemas abrangentes de configuração de máquinas, manutenção, gerenciamento ou autocorreção.
* Adicionalmente, o Kubernetes não é um mero sistema de orquestração. Na verdade, ele elimina a necessidade de orquestração. A definição técnica de orquestração é a execução de um fluxo de trabalho definido: primeiro faça A, depois B e depois C. Em contraste, o Kubernetes compreende um conjunto de processos de controle independentes e combináveis que conduzem continuamente o estado atual em direção ao estado desejado fornecido. Não importa como você vai de A para C. O controle centralizado também não é necessário. Isso resulta em um sistema que é mais fácil de usar e mais poderoso, robusto, resiliente e extensível.
## {{% heading "whatsnext" %}}
* Dê uma olhada em [Componentes do Kubernetes](/docs/concepts/overview/components/).
* Pronto para [Iniciar](/docs/setup/)?

View File

@ -0,0 +1,8 @@
---
title: "Escalonamento"
weight: 90
description: >
No Kubernetes, agendamento refere-se à garantia de que os Pods correspondam aos nós para que o kubelet possa executá-los.
Remoção é o processo de encerrar proativamente um ou mais Pods em nós com falta de recursos.
---

View File

@ -91,4 +91,7 @@ do escalonador:
* Aprenda como [configurar vários escalonadores](/docs/tasks/administer-cluster/configure-multiple-schedulers/)
* Aprenda sobre [políticas de gerenciamento de topologia](/docs/tasks/administer-cluster/topology-manager/)
* Aprenda sobre [Pod Overhead](/docs/concepts/configuration/pod-overhead/)
* Saiba mais sobre o agendamento de pods que usam volumes em:
* [Suporte de topologia de volume](/docs/concepts/storage/storage-classes/#volume-binding-mode)
* [Rastreamento de capacidade de armazenamento](/docs/concepts/storage/storage-capacity/)
* [Limites de volumes específicos do nó](/docs/concepts/storage/storage-limits/)

View File

@ -1,9 +1,5 @@
---
reviewers:
- dchen1107
- egernst
- tallclair
title: Pod Overhead
title: Sobrecarga de Pod
content_type: concept
weight: 50
---
@ -12,10 +8,10 @@ weight: 50
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
Quando executa um Pod num nó, o próprio Pod usa uma quantidade de recursos do sistema. Estes
recursos são adicionais aos recursos necessários para executar o(s) _container(s)_ dentro do Pod.
Quando você executa um Pod num nó, o próprio Pod usa uma quantidade de recursos do sistema. Estes
recursos são adicionais aos recursos necessários para executar o(s) contêiner(s) dentro do Pod.
Sobrecarga de Pod, do inglês _Pod Overhead_, é uma funcionalidade que serve para contabilizar os recursos consumidos pela
infraestrutura do Pod para além das solicitações e limites do _container_.
infraestrutura do Pod para além das solicitações e limites do contêiner.
@ -23,27 +19,27 @@ infraestrutura do Pod para além das solicitações e limites do _container_.
<!-- body -->
No Kubernetes, a sobrecarga de _Pods_ é definido no tempo de
No Kubernetes, a sobrecarga de Pods é definida no tempo de
[admissão](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)
de acordo com a sobrecarga associada à
[RuntimeClass](/docs/concepts/containers/runtime-class/) do _Pod_.
[RuntimeClass](/docs/concepts/containers/runtime-class/) do Pod.
Quando é ativada a Sobrecarga de Pod, a sobrecarga é considerada adicionalmente à soma das
solicitações de recursos do _container_ ao agendar um Pod. Semelhantemente, o _kubelet_
solicitações de recursos do contêiner ao agendar um Pod. Semelhantemente, o _kubelet_
incluirá a sobrecarga do Pod ao dimensionar o cgroup do Pod e ao
executar a classificação de despejo do Pod.
executar a classificação de prioridade de migração do Pod em caso de _drain_ do Node.
## Possibilitando a Sobrecarga do Pod {#set-up}
## Habilitando a Sobrecarga de Pod {#set-up}
Terá de garantir que o [portão de funcionalidade](/docs/reference/command-line-tools-reference/feature-gates/)
`PodOverhead` está ativo (está ativo por defeito a partir da versão 1.18)
por todo o cluster, e uma `RuntimeClass` é utilizada que defina o campo `overhead`.
Terá de garantir que o [Feature Gate](/docs/reference/command-line-tools-reference/feature-gates/)
`PodOverhead` esteja ativo (está ativo por padrão a partir da versão 1.18)
em todo o cluster, e uma `RuntimeClass` utilizada que defina o campo `overhead`.
## Exemplo de uso
Para usar a funcionalidade PodOverhead, é necessário uma RuntimeClass que define o campo `overhead`.
Por exemplo, poderia usar a definição da RuntimeClass abaixo com um _container runtime_ virtualizado
que usa cerca de 120MiB por Pod para a máquina virtual e o sistema operativo convidado:
Por exemplo, poderia usar a definição da RuntimeClass abaixo com um agente de execução de contêiner virtualizado
que use cerca de 120MiB por Pod para a máquina virtual e o sistema operacional convidado:
```yaml
---
@ -88,9 +84,9 @@ spec:
memory: 100Mi
```
Na altura de admissão o [controlador de admissão](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) RuntimeClass
No tempo de admissão o [controlador de admissão](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) RuntimeClass
atualiza o _PodSpec_ da carga de trabalho de forma a incluir o `overhead` como descrito na RuntimeClass. Se o _PodSpec_ já tiver este campo definido
o _Pod_ será rejeitado. No exemplo dado, como apenas o nome do RuntimeClass é especificado, o controlador de admissão muda o _Pod_ de forma a
o Pod será rejeitado. No exemplo dado, como apenas o nome do RuntimeClass é especificado, o controlador de admissão muda o Pod de forma a
incluir um `overhead`.
Depois do controlador de admissão RuntimeClass, pode verificar o _PodSpec_ atualizado:
@ -99,44 +95,43 @@ Depois do controlador de admissão RuntimeClass, pode verificar o _PodSpec_ atua
kubectl get pod test-pod -o jsonpath='{.spec.overhead}'
```
O output é:
A saída é:
```
map[cpu:250m memory:120Mi]
```
Se for definido um _ResourceQuota_, a soma dos pedidos dos _containers_ assim como o campo `overhead` são contados.
Se for definido um _ResourceQuota_, a soma das requisições dos contêineres assim como o campo `overhead` são contados.
Quando o kube-scheduler está a decidir que nó deve executar um novo _Pod_, o agendador considera o `overhead` do _Pod_,
assim como a soma de pedidos aos _containers_ para esse _Pod_. Para este exemplo, o agendador adiciona os
pedidos e a sobrecarga, depois procura um nó com 2.25 CPU e 320 MiB de memória disponível.
Quando o kube-scheduler está decidindo que nó deve executar um novo Pod, o agendador considera o `overhead` do Pod,
assim como a soma das requisições dos contêineres para esse Pod. Para este exemplo, o agendador adiciona as requisições e a sobrecarga, depois procura um nó com 2.25 CPU e 320 MiB de memória disponível.
Assim que um _Pod_ é agendado a um nó, o kubelet nesse nó cria um novo {{< glossary_tooltip text="cgroup" term_id="cgroup" >}}
para o _Pod_. É dentro deste _pod_ que o _container runtime_ subjacente vai criar _containers_.
Assim que um Pod é agendado a um nó, o kubelet nesse nó cria um novo {{< glossary_tooltip text="cgroup" term_id="cgroup" >}}
para o Pod. É dentro deste Pod que o agente de execução de contêineres subjacente vai criar contêineres.
Se o recurso tiver um limite definido para cada _container_ (_QoS_ garantida ou _Burstrable QoS_ com limites definidos),
o kubelet definirá um limite superior para o cgroup do _pod_ associado a esse recurso (cpu.cfs_quota_us para CPU
e memory.limit_in_bytes de memória). Este limite superior é baseado na soma dos limites do _container_ mais o `overhead`
Se o recurso tiver um limite definido para cada contêiner (_QoS_ garantida ou _Burstable QoS_ com limites definidos),
o kubelet definirá um limite superior para o cgroup do Pod associado a esse recurso (cpu.cfs_quota_us para CPU
e memory.limit_in_bytes de memória). Este limite superior é baseado na soma dos limites do contêiner mais o `overhead`
definido no _PodSpec_.
Para o CPU, se o _Pod_ for QoS garantida ou _Burstrable QoS_, o kubelet vai definir `cpu.shares` baseado na soma dos
pedidos ao _container_ mais o `overhead` definido no _PodSpec_.
Para CPU, se o Pod for QoS garantida ou _Burstable QoS_, o kubelet vai definir `cpu.shares` baseado na soma dos
pedidos ao contêiner mais o `overhead` definido no _PodSpec_.
Olhando para o nosso exemplo, verifique os pedidos ao _container_ para a carga de trabalho:
Olhando para o nosso exemplo, verifique as requisições ao contêiner para a carga de trabalho:
```bash
kubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}'
```
O total de pedidos ao _container_ são 2000m CPU e 200MiB de memória:
O total de requisições ao contêiner é de 2000m de CPU e 200MiB de memória:
```
map[cpu: 500m memory:100Mi] map[cpu:1500m memory:100Mi]
```
Verifique isto contra o que é observado pelo nó:
Verifique isto comparado ao que é observado pelo nó:
```bash
kubectl describe node | grep test-pod -B2
```
O output mostra que 2250m CPU e 320MiB de memória são solicitados, que inclui _PodOverhead_:
A saída mostra que 2250m CPU e 320MiB de memória são solicitados, que inclui _PodOverhead_:
```
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
@ -145,12 +140,12 @@ O output mostra que 2250m CPU e 320MiB de memória são solicitados, que inclui
## Verificar os limites cgroup do Pod
Verifique os cgroups de memória do Pod no nó onde a carga de trabalho está em execução. No seguinte exemplo, [`crictl`] (https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md)
é usado no nó, que fornece uma CLI para _container runtimes_ compatíveis com CRI. Isto é um
exemplo avançado para mostrar o comportamento do _PodOverhead_, e não é esperado que os utilizadores precisem de verificar
Verifique os cgroups de memória do Pod no nó onde a carga de trabalho está em execução. No seguinte exemplo, [`crictl`](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md)
é usado no nó, que fornece uma CLI para agentes de execução compatíveis com CRI. Isto é um
exemplo avançado para mostrar o comportamento do _PodOverhead_, e não é esperado que os usuários precisem verificar
cgroups diretamente no nó.
Primeiro, no nó em particular, determine o identificador do _Pod_:
Primeiro, no nó em particular, determine o identificador do Pod:
```bash
# Execute no nó onde o Pod está agendado
@ -163,15 +158,15 @@ A partir disto, pode determinar o caminho do cgroup para o _Pod_:
sudo crictl inspectp -o=json $POD_ID | grep cgroupsPath
```
O caminho do cgroup resultante inclui o _container_ `pause` do _Pod_. O cgroup no nível do _Pod_ está um diretório acima.
O caminho do cgroup resultante inclui o contêiner `pause` do Pod. O cgroup no nível do Pod está um diretório acima.
```
"cgroupsPath": "/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/7ccf55aee35dd16aca4189c952d83487297f3cd760f1bbf09620e206e7d0c27a"
```
Neste caso especifico, o caminho do cgroup do pod é `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`. Verifique a configuração cgroup de nível do _Pod_ para a memória:
Neste caso específico, o caminho do cgroup do Pod é `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`. Verifique a configuração cgroup de nível do Pod para a memória:
```bash
# Execute no nó onde o Pod está agendado
# Mude também o nome do cgroup de forma a combinar com o cgroup alocado ao pod.
# Mude também o nome do cgroup para combinar com o cgroup alocado ao Pod.
cat /sys/fs/cgroup/memory/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/memory.limit_in_bytes
```
@ -182,10 +177,10 @@ Isto é 320 MiB, como esperado:
### Observabilidade
Uma métrica `kube_pod_overhead` está disponível em [kube-state-metrics] (https://github.com/kubernetes/kube-state-metrics)
para ajudar a identificar quando o _PodOverhead_ está a ser utilizado e para ajudar a observar a estabilidade das cargas de trabalho
Uma métrica `kube_pod_overhead` está disponível em [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics)
para ajudar a identificar quando o _PodOverhead_ está sendo utilizado e para ajudar a observar a estabilidade das cargas de trabalho
em execução com uma sobrecarga (_Overhead_) definida. Esta funcionalidade não está disponível na versão 1.9 do kube-state-metrics,
mas é esperado num próximo _release_. Os utilizadores necessitarão entretanto de construir kube-state-metrics a partir da fonte.
mas é esperada em uma próxima versão. Enquanto isso, os usuários precisarão construir o kube-state-metrics a partir do código-fonte.

View File

@ -1,5 +0,0 @@
---
title: "Escalonamento"
weight: 90
---

Some files were not shown because too many files have changed in this diff